20487B
Developing Windows Azure and Web
Services
Contents
Module 1: Overview of Service and Cloud Technologies
Lesson 1: Key Components of Distributed Applications 1-2
Lesson 2: Data and Data Access Technologies 1-6
Lesson 3: Service Technologies 1-9
Lesson 4: Cloud Computing 1-13
Lesson 5: Exploring the Blue Yonder Airlines Travel Companion Application 1-22
Lab: Exploring the Work Environment 1-26
Module 1
Overview of Service and Cloud Technologies
Contents:
Module Overview 1-1
Module Overview
This module provides an overview of service and cloud technologies using the Microsoft .NET Framework
and the Windows Azure cloud. The first lesson, Key Components of Distributed Applications, discusses
characteristics that are common to distributed systems, regardless of the technologies they use. Lesson 2,
Data and Data Access Technologies, describes how data is used in distributed applications. Lesson 3,
Service Technologies, discusses two of the most common protocols in distributed systems and the .NET
Framework technologies used to develop services based on those protocols. Lesson 4, Cloud
Computing, describes cloud computing and how it is implemented in the Windows Azure platform.
Lesson 5, Exploring the Blue Yonder Airlines Travel Companion Application, describes the Blue Yonder
Airlines Travel Companion application that you will develop throughout the labs in this course.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After completing this module, you will be able to:
Describe the architecture and operation of the Blue Yonder Airlines Travel Companion application.
Lesson 1
Key Components of Distributed Applications
Users today expect applications to present and process information from varied data sources, which might
be geographically distributed. Modern applications must also support different platforms such as mobile
and desktop, in addition to providing up-to-date information and an appealing UI.
Designing such applications is not a trivial task, and involves collaboration and integration between
several groups of components.
This lesson describes the key components and architecture of modern distributed applications.
Lesson Objectives
After completing this lesson, you will be able to:
Data is distributed between data centers, private computers, and mobile devices. Data should be kept
secure and private, but at the same time available to its owners and legitimate customers. Today, both
the volume of data and the number of users have increased exponentially. Applications must provide
services to access data and maintain high-quality standards in terms of availability and performance.
The only way to achieve availability and performance is through collaboration and distribution of load.
An application can achieve its performance requirements by distributing the computing load across
multiple servers. By using a large number of web servers that are geographically distributed, you also
increase the availability of your applications. Applications also consume data from a variety of data
sources to provide a rich set of functionality, and share their data. Finally, applications replicate, cache,
and centralize data to provide the best user experience.
It is simply impossible to provide a modern, high-scale application within the boundaries of a single,
traditional computer. Today, distributing data and computing is a necessity.
Availability
Latency
Reliability
Scalability
Scalability
Distributed systems provide value through the collaboration of a group of services and clients that are
geographically distributed. Each service has to serve a large number of requests originating from different
clients. A scalable service can serve a growing number of clients. Scalability is measured by the ratio
between the growth in the number of customers and the growth in the infrastructure required to serve
them. You can achieve scalability by using an appropriate design, such as designing stateless services so
that you can run them on multiple computers, and integrating distributed cache solutions for services
that need to share their state between computers.
Availability
Today's systems serve a global audience, located around the world in different time zones. Services must
be available 100 percent of the time and be resilient to connectivity or performance issues. You can
achieve high availability in a distributed environment by using design guidelines such as failover services
and appropriate decoupling between services.
Latency
Latency is the delay introduced by a system when responding to a single request. Users expect
applications to present valuable information without any unnecessary delays. The information must always
be available, the application must be responsive, and user experience must be smooth. To provide a
seamless user experience, services must have a short response time. If the service introduces a long delay,
the experience is not considered to be a smooth one. When designing a system to have low latency, you
should consider concepts such as caching data, parallelizing tasks, and reducing the size of payloads for
both requests and responses.
Reliability
Information is a valuable asset. Clients expect distributed applications to store their data reliably and
make sure that it is never lost or damaged. Keeping data consistent might not be trivial in a distributed
environment where multiple instances might handle the same piece of data concurrently. Data must be
replicated and geographically distributed to handle the risk of hardware failure of any kind.
Security
The fact that the system is distributed means that data will be distributed as well. Yet the system has to
ensure that only legitimate stakeholders get access to it at any time. Distributed systems often have no
boundaries and are accessible to anyone through the Internet, including potential attackers who wish to
harm the system and disturb its normal behavior. Proper security design that incorporates concepts such
as communication encryption, authentication, and authorization can reduce the risk of information
disclosure, denial of service, and data theft.
Data layer
Execution layer
Service layer
User Interface layer
Data Layer
The data layer is responsible for storing and accessing data. Data not only has to be stored, but also has
to be queried, updated, and deleted as required, while maintaining reasonable performance. This can be
a complicated task when you are
dealing with a large set of data, distributed across a number of data sources.
The data manipulation policy depends on the data type and its properties. Data can be replicated,
distributed, and handled according to its characteristics. For example, client contacts can be replicated
across the data center because they change slowly. However, information about stocks must always be
accurate and therefore must be read from a single source.
Execution Layer
The execution layer contains the business logic and is responsible for carrying out the use-case scenarios
of the application. In other words, the execution layer implements the logic of the application. The
business logic uses the data layer to read and store data, and the UI layer to interact with the client. The
execution layer contains all the algorithms and logic of the application and is considered the brain of the
application.
Service Layer
The service layer exposes some of the capabilities of the application to the world as services. Other
applications might consume these services and use them as a data source or as a remote execution
engine.
The service layer acts as the interface for other applications, in contrast to the user interface layer, which
targets humans. The service layer drives collaboration of applications and enables distribution of
computing load and data. It is responsible for defining a contract that consumers must maintain to use
the service. It enforces security policies, validates incoming requests, and maintains the application
resources.
Lesson 2
Data and Data Access Technologies
Our identities, financial status, commercial activities, and professional and social relations are persisted
as data, located across various data sources.
Applications access data, process it to provide value, and finally produce some more data for future use.
In this lesson, you will be introduced to various database technologies, along with .NET data access
technologies.
Lesson Objectives
After completing this lesson, you will be able to:
Relational Databases
SQL Server databases and the Windows Azure SQL Database are the traditional large-scale data sources.
They are designed to store relational data and can execute complex queries and user-defined functions.
Queries are written declaratively in languages such as T-SQL and can execute Create, Read, Update, and
Delete (CRUD) operations.
File System
A file system is used to store and retrieve unstructured data on disks. The basic unit of data is a file. Files
are organized in a tree of directories that has a volume as its root. Operating systems such as Windows
and Linux use file systems as their basic storage system.
Distributed Caches
Data access from relational databases is considered a lengthy operation. To reduce latency, some data
can be cached in memory, yet the size of such a cache is limited. A distributed in-memory cache solves
the size limitation by using an arrangement of networked computers, which store in-memory data as
key-value pairs and provide an experience that mimics a single cache to the end user. Distributed caches
will be discussed in Module 12, "Scaling Services" of Course 20487.
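The single-cache illusion that a distributed cache provides can be sketched in a few lines of Python. This is an illustrative model only, not the API of any actual caching product: each node is represented by a plain dictionary standing in for one server's memory, and a hash of the key determines which node owns the value.

```python
import hashlib

class DistributedCache:
    """Illustrative sketch of key partitioning in a distributed cache.

    Each node is a plain dict standing in for one server's memory; a hash
    of the key decides which node stores the value, so the client sees a
    single logical cache regardless of how many nodes participate.
    """

    def __init__(self, node_count):
        self.nodes = [{} for _ in range(node_count)]

    def _node_for(self, key):
        # Hash the key and map it onto one of the nodes.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)

cache = DistributedCache(node_count=4)
cache.put("flight:BY001", {"origin": "SEA", "destination": "JFK"})
print(cache.get("flight:BY001"))
```

Because the same key always hashes to the same node, reads find the value that an earlier write stored, no matter which client performs them. Real products add replication and failover on top of this basic partitioning idea.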
NoSQL Databases
NoSQL databases are an umbrella term for many types of data stores, all of which store data in a
non-relational fashion. NoSQL databases are often used to store large amounts of data. These data stores
are schema-free, but data can be organized in a variety of models, such as document databases,
key-value stores, or graph databases.
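For example, two schema-free documents in the same collection can carry entirely different fields. The field names below are invented for illustration:

```json
[
  { "id": 1, "type": "flight", "origin": "SEA", "destination": "JFK" },
  { "id": 2, "type": "hotel", "city": "New York", "nights": 3, "amenities": ["wifi", "pool"] }
]
```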
Cloud Storage
Infrastructures such as Windows Azure Storage enable cloud and on-premises applications to store their
data, which can be structured or unstructured, on a high-scale and persistent data store. Windows Azure
Storage exposes an interoperable API based on HTTP that can be used by any application running on any
platform.
Windows Azure Table Storage can be regarded as a key-value NoSQL database in the cloud, and
Windows Azure Blob Storage is similar to a huge file system in the cloud. Windows Azure Storage will be
discussed in Module 9, "Windows Azure Storage" of Course 20487.
In-Memory Stores
In-memory stores are the fastest data stores, but they are limited in size, are not persistent, and are hard
to use in a multi-server environment. In-memory stores are used to store temporary data, local volatile
data, or replicas of data that was retrieved from an external data source.
ADO.NET provides several techniques for fetching and manipulating data in a relational database by
using self-managed cursors and iterators, and relational and object-oriented models for storing the data
in the application memory.
Entity Framework (EF) is an Object-Relational Mapper (ORM) infrastructure. Applications use an
object-oriented approach to represent data entities, so collections of rows and columns are not a
natural representation for a running program. Data has to be converted from the relational model
to the object-oriented model; this is the role of an ORM infrastructure. EF was introduced in the .NET
Framework 3.5 and provides an infrastructure where queries are written in C#, executed against
relational databases, and produce results as collections of C# objects. At the core of EF is a model
that represents the mapping between the relational and object-oriented representations. Entity
Framework will be discussed in Module 2, "Querying and Manipulating Data Using Entity Framework"
of Course 20487.
ASP.NET (the System.Web assembly) introduces a powerful in-memory cache that can be used by
any .NET application.
Distributed cache solutions, such as Windows AppFabric Cache and Windows Azure Caching, provide an
in-memory store for almost any .NET Framework type, and negate the memory size limitation of
in-memory caches by distributing cache objects over several servers. Using a distributed cache provides
scalability, and enhances the durability of cache items by saving copies of the cache items on
participating nodes, avoiding the need to re-create the cache items after a temporary server failure.
A distributed cache requires cached objects to be serializable so that they can be transported to other
nodes in the cache cluster.
HTTP-Based APIs
A vast variety of technologies is used to create client applications that consume data from services. This
illustrates the importance of exposing data through standard and widespread protocols such as HTTP,
which provides easy, standard, resource-based access to data.
Both Windows Communication Foundation (WCF) Data Services and ASP.NET Web API provide the ability
to perform CRUD operations in services by using the OData open protocol. OData uses URIs and standard
HTTP requests. Using OData in ASP.NET Web API will be discussed in Lesson 2, "Creating OData Services",
of Module 4, "Extending and Securing ASP.NET Web API Services" in Course 20487.
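For example, a hypothetical OData service exposing a Flights entity set (the URIs and entity names below are illustrative, not part of the course labs) maps CRUD operations onto standard HTTP verbs and URIs:

```http
GET    /odata/Flights(1)    HTTP/1.1     -- read a single entity by key
POST   /odata/Flights       HTTP/1.1     -- create a new entity (payload in the request body)
PUT    /odata/Flights(1)    HTTP/1.1     -- replace an existing entity
DELETE /odata/Flights(1)    HTTP/1.1     -- delete an entity
GET    /odata/Flights?$filter=Origin eq 'SEA'&$top=10 HTTP/1.1   -- query with OData options
```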
Windows Azure Storage provides both HTTP-based and managed APIs to access large unstructured data
objects, such as videos and images. The Azure Table service provides a NoSQL key-value store for storing
small objects, up to 1 megabyte (MB) per entity. Objects can also be stored by using the Blob service as
binary blocks of data with a size limit of 200 gigabytes (GB) per object.
LINQ
LINQ is a .NET Framework infrastructure used for querying in a declarative fashion. LINQ technology can
be used to support any kind of data source, and provides a standard, consistent way to integrate data
from different sources.
Question: Why is it important for applications to support HTTP for data access?
Lesson 3
Service Technologies
Services constitute a layer in application architecture that exposes business logic capabilities to other
application components, improving component modularity and reusability.
Services are the core of distributed applications, providing access to data and making it possible for users
to interact with other applications.
Services provide distributed applications with the ability to scale and meet the growing demands for
better performance, robustness, and interoperability for various consumers, whether the consumer is a
web application, a mobile application, or even another service.
Using services as a layer for the application business logic also contributes to the maintainability and
testability of the application, therefore improving the application's quality. Separation of layers helps
enforce the Single Responsibility Principle (SRP), making it possible to test each layer as an independent
portion.
In this lesson, you will learn about services and how services are integrated in application architecture,
services technologies, and .NET services technologies.
Lesson Objectives
After completing this lesson, you will be able to:
Explain the differences between SOAP and HTTP-based services.
SOAP-Based Services
The SOAP protocol is based on XML and uses structured elements to assemble a message. Messages in
SOAP are XML documents enclosed by the <Envelope> root element. SOAP request messages describe
the required action to be performed on the remote computer by name, and provide additional arguments
to be passed to the relevant action. SOAP response messages describe the result of the action initiated by
the request.
The following is an example of a SOAP message describing a request for a Calculator service that provides
an Add method for adding two numbers. The numbers 1 and 2 are provided as arguments.
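A representative request envelope might look like the following. The action namespace and parameter names shown here are assumptions for illustration; an actual service publishes its own names in its WSDL:

```xml
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body>
    <Add xmlns="http://tempuri.org/">
      <a>1</a>
      <b>2</b>
    </Add>
  </s:Body>
</s:Envelope>
```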
The following is an example of a SOAP message describing the reply received after calling the Add
method on the Calculator service with the arguments 1 and 2.
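A representative response envelope, using the same illustrative namespace as the request, might look like this:

```xml
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body>
    <AddResponse xmlns="http://tempuri.org/">
      <AddResult>3</AddResult>
    </AddResponse>
  </s:Body>
</s:Envelope>
```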
SOAP-based web services provide the infrastructure for handling SOAP messages. SOAP-based web
services also provide both client and server an easy method to integrate their code and establish
communication between nodes. SOAP web services use standard web service specifications to simplify
application development. Developers can use specifications such as Web Service Description Language
(WSDL) for describing service methods and arguments in a self-descriptive XML format that is published
along with the service itself.
By using WSDL, you enable other programming technologies to easily consume SOAP web services,
because development tools can use the WSDL to automatically generate proxy classes that expose
methods corresponding to the web service methods. Proxies take care of establishing a connection to the
remote node, serialization and deserialization of data, and additional concerns.
SOAP-based web services support protocol extensions for security negotiation, reliable messaging,
transaction support, and more. The protocol extensions are referred to as the WS-* standard. You will
learn more about the SOAP specification in Module 5, "Creating WCF Services", Lesson 1, "Advantages of
Creating Services with WCF" in Course 20487. In the .NET Framework, you implement SOAP-based
services by using WCF.
Note: You can also implement SOAP-based web services with the ASP.NET Web Services
(ASMX) framework. However, this technology is considered deprecated and can be fully replaced
by WCF.
Although web services are often considered to use only HTTP and run over the Internet, SOAP-based web
services are not required to use HTTP as their transport protocol. You can create SOAP-based web services
for both Internet and intranet environments. You can use the SOAP protocol over transports other than
HTTP, such as SMTP, UDP, and the Advanced Message Queuing Protocol (AMQP). There are also
proprietary implementations of SOAP over other transports, such as TCP and named pipes; however,
these implementations are not fully interoperable.
For additional information about SOAP and its implementation, see:
Understanding SOAP
http://go.microsoft.com/fwlink/?LinkID=313726
HTTP-based Services
HTTP is an application-layer protocol that defines a set of characteristics for establishing request-response
communication between two networked nodes. These characteristics include methods, usually referred
to as verbs, that can be performed on a remote computer, security extensions (HTTPS), authentication,
status code conventions, and more.
HTTP-based web services are mostly used to manage resources that are part of the HTTP paradigm:
custom structured textual resources, images, and more. Managing resources by using HTTP web services
is natural, because it is based on URIs for resource identification and verbs for performing operations on
the selected resource.
You can use WCF to create both SOAP-based services and HTTP-based services, however, the ASP.NET
Web API offers a richer, testable, and customizable environment for creating HTTP-based services.
HTTP-based services are covered in Module 3, "Creating and Consuming ASP.NET Web API Services",
Lesson 1, "HTTP Services" in Course 20487.
Based on HTTP characteristics, the ASP.NET Web API uses HTTP headers to help consumers determine the
format of the data they expect to get back from the service. A single service implementation can generate
responses in JSON (a human-readable, text-based format), XML, and other encoding formats, without
special handling on the service side.
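For example, a client that sends an Accept header requesting JSON receives a JSON response from the same service implementation. The host name, URI, and payload below are hypothetical:

```http
GET /api/flights/1 HTTP/1.1
Host: services.example.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{ "id": 1, "origin": "SEA", "destination": "JFK" }
```

Had the client sent `Accept: application/xml` instead, the same action method would return an XML representation of the same resource.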
The ASP.NET Web API is covered in depth in Module 3, "Creating and Consuming ASP.NET Web API
Services", and Module 4, "Extending and Securing ASP.NET Web API Services" of Course 20487.
WCF
WCF is a communication framework that was introduced in the .NET Framework 3.0. WCF's primary goal
is to support SOAP and comply with the WS-* specifications.
By using WCF, you can build robust services that can serve numerous types of client applications. WCF
provides support for various transport technologies, such as HTTP, TCP, and named pipes, that can be
used in different scenarios.
WCF also provides flexibility to support different technologies by separating the service logic, which is
implemented in .NET classes, from the communication technology, which is configured using
configuration files and code.
WCF supports hosting on various infrastructures, which include Internet Information Services (IIS),
Windows services, console applications, and other .NET executables. WCF will be covered in Module
5, "Creating WCF Services", Appendix A, "Designing and Extending WCF Services", and in Appendix B,
"Implementing Security in WCF Services" of Course 20487.
Lesson 4
Cloud Computing
Cloud computing is revolutionizing the way you develop services and applications. The on-demand model
of cloud computing provides new ways to scale and provide better availability of services.
The continuous growth of data, platforms, and users requires a more robust platform, with far greater
capacity, to take on the expected load.
In this lesson, you will learn about cloud computing and its benefits, some architectural considerations for
setting up cloud computing, and the cloud computing products from Microsoft that are based on
Windows Azure.
Lesson Objectives
After completing this lesson, you will be able to:
Explain what cloud computing is.
Typically, cloud services consist of a group of servers and storage resources scattered across different
physical locations. Cloud services share resources to provide hosted applications with high availability,
flexibility, and maximum utilization of hardware.
Trying to prepare for unpredicted load leads to adding more hardware that will be under-utilized most of
the time, thereby making the data center even less efficient.
The following illustration demonstrates the utilization of resources in hosting a service or an application
on a local data center compared to the cloud.
While cloud provisioning maintains a stable provisioning level slightly above the application usage, as
shown in the preceding graph, on-premises provisioning fails to keep up with the application usage needs
in two scenarios. When the application grows rapidly, the static on-premises provisioning causes
under-provisioning. When the application usage drops, on-premises provisioning cannot scale down, and
causes over-provisioning.
Cloud computing provides unlimited scaling in case of unpredicted load and enhances high-availability
and performance by taking advantage of the large capacity of available bandwidth, storage, and
computing resources.
Hosting applications and services in the cloud also improves utilization of resources by using an elastic
approach. An elastic approach means scaling out resources to meet growing demand when needed, and
scaling back in when the demand drops again. This improves flexibility and reduces operational costs.
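The elastic approach can be illustrated with a small Python sketch. This is a simplified model, not an actual Windows Azure autoscaling API: the number of instances tracks demand, scaling out as load grows and back in as it drops, while a minimum instance count preserves availability.

```python
import math

def required_instances(requests_per_second, capacity_per_instance=100, minimum=2):
    """Illustrative elastic-scaling rule (not a real Windows Azure API).

    Scale out to meet demand, scale back in when demand drops, but never
    go below a minimum instance count that preserves availability.
    """
    needed = math.ceil(requests_per_second / capacity_per_instance)
    return max(needed, minimum)

# Demand grows: more instances are provisioned on demand.
print(required_instances(950))   # 10 instances
# Demand drops: capacity is released again, reducing operational cost.
print(required_instances(120))   # 2 instances
```

On-premises provisioning, by contrast, would have to buy hardware for the peak of 950 requests per second and leave most of it idle once demand falls.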
The following illustration shows some of the growth patterns that are common in modern applications
and can benefit from using cloud computing.
Cloud computing vendors, such as Windows Azure, also provide a wide range of features for hosted
services and applications, for data storage, caching, and more. Windows Azure features will be covered in
detail in later modules and lessons.
Platform as a Service (PaaS). With PaaS, the cloud platform provides a ready-to-use infrastructure,
which includes an operating system, storage, databases, an auto-configured load balancer, backup,
replication, and more. The software vendor can focus on creating the required database schema and
data, and on deploying the application. The platform takes care of the rest, providing an on-demand
application-hosting environment that can be cloned and scaled automatically.
Software as a Service (SaaS). With SaaS, software vendors can provide their users with ready-to-use,
on-demand software that benefits from the inherent capabilities of a cloud platform. SaaS provides
business flexibility by leveraging cloud platform features such as scalability, high availability,
self-management, backup, and more.
The following diagram shows the difference between the various cloud computing strategies.
As a complete cloud computing solution, Windows Azure provides an on-demand, scalable, self-service
computing and storage resource platform for hosting services and applications built with a wide variety
of technologies, such as the .NET Framework, Java, Python, and PHP, using SQL databases or MySQL,
hosted on Windows or Linux operating systems.
Windows Azure supports a wide variety of platforms and technologies, making it possible to host whole
solutions and not only standalone services.
Windows Azure also offers a set of building-block services for managing identities, communication, and
media.
Windows Azure also includes inherent features for scalability, replication and backup, and advanced
storage types, which will be introduced in the following modules.
Web Role
This is a role designed to run IIS-hosted applications and services in multi-tiered applications. Web roles
are exposed to the Internet and are located behind a load balancer, making them a prime choice to host
the front-end part of the application. Web roles support advanced management capabilities, which
include Remote Desktop access to the underlying virtual machine, network isolation, and running code
with elevated privileges.
Because a web role hosts applications in IIS, it is possible to host applications written in various platforms
including ASP.NET, WCF, PHP, Node.js and more.
Worker Role
This is a role that meets the need for running background processing asynchronously, and it can also be
used as the back end of a web role. A worker role provides the same advanced management capabilities
as a web role, but is differentiated by not having IIS pre-installed on the machine. A worker role can be
used without a web role as a front end, acting as an unlimited-scale computing capability in the cloud.
Extending web roles with worker roles as a back end distributes the application logic into independent
portions that can be scaled and load-balanced, making maximum utilization of resources possible.
Because roles are designed to be stateless virtual machines, Windows Azure can automatically replace a
malfunctioned machine with a new machine, and then deploy the package containing your application to
the new machine. Therefore, stateful applications, such as SQL Server and SharePoint, are not suitable to
be hosted in Windows Azure roles. For these types of applications, you should consider using Windows
Azure IaaS solutions.
Windows Azure Cloud Services is covered in depth in Module 8, "Hosting Services" in Course 20487.
Blob storage. This type of storage is a non-structured collection of objects that can be accessed by
using a resource identifier and can be used for storing files, such as images, videos, large texts, and
other non-structured data.
Table Storage. This type of storage is a semi-structured collection of objects that can have fields, but
cannot have relations between objects. The fields are not bound to a schema structure, and different
objects can have different fields within the same collection. Table storage also provides a queryable
API access to find objects.
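For example, two entities in the same table share the system-defined PartitionKey and RowKey properties, but can otherwise carry different fields. The entity property names below are invented for illustration:

```json
[
  { "PartitionKey": "flights", "RowKey": "BY001", "Origin": "SEA", "Destination": "JFK" },
  { "PartitionKey": "flights", "RowKey": "BY002", "Origin": "LAX", "Delayed": true, "DelayMinutes": 45 }
]
```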
Queue Storage. This type of storage provides a persistent messaging queue.
Windows Azure Storage is covered in depth in Module 9, "Windows Azure Storage" in Course 20487.
ACS has integrated support for Windows Identity Foundation (WIF) and industry standards such as OAuth
(Open Authorization), WS-Federation, SAML (Security Assertion Markup Language), and more.
ACS leverages out-of-the-box identity providers, integrating both with corporate identity providers, such
as Active Directory Federation Services (AD FS 2.0), and with social identity providers, such as Windows
Live, Facebook, and Google, and it can be extended to support custom identity providers. Using ACS
eliminates the need to implement complicated mechanisms for user management, identities, and
integration with other applications.
ACS is covered in depth in Module 11, "Identity Management with Windows Azure Access Control
Services" in Course 20487.
Windows Azure Caching simplifies migration for applications that use on-premises in-memory or
distributed cache solutions. You can also use Windows Azure Caching to replace the session state and
output cache providers of ASP.NET.
Windows Azure Cache is covered in depth in Module 12, "Scaling Services" in Course 20487.
SQL Database
Windows Azure SQL Database is a cloud-based relational database as a service, based on Microsoft SQL
Server technologies. SQL Database is fully scalable and provides high-availability access, support for SQL
Reporting, and enables data replication between cloud and on-premises databases.
Windows Azure provides a set of operating system images to choose from when creating a virtual
machine, which can include Linux distributions and partner solutions. You can also create a custom
virtual machine on-premises, upload it, and then deploy it to the cloud. Windows Azure provides various
ways to host all kinds of software and services.
You can migrate currently deployed applications by uploading a whole solution, consisting of multiple
machines, to the cloud for seamless continuation. Downloading virtual machines from Windows Azure to
host them on-premises is supported as well.
Windows Azure Virtual Machines use virtual hard disks (VHDs) that are stored in the Windows Azure
Storage solution. By storing the VHDs in Windows Azure Storage, you get durability, because the disks are
replicated in three copies and are saved in two different data centers.
Windows Azure provides an API for deployment and management capabilities, available both as Windows
PowerShell cmdlets (scripts) and programmatically through an HTTP API, making it possible to create
custom management tools integrated into any software solution.
Demonstration Steps
1. Open Internet Explorer and browse to Windows Azure Management Portal at
https://manage.windowsazure.com.
2. Explore the management portal main screen. Observe the list of services you can manage on the
pane to the left.
4. Open the dashboard of the cloud service that you created in the previous step (the one named
CloudServiceDemoYourInitials, where YourInitials contains your initials). Currently the cloud service is
empty and has no roles, so none of the tabs shows any content. In future modules, you will learn
how to deploy new roles to the cloud service and how to configure it.
a. DASHBOARD. Provides an overview of the state and configuration of the cloud service and its
roles.
b. MONITOR. Shows performance counter metrics for the roles, such as their CPU and memory
usage.
c. CONFIGURE. Control settings such as monitoring capabilities, remote access, and selection of
guest operating system.
d. SCALE. Control the number of instances (VMs) used for each role.
e. INSTANCES. Control the existing instances (shutdown/start, reboot, and remote desktop)
f. LINKED RESOURCES. Manage the list of dependent resources, such as storage and databases. By
linking to resources you can monitor and control the scale of all resources from the cloud service
configuration.
g. CERTIFICATES. Manages the certificates used by the roles, for example, for HTTPS communication.
Lesson 5
Exploring the Blue Yonder Airlines Travel Companion
Application
In this course, you will learn how to create services and deploy them to hybrid environments by
developing parts of the Blue Yonder Airlines Travel Companion application. The Travel Companion
application is a large system that includes, among other components, a set of databases, two back-end
services, a front-end service, and a client application. Before starting to develop the various parts of the
application, you need to be familiar with the architecture of the system to better understand the
purpose of each component.
In this lesson you will be introduced to the architecture of the Travel Companion application and learn
how the application works.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the architecture of the Travel Companion application.
Back-end services that will handle the flight reservations with WCF
In addition, you will use several Windows Azure components, such as Windows Azure Storage, Windows
Azure Service Bus, and the Windows Azure Access Control Service (ACS).
During the course, you will also deploy the server components to a hybrid environment that includes on-
premises servers and Windows Azure servers.
The following architectural diagram depicts the components that comprise the overall server solution:
System Components
On-premises SQL Server. The on-premises SQL Server will hold all the reservation data that is
managed by the back-end WCF service.
On-premises WCF service. The WCF reservation service, which is deployed to on-premises servers, will
control the booking process, including validation against the internal systems of the airline, such as
payment approval.
SQL Database. The SQL Database in Windows Azure will hold all the data regarding traveling
destinations, flight schedules, frequent flyer members, and a history of booked flights.
Windows Azure Web Role. The Windows Azure Web Role will host the ASP.NET Web API front-end
services used by the Travel Companion client application. At the beginning of the course, these
services will be developed and hosted on-premises, and during the course they will be deployed to
Windows Azure. The following services will be created and hosted in the Windows Azure Web Role:
o Destinations service. Lists known destinations across the globe. This service supports basic search
operations.
o Flight schedules service. Returns a list of flight schedules according to origin and destination.
Supports different search criteria.
o Reservation service. Supports creating new flight reservations, linking reservations of frequent
flyer club members to their member account, and retrieving information for previously booked
flights.
o Photos service. Handles uploading photos to Windows Azure Storage and creating URLs with
shared access signatures for private blob storage containers.
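A minimal sketch of creating such a URL with the Windows Azure Storage client library. The container name, blob name, and connection string are placeholders, not names from the actual Travel Companion code.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasDemo
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("your-storage-connection-string");
        var container = account.CreateCloudBlobClient().GetContainerReference("privatephotos");
        var blob = container.GetBlockBlobReference("trip/photo1.jpg");

        // Grant read access to this private blob for one hour
        string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        // Append the signature to the blob URI to get a URL the client can use directly
        Console.WriteLine(blob.Uri.AbsoluteUri + sasToken);
    }
}
```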
Windows Azure Web Site. The Windows Azure Web Site will host the company's flight management
administration web application. This application will enable operators to change flights' departure
times. Changes made in this web application will be propagated to the client application by using push
notifications.
Windows Azure Caching. The destinations service will cache its list of destinations in Windows Azure
Caching.
Windows Azure Service Bus Relays. The communication between the ASP.NET Web API reservation
service and the WCF reservation service will be through a Windows Azure Service Bus Relay.
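A sketch of how an on-premises WCF service can listen on a Service Bus relay endpoint. The namespace, key, and service contract are illustrative placeholders, not the actual course code.

```csharp
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IReservationService
{
    [OperationContract]
    string CreateReservation(string flightNumber);
}

public class ReservationService : IReservationService
{
    public string CreateReservation(string flightNumber) { return "Confirmed: " + flightNumber; }
}

class RelayHostDemo
{
    static void Main()
    {
        var host = new ServiceHost(typeof(ReservationService));
        var endpoint = host.AddServiceEndpoint(
            typeof(IReservationService),
            new NetTcpRelayBinding(),
            ServiceBusEnvironment.CreateServiceUri("sb", "blueyonder-namespace", "reservation"));

        // Authenticate the listener against the Service Bus namespace
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "your-key")
        });

        host.Open();   // the relay endpoint is now reachable through the cloud
    }
}
```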
Windows Azure Storage. Windows Azure Storage is used for storing photos uploaded by users
from their trips, together with the photos' related metadata. The blob storage holds both private and
public photos, in separate containers, whereas the table storage holds a list of all public photos,
including information such as the location where each photo was taken, the date and time it was taken,
the name of the user who uploaded the photo, and so on. Users can use the Travel Companion client
application to upload their photos (public and private), get a list of public photos taken in a specific
destination, or view their own private photos. Uploading the photos is done by using the ASP.NET Web
API photos service.
Windows Azure Service Bus Queue. Changes made to flight departure times in the flight management
administration web application will not be propagated to the clients automatically, but rather are
queued by using Windows Azure Service Bus Queues.
Windows Azure Worker Role. The worker role is used to host a background process that pulls flight
update messages from the queue, locates which clients are affected by the changed flight departure
time, and propagates the flight updates to the clients' devices by using Windows Push Notification
Services (WNS).
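The worker role's background loop can be sketched as follows. The queue name and connection string are assumptions, and the WNS call is elided to a comment.

```csharp
using Microsoft.ServiceBus.Messaging;

class FlightUpdateWorker
{
    public void Run(string serviceBusConnectionString)
    {
        QueueClient client = QueueClient.CreateFromConnectionString(
            serviceBusConnectionString, "flight-updates");   // queue name is a placeholder

        while (true)
        {
            BrokeredMessage message = client.Receive();   // blocks until a message arrives or times out
            if (message == null) continue;

            try
            {
                string flightNumber = message.GetBody<string>();
                // Locate the affected clients and push a notification through WNS here
                message.Complete();    // remove the processed message from the queue
            }
            catch
            {
                message.Abandon();     // make the message visible again for another attempt
            }
        }
    }
}
```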
Access Control Service. The ACS will enable users to log in to the Travel Companion client application
with their Windows Live ID. After they are logged in, users will automatically authenticate against the
ASP.NET Web API services, and will be able to book flights and access their reservations.
Demonstration Steps
1. In virtual machine 20487B-SEA-DEV-A, run the setupIIS.cmd file from
D:\AllFiles\Mod01\DemoFiles\BlueYonderDemo\Setup.
This script builds the server solutions and deploys them to IIS on the local machine.
4. Open the app bar and search for flights to New York by typing New. The app now communicates
with the front-end service to retrieve a list of flights to a location whose name begins with New, for
example, New York. You will implement this search in future labs.
5. Select the trip from Seattle to New York and purchase it. Complete the purchase by entering your
personal information, and then click Purchase. The app will now send the purchase request to the
front-end service. The front-end service will save the purchase information and then send a separate
purchase request to the back-end service for additional processing. After the back-end and front-end
services complete their task, the client app will show a confirmation message. You will implement the
purchase feature in future labs, including the back-end service purchase.
6. Close the confirmation window to return to the Blue Yonder Companion page. Observe the weather
forecast, which is shown under New York at a Glance. The weather forecast is retrieved from the
front-end service. You will implement the weather service in future labs.
7. Select the current trip from Seattle to New York, and then select Media from the app bar.
8. Open the app bar and observe the available options. You can upload images and videos to Windows
Azure Storage, and share them with other clients. In future labs, you will implement both the upload
and download features.
Note: Do not click the upload buttons, as you have not created any Windows Azure
Storage accounts yet. If you click any of the upload buttons, the app will fail and close.
9. Close the client app, return to the 20487B-SEA-DEV-A virtual machine, and then run the
CleanIIS.cmd script from the D:\AllFiles\Mod01\DemoFiles\BlueYonderDemo\Setup folder.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
Virtual Machine: 20487B-SEA-DEV-A
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:
6. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.
2. From the SERVERS tab on the SQL DATABASES page, add a Windows Azure SQL Database server in
a region closest to you. Use the login name SQLAdmin and password Pa$$w0rd. Wait for the server
to appear in the list of servers and for its status to change to Started. Write down the name of the
newly created server.
Task 2: Manage the Windows Azure SQL Database Server from SQL Server
Management Studio
1. On the Configure tab of the newly created SQL Database server, add a rule to allow access from any
IP address (0.0.0.0-255.255.255.255). Click Save to save the new rule.
Note: As a best practice, you should allow only your IP address, or your organization's IP
address range, to access the database server. However, in this course, you will use this database
server for future labs, and your IP address might change in the meantime; therefore, you are
required to allow access from all IP addresses.
2. Open Microsoft SQL Server Management Studio 2012 and connect to the new server. Use the server
name SQLServerName.database.windows.net, and the login name and password you used in the
previous task (Replace SQLServerName with the server name you wrote down in the previous task).
3. In Object Explorer, right-click the Databases node, and then click Import Data Tier Application.
Import the BlueYonder.bacpac file from the D:\AllFiles\Mod01\LabFiles\Assets folder. Verify that
the BlueYonder database is created.
Results: After completing this exercise, you should have created a Windows Azure SQL Database in your
Windows Azure account.
o Make sure to check the option to include the database password in the connection string.
o Save the EDMX file after it opens and then close it.
Results: After completing this exercise, you should have created Entity Framework wrappers for the
BlueYonder database.
2. Add a Web API Controller with CRUD Actions, Using the Add Controller Wizard
Task 2: Add a Web API Controller with CRUD Actions, Using the Add Controller
Wizard
1. In the BlueYonder.MVC project, add a reference to the BlueYonder.Model project.
2. Copy the connection string from the BlueYonder.Model project to the BlueYonder.MVC project
(from the App.config to the Web.config).
3. Build the solution, then open Server Explorer, and then refresh the Data Connections.
4. In Solution Explorer, in the BlueYonder.MVC project, right-click the Controllers folder, and add a
new controller named LocationsController.
o Create the new controller by using the API controller with read/write actions, using Entity
Framework template.
Note: You now have a Web API controller for the Location model.
5. Run the BlueYonder.MVC Web application and in the browser, append the api/locations string to
the URL to get the list of locations. Open the downloaded file and verify you see the list of locations
in a JSON format.
Results: After completing this exercise, you will have a website that exposes the Web API for CRUD
operations on the BlueYonder database.
2. On the WEB SITES page, click NEW, and then click QUICK CREATE to create a Windows Azure Web
Site. Name the web site BlueYonderWebSiteYourInitials (Replace YourInitials with your initials) and
select the region closest to your location. After you create the web site, wait until its status changes to
Running.
3. On the web site's DASHBOARD page, click the link to download the web site's publish profile file.
Task 2: Deploy the Web Application to the Windows Azure Web Site
1. In Visual Studio 2012, right-click the BlueYonder.MVC project in Solution Explorer, and click Publish.
Import the profile settings file you downloaded, and use the wizard to deploy the Web application to
the Windows Azure Web Site you created. Wait for the deployment to finish, and for a browser to
open.
Note: At this point, you can simply click Next at every step of the wizard, and then click
Publish to start the publishing process. Later in the course you will learn how the deployment
process works and how to configure it.
2. In the browser, append the api/locations string to the URL to get the list of locations. Open the
downloaded file and verify you see the list of locations in a JSON format.
2. Open the SQL DATABASES page and on the SERVERS tab, click the STATUS column of the server
you created in the first exercise, and then click DELETE. Follow the instructions in the Delete Server
Confirmation dialog box to delete both the database and the server.
Note: Windows Azure free subscriptions have a resource limitation and a restriction on the
total working hours. To avoid exceeding those limitations, you have to delete the Windows Azure
SQL Databases.
Results: After completing this exercise, you should ensure that all your products will be hosted on the
Windows Azure cloud by using Windows Azure SQL Database and Windows Azure Web Site.
Question: Why did you have to allow your machine's IP address when creating a new Windows Azure SQL Database server?
Best Practices: Plan your application architecture to be appropriate for the technical
requirements, while understanding the limitations of distributed architectures.
Choose the database technology that will let you scale according to your application usage;
combine different approaches when appropriate (relational databases, NoSQL).
Think of your consumers while choosing a service technology. Choose HTTP services for high
compatibility and resource-based communications. Choose SOAP services when transaction
management support, reliable messaging, security negotiations, and extended WS-* support are
needed.
Describe your software deployment and configuration in detail before choosing a cloud computing
strategy (IaaS, PaaS).
Review Question(s)
Question: In which scenarios will you use HTTP services and in which scenarios will you use
SOAP?
2-1
Module 2
Querying and Manipulating Data Using Entity Framework
Contents:
Module Overview 2-1
Module Overview
Typically, all applications store some data in a database. Some examples of data include configuration
settings, application data, user information, documents, and many others.
The .NET Framework provides a set of tools that helps you access and manipulate data that is stored in a
database. In this module, you will learn about the Entity Framework data model, and about how to create,
read, update, and delete data. Entity Framework is a rich object-relational mapper, which provides a
convenient and powerful application programming interface (API) to manipulate data.
This module focuses on the Code First approach with Entity Framework, but explains other options for
creating the data model also.
Objectives
After completing this module, you will be able to:
Describe basic objects in ADO.NET and explain how asynchronous operations work.
Create an Entity Framework data model.
Lesson 1
ADO.NET Overview
ADO.NET is the original low-level data access API in the .NET Framework. Although this module does not
focus on ADO.NET, understanding basic objects and operations from the ADO.NET library is essential for
using higher-level approaches, such as Entity Framework.
This lesson describes fundamental ADO.NET operations and its asynchronous support.
Lesson Objectives
After completing this lesson, you will be able to:
For other databases, you can often find third-party data providers online, or you can implement your own
data provider.
The rest of this topic focuses on fundamental ADO.NET concepts and classes. Each data provider has its
own classes, which implement a set of common interfaces.
Connection
Use the ADO.NET connection object to connect to your database. There are four types of ADO.NET
connection objects (one for each provider), which implement the IDbConnection interface:
SqlConnection
OleDbConnection
OdbcConnection
OracleConnection
A connection object is responsible for connecting to the database and initiating additional operations,
such as executing commands or managing transactions. Typically, you create a connection object with a
connection string, which is a locator for your database and may contain connection-related settings, such
as authentication credentials and timeout settings.
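A minimal sketch of creating and opening a SqlConnection; the connection string shown is an assumed example for a local SQL Express instance.

```csharp
using System.Data.SqlClient;

class ConnectionDemo
{
    static void Main()
    {
        // Example connection string; adjust the server and database names for your environment
        string connectionString =
            @"Data Source=.\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // Execute commands here; the connection is closed and returned
            // to the connection pool when the using block ends
        }
    }
}
```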
Command
Use the ADO.NET command object to send commands to the database. Commands can either return data,
such as the result of a select query or a stored procedure, or have no data returned, such as when you use
an insert or delete statement, or a DDL (Data Definition Language) query. There are four types of ADO.NET
command objects, which implement the IDbCommand interface:
SqlCommand
OleDbCommand
OdbcCommand
OracleCommand
A command object can represent a single command or a set of commands. Query commands return a set
of results, as a DataReader object or a DataSet object, or a single value, usually the result of an
aggregated action, such as a row count, or calculation of an average.
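A sketch of returning a single aggregated value with a command; the table name and connection string are placeholders.

```csharp
using System;
using System.Data.SqlClient;

class CommandDemo
{
    static void Main()
    {
        using (var connection = new SqlConnection("your-connection-string"))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Products", connection))
        {
            connection.Open();
            // ExecuteScalar returns the first column of the first row of the result set
            int productCount = (int)command.ExecuteScalar();
            Console.WriteLine(productCount);
        }
    }
}
```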
DataReader
Use the ADO.NET data reader to dynamically iterate a result set obtained from the database. If you use a
data reader to access data, you must maintain a live connection while you read from the database.
Additionally, data readers can only move forward while iterating the data. This data-access strategy is also
referred to as the connected architecture. There are four types of ADO.NET data reader objects, which
implement the IDataReader interface:
SqlDataReader
OleDbDataReader
OdbcDataReader
OracleDataReader
The following code example demonstrates how to query a database with a data reader.
// The connection and command setup below is reconstructed; the query text is illustrative.
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("SELECT ProductID, ProductName FROM Products", connection))
{
    connection.Open();
    SqlDataReader reader = command.ExecuteReader();
    if (reader.HasRows)
    {
        while (reader.Read())
        {
            Console.WriteLine("{0}\t{1}",
                reader.GetInt32(0),
                reader.GetString(1));
        }
    }
    else
    {
        Console.WriteLine("No data found.");
    }
    reader.Close();
}
When using a data reader, you can access only one database record at a time, as shown in the preceding
example. If you need multiple records at once, it is your responsibility to store them as you move along to
the next record. Although this seems like a major inconvenience, data readers are very efficient in terms of
memory utilization, because they do not require the entire result set to be fetched into memory.
DataAdapter
Use the ADO.NET data adapter to load a result set obtained from a database into memory. After
loading the entire result set and caching it in memory, you can access any of its rows, unlike the data
reader, which only provides forward iteration. You should use this data-access strategy, referred to as the
disconnected architecture, when you do not want to maintain a live connection to the database while
processing the data.
Data adapters store the results in a tabular format. You can also change the data after it is loaded, and use
the data adapter to apply the changes back to the database. There are four types of ADO.NET data
adapter objects, which implement the IDataAdapter interface:
SqlDataAdapter
OleDbDataAdapter
OdbcDataAdapter
OracleDataAdapter
Although data adapters are convenient to use (especially in conjunction with the DataSet class, which is
explained in the next section), they impose a larger overhead than data readers because the entire result
set must be fetched into memory before you can perform any operations.
DataSet
The DataSet class is one of the most frequently used objects in ADO.NET. You use it to retrieve tabular
data from a database. Although you can fill a DataSet object manually with data, you typically load it by
using the DataAdapter class.
The following code example demonstrates how to load data to a DataSet object by using a data adapter.
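A minimal sketch of such code; the query and connection string are assumed placeholders.

```csharp
using System.Data;
using System.Data.SqlClient;

class DataAdapterDemo
{
    static void Main()
    {
        var dataSet = new DataSet();
        using (var connection = new SqlConnection("your-connection-string"))
        {
            var adapter = new SqlDataAdapter(
                "SELECT ProductID, ProductName FROM Products", connection);
            adapter.Fill(dataSet, "Products");   // opens and closes the connection automatically
        }

        // The result set is now cached in memory and can be accessed in any order
        foreach (DataRow row in dataSet.Tables["Products"].Rows)
        {
            System.Console.WriteLine("{0}\t{1}", row[0], row[1]);
        }
    }
}
```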
You can use DataSet objects to hold information from more than one table at one time, and maintain
relationships between tables inside a DataSet object.
Question: Why would you prefer using data readers to data adapters, and vice versa?
To execute a command asynchronously, you use the ExecuteXXAsync methods. For example, the
ExecuteReaderAsync is the asynchronous version of the ExecuteReader method. The asynchronous
methods return a Task<T> object, where the generic type parameter T is the type returned by the
corresponding synchronous method. For example, the ExecuteReaderAsync method returns a
Task<DbDataReader> object, whereas the corresponding synchronous method, ExecuteReader, returns
a DbDataReader object.
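A sketch of an asynchronous query using these methods with the await keyword; the query and connection string are placeholders.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

class AsyncReaderDemo
{
    static async Task PrintProductsAsync(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT ProductID, ProductName FROM Products", connection))
        {
            await connection.OpenAsync();                    // open the connection asynchronously
            using (var reader = await command.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())             // advance to the next row asynchronously
                {
                    Console.WriteLine("{0}\t{1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}
```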
In addition to the ExecuteXXAsync methods, you can also use the DbConnection.OpenAsync method
to open a database connection asynchronously. You can also use the DbDataReader.ReadAsync method,
as shown in the preceding example, to advance the reader asynchronously to the next row.
Note: The code in the preceding example uses the await keyword introduced in C# 5 to
schedule a continuation when the operation completes. You can also use the
Task.ContinueWith method to provide a delegate as the continuation of the task.
http://go.microsoft.com/fwlink/?LinkID=298749&clcid=0x409
Lesson 2
Creating an Entity Data Model
This lesson describes how to create an Entity Framework model. You will learn about the different
approaches for accessing data with Entity Framework, including Model First and Code First.
Lesson Objectives
After completing this lesson, you will be able to:
Entity Framework is an ORM that provides a one-stop solution to interact with data that is stored in a
database. Instead of writing stored procedures and plain text SQL statements, you work with your own
domain classes, and you do not have to parse the results from a tabular structure to an object structure.
The Entity Data Model (EDM) is how Entity Framework maps database tables and relationships to objects
and properties. Visual Studio provides a visual designer for EDMs, the Entity Designer, which you can use
to modify the mapping. This module does not focus on EDM and the Entity Designer.
Model-first
Database-first
Code-first
If you do not have a database already, after you design your data model, you can generate database
scripts from the model by using the Entity Designer tool. You can then run the script in a new database to
create the tables. On the other hand, if your database already existed prior to creating the data model,
you can use the Entity Designer to reverse engineer the data model from the database tables. This
procedure is also referred to as the Database-First approach.
Code-First
In this approach, you do not use an .edmx file to design your model, and do not rely on the Entity
Designer tool. The domain model is simply a set of classes with properties that you provide.
In the code-first approach, Entity Framework scans your domain classes and their properties, and tries to
map them to the database based on naming conventions. Tables are named in the plural form of your
class name and columns should have names identical to those of your class properties. For example, for a
class named Car with a property named Model, its mapped table will be named Cars and it will have a
column named Model. There are several other conventions used by Entity Framework. For example, if the
class has a property named Id, it will be assigned as the table's primary key column. If you need to
customize these mappings, you can use special data annotation attributes or the Fluent API. These
customization options will be discussed later in this module.
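The Car example described above can be sketched as a minimal code-first class:

```csharp
// By convention, this class maps to a table named Cars
public class Car
{
    public int Id { get; set; }       // becomes the table's primary key by convention
    public string Model { get; set; } // maps to a column named Model
}
```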
You can use the code-first approach both with new databases, and with existing ones. If you do not have
a database, the default behavior of code-first will be to create the database for you the first time you run
your application. If your database already exists, Entity Framework will connect to it and use the defined
mappings between your model classes and the existing database tables.
In this course, you will focus on the code-first approach with Entity Framework.
Creating a DbContext
In Entity Framework, a context is how you access
the database, without the need for additional
wrappers or abstractions. Context is the glue
between your domain model (classes) and the
underlying framework that connects to the
database and maps object operations to database
commands.
Provides basic create, read, update, and delete (CRUD) operations, and simplifies the code that you
must write to execute these operations.
Handles the opening and closing of database connections.
Provides a change tracking mechanism.
Note: The DbContext class was introduced as a lightweight context instead of the
ObjectContext class. The ObjectContext class was also found to be less accommodating when
used in unit testing, because it lacked a base class or interface that you can use to create your
own implementation, as a mockup of the database. The DbContext class, which was created as a
wrapper around ObjectContext, was created with unit testing in mind, and offers a set of
interfaces that you can implement to create a mockup context for unit tests. Today, it is common
to use DbContext for both code-first and model-first approaches.
Note: SQL Express is the free, lightweight version of SQL Server that can be installed on
development machines and ships with Visual Studio 2010. LocalDb is an extension of SQL Express
that offers an easier way to create multiple database instances by using SQL Express. LocalDb
ships with Visual Studio 2012.
You can use a different database (that is not SQL Express or LocalDb) by providing a connection string in
your application configuration file (app.config or web.config). If you pass the name of that connection
string to the DbContext class constructor, it will use the connection string instead of the default database
engine.
The following code demonstrates how to put a connection string in your application configuration file,
and how to use it when creating an instance of the DbContext class.
XML
<connectionStrings>
  <!-- Example connection string for a LocalDB database; adjust for your environment -->
  <add name="StudentsDB"
       connectionString="Data Source=(localdb)\v11.0;Initial Catalog=StudentsDB;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>

C#
DbContext context = new DbContext("StudentsDB");
The following example demonstrates how to create a custom class that derives from the DbContext class.
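A sketch of such a class, assuming Student and Course classes are defined elsewhere in the domain model:

```csharp
using System.Data.Entity;

public class StudentsContext : DbContext
{
    public StudentsContext() : base("StudentsDB") { }   // use the StudentsDB connection string

    // Each DbSet property maps a domain class to a database table
    public DbSet<Student> Students { get; set; }
    public DbSet<Course> Courses { get; set; }
}
```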
When you create an instance of the StudentsContext class depicted in the preceding code example,
Entity Framework will connect to the database and map the Students and Courses properties according
to the mapping information provided by the Student and Course classes.
Note: If you do not pass a database name or connection string name to the DbContext
class constructor, it will use the fully-qualified name of your custom DbContext-derived class as
the database name. For example, if the StudentsContext class depicted in the preceding code
example were in the StudentsManagement namespace, the database name would be
StudentsManagement.StudentsContext.
The following example illustrates how to create the database, if it does not already exist, by using the
CreateDatabaseIfNotExists<T> generic class and a custom DbContext-derived class.
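A minimal sketch of such code, assuming a StudentsContext class that derives from DbContext:

```csharp
using System.Data.Entity;

class InitializerDemo
{
    static void Main()
    {
        // Register the initializer before the context is first used
        Database.SetInitializer(new CreateDatabaseIfNotExists<StudentsContext>());

        using (var context = new StudentsContext())
        {
            // Creates the database on first use if it does not already exist
            context.Database.Initialize(force: false);
        }
    }
}
```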
You can use Code First Migrations to update the database schema automatically to match the changes
you made in your classes without having to recreate the database.
With Code First Migrations, you define the initial state of your classes and your database. After you
change your classes and execute the Code First Migrations in design time, the set of changes you
performed over your classes is translated to the required migration steps for the database, and then those
steps are generated as database instructions in code. You can apply the changes to the database in
design-time, before deploying the version of the application. Alternatively, you can have the application
execute the migration code after it starts. Code First Migrations is outside the scope of this course, but
you can read more about it on MSDN:
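The next paragraph refers to a code example of the following kind; this is a minimal sketch, assuming StudentsContext and Student classes, with illustrative values.

```csharp
using (var context = new StudentsContext())
{
    Student student = context.Students.Find(1);   // look up by primary key
    if (student != null)
    {
        student.Name = "New Name";
        context.SaveChanges();                    // propagate the change to the database
    }
}
```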
In the preceding code example, the context.Students property returns an instance of the DbSet<T>
generic class. The DbSet<T> generic class represents a set of entities that you can use to perform CRUD
operations. You can think of it as the object representation of a database table. This class provides the
Find method, which can locate an object based on the database primary key. The example concludes by
calling the SaveChanges method of the DbContext class, which propagates the changes to the database.
Best Practice: It is very important to keep the number of concurrent DbContext objects in
your application low. Each object can open a connection to the database and keep it open for
some time. Too many open connections can cause performance issues, both in your application
and your database. When declaring an instance of a DbContext object, use a using statement.
This will ensure that the database connection is closed, and that any in-memory caches for
objects you recently queried are purged from memory.
Change Tracking
When you query the database and retrieve objects by using Entity Framework, the DbContext class can
track changes you make to these objects to facilitate saving them back into the database easily. The Entity
Framework change tracking system supports two modes of operation:
Active change tracking. Every property informs the context if it was changed.
Passive change tracking. The context attempts to detect changes before it determines which
property to save.
When you call the SaveChanges method of the DbContext class, the context checks if active change
tracking is enabled. If only passive change tracking is available, the DbContext object calls the
DetectChanges method. This method enumerates all entities retrieved by the context and compares
every property of every entity to the original value it had when it was retrieved. Any changed properties
are updated in the database.
To support active change tracking, you should mark all your properties on your domain classes (such as
the Student class in the preceding code example) with the virtual keyword. If you do so, Entity
Framework will create proxies at run time that derive from your class and track assignments to the virtual
properties of your model.
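For example, a domain class prepared for active change tracking would mark its properties virtual (the class shown is illustrative):

```csharp
public class Student
{
    // virtual properties allow Entity Framework to create run-time proxies
    // that intercept assignments and report changes to the context
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}
```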
If you need to map classes and properties manually to database schema objects such as tables, columns,
and keys, you can do so by using data annotation attributes. You can also use these attributes to specify
validation rules for your domain classes. Validation rules are outside the scope of this module. To use data
annotation attributes, add a reference to the System.ComponentModel.DataAnnotations assembly.
Note: With the new Entity Framework 6, you will be able to create custom conventions
instead of manually applying attributes to each class and property you want to map. This feature
is very useful for companies where the database administrators (DBAs) have their own set of
naming conventions. This module was written prior to the release of Entity Framework 6, and
therefore this topic is not covered in the module.
To map a class to a database table, add the [Table] attribute to the class declaration and specify the table
name. For example, [Table("Products")] maps the class to the Products table. To map a property to a
database column, add the [Column] attribute to the property declaration. For example,
[Column("ProductName")] maps the property to the ProductName column.
Note: By default, Entity Framework will use the plural form of the class name when
mapping a class to a database table. For example, the class Product will be mapped to a table
named Products, and properties will be mapped to database columns of the same name. You
should use the [Table] and [Column] attributes only if you want to customize these defaults.
The following example shows how to map a class to a database table by using code-first data annotations.
[Table("GlobalProducts")]
public class Product
{
public int Id { get; set; }
[Column("ProductName")]
public string Name { get; set; }
}
In the preceding code example, the Product class is mapped to a database table named GlobalProducts,
the Id property is mapped implicitly to a database column named Id, and the Name property is mapped
to a database column named ProductName.
The Id property in the preceding code example will be set as the primary key of the table, because the
convention for primary keys is that the property is either named Id (or ID; casing is ignored) or named
after the class followed by Id, for example, ProductID.
When you map a property to a primary key column, by default, Entity Framework will set the value of the
column to be generated by the database automatically. For integer columns, the value will be auto-
incremented; for columns of type GUID, the database will generate a new GUID for each row. If you do
not want to use generated primary keys, and instead you want to provide the primary key value yourself
when creating the entity object, configure the primary key property with the
[DatabaseGenerated(DatabaseGeneratedOption.None)] attribute. To use the DatabaseGenerated
attribute, add a using directive to the System.ComponentModel.DataAnnotations.Schema
namespace.
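A minimal sketch of a class whose key is supplied by the application rather than generated by the database (the class and property names follow the earlier Product example):

```csharp
using System.ComponentModel.DataAnnotations.Schema;

[Table("GlobalProducts")]
public class Product
{
    // The application supplies the key value; the database will not generate it.
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int Id { get; set; }

    [Column("ProductName")]
    public string Name { get; set; }
}
```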
You can also use data annotations to add foreign keys and to map object relationships between
instances of your classes to the database relationships. Specifically, you define two properties to express
the foreign key relationship: a foreign key property, whose type matches the database type of the foreign
key column, and an entity property, whose type is a class from your domain model.
You can map a foreign key relationship in two ways by using data annotations: you can apply the
[ForeignKey] attribute to the foreign key property and name the entity property it belongs to, or apply
the attribute to the entity property and name the foreign key property. The following code example shows
how to set a foreign key of a nested object to a property of your class by using the two approaches.
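A sketch of the two approaches (the Enrollment class is hypothetical; the Course entity property and CourseId foreign key property follow the naming used in this topic):

```csharp
// Approach 1: apply [ForeignKey] to the foreign key property,
// naming the entity (navigation) property it belongs to.
public class Enrollment
{
    public int Id { get; set; }

    [ForeignKey("Course")]
    public int CourseId { get; set; }

    public virtual Course Course { get; set; }
}

// Approach 2 (an alternative definition of the same class):
// apply [ForeignKey] to the entity property, naming the foreign key property.
public class Enrollment
{
    public int Id { get; set; }

    public int CourseId { get; set; }

    [ForeignKey("CourseId")]
    public virtual Course Course { get; set; }
}
```

Only one of the two definitions is used in a real model; both produce the same mapping.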
Note: The preceding code example illustrates a to-one relationship (either one-to-one or
many-to-one) from the enclosing entity to the Course entity. To specify a to-many relationship
(either one-to-many, or many-to-many), change the type of the entity property to
ICollection<T> or IEnumerable<T>.
By having both the foreign key property and entity property for each foreign key relationship, you gain
flexibility. If necessary, you can ask Entity Framework to fetch the referenced entity (as shown in the
Course class in the preceding code example) along with the enclosing entity, or you can refrain from
fetching it and rely only on its key, for performance reasons.
Note: If you do not use data annotations, and instead rely on the Code First convention for
foreign keys, you will need to make sure that the foreign key property is named as the entity
property, followed by Id (casing is ignored). In the preceding example, the entity property is
named Course and the foreign key property is named CourseId, therefore the data annotation
attributes are not required.
Demonstration Steps
1. Open Visual Studio 2012, create a new Console Application project, and name it MyFirstEF.
3. Add a new class to the project, name it Product, and make it public. The class should include:
4. Add a new class to the project, name it Store, and make it public. The class should include:
5. Add a new class to the project, name the class MyDbContext, and make it public. The class should
inherit from DbContext, and include the following:
o A public property Products of type DbSet<Product>.
6. In the Main method, use the CreateDatabaseIfNotExists generic class to create a new database
initializer, and use it to initialize the database.
7. Remove the <entityFramework> element and its content from the App.config file.
Note: This demonstration requires you to use SQL Server Express and not LocalDb, because with
LocalDb the newly created database will not show in the SQL Server Management Studio (LocalDb
detaches the application's database after the application stops). The SqlConnectionFactory class uses
LocalDb, so by deleting the <entityFramework> element, the creation of the database will be in the
local SQL Server Express instance.
8. Run the application. The application creates a new database; this might take a couple of seconds.
9. Open SQL Server Management Studio, connect to the local SQL Server Express instance, and make sure you see the
newly created database named MyFirstEF.MyDbContext.
10. Explore the database structure and observe the newly created Product and Store tables, and related
columns.
Note: Database tables are usually named in the plural form, which is why Entity Framework
changed the names of the generated tables from Store and Product to Stores and Products.
TPT
In the TPT approach, a separate table represents each class. The table for a derived class has a foreign
key that associates it with the base class's table, and it contains columns only for the properties
declared in that class.
To create such an object-relational mapping, use data annotations to give each class a different table
name.
TPH
In the TPH approach, a single table represents the entire inheritance hierarchy. All the inherited types are
represented in the same table. When you map the table to domain classes (such as the Teacher and
Student classes), you only map the relevant properties for each class. This means that the database
representation of a Teacher object will have a null value for the Grade column, which only the Student
class has.
To create such an object-relational mapping, use data annotations to give all classes the same table name.
You can also remove the [Table] attribute from the classes, because this is the default behavior of Code
First for handling inheritance mapping.
Note: When creating the Person table, Entity Framework Code First will add a
discriminator column to the table and use the type names (Person, Student, and Teacher) to
indicate which object type is stored in each row. You need not be aware of the discriminator
column or use it directly.
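A TPH mapping can be sketched as follows; this mirrors the Person hierarchy of the TPT example shown later in this lesson, with the [Table] attributes removed so that all three classes share one table:

```csharp
// All three classes are stored in a single Person table that contains
// a discriminator column plus the union of all mapped properties.
public abstract class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

public class Student : Person
{
    public int Grade { get; set; }       // NULL in rows that store a Teacher
}

public class Teacher : Person
{
    public decimal Salary { get; set; }  // NULL in rows that store a Student
}
```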
TPC
In the TPC type approach, each concrete (non-abstract) class is represented in the database as its own
table. As a result, the database schema is not normalized, but mapping the tables to classes is much
easier.
This example shows how to implement inheritance by using the TPT approach. The code defines three
classes named Person, Student, and Teacher. Student and Teacher inherit from Person, and every class
is mapped to a different database table.
TPT Example
public class MyDbContext : DbContext
{
public DbSet<Person> Persons { get; set; }
public DbSet<Student> Students { get; set; }
public DbSet<Teacher> Teachers { get; set; }
}
[Table("Person")]
public abstract class Person
{
public int Id { get; set; }
public string Name { get; set; }
public DateTime DateOfBirth { get; set; }
}
[Table("Student")]
public class Student : Person
{
public int Grade { get; set; }
}
[Table("Teacher")]
public class Teacher : Person
{
public decimal Salary { get; set; }
}
The following code example shows how to map a class to a database table, then map the key field of the
class, and then map a property of the class to a database column.
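A minimal sketch of such a mapping, written in the OnModelCreating override of a DbContext-derived class (the Product, GlobalProducts, and ProductName names follow the data annotations example shown earlier):

```csharp
public class MyDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the class to the GlobalProducts table.
        modelBuilder.Entity<Product>().ToTable("GlobalProducts");

        // Declare the primary key.
        modelBuilder.Entity<Product>().HasKey(p => p.Id);

        // Map the Name property to the ProductName column.
        modelBuilder.Entity<Product>()
                    .Property(p => p.Name)
                    .HasColumnName("ProductName");
    }
}
```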
In the preceding code example, the DbModelBuilder object is used to map the Product class to the
GlobalProducts table to declare that its Id property is the primary key and to associate the Name
property with the ProductName database column. This achieves the same result as the data annotations
example illustrated in Topic 4, "Mapping Classes to Tables with Data Annotations".
You can also use the Fluent API by using a class that derives from the EntityTypeConfiguration class for
each domain class you have. You still need to associate the configuration classes with your DbContext-
derived class by using the OnModelCreating method.
Best Practice: You should consider creating a separate class that derives from the
EntityTypeConfiguration class if your model mapping is complex. By separating the mapping
into several types, you make your mapping layer more readable, and avoid littering the
OnModelCreating method with hundreds of lines of mapping code.
The following code example illustrates how to use the Fluent API with a class derived from the
EntityTypeConfiguration class.
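The following is a sketch of such a configuration class (the mapping repeats the GlobalProducts example used in this lesson):

```csharp
public class ProductMapping : EntityTypeConfiguration<Product>
{
    public ProductMapping()
    {
        ToTable("GlobalProducts");
        HasKey(p => p.Id);
        Property(p => p.Name).HasColumnName("ProductName");
    }
}

public class MyDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Associate the configuration class with the context.
        modelBuilder.Configurations.Add(new ProductMapping());
    }
}
```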
The ProductMapping class in the preceding example derives from the EntityTypeConfiguration generic
class, and calls numerous methods in its constructor to associate the Product class with the
GlobalProducts table. This again achieves the same result as using data annotations.
For additional examples of configuring and mapping properties and types with the Fluent API, see
http://go.microsoft.com/fwlink/?LinkID=313730
Question: Why would you use the Fluent API as opposed to data annotations?
2-20 Querying and Manipulating Data Using Entity Framework
Lesson 3
Querying Data
So far, you learned how to map domain classes in your application to database tables. This lesson explains
how to query data from a database by using SQL and Entity Framework.
Lesson Objectives
After completing this lesson, you will be able to:
LINQ to Objects queries execute in memory on a collection of items, whereas LINQ to Entities queries
are translated to SQL statements and executed in the database.
Note: Every LINQ to Entities query is translated to SQL statements and executed at the
database level as a plain SQL statement by using ADO.NET. This is extremely important for
performance reasons. Executing a LINQ to Objects query on a table with millions of records
requires fetching the entire table into memory, whereas executing a LINQ to Entities query on the
same table can be extremely fast because the query executes on the database server.
This example shows how to retrieve a list of students from the database and filter it by the name of the
student. The context variable is a reference to a custom DbContext-derived class instance, and its
Students property returns a reference to a DbSet<Student> object.
var students = from s in context.Students
               where s.Name.ToLower().Contains("a")
               select s;
There are some limitations as to which operators and methods you can use in your LINQ to Entities
queries. Because every LINQ to Entities query is translated to SQL and executed on the database server,
some LINQ features and .NET Framework methods are not supported by Entity Framework. For example,
you cannot use the String.IsNullOrWhiteSpace method or the Last LINQ query operator.
Best Practice: As with LINQ to Objects, queries written with LINQ to Entities are not
executed until they are enumerated, for example, by using foreach, or by calling the ToList or
FirstOrDefault extension methods. If you enumerate a LINQ to Entities query for the second
time, it will execute again in the database. For example, if you invoke the Count method of the
query several times, each invocation will execute the SQL statement again in the database.
Therefore, as a best practice, if you need to use the result of the query more than once, you
should store the result in a local variable.
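For example (a sketch; context is an instance of a DbContext-derived class with a Students set):

```csharp
var query = from s in context.Students
            where s.Name.ToLower().Contains("a")
            select s;

// Each enumeration of 'query' would execute the SQL statement again.
// Enumerate once and reuse the in-memory result instead:
List<Student> students = query.ToList();
int count = students.Count;                 // no database round trip
Student first = students.FirstOrDefault();  // no database round trip
```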
Demonstration Steps
1. Using Visual Studio, open the EF_CodeFirst solution located in
D:\Allfiles\Mod02\Democode\UsingLINQtoEntities\Begin folder.
2. In Program.cs, within the Main method, initialize a new SchoolContext object to access data in the
database.
Note: The context uses unmanaged resources, such as a database connection, so do not
forget to dispose the context when you finish using it.
3. Using LINQ to Entities, select all courses and save the results in a variable.
4. Print the courses list and the students in each course to the console window.
5. Run the project and observe the console windows for the course and student lists. Use the IntelliTrace
window in Visual Studio 2012 to view the list of queries executed by Entity Framework.
Note: IntelliTrace will be covered in Module 10, "Monitoring and Diagnostics" in Course
20487.
Question: What are the SQL statements executed by Entity Framework in the preceding
demonstration?
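An Entity SQL query of the kind discussed in this topic can be sketched as follows (a hypothetical example; the SchoolContext container and set names depend on your model):

```csharp
using System.Data.Entity.Infrastructure;

var objectContext = ((IObjectContextAdapter)context).ObjectContext;

// Prepare (but do not yet execute) an Entity SQL query.
var query = objectContext.CreateQuery<Student>(
    "SELECT VALUE s FROM SchoolContext.Students AS s WHERE s.Name LIKE '%a%'");

// Enumeration triggers execution and materializes Student objects.
foreach (var student in query)
{
    Console.WriteLine(student.Name);
}
```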
In the preceding code example, calling the CreateQuery<T> generic method does not execute the query
yet; this method only prepares the query for execution. The query is executed and objects are returned
only when you enumerate the query variable by using the foreach statement or the ToList method.
The difference between executing SQL statements directly with ADO.NET and executing them with Entity
Framework is that with Entity Framework, the result is automatically translated to the domain classes,
instead of being returned as a DbDataReader object.
The following code example demonstrates how to execute an SQL query statement with Entity Framework
to retrieve objects from the database.
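A minimal sketch (the SchoolContext class and the dbo.Students table are illustrative):

```csharp
using (var context = new SchoolContext())
{
    // The result set is materialized into Student objects,
    // not returned as a DbDataReader.
    List<Student> students = context.Students
        .SqlQuery("SELECT * FROM dbo.Students WHERE Name LIKE '%a%'")
        .ToList();
}
```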
Finally, you can also execute SQL statements that do not return a result set. For example, you can
execute an insert statement to add a new row to the database, or run a stored procedure that updates
data. To execute such a statement, use the ExecuteSqlCommand method, which returns the number of
rows affected. To execute a statement that returns a single scalar value, use the Database.SqlQuery<T>
method instead.
The following example demonstrates how to execute an SQL update statement by using the
ExecuteSqlCommand method.
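A minimal sketch (table and column names are illustrative); ExecuteSqlCommand returns the number of rows affected:

```csharp
using (var context = new SchoolContext())
{
    int rowsAffected = context.Database.ExecuteSqlCommand(
        "UPDATE dbo.Students SET Grade = Grade + {0} WHERE Grade < {1}",
        5, 60);
}
```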
Question: Why would you use Entity SQL or direct SQL instead of LINQ to Entities?
Demonstration Steps
1. In the D:\Allfiles\Mod02\Democode\StoredProcedure\Begin folder, open the EF_CodeFirst
solution using Visual Studio.
2. In Program.cs, navigate to the Main method, and notice that a SchoolContext instance has been
created.
3. Observe the query being executed and assigned to the averageGradeInCourse local variable for
calculating the WCF course average grade, and printed to the console.
4. View the ExecuteSqlCommand statement, and observe that it executes a stored procedure for
updating the grade for the students in a course and providing it the course name and required grade
change as parameters.
5. Run the client application and notice that the average grade in the WCF course is recalculated and
printed to the console, and the average grade is changed by 10 points.
Question: When would you invoke stored procedures from your application instead of
performing object manipulations by using Entity Framework?
When issuing a query, call the Include method to specify which entities should be eagerly loaded with the
containing entity. This is the most flexible way to instruct Entity Framework when you want to use eager
loading, and is recommended.
The following code example demonstrates how to use eager loading with the Include method to retrieve
the property contents of the Courses entity along with the Student entity.
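A minimal sketch (assuming a SchoolContext with a Students set and a Courses navigation property; the lambda overload of Include requires a using directive for System.Data.Entity):

```csharp
using System.Data.Entity;  // for the lambda overload of Include

using (var context = new SchoolContext())
{
    // Each Student is retrieved together with its Courses in a single query.
    var students = context.Students
                          .Include(s => s.Courses)
                          .ToList();
}
```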
To enable lazy loading of your related entities, you need to declare your relationship properties, which
contain references to other entities, as virtual. If you reference a list of related entities, your virtual
property must be of type ICollection<T> or a derivative of it, such as IList<T>. You cannot use lazy
loading with IEnumerable<T>. By setting the properties to virtual, you ensure that Entity Framework
derives a new proxy class from the original class and adds the lazy load logic to the property.
If you have non-virtual properties, you can explicitly load them at run time by using the Load method.
The following code example shows how to load a non-virtual referenced entity explicitly.
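A minimal sketch (the Courses collection and Address reference are hypothetical navigation properties):

```csharp
using (var context = new SchoolContext())
{
    var student = context.Students.First();

    // Explicitly load a collection of related entities...
    context.Entry(student).Collection(s => s.Courses).Load();

    // ...or a single referenced entity.
    context.Entry(student).Reference(s => s.Address).Load();
}
```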
In the preceding example, the Entry method returns a DbEntityEntry object, which you can use to access
information about the entity type, such as its original values and its state such as unmodified, deleted, and
so on. The DbEntityEntry provides information about the referenced entities and collections through
which you can explicitly load each relation. Similar to the Include method, the Collection and Reference
methods can also use a string parameter instead of the lambda expression.
If you have defined your reference and collection properties as virtual, and you want at some point to
momentarily turn off lazy loading on an entire context, set the LazyLoadingEnabled property of the
DbContext instance to false.
The following code example shows how to turn off lazy loading for the entire context. The context
variable refers to a DbContext object.
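A minimal sketch:

```csharp
// context refers to a DbContext-derived instance.
context.Configuration.LazyLoadingEnabled = false;
```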
Lesson 4
Manipulating Data
Until this point, you learned how to query data from a database by using LINQ to Entities, Entity SQL, and
even direct SQL statements. However, querying data is not the whole story. This lesson explains how to
manipulate data by using Entity Framework.
Lesson Objectives
After you complete this lesson, you will be able to:
Added. The entity was added to the context and did not exist in the database.
Modified. The entity was changed since it was retrieved from the database.
Unchanged. The entity was not changed since it was retrieved from the database.
Detached. The entity was detached from the context, so that changes to it will not be reflected in the
database.
Deleted. The entity was deleted since it was retrieved from the database.
You can inspect the state of all the entities that have been changed in some way by using the
DbContext.ChangeTracker.Entries method. This could be useful for logging purposes or for reverting
certain changes in an overridden implementation of the SaveChanges method of the DbContext class.
The following code example demonstrates how you can enumerate all the objects that have been added,
modified, or deleted in an overriding implementation of the SaveChanges method.
public class MyDbContext : DbContext
{
public override int SaveChanges()
{
var changes = this.ChangeTracker.Entries().Where(entry => entry.State !=
EntityState.Unchanged);
foreach (var change in changes)
{
var entity = change.Entity;
//Inspect the object, the change, and possibly introduce additional changes
}
return base.SaveChanges();
}
}
Furthermore, from an instance of the DbContext class you can retrieve and modify state information for
any entity that has been loaded into the context by using the Entry method. One use of this would be to
mark an entity as deleted; another use would be to replace the values of an entity with new values
provided externally to your API.
The following code example illustrates how you can modify state information for an entity and how you
can copy the values from one entity to another.
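A minimal sketch (studentId and updatedStudent are assumed to be supplied by the caller):

```csharp
using (var context = new SchoolContext())
{
    // Mark an entity as deleted through its change tracking entry.
    var student = context.Students.Find(studentId);
    context.Entry(student).State = EntityState.Deleted;

    // Copy externally provided values onto another tracked entity.
    var tracked = context.Students.Find(updatedStudent.StudentId);
    context.Entry(tracked).CurrentValues.SetValues(updatedStudent);

    context.SaveChanges();
}
```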
Finally, you can turn change tracking on and off globally by using the AutoDetectChangesEnabled
property of the Configuration property of the DbContext class.
The following code example shows how you can turn change tracking on and off.
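A minimal sketch:

```csharp
// Turn automatic change tracking off...
context.Configuration.AutoDetectChangesEnabled = false;

// ...and back on again.
context.Configuration.AutoDetectChangesEnabled = true;
```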
If you use the preceding code to turn off automatic change tracking, you will have to call the
DbContext.ChangeTracker.DetectChanges method manually before you save any changes.
Note: Automatic change tracking is enabled by default. Proxy-based change tracking,
however, applies only to properties marked as virtual; Entity Framework cannot derive a proxy
for non-virtual properties, so changes to them are detected by comparing snapshots when
DetectChanges runs.
Adding an Entity
using (var context = new MyDbContext())
{
context.Persons.Add(
new Person
{
DateOfBirth = new DateTime(1978, 7, 11),
Name = "John Doe"
});
context.SaveChanges();
}
Deleting Entities
To delete an entity from the database, you use the
DbContext object. When you delete an entity
from a database, the context marks the change
tracking status of the entity as Deleted. When you
call the SaveChanges method, the DbContext
object deletes the entity from the database.
Deleting an Entity
using (var ctx = new ProductsContext())
{
var product = (from m in ctx.Products where m.Name == "Orange Juice" select
m).Single();
ctx.Products.Remove(product);
ctx.SaveChanges();
}
If you already know the primary key of the entity that you want to delete, you do not need to retrieve it
from the database to delete it. You can manually add an entity with the desired primary key to the
context, use the Entry method of the DbContext to access the state of the entity, and then mark it as
deleted.
The following code example shows how to delete an entity from a database without first retrieving it from
the database.
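A sketch of this technique (the key value 42 is illustrative):

```csharp
using (var ctx = new ProductsContext())
{
    // Create a stub entity with only the primary key set; accessing its
    // entry attaches it to the context, and marking it Deleted causes
    // SaveChanges to issue a DELETE without a prior SELECT.
    var product = new Product { Id = 42 };
    ctx.Entry(product).State = EntityState.Deleted;
    ctx.SaveChanges();
}
```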
Updating Entities
To update an entity in the database, you can use
the DbContext object and make changes in an
incremental fashion. When you update an entity,
the context marks the change tracking status of
the entity as Modified. When you call the
SaveChanges method, the DbContext object
updates the entity in the database. The exact
procedure of how these incremental updates are
performed depends on the change tracking status.
Note: For more information about change tracking, see Lesson 2, "Creating an Entity Data
Model", Topic 3, "Context and Entities" in Course 20487.
The following code example shows how to retrieve and update an entity by using the DbContext object.
Updating an Entity
using (var context = new MyDbContext())
{
var student = (from s in context.Students where s.Name.ToLower().Contains("john")
select s).Single();
student.Name = "Jonathan";
context.SaveChanges();
}
You can update an entity that is not tracked by the context, such as an entity you received as a method
parameter, by attaching the entity to the context, and then manually setting the entity's state to
Modified.
Note: Updating a detached entity is a common scenario when working with services,
because the updated entity is sent to the service and not loaded from the context.
The following code example shows how to update an entity that is not tracked by the context.
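A minimal sketch (updatedStudent is assumed to arrive from outside the context, for example as a method parameter):

```csharp
using (var context = new SchoolContext())
{
    // Attach the detached entity and mark it as modified.
    context.Entry(updatedStudent).State = EntityState.Modified;

    // All columns of the row are updated, because the context
    // does not know the original values.
    context.SaveChanges();
}
```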
The preceding code uses the Entry method to attach the updatedStudent object to the context, and
then sets the entity's state to Modified. When the context tries to save the attached entity, it cannot
detect which properties were changed, because it does not know the original values of the properties.
Therefore, in this scenario, the SQL statement will update all the columns, even those that have not
changed.
If you are not sure whether the entity you want to update is already tracked or not by the context you are
using, such as when you receive the context as a parameter, do not use the Entry method. If your context
already tracks an instance of an entity, and you call the Entry method with a different instance of the
same entity, an exception will be thrown because the context cannot track two instances of the same
entity. If you do not know whether an entity is tracked or not, you have two options:
1. Use the Find method to load the entity to the context, and then use the
DbEntityEntry<T>.CurrentValues.SetValues method to update the loaded entity with the values of
the updated entity instance. The Find method will first search the context for the entity and if not
found, will load the entity from the database.
2. Search only the entities already loaded by the context for the entity to update, by using the Local
property of the DbSet. If it is found, use the DbEntityEntry<T>.CurrentValues.SetValues method
to update the entity according to the values of the updated entity. If it is not found, use the Entry
method to attach the entity to the context, and then set its state to Modified. By using the Local
property, you can avoid accessing the database if the entity is not found in the context.
This example shows the two ways to update a detached entity, if you do not know whether the context
already tracks the entity or not.
// Option 1
var studentInContext = context.Students.Find(updatedStudent.StudentId);
context.Entry(studentInContext).CurrentValues.SetValues(updatedStudent);

// Option 2
var existingStudent = context.Students.Local.FirstOrDefault(r => r.StudentId ==
updatedStudent.StudentId);
if (existingStudent == null)
{
context.Entry(updatedStudent).State = EntityState.Modified;
}
else
{
context.Entry(existingStudent).CurrentValues.SetValues(updatedStudent);
}
context.SaveChanges();
Demonstration Steps
1. In the D:\Allfiles\Mod02\Democode\CRUD\Begin folder, open the EF_CodeFirst solution using
Visual Studio.
2. To access data in the database, in Program.cs, within the Main method, initialize a new
SchoolContext object. Add the code after the call to the InitializeDatabase method.
3. Using the context object select a course named WCF.
4. Create two new students and add them to the WCF course.
5. Give the teacher of the WCF course a $1000 salary raise.
6. Query a student named Student_1 from the WCF course and remove it from the course students.
7. Save the changes in the context and write the WCFCourse object to the console window.
8. Run the console application and observe the changes to the salary of the teacher and the list of
students: the salary is now 101000, there are two new students, and student 1 is missing from the list.
Use the IntelliTrace window in Visual Studio 2012 to view the list of SQL statements executed by
Entity Framework.
Question: How do you create or modify a relationship (based on a foreign key) by using
Entity Framework?
For example, when you insert an order of a customer into the database, it may consist of multiple update
and insert operations. You might have to insert a record into the Orders table, a record into the Shipping
table, and modify the Inventory table to reflect the inventory changes as a result of fulfilling the order. If
any of these updates fail, for instance, if the Inventory table update fails because the item is no longer
available in stock, you need to carefully roll back the changes to the Orders and Shipping tables to
make sure you do not have an orphaned order that cannot be fulfilled. Similarly, if the Inventory table
update succeeds but an error occurs while inserting a record into the Shipping table, you must undo the
change in the Inventory table to make sure you do not lose inventory items. To further complicate matters,
any updates you performed to the Inventory table may have been made visible to other applications, so
another process may have decided that an item is no longer in stock although your order has not been
successfully fulfilled.
Transactions
Transactions address the compensation and visibility issues by providing a scope of operations. A
transaction is a set of operations that runs in a sequence, and if one of the operations fails, the transaction
rolls back, and no operations are committed. You should use transactions if one operation depends on a
previous operation and cannot be committed without verifying that the previous operation was
successful. Also, you should use transactions when visibility is a concern, and you do not want to make a
change visible to other applications until the entire transaction completes.
By default, Entity Framework is transactional. When you call the SaveChanges method, it translates the
change set to SQL statements and wraps them in a single transaction (starting with a BEGIN TRANSACTION
statement). The transaction is not committed unless all the items are added, updated, or deleted successfully.
The following code example shows how to use the TransactionScope class with Entity Framework.
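A sketch of the pattern (the three units of work are placeholders):

```csharp
using System.Transactions;

using (var scope = new TransactionScope())
{
    using (var context = new SchoolContext())
    {
        context.Students.Add(new Student { Name = "New Student" });
        context.SaveChanges();   // first call

        // ...more changes...
        context.SaveChanges();   // second call

        // ...more changes...
        context.SaveChanges();   // third call
    }

    // Without this call, disposing the scope rolls back all three.
    scope.Complete();
}
```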
In the preceding code example, the changes made by the three SaveChanges method calls are
committed to the database (or databases) only when the TransactionScope block ends, and only because
the entire scope was marked as complete by calling the Complete method.
Best Practice: Use the TransactionScope class inside a using block to make sure that it is
disposed of. If the object is disposed of before you call the Complete method (for example, if an
exception occurs within the using block), the transaction is aborted automatically and its changes
are rolled back.
Objectives
After completing this lab, you will be able to:
Query an Entity Framework model by using LINQ to Entities and Entity SQL.
Lab Setup
Estimated Time: 60 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:
In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead, install the old version by using Visual Studio's Package Manager Console window:
1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.
2. In Package Manager Console, enter the following command and then press Enter.
(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).
3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:
Package name: EntityFramework, Version: 5.0.0
In this exercise, you will create data model classes to represent trips and reservations, implement a
DbContext-derived class, and create a new repository class for the Reservation entity.
The main tasks for this exercise are as follows:
Locate the FlightScheduleId field and explore the use of the DatabaseGenerated attribute.
Locate the Flight property and explore the ForeignKey attribute.
3. Locate the FlightRepository class under the Repositories folder, in the BlueYonder.DataAccess
project, and explore its contents.
Note: The FlightRepository class implements the Repository pattern. The Repository
pattern is designed to decouple the data access strategy from the business logic layer that
handles the data. The repository exposes the data access functionality and implements it
internally by using a specific data access strategy, which in this case is Entity Framework. By using
repositories, you can easily create a mock, replacing the repository, and improve the testability of
the business logic.
For more information about the Repository pattern and its related patterns, see
http://go.microsoft.com/fwlink/?LinkID=298756&clcid=0x409.
In Lab 4, "Extending Travel Companion's ASP.NET Web API Services", Module 4, "Extending and
Securing ASP.NET Web API Services", you will see how to increase testability by using mocked
repositories.
Set the attribute to use the FlightScheduleID property as the foreign key property.
Note: In addition, Entity Framework will detect the virtual property in the Trip class and
will create a new derived proxy class that implements lazy loading for the FlightInfo property.
When you load trip entities from the database, the entity object will be of the derived trip proxy
type, and not of the Trip type.
2. Open the Reservation class from the BlueYonder.Entities project, and make the following changes:
o Declare the DepartureFlight property to be virtual, and add a [ForeignKey] attribute to it. The
attribute should use the DepartFlightScheduleID property for the foreign key.
o Declare the ReturnFlight property to be virtual, and add a [ForeignKey] attribute to it. The
attribute should use the ReturnFlightScheduleID property for the foreign key.
Note: Setting the ReturnFlightScheduleID foreign key property to a nullable int indicates
that this relation is not mandatory (0-N relation, meaning a reservation does not require a return
flight). The DepartFlightScheduleID foreign key property is not nullable and therefore indicates
the relation is mandatory (1-N relation, meaning every reservation must have a departing flight).
2. Implement the Edit method. The method should make sure that the reservation is loaded in the
context by calling the Find method, and then apply the new values to the loaded entity.
Note: You can refer to Lesson 4, "Manipulating Data", Topic 4, "Updating Entities" in
Course 20487, for an example of how to update a detached entity by using the Find method.
o Make sure you only dispose the context member if the object was initiated (not null).
o Set the context member to null after calling its Dispose method.
Developing Windows Azure and Web Services 2-37
Note: If you want to see examples of the implementation of the Dispose method, refer to
its implementation in other repositories.
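Combining the two bullets above, the Dispose method might look like this sketch (the context member name is an assumption):

```csharp
public void Dispose()
{
    // Dispose the context member only if it was initiated (not null)
    if (context != null)
    {
        context.Dispose();

        // Setting the member to null prevents a second disposal
        context = null;
    }
}
```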
o GetAll
o Add
o Delete
Note: Review the implementation of the Delete method to understand how cascade delete
was implemented, so that when a Reservation is deleted, its DepartureFlight and ReturnFlight
objects are deleted as well.
Results: After you complete this exercise, the Entity Framework Code First model is ready for testing.
6. Run the tests, and explore the database created by Entity Framework
The TestInitialize static method is responsible for initializing the database and the test data, and all
the other methods are intended to test various queries with lazy load and eager load.
2. Explore the insert, update, and delete tests in the FlightActions class.
Observe the use of the Assert static class to verify the results of the test.
Write a LINQ to Entities query that retrieves a Reservation entity having confirmation code 1234,
and performs an eager loading of its departure and return flights.
Use the repository's GetAll method to get a data source for the query.
For the eager load, use the Include method.
Use the Assert static class to verify the reservations entity was loaded, as well as its departing and
returning flights.
To prevent any lazy load operations, use the Assert static class outside the using block scope.
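Putting these steps together, the eager-load test might look like the following sketch (the repository type, property names, and the confirmation code type are assumptions based on the lab):

```csharp
[TestMethod]
public void GetReservationWithFlightsEagerLoad()
{
    Reservation reservation;

    using (var repository = new ReservationRepository())
    {
        // Include performs the eager load of both navigation properties
        // (the lambda overload requires "using System.Data.Entity;")
        reservation = repository.GetAll()
            .Include(r => r.DepartureFlight)
            .Include(r => r.ReturnFlight)
            .Single(r => r.ConfirmationCode == "1234");
    }

    // Asserting outside the using block guarantees no lazy load can occur,
    // because the underlying context is already disposed
    Assert.IsNotNull(reservation);
    Assert.IsNotNull(reservation.DepartureFlight);
    Assert.IsNotNull(reservation.ReturnFlight);
}
```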
2. In the GetReservationWithFlightsLazyLoad test method, add two Assert tests to verify that lazy
load works.
Use the Assert static class to verify that the departing and returning flights of the reservation are not
null.
Place the call to the Assert static class in the using block, after the comment.
Note: By examining the value of the navigation properties, you are invoking the lazy load
mechanism.
Add the code to turn off lazy loading before the repository is created.
Note: You can refer to Lesson 3, "Querying Data", Topic 4, "Load Entities by Using Lazy and
Eager Loading" in Course 20487, for an example of how to turn off lazy load.
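Assuming the test has access to the repository's DbContext (the member name is illustrative), turning lazy loading off is a single configuration flag:

```csharp
// Disable lazy loading before the repository is created; from now on,
// navigation properties stay null unless loaded explicitly or eagerly
context.Configuration.LazyLoadingEnabled = false;
```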
Note: Refer to Lesson 3, "Querying Data", Topic 2, "Query the Database By Using Entity SQL" in
Course 20487, for an example of how to write Entity SQL queries and execute them with the
ObjectContext object.
Call the repository's Edit method and then the Save method to update the Flight entity in the
database.
In a new repository, after the using block, search for the updated flight by its new flight number.
Note: Most of the boilerplate code for creating a repository, saving the entity, and then
locating the entity in a new repository can be found in the DeleteFlight test method in the
FlightActions class.
Each repository is created with a separate context, meaning each repository will use a separate
transaction when saving changes.
Locate the code for loading and updating the flight and location objects.
Each entity is updated and saved in a separate transaction, but because both transactions are located
in the same transaction scope, neither transaction is committed yet.
2. In the UpdateUsingTwoRepositories method, locate the query below the comment //TODO: Lab
02, Exercise 2 Task 5.2 : Review the query for the updated flight that is inside the transaction scope.
Note: When querying from inside a transaction scope, you will get the updated values of
entities, while other users, not participating in the transaction, will see the old values, until the
transaction commits.
Note: Without marking the transaction scope as complete, both transactions will roll back
when the transaction scope closes.
4. Locate the query below the comment //TODO: Lab 02, Exercise 2 Task 5.4 : Review the query for the
updated flight that is outside the transaction scope.
Note: After the transaction is rolled back, attempts to locate the updated entity will fail.
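The flow this task reviews can be sketched as follows (repository and entity names are assumptions; note the call to Complete at the end of the scope):

```csharp
using (var scope = new TransactionScope())
{
    using (var flightRepository = new FlightRepository())
    {
        // First transaction: update the flight; not yet committed
        Flight flight = flightRepository.GetAll()
            .Single(f => f.FlightNumber == "BY001");
        flight.FlightNumber = "BY002";
        flightRepository.Edit(flight);
        flightRepository.Save();
    }

    using (var locationRepository = new LocationRepository())
    {
        // Second transaction: update a location; not yet committed
        Location location = locationRepository.GetAll().First();
        location.City = "Rome";
        locationRepository.Edit(location);
        locationRepository.Save();
    }

    // Without this call, both transactions roll back when the scope closes
    scope.Complete();
}
```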
Task 6: Run the tests, and explore the database created by Entity Framework
1. In the TravelCompanionDatabaseInitializer class, complete the implementation of the Seed
method by adding the two reservations to the context and saving the changes.
Add the reservation1 and reservation2 variables to the Reservations collection of the context.
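The steps above can be sketched as follows (assuming the Seed method receives the context as a parameter named context):

```csharp
// Add both reservations to the context and persist them;
// reservation1 and reservation2 are created earlier in the Seed method
context.Reservations.Add(reservation1);
context.Reservations.Add(reservation2);
context.SaveChanges();
```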
2. Run all the tests in the solution and verify they pass.
To run all tests, open the Test Explorer window from the Test menu, and then click Run All.
3. Open SQL Server Management Studio, connect to the .\SQLEXPRESS database server, then locate the
BlueYonder.Companion.Lab02 database in Object Explorer, and browse the tables that were
created by Entity Framework.
Results: The Entity Framework data model works as designed and is verified by tests.
2-40 Querying and Manipulating Data Using Entity Framework
Question: What is the advantage of using LINQ to Entities as opposed to Entity SQL or raw
SQL statements?
Best Practices: Always use transactions when performing multiple operations that depend on each
other and that may need to be rolled back together if one of them fails.
Prefer using LINQ to Entities and not Entity SQL or raw SQL to query the database. This makes your
code less fragile and easier to refactor.
Beware of lazy loading behavior when you return an entity to a higher layer in your application. If the
DbContext object is disposed and the entity has not been fully loaded, accessing its nested
properties may cause an exception.
Use the Entity Framework Fluent API (instead of data annotations) when you map an existing object
model to a database, and when the object model should not change as a result of the mapping.
Review Question(s)
Question: Why should you use Entity Framework and not direct database manipulation with
SQL statements in ADO.NET?
Tools
Visual Studio 2012
SQL Server 2012
Module 3
Creating and Consuming ASP.NET Web API Services
Contents:
Module Overview 3-1
Module Overview
ASP.NET Web API provides a robust and modern framework for creating HTTP-based services. In this
module, you will be introduced to HTTP-based services. You will learn how HTTP works and become
familiar with HTTP messages, HTTP methods, status codes, and headers. You will also be introduced to the
REST architectural style and Hypermedia.
You will learn how to create HTTP-based services by using ASP.NET Web API. You will also learn how to
host the services in IIS and how to consume them from various clients. At the end of this module, in the
lab "Creating the Traveler ASP.NET Web API Service", you will build an ASP.NET Web API service
application and host it in Internet Information Services (IIS).
Objectives
After you complete this module, you will be able to:
Lesson 1
HTTP Services
Hypertext Transfer Protocol (HTTP) is a communication protocol that was created by Tim Berners-Lee and
his team while working on the WorldWideWeb (later renamed World Wide Web) project. Originally
designed to transfer hypertext-based resources across computer networks, HTTP is an application layer
protocol that acts as the primary protocol for many applications, including the World Wide Web.
Because of its vast adoption and the ubiquity of web technologies, HTTP is now one of the most
popular protocols for building applications and services. In this lesson, you will be introduced to the basic
structure of HTTP messages and understand the basic principles of the REST architectural approach.
Lesson Objectives
After you complete this lesson, you will be able to:
Introduction to HTTP
HTTP is a first-class application protocol that was
built to power the World Wide Web. To support
such a challenge, HTTP was built to allow
applications to scale, taking into consideration
concepts such as caching and stateless
architecture. Today, HTTP is supported by many
different devices and platforms, reaching most
computer systems available today.
HTTP also offers simplicity, by using text messages
and following the request-response messaging
pattern. HTTP differs from most application layer
protocols because it was not designed as a
Remote Procedure Calls (RPC) mechanism or a Remote Method Invocation (RMI) mechanism. Instead,
HTTP provides semantics for retrieving and changing resources that can be accessed directly by using an
address.
HTTP Messages
HTTP is a simple request-response protocol. All
HTTP messages contain the following elements:
Start-line
Headers
An empty line
Body (optional)
Request Messages
Request messages are sent by the client to the server. Request messages have a specific structure based
on the general structure of HTTP messages.
An HTTP Request
GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive
The first and most distinct difference between request and response messages is the structure of the
start-line, which in requests is called the request-line.
Request-line
This HTTP request message's start-line is a typical request-line with the following space-delimited parts:
HTTP method. This HTTP request message uses the GET method, which indicates that the client is
trying to retrieve a resource. Verbs will be covered in-depth in the topic Using Verbs later in this
lesson.
Request URI. This part represents the URI to which the message is being sent.
HTTP version. This part indicates that the message uses HTTP version 1.1.
Headers
This request message also has several headers that provide metadata for the request. Although headers
exist in both response and request messages, some headers are used exclusively by one of them. For
example, the Accept header is used in requests to communicate the kinds of responses the clients would
prefer to receive. This header is a part of a process known as content negotiation that will be discussed
later in this module.
Body
The request message has no body. This is typical of requests that use the GET method.
Response Messages
Response messages also have a specific structure based on the general structure of HTTP messages.
The HTTP Response returned by the Server for the above Request
HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Tue, 13 Nov 2012 18:05:11 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close
{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"L
astName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft
road","Passport":"AB123456789"}
Status-Line
HTTP response start-lines are called status-lines. This HTTP response message has a typical status-line with
the following space-delimited parts:
HTTP version. This part indicates that the message uses HTTP version 1.1.
Status-Code. Status-codes help define the result of the request. This message returns a status-code
of 200, which indicates a successful operation. Status codes will be covered in-depth later in this
lesson.
Reason-Phrase. A reason-phrase is a short text that describes the status code, providing a
human-readable version of the status-code.
Headers
Similar to the request message, the response message also has headers. Some headers are unique for
HTTP responses. For example, the Server header provides technical information about the server software
being used. The Cache-Control and Pragma headers describe how caching mechanisms should treat the
message.
Other headers, such as the Content-Type and Content-Length, provide metadata for the message body
and are used in both requests and responses that have a body.
Body
A response message returns a representation of a resource in JavaScript Object Notation (JSON). The
JSON, in this case, contains information about a specific traveler in a traveling management system. The
format of the representation is communicated by using the Content-Type header describing what is
known as media type. Media types are covered in-depth later in this lesson.
Port (optional). The port defines a specific port to be addressed. If not present, a default port will be
used. Different schemas can define different default ports. The default port for HTTP is 80.
Absolute path (optional). The path provides additional data that together with the query describes
a resource. The path can have a hierarchical structure similar to a directory structure, separated by the
slash sign (/).
Query (optional). The query provides additional nonhierarchical data that together with the path
describes a resource.
Different URIs can be used to describe different resources. For example, the following URIs describe
different destinations in an airline booking system:
http://localhost/destinations/seattle
http://localhost/destinations/london
When accessing each URI, a different set of data, also known as a representation, will be retrieved.
Using Verbs
HTTP defines a set of methods or verbs that add
an action like semantics to requests. HTTP 1.1
defines an extensible set of eight methods, each
with different behavior. For example, the
following request uses the GET method to retrieve
information about a specific traveler in an airline
traveler system.
GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive
In the above example, a method is defined in the first segment of the request-line and communicates
what the request is intended to perform. For example, the GET method used in the request above
communicates that the request is intending to retrieve data about an entity and not trying to modify it.
This behavior makes GET compatible with both of the properties an HTTP method might have: it is both
safe and idempotent.
Safe verbs. These are verbs that are not intended to have any side effects on the resource state on
the server, other than retrieving data.
Idempotent verbs. These are verbs that are intended to have the same effect on the resource state
when the same request is sent to the server multiple times. For example, sending a single DELETE
request to delete a resource should have the same effect as sending the same DELETE request
multiple times.
Verbs are a central mechanism in HTTP and one of the mechanisms that make HTTP the powerful
protocol it is. Understanding what each verb does is very important for developing HTTP-based services.
The following verbs are defined in HTTP 1.1:
HEAD (safe, idempotent). Requests intended to have the identical result of GET requests, but
without returning a message body. Used to check request validity and to retrieve header
information without transferring the message body.
PUT (idempotent). Requests intended to store the entity sent in the request at the request URI,
completely overriding any existing entity in that URI. Used to create and update resources.
DELETE (idempotent). Requests intended to delete the entity identified by the request URI. Used to
delete resources.
For more information about HTTP methods, see the HTTP 1.1 Request For Comments (RFC 2616).
Methods definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298758&clcid=0x409
3xx Redirection. Codes that indicate that additional action should be taken by the client (usually
with respect to a different network address) in order to achieve the result that you want.
Examples: 301 Moved Permanently, 302 Found, 303 See Other.
4xx Client Error. Codes that indicate an error that is caused by the client's request. This might be
caused by a wrong address, a bad message format, or any kind of invalid data passed in the
client's request. Examples: 400 Bad Request, 401 Unauthorized, 404 Not Found.
5xx Server Error. Codes that indicate an error that was caused by the server while it tried to
process a seemingly valid request. Examples: 500 Internal Server Error, 505 HTTP Version Not
Supported.
For more information about HTTP status codes, see the HTTP 1.1 Request For Comments (RFC 2616).
HTTP Status-Codes definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298759&clcid=0x409
Introduction to REST
Until now in this module, you have learned how
HTTP acts as an application layer protocol. HTTP is
used to develop both websites and services.
Services developed by using HTTP are generally
known as HTTP-based services.
State management
In this lesson, you will learn about these capabilities. For more information about REST, see Roy Fielding's
dissertation, Architectural Styles and the Design of Network-based Software Architectures.
Architectural Styles and the Design of Network-based Software Architectures by Roy Fielding
http://go.microsoft.com/fwlink/?LinkID=298760&clcid=0x409
Services that use the REST architectural style are also known as RESTful services. A simple way to
understand what makes a service RESTful is to use a taxonomy called the Richardson Maturity Model, first
suggested by Leonard Richardson in his talk at the QCon San Francisco conference in 2008.
Level zero services. Use HTTP as a transport protocol only, ignoring the capabilities of HTTP as an
application layer protocol. Level zero services use a single address, also known as an endpoint, and a
single HTTP method, which is usually POST. SOAP services and other RPC-based protocols are
examples of level zero services.
Level one services. Identify resources by using URIs. Each resource in the system has its own URI by
which the resource can be accessed.
Level two services. Use the different HTTP verbs to allow the user to manipulate the resources and
create a full API based on resources.
Level three services. Whereas the first two levels only emphasize the suitable use of HTTP
semantics, level three services introduce Hypermedia, an extension of the term Hypertext, as a means
for resources to describe their own state in addition to their relations to other resources.
For more information about the Richardson Maturity Model, see Leonard Richardson's presentation and
notes.
Hypermedia
When the World Wide Web started, it strongly affected the way humans consume data. Alongside
abilities, such as remote access to data and the ability to search a global knowledge base, the World Wide
Web also introduced Hypertext. Hypertext is a nonlinear format that enables readers to access data
related to a specific part of the text by using Hyperlinks. The term Hypermedia describes a logical
extension to the same concept. Hypermedia-based systems use Hypermedia elements, known as
hypermedia controls, such as links and HTML forms, to enable resources to describe their current state
and other resources that are related to them.
If the user cannot book the flight, for any number of reasons (it is fully booked, canceled, and so on), the
Hypermedia control should not be returned in the resource's representation.
This response represents a flight that enables booking in its current state.
{
"Source":{"Country":"Italy","City":"Rome"},
"Destination":{"Country":"France","City":"Paris"},
"Departure":"2014-02-01T08:30:00",
"Duration":"02:30:00",
"Price":387.0,
"FlightNumber":"BY001",
"links":[
{
"rel": "booking",
"Link": "http://localhost/flights/by001/booking"
}
]
}
Hypermedia is what differentiates RESTful services from other HTTP-based services. It is a simple but powerful concept that
enables a range of capabilities and patterns including service versioning, aspect management, and more
which are beyond the scope of this course. Today, more and more formats and APIs are created by using
Hypermedia.
One of the media types supporting Hypermedia is the Hypertext Application Language (HAL). The HAL
media type offers link-based Hypermedia. For more information about HAL, see the HAL format
specifications.
Media Types
HTTP was originally designed to transfer
Hypertext. Hypertext is a nonlinear format that
contains references to other resources, some of
which are other Hypertext resources. However,
some resources contain other formats such as
Image files and videos, which required HTTP to
support the transfer of different types of message
formats. To support different formats, HTTP uses
Multipurpose Internet Mail Extensions (MIME)
types, also known as media types. MIME types
were originally designed for use in defining the
content of email messages sent over SMTP.
Media types are made up of two parts, a type and a subtype, optionally followed by type-specific
parameters. For example, the type text indicates a human-readable text and can be followed by subtypes
such as html, which indicates HTML content, and plain, which indicates a plain text payload.
In addition, the text type supports a charset parameter, so that a declaration such as text/plain;
charset=utf-8 is also valid.
In HTTP, media types are declared by using headers as part of a process that is known as content
negotiation. Content negotiation is not restricted to media types and includes support for language
negotiation, encoding, and more. The following section shows how content negotiation is used for
handling media types.
This request message uses the Accept header in order to communicate to the server what media types it
can accept.
Although the server should try to fulfill the request for content, this is not always possible. Be aware that
in the previous request, the type */* indicates that if text/html and application/xhtml+xml are not
available, the server should return whatever type it can.
This response message uses the Content-Type header in order to declare what media type it uses for the
entity-body.
HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Sat, 17 Nov 2012 13:27:20 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close
{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"L
astName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft
road","Passport":"AB123456789"}
Media types provide the structure of HTTP message bodies. Content negotiation enables servers and clients to
set expectations about the content they will exchange during their HTTP transaction. Content
negotiation is not limited to media types. For example, content negotiation is used to negotiate content
compression by using the Accept-Encoding header, localization by using the Accept-Language header,
and more.
Content negotiation in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298763&clcid=0x409
Lesson 2
Creating an ASP.NET Web API Service
ASP.NET Web API is the first full-featured framework for developing HTTP-based services in the .NET
Framework. Using ASP.NET Web API gives developers reliable methods for creating, testing, and
deploying HTTP-based services. In this lesson, you will learn how to create ASP.NET Web API services and
how they are mapped to the different parts of HTTP. You will also learn how to interact directly with HTTP
messages and how to host ASP.NET Web API services.
Lesson Objectives
After you complete this lesson, you will be able to:
Describe ASP.NET Web API and how it is used for creating HTTP-based services.
Create routing rules.
In 2009, Microsoft released the WCF REST Starter Kit. This added the new WebServiceHost class for
hosting HTTP-based services, and also new capabilities like help pages and Atom support. When the .NET
Framework version 4 was released most of the capabilities of the WCF REST Starter Kit were already rolled
into WCF. This includes support for IIS hosting in addition to new Visual Studio templates available
through the Visual Studio Extensions Manager. But even then, WCF still lacked support for many HTTP
scenarios.
The need for a comprehensive solution for developing HTTP services in the .NET Framework justified
creating a new framework. Therefore, in October 2010, Microsoft announced the WCF Web API, which
introduced a new model and additional capabilities for developing HTTP-based services. These capabilities
included:
Better support for content negotiation and media types.
Testability.
Integration with other relevant frameworks like Entity Framework and Unity.
The WCF Web API team released six preview versions until, in February 2012, they were united with the
ASP.NET team, forming ASP.NET Web API.
Routing Tables
ASP.NET uses the
System.Web.Routing.RouteTable class to hold a
data structure that contains different routes that
were configured before the initialization of the
host. A route contains a URI template and default
values for the template. ASP.NET uses routes to map HTTP requests based on their request-URI and HTTP
method to the correlating code in the server.
Defining Routes
ASP.NET Web API routes are defined by using the MapHttpRoute extension method as is shown in the
following code.
This example shows the configuration of a simple route based on the name of the controller.
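A typical registration of such a route, consistent with the template discussed in the next paragraph, might look like this (the route name "DefaultApi" is illustrative):

```csharp
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);
```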
The following headings discuss controllers and actions in-depth, because understanding what controllers
and actions are is important for understanding routes.
When ASP.NET Web API receives a request that matches the template in the route, it looks for a controller
that matches the value that was passed in the controller placeholder of the URI template by name. For
example, a URI with the following URI relative path, "api/flights/by001", will be evaluated against the
template defined in the earlier example ("api/{controller}/{id}"). ASP.NET Web API will look for a
controller that is named FlightsController.
This controller maps when the flights value is passed as the value for the {controller} placeholder.
An Action definition
public class FlightsController : ApiController
{
public HttpResponseMessage Get(string id)
{
// Place code here to return an HttpResponseMessage object
}
}
Note: This convention only supports the GET, HEAD, PUT, POST, OPTIONS, PATCH, and
DELETE methods. However, actions also support attribute-based routing, described later in this
lesson.
For parameter bindings, simple types include all .NET primitive types with the addition of DateTime,
Decimal, TimeSpan, String, and Guid.
Defining Controllers
To create a controller, you have to do the following:
Applying Filters. ASP.NET Web API filters let developers extend the request/response pipeline.
Before executing an action method, the ApiController class is in charge of applying and executing
the filters in the correct order before and after the execution of the action methods.
The Request property. This API provides access to the HttpRequestMessage representing the HTTP
request for the operation. HttpRequestMessage class is discussed in-depth in lesson 3, Handling
HTTP Requests and Responses, of this module.
The Configuration property. The Configuration property exposes the configuration that is being
used by the host.
Additional Reading: Filters are discussed in-depth throughout Module 4. Action filters are
discussed in Module 4, Lesson 1, The ASP.NET Web API Request Pipeline; Exception filters are
discussed in Module 4, lesson 2, The ASP.NET Web API Response Pipeline; and Authorization
filters are discussed in Module 4, lesson 4, Implementing security in ASP.NET Web API
Services.
The method selection is based on the HTTP method the request used and on the request-URI.
There are several techniques for mapping Actions:
Mapping to HTTP methods based on convention.
In addition to matching the HTTP method or request-URI to the method name or attribute, ASP.NET Web
API takes the parameters that are passed to the method into consideration and makes sure that they
match.
[AcceptVerbs(AcceptVerbs.Get)]
public HttpResponseMessage AirTrips()
{
}

[HttpDelete]
public HttpResponseMessage Flights(int id)
{
}
Note: This convention and the HttpVerb enum support only the GET, HEAD, PUT, POST,
OPTIONS, PATCH, and DELETE methods.
Demonstration Steps
1. Open Visual Studio 2012 and create a new ASP.NET MVC 4 Web Application project, by using the
Web API template. Name the new project MyApp.
2. Review the content of the WebApiConfig.cs file that is under the App_Start folder.
3. Review the content of the ValuesController.cs file that is under the Controllers folder. The
parameterless Get action method can be invoked by using HTTP (for example, using the /api/values
relative URI).
4. Run the project without debugging, and access the parameterless Get action method from the
browser.
In the browser, append api/values to the end of the address, and press Enter.
5. In the ValuesController class, decorate the parameterless Get action with the [ActionName]
attribute, and set the action name to List.
6. In the WebApiConfig class, add a new route to support MVC-style invocation.
Parameter Value
name ActionApi
routeTemplate api/{controller}/{action}/{id}
Place the call to the method before the existing routing code.
7. Run the project and access the parameterless Get action method from the browser, using the
MVC-style routing.
In the browser, append api/values/list to the end of the address, and press Enter.
Open the list.json file in Notepad and observe its content.
Question: How does ASP.NET Web API know which method to invoke when it receives a request
from the client?
Lesson 3
Handling HTTP Requests and Responses
Creating an instance of a class and finding the method to execute is not always enough. In order to
provide a real solution for HTTP-based services, ASP.NET Web API has to provide additional functionality
for interacting with HTTP messages. This functionality includes mapping parts of the HTTP request to
method parameters in addition to a comprehensive API for processing and controlling HTTP messages.
Using that API, you can now easily interact with headers in the requests and response messages, control
status codes, and more.
Lesson Objectives
After completing this lesson, you will be able to:
The Entity-body. In some HTTP messages, the message body passes data.
Note: Headers are also used to pass metadata, and are not part of the business logic.
Header data is not bound to method parameters by default and is accessed by using the
HttpRequestMessage class described later in this lesson.
By default, ASP.NET Web API differentiates simple and complex types. Simple types are mapped from the
URI and complex types are mapped from the entity-body of the request. For parameter bindings, simple
types include all .NET primitive types (int, char, bool, and so on) with the addition of DateTime, Decimal,
TimeSpan, String, and Guid.
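As a sketch, in the following action (the controller and entity names are illustrative), id is a simple type and is bound from the URI, while traveler is a complex type and is read from the entity-body:

```csharp
public class TravelersController : ApiController
{
    // PUT api/travelers/1 with a Traveler representation in the body:
    // "id" comes from the URI, "traveler" is deserialized from the body
    public HttpResponseMessage Put(int id, Traveler traveler)
    {
        // Update the traveler here, then acknowledge
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}
```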
Retrieve the value of the Accept-Language header by using the Request property
public string Get(int id)
{
    // Select the client's preferred language from the Accept-Language header
    string bestLang = Request.Headers.AcceptLanguage
        .OrderByDescending(lang => lang.Quality ?? 1.0)
        .Select(lang => lang.Value.Split('-')[0])
        .FirstOrDefault();

    switch (bestLang)
    {
        case "en":
            return "Hello";
        case "da":
            return "Hej";
    }
    return string.Empty;
}
The System.Net.Http.HttpResponseMessage
class enables programmers to define every aspect
of the HTTP response message the action returns.
In order to control the HTTP response, you must create an action with HttpResponseMessage as its
return type. Inside the action, you have to use the Request.CreateResponse or
Request.CreateResponse<T> methods to create a new HttpResponseMessage.
This code example creates a new flight reservation and returns an HTTP message that has two important
characteristics: a 201 created status and a Location header with the URI of the newly created resource.
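Such an action might look like the following sketch (the entity, the save step, and the "DefaultApi" route name are assumptions):

```csharp
public HttpResponseMessage Post(Reservation reservation)
{
    // Persist the new reservation here (details omitted)

    // Return 201 Created, with the new resource serialized into the body
    HttpResponseMessage response =
        Request.CreateResponse(HttpStatusCode.Created, reservation);

    // Set the Location header to the URI of the newly created resource
    response.Headers.Location =
        new Uri(Url.Link("DefaultApi", new { id = reservation.ReservationId }));

    return response;
}
```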
This code example throws an HttpResponseException to return a 404 (Not Found) response.
Throwing an HttpResponseException
if (flight == null)
{
    throw new HttpResponseException(
        new HttpResponseMessage(HttpStatusCode.NotFound));
}
Demonstration Steps
1. In Visual Studio 2012, open the
D:\Allfiles\Mod03\Democode\ThrowHttpResponseException\start\start.sln solution.
2. Open the DestinationsController.cs from the Controllers folder, and review the contents of the Get
method.
3. Change the Get method so that it returns a Destination object and not an HttpResponseMessage
object.
4. Initialize the exception with a new HttpResponseMessage, and set the status code of the message to
HttpStatusCode.NotFound.
5. Run the project without debugging, and verify the Get method returns an HTTP 404 for unknown
destinations.
6. In the browser, append api/destinations/1 to the end of the address, and press Enter. Open the
file in Notepad and verify you see information for Seattle.
7. In the browser, append api/destinations/6 to the end of the address, and press Enter. Verify you
get an HTTP 404 response.
Question: In which case should you use HttpResponseException?
Lesson 4
Hosting and Consuming ASP.NET Web API Services
As with any other application, ASP.NET Web API services need a process that gives them a runtime
environment. This runtime must accommodate code that potentially serves a large number of clients.
When developing services, hosting environments provide the majority of the capabilities needed to
service client requests and maintain quality of service. After learning how to host the service, you will
learn how to consume it from various client environments, including HTML, JavaScript, and the .NET
Framework.
Lesson Objectives
After you complete this lesson, you will be able to:
Introduction to IIS
Internet Information Services (IIS) is a web server
that is part of Microsoft Windows. IIS was
first released in 1995 as an update to Windows NT
3.51 and has been included in every version of
Microsoft server operating systems since. IIS
provides a hosting environment for applications
and services, in addition to a set of utilities and
extensions for managing different aspects of the
application and service life cycle.
Extensibility. Provides a processing pipeline for messages that is built out of an extensible set of
components called modules. Each module is in charge of performing a specific action on the
messages passing through the pipeline.
Security. Provides built-in modules for handling security. This includes capabilities for managing
secure conversations with Secure Sockets Layer (SSL), handling different kinds of HTTP authentication
(Basic, Digest, and so on), and IP restriction.
Reliability. Provides a set of worker processes for applications and services. These worker processes
are called application pools and provide process management for one or more applications.
Application pools are monitored and managed by IIS and provide benefits such as isolation between
services and applications, resource management, and fault management.
Manageability. Provides management tools, including an MMC-based UI and a set of Microsoft
PowerShell cmdlets, for managing IIS core functionality and extensions. IIS also provides
built-in logging and diagnostics capabilities that simplify managing production environments.
Performance. Provides built-in caching and compression modules to improve the performance of
HTTP applications. IIS also uses the HTTP.SYS kernel-mode driver to listen for HTTP traffic. HTTP.SYS
also provides HTTP caching mechanisms that enable cached requests to be handled completely in
kernel mode, providing a high-performance caching mechanism.
Scalability. Provides a set of tools to manage multiple servers. These include centralized
configuration, sharing application files between servers, and a remote administration module that
enables you to manage servers in a centralized manner. IIS supports load balancing through
extensions, such as Microsoft Application Request Routing (ARR), which provides HTTP-based
message routing, and through Network Load Balancing (NLB), which provides load balancing at the
network layer.
By default, when you create an ASP.NET Web API project, Visual Studio 2012 uses IIS Express to host your
project, and not the regular IIS. IIS Express is a lightweight, self-contained version of IIS optimized for the
development environment. IIS Express can be installed on computers that do not have IIS installed, or on
computers that cannot run the latest version of IIS. For example, if you are developing on a computer that
is running Windows Server 2008, you have IIS 7 and cannot upgrade it to IIS 8. However, you can install
IIS 8.0 Express on that computer.
The following link describes in detail the differences between IIS and IIS Express.
The main difference between IIS and IIS Express is the security context. IIS Express uses the security
context of the logged-on user to start the hosting process, whereas IIS uses the identity defined in the
application pool. This is usually a non-privileged built-in account.
Note: Using a different security context can lead to differences in the behavior of the
application. For example, when you host your application in IIS Express, it might be able to access
the database because it uses your logged-on identity, which has administrative permissions in the
database. However, when the application is hosted in IIS, it might fail to access the
database because the identity used by the application pool does not have the required
permissions to log on to the database.
Ideally, after you have verified your application is running correctly, use IIS to host your application,
instead of IIS Express. To instruct Visual Studio 2012 to use IIS, right-click the ASP.NET Web API project in
the Solution Explorer window, and then click Properties. On the Web tab, scroll to the Servers group,
clear the Use IIS Express check box, and then click Create Virtual Directory to create a
directory for your web application in IIS.
This image is a snapshot of the ASP.NET web project properties in Visual Studio 2012, where you set the
kind of hosting server (IIS or IIS Express).
DNT: 1
Connection: Keep-Alive
Another way to issue HTTP requests from a browser is by using HTML forms. HTML forms are HTML
elements that create a form-like UI in the HTML document that lets the user enter and submit data to the
server. HTML forms contain sub-elements, called input elements, each of which represents a piece of data
both in the UI and in the resulting HTTP message.
This HTML form lets users submit a new location to the server from a web browser, generating a POST
request.
This HTTP message was generated by submitting the newLocation HTML form.
LocationId=7&Country=Belgium&State=&City=Brussels
The most flexible mechanism for issuing HTTP requests from a browser environment is JavaScript. Using
JavaScript provides two main capabilities that are lacking in other browser-based techniques:
Complete control over the HTTP requests (including HTTP method, headers, and body).
Asynchronous JavaScript and XML (AJAX). Using AJAX, you can send requests from the client after the
browser completes loading the HTML. Based on the result of the calls, you can use JavaScript to
update parts of the HTML page.
Demonstration Steps
1. In Visual Studio 2012, open the
D:\Allfiles\Mod03\Democode\ConsumingFromJQuery\Begin\JQueryClient\JQueryClient.sln
solution.
2. In the JQueryClient project, expand the Views folder, then expand the Home folder. Review the
content of the Index.cshtml file. Locate the script section and observe how the code uses jQuery to
retrieve the data from the server.
3. Add the following JavaScript code to the <script> element, to override the default behavior for
submitting a DELETE request to the server.
$("#deleteLocation").submit(function (event) {
event.preventDefault();
var desId = $(this).find('input[name="LocationId"]').val();
$.ajax({
type: 'DELETE',
url: 'destinations/' + desId
});
});
4. In the JQueryClient project, expand the Controllers folder, open the DestinationsController.cs file,
and place a breakpoint in the Delete method.
5. Press F5 to debug the application. In the browser, type 1 in the Location id box, and then click
Delete.
Although this code provides a simple asynchronous API, it is not common for the client to require a string
representation of the data. A more useful approach is to obtain a deserialized object based on the entity-
body.
To support serializing and deserializing objects, HttpClient uses a set of extensions defined in the
System.Net.Http.Formatting.dll assembly, which is part of the Microsoft ASP.NET Web API Client
Libraries NuGet package. The System.Net.Http.Formatting.dll assembly adds the extension methods to
the System.Net.Http namespace, so no additional using directive is needed.
This code example uses the ReadAsAsync<T> extension method to deserialize the content of the HTTP
message into a list of Destinations.
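A sketch of such a call, assuming a hypothetical base address and the Destination type used elsewhere in this module, might look like the following:

```csharp
HttpClient client = new HttpClient { BaseAddress = new Uri("http://localhost/") };
HttpResponseMessage response = await client.GetAsync("api/destinations");

// ReadAsAsync<T> is an extension method added by System.Net.Http.Formatting.dll
List<Destination> destinations =
    await response.Content.ReadAsAsync<List<Destination>>();
```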
Demonstration Steps
1. Open Visual Studio 2012 as an administrator and open the
D:\Allfiles\Mod03\Democode\ConsumingFromHttpClient\begin\HttpClientApplication\HttpClientApp
lication.sln solution.
2. Add the Microsoft ASP.NET Web API Client Libraries NuGet package to the
HttpClientApplication.Client project.
3. Add code to perform a GET request for the destinations resource inside the CallServer method and
print the response content as a string to the console window.
Call the client.GetAsync method with the relative URI api/Destinations, and use the await keyword
to call the method asynchronously.
Store the return value of the GetAsync method in a variable of type HttpResponseMessage. Name
the new variable message.
After the code you added in the previous step, call the message.Content.ReadAsAsync method with
the generic type List<Destination> to deserialize the response message to a list of Destination
objects. Use the await keyword to call the method asynchronously.
Question: What are the benefits of HttpClient that make it more useful than
HttpWebRequest and WebClient?
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 30 Minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
9. Verify that you received credentials to log on to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs in this course.
2. Change the access modifier of the class to public, and derive it from the ApiController class.
3. Create a private property named Travelers of type ITravelerRepository and initialize it in the
constructor.
Create a new property named Travelers of type ITravelerRepository.
Initialize the Travelers property with a new instance of the TravelerRepository class.
4. Create an action method named Get to handle GET requests.
The method receives a string parameter named id and returns an HttpResponseMessage object.
Call the FindBy method of the ITravelerRepository interface to search for a traveler using the id
parameter. The ID of the traveler is stored in the traveler's TravelerUserIdentity property.
If the traveler was found, use the Request.CreateResponse method to return an HTTP response
message with the traveler. Set the status code of the response to OK.
If a traveler was not found, use the Request.CreateResponse method to return an empty message.
Set the status code to NotFound (HTTP 404).
The method receives a Traveler parameter called traveler and returns an HttpResponseMessage
object.
Implement the method by calling the Add and then the Save methods of the Travelers repository.
Set the Location header of the response to the URI where you can access the newly created traveler.
The new URI should be a concatenation of the request URI and the new traveler's ID.
Note: You can refer to the implementation of the Post method in the
ReservationsController class for an example of how to set the Location header.
The method receives a string parameter called id and a Traveler parameter called traveler. The
method returns an HttpResponseMessage object.
If the traveler does not exist in the database, use the Request.CreateResponse method to return an
HTTP response message with the HttpStatusCode.NotFound status.
Note: To check if the traveler exists in the database, use the FindBy method as you did in
the Get method.
If the traveler exists, call the Edit and then the Save methods of the Travelers repository to update
the traveler, and then use the Request.CreateResponse method, to return an HTTP response
message with the HttpStatusCode.OK status.
Note: The HTTP PUT method can also be used to create resources. Checking if the
resources exist is performed here for simplicity.
Note: To check if the traveler exists in the database, use the FindBy method as you did in
the Get method.
If the traveler exists, call the Delete and then the Save methods of the Travelers repository, and then
use the Request.CreateResponse method, to return an HTTP response message with the
HttpStatusCode.OK status.
Results: After you complete this exercise, you will be able to run the project from Visual Studio 2012 and
access the travelers service.
2. In the BlueYonder.Companion.Client project, open the DataManager class from the Helpers folder
and implement the GetTravelerAsync method.
Build the relative URI using the string format "{0}travelers/{1}". Replace the {0} placeholder with the
BaseUri property and the {1} placeholder with the hardwareId variable.
Call the client.GetAsync method with the relative address you constructed. Use the await keyword
to call the method asynchronously. Store the response in a variable called response.
Check the value of the response.IsSuccessStatusCode property. If the value is false, return null.
If the value of the response.IsSuccessStatusCode property is true, read the response into a string by
using the response.Content.ReadAsStringAsync method. Use the await keyword to call the
method asynchronously.
4. Review the CreateTravelerAsync method. The method sets the Content-Type header to indicate
JSON content. The method then uses the PostAsync method to send a POST request to the server.
5. Insert a breakpoint at the beginning of the CreateTravelerAsync method.
6. Review the UpdateTravelerAsync method. The method uses the client.PutAsync method to send a
PUT request to the server.
7. Insert a breakpoint at the beginning of the UpdateTravelerAsync method.
3. Debug the client app and verify that you break before sending a GET request to the server. Press F5
to continue running the code.
4. Go back to the virtual machine 20487B-SEA-DEV-A and debug the service code.
The breakpoint you have set in the Get method of the TravelersController class should be
highlighted.
The breakpoint you have set in the Post method solution should be highlighted.
7. Go back to the virtual machine 20487B-SEA-DEV-C and use the client app to purchase a flight from
Seattle to New York.
Display the app bar, and search for the word New. Purchase the trip from Seattle to New York.
Fill in the traveler information according to the following table and then click Purchase.
Field Value
Passport Aa1234567
8. Go back to the virtual machine 20487B-SEA-DEV-A and debug the service code.
The breakpoint you have set in the Put method solution should be highlighted.
Inspect the contents of the traveler parameter.
10. Go back to the virtual machine 20487B-SEA-DEV-A and stop the debugging in Visual Studio 2012.
Results: After you complete this exercise, you will be able to run the BlueYonder Companion client
application and create a traveler when purchasing a trip. You will also be able to retrieve an existing
traveler and update its details.
Question: Why did you need to return an HttpResponseMessage from the Post action
method?
Review Question(s)
Question: What are ASP.NET Web API controllers used for?
Tools
IIS
Module 4
Extending and Securing ASP.NET Web API Services
Contents:
Module Overview 4-1
Module Overview
ASP.NET Web API provides a complete solution for building HTTP services, but services often have various
needs and dependencies. In many cases, you will need to extend or customize the way ASP.NET Web API
executes your service, for example, to apply error handling and logging, integrate with other
components of your application, and support other standards that are available in the HTTP world.
Understanding the way ASP.NET Web API works is important when you extend it. The
division of responsibilities between components and the order of execution are important when you
intervene in the way ASP.NET Web API executes your service.
ASP.NET Web API also includes built-in extensions you can use. In this module, you will learn how to
extend your services to support OData.
Finally, with ASP.NET Web API, you can also extend the way you interact with other parts of your system.
With the dependency resolver mechanism, you can control how instances of your service are created,
giving you complete control on managing dependencies of the services.
Objectives
After completing this module, students will be able to:
Lesson 1
The ASP.NET Web API Pipeline
Based on your organization's needs and requirements, you may need to customize and extend the
ASP.NET Web API pipeline. This lesson describes the ASP.NET Web API architecture. It also covers
various tools and functionalities, such as filters, asynchronous actions, and media type formatters, that
you can use to customize and extend the ASP.NET Web API architecture.
Lesson Objectives
After completing this lesson, students will be able to:
Architecture Overview
The ASP.NET Web API processing architecture is made of three layers:
Hosting
Message handlers
Controllers
Hosting
The hosting layer is in charge of interacting with the underlying communication infrastructure, creating an
HttpRequestMessage object from the incoming request, and sending the object down the message
handling pipeline to the message handler layer. The hosting layer is also in charge of converting
HttpResponseMessage objects received from the message handlers into HTTP messages sent through
the underlying communication infrastructure.
ASP.NET Web API has two implementations of the hosting layer:
Web-hosting, implemented in System.Web.Http.WebHost.dll, uses the HttpControllerHandler
class, which is an asynchronous IIS handler. This provides a hosting layer for hosting in Internet
Information Services (IIS).
Self-hosting, implemented in System.Web.Http.SelfHost.dll, uses the HttpSelfHostServer class,
which enables hosting in any .NET process, such as a console application or a Windows service.
Note: WCF will be introduced in Module 5, Creating WCF Services in Course 20487.
Message Handlers
Message handlers are objects that are chained to each other to form a pipeline. Every handler receives an
HttpRequestMessage object, performs some processing on the message before passing it to the next
handler in the pipeline, and returns an HttpResponseMessage object. This allows ASP.NET Web API to
separate the concerns of the different processing that should be applied to every message, and provides
an extensibility point for developers. Message handlers are covered later in this lesson.
After the hosting layer has completed creating the HttpRequestMessage, it creates a new instance of the
System.Web.Http.HttpServer class, which is a message handler. When an instance of the HttpServer
class is initialized, it creates a chain of message handlers in the following order:
Custom Message Handlers. With ASP.NET Web API, you can create your own message handlers and
configure the host to execute them. When HttpServer starts processing a message, custom message
handlers are processed first. Custom message handlers are covered in depth in the next topic,
Message Handlers.
HttpRoutingDispatcher. After the custom message handlers, ASP.NET Web API adds a message
handler of type HttpRoutingDispatcher. The HttpRoutingDispatcher class is in charge of finding
the route that matches the HttpRequestMessage.
Controllers
The final layer in the ASP.NET Web API pipeline is executed by the controllers themselves. When the
ExecuteAsync method of a controller is called, it starts a process that should result in the execution of an
action method that processes the request and returns a response. The process is made up of the following
steps:
Action Selection. The first step for executing an action method is identifying which action should be
executed. Action selection is covered in Module 3, Creating and Consuming ASP.NET Web API
Services, Lesson 2, Creating and Consuming ASP.NET Web API Services in Course 20487.
Creating the Filters Pipeline. Each action can have a set of components called filters associated with
it. Similar to message handlers, filters also provide a way to create a pipeline of processing units but
only for an action and not for the entire host. ASP.NET Web API has three types of filters executed in
the following order:
4-4 Extending and Securing ASP.NET Web API Services
HttpActionBinding. The HttpActionBinding class performs the process of parameter binding and is
executed after the authorization filters. Parameter binding is covered in Module 3, Creating and
Consuming ASP.NET Web API Services, Lesson 2, Creating and Consuming ASP.NET Web API
Services in Course 20487.
The following code shows a simple handler created by deriving from the DelegatingHandler class and
overriding its SendAsync method.
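A minimal sketch of such a handler (the TraceHandler name is an assumption for illustration) might look like this:

```csharp
public class TraceHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Trace.WriteLine("Request: " + request.RequestUri);

        // Pass the request to the next handler in the pipeline
        HttpResponseMessage response =
            await base.SendAsync(request, cancellationToken);

        Trace.WriteLine("Response: " + response.StatusCode);
        return response;
    }
}
```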
You can add custom message handlers to the ASP.NET Web API pipeline by using configuration. ASP.NET
Web API hosts (both HttpSelfHostServer and HttpControllerHandler) can be configured by using the
System.Web.Http.HttpConfiguration class that is passed to their constructor.
Self-hosting
When using a self-host, you need to create a new instance of the
System.Web.Http.SelfHost.HttpSelfHostConfiguration class. The HttpSelfHostConfiguration class
derives from the HttpConfiguration class and adds configuration capabilities for things that, in web-
hosting, are managed by Internet Information Services (IIS), such as certificates and timeouts. After
you have created a new instance of the HttpSelfHostConfiguration class, you create a new instance of
the handler and call the Add method of the MessageHandlers property.
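A sketch of this configuration, assuming a hypothetical TraceHandler message handler and base address, might look like the following:

```csharp
HttpSelfHostConfiguration config =
    new HttpSelfHostConfiguration("http://localhost:8080");

// Add a custom message handler to the pipeline
config.MessageHandlers.Add(new TraceHandler());

// Map a default route so controllers can be reached
config.Routes.MapHttpRoute("Default", "api/{controller}/{id}",
    new { id = RouteParameter.Optional });

using (HttpSelfHostServer server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();
    Console.WriteLine("Server is running. Press Enter to quit.");
    Console.ReadLine();
}
```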
Web-hosting
In web-hosting, the HttpServer is created by the HttpControllerHandler when the first request is
received. To configure the HttpServer, you need to set the GlobalConfiguration.Configuration static
property in the same way you configured the HttpSelfHostConfiguration object in the preceding
example. The initialization should be performed in the Application_Start method of the Global.asax file,
before the first request is handled.
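A sketch of this initialization, assuming the same hypothetical TraceHandler, might look like the following:

```csharp
// In Global.asax.cs
protected void Application_Start()
{
    // Configure the web-hosted HttpServer before the first request is handled
    GlobalConfiguration.Configuration.MessageHandlers.Add(new TraceHandler());
}
```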
Filters
Delegating handlers are applied early in the
ASP.NET Web API pipeline, before the request
reaches the controller. This means that any
delegating handler that is configured for a host
will be executed for every request and response
the host handles.
Authorization Filters. These are classes that implement the IAuthorizationFilter interface.
Authorization filters are the first type of filters to be executed in the filters pipeline and are in charge
of validating whether the request is authorized. A common use is to return a 401 (Unauthorized)
response if the request is not authenticated, or 403 (Forbidden) if the request is authenticated but the
user has no permission to execute the action.
The following code is an example of an authorization filter that searches for an ASP.NET session variable
called user for authorizing a user.
public Task<HttpResponseMessage> ExecuteAuthorizationFilterAsync(
    HttpActionContext actionContext, CancellationToken cancellationToken,
    Func<Task<HttpResponseMessage>> continuation)
{
    if (HttpContext.Current.Session["user"] == null)
        throw new HttpResponseException(
            new HttpResponseMessage(HttpStatusCode.Unauthorized)
        );
    return continuation();
}
Note: The usage of the Authorization filter is explained in Lesson 3, Implementing Security
in ASP.NET Web API.
Action Filters. These are classes that implement the IActionFilter interface. Action filters are
executed later in the filter pipeline, after the authorization filters are executed and
after parameter binding takes place. You can use action filters to extend the ASP.NET Web API
pipeline in a similar way to delegating handlers. There are two main differences between action filters
and delegating handlers. The first is that action filters can be applied to specific actions or controllers.
The second is that action filters do not receive an
HttpRequestMessage as a parameter. Instead, action filters receive a parameter of type
HttpActionContext. The HttpActionContext class provides a more complete object model, which
includes access to the action's arguments, the model state, the request and response, and more.
The following code sample shows an action filter that uses the System.Diagnostics.Trace class to emit
traces.
public async Task<HttpResponseMessage> ExecuteActionFilterAsync(
    HttpActionContext actionContext, CancellationToken cancellationToken,
    Func<Task<HttpResponseMessage>> continuation)
{
    Trace.WriteLine("Trace filter start");
    HttpResponseMessage response = await continuation();
    Trace.WriteLine("Trace filter end");
    return response;
}
Exception Filters. These are classes that implement the IExceptionFilter interface and are used to
handle exceptions. Exception filters are executed after the completion of other filters, and only if the
Task<HttpResponseMessage> returned by the filters pipeline is in a faulted state.
Demonstration Steps
1. Open the RequestResponseFlow.sln solution from the
D:\Allfiles\Mod04\DemoFiles\RequestResponseFlow\end\RequestResponseFlow folder.
2. Add a trace handler to the RequestResponseFlow.Web project by creating a new class called
TraceHandler.
3. Ensure that the TraceHandler class is a Delegating Handler by deriving from the DelegatingHandler
class.
4. Implement the SendAsync method by writing a start message and the request to the trace log,
calling the base method, writing an end message and finally returning the response from the base
method.
5. Add the trace handler class to the message handlers pipeline by adding a new instance of the
TraceHandler class to the MessageHandlers collection of the configuration object.
6. Add a new filter to the RequestResponseFlow.Web project by creating a new class called
TraceFilterAttribute.
7. Ensure that the TraceFilterAttribute class is an action filter by deriving it from the FilterAttribute class
and implementing the IActionFilter interface.
8. Implement the ExecuteActionFilterAsync method by writing a start message to the trace log,
followed by each of the individual elements in the ActionArguments collection, calling the
continuation and waiting for it to finish, and finally writing an end message to the trace log.
9. Implement the AllowMultiple property by returning true. This property indicates if the attribute can
be applied multiple times on the same action or controller.
Asynchronous Actions
One of the most powerful capabilities of ASP.NET
Web API is its support for building asynchronous
actions. Asynchronous actions provide a simple-to-use
mechanism that you can use to improve
service scalability when performing I/O-bound
operations.
Higher-level APIs, such as ADO.NET and HttpClient, provide both synchronous and asynchronous
operations.
The following code shows a synchronous call using the WebRequest API.
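A minimal synchronous sketch, assuming a hypothetical service URI and a request object named Client to match the description that follows, might be:

```csharp
WebRequest Client = WebRequest.Create("http://localhost/api/countries");

// GetResponse blocks the executing thread until the response arrives
using (WebResponse response = Client.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string content = reader.ReadToEnd();
    Console.WriteLine(content);
}
```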
The preceding code is relatively easy to follow. However, there is one line that you should pay close
attention to. When calling the Client.GetResponse method, the executing thread is blocked waiting for
the response. This blocking behavior is unnecessary, considering that most of the
Client.GetResponse method execution is carried out by the network card and the remote server.
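An asynchronous version of the same call, sketched with HttpClient and the await keyword (the URI is an assumption), might be:

```csharp
public static async Task<string> GetCountriesAsync()
{
    HttpClient client = new HttpClient();

    // GetAsync returns immediately with a Task; the thread is not blocked
    HttpResponseMessage response =
        await client.GetAsync("http://localhost/api/countries");

    // Execution resumes here after the response arrives
    return await response.Content.ReadAsStringAsync();
}
```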
The preceding code uses the await keyword to simplify the call to the asynchronous
HttpClient.GetAsync method. While this code seems sequential during the execution, it is actually
divided into the following steps:
When calling the HttpClient.GetAsync method, the method immediately returns a Task representing
its asynchronous execution and the current thread returns.
When using the await keyword, the C# compiler generates a continuation method that includes all
the code following the await statement. This code will be used as the continuation of the task
returned by the HttpClient.GetAsync method, which is invoked by the Input/Output Completion
Port (IOCP).
The following code sample shows an asynchronous service call executed from inside an asynchronous
action.
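A sketch of such an action, assuming a hypothetical external service URI that returns an XML list of countries, might look like the following:

```csharp
public class CountriesController : ApiController
{
    public async Task<IEnumerable<string>> Get()
    {
        HttpClient client = new HttpClient();

        // Await the external service call without blocking a thread
        HttpResponseMessage response =
            await client.GetAsync("http://localhost/DataService/api/countries");
        Stream body = await response.Content.ReadAsStreamAsync();

        // Load the XML response and project the country names
        XDocument countries = XDocument.Load(body);
        return countries.Root.Elements().Select(e => e.Value).ToList();
    }
}
```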
Demonstration Steps
1. Open the AsynchronousActions.sln solution from the
D:\Allfiles\Mod04\DemoFiles\AsynchronousActions\begin\AsynchronousActions folder.
2. Observe the code in the CountriesController class. The Get method calls the GetCountries method,
which uses a synchronous web request call to retrieve the list of countries from an external web
service. To better utilize the thread pool, the Get method and the GetCountries method should both
run asynchronously.
3. Change the GetCountries method to asynchronous by adding the async keyword to the method
declaration and returning a Task<XDocument> instead of XDocument.
4. Replace the code that creates an HttpWebRequest with a code that creates a new HttpClient
object. Store the object in the client variable.
5. Replace the code that sets the client.Accept property with the matching HttpClient code. Use the
DefaultRequestHeaders.Accept property to access the Accept HTTP header, and add the
application/xml media type.
6. Replace the client.GetResponse method call with the matching HttpClient code. Use the GetAsync
method to call the service asynchronously, and add the await keyword before calling the method to
ensure the response variable is set after the list of countries is retrieved.
7. Replace the response.GetResponseStream method call with the matching HttpResponseMessage
code. Use the Content property to get the HTTP message, and then use the ReadAsStreamAsync
method to get the body of the response asynchronously. Add the await keyword before calling the
method to ensure the Load method is called after the body is read asynchronously.
8. In the Get method, add the await keyword before calling the GetCountries method to ensure the
result variable is set after the list of countries is retrieved and loaded to the XDocument object.
9. Change the Get method to asynchronous by adding the async keyword to the method declaration
and returning a Task<IEnumerable<string>> instead of IEnumerable<string>.
10. Start the DataServices web application, and then start the AsynchronousActions.Web web
application. Verify you see the list of countries in the browser.
Sometimes, a media type is supported only for specific types. For example, an image might be a
valid media type when requesting a resource for an employee in a company, but not for a department.
The MediaTypeFormatter class has the CanReadType and CanWriteType abstract methods that can be
used to define which types can be read or written using the specific media type formatter.
Finally, you can implement the actual process of reading or writing the data using the
ReadFromStreamAsync and WriteToStreamAsync methods.
The following code demonstrates the use of the WriteToStreamAsync method to provide a list of
employees using the CSV file format.
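The CSV listing itself did not survive in this copy. The following is a hedged reconstruction of such a formatter; the Employee type and its Id and Name properties are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class EmployeeCsvFormatter : MediaTypeFormatter
{
    public EmployeeCsvFormatter()
    {
        // The formatter handles the text/csv media type
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/csv"));
    }

    public override bool CanReadType(Type type)
    {
        return false; // reading CSV is not supported in this sketch
    }

    public override bool CanWriteType(Type type)
    {
        // Only lists of employees can be written as CSV
        return typeof(IEnumerable<Employee>).IsAssignableFrom(type);
    }

    public override async Task WriteToStreamAsync(Type type, object value,
        Stream writeStream, HttpContent content, TransportContext transportContext)
    {
        StreamWriter writer = new StreamWriter(writeStream);
        foreach (Employee employee in (IEnumerable<Employee>)value)
        {
            // One CSV line per employee
            await writer.WriteLineAsync(employee.Id + "," + employee.Name);
        }
        await writer.FlushAsync();
    }
}
```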
Demonstration Steps
1. Open the ImagesWithMediaTypeFormatter.sln solution from the
D:\Allfiles\Mod04\DemoFiles\ImagesWithMediaTypeFormatter folder.
2. Explore the content of the ValuesController class. The controller handles Value objects.
3. Explore the content of the Value class in the ValuesController.cs file. The [IgnoreDataMember]
attribute prevents the serialization of the Thumbnail property.
4. Open the ImageFormatter.cs file from the Formatters folder and examine its content. The
constructor of the ImageFormatter class uses the SupportedMediaTypes collection to specify the
mime types supported by the media type formatter.
5. Observe the code in the CanWriteType method. The formatter is used when the content is a Value
object.
6. Observe the code in the WriteToStream method. The method uses the Thumbnail property of the
Value object to locate the image file and return it instead of the object returned by the controller's
action.
7. Open the UriFormatHandler.cs file from the Formatters folder and examine its content. The
method checks the request's URL. If the extension in the URL matches one of the image types, the
extension is removed from the URL, a matching mime type is added to the request, and the request is
sent to the next component in the pipeline.
8. Run the web application without debugging, open the developer tools window by pressing F12, and
in the Network tab, click Start capturing.
9. Enter a value from 0 to 9, and then click Get default. Use the Network tab to view the request's
Accept header and the response's Content-Type header. When the Accept header is set to */*, the
default content type of ASP.NET Web API is JSON. Click Clear to clear the list of requests.
10. Click Get JSON and use the Network tab to view the request's Accept header, the response's
Content-Type header, and the response's body. When the Accept header is set to application/json,
the result is a JSON string. Observe that the Thumbnail property is not present because it was
omitted from the serialization.
11. Click Get XML and use the Network tab to view the request's Accept header, the response's
Content-Type header, and the response's body. When the Accept header is set to application/xml,
the result is an XML string.
12. Click Get Image and use the Network tab to view the request's Accept header, the response's
Content-Type header, and the response's body. When the Accept header is set to an image type,
the result is an image instead of the Value object's content.
Note: Close the developer tools window before closing the browser.
Lesson 2
Creating OData Services
This lesson describes the purpose and functionality of the OData services. The topics show you how to
create queryable actions with OData. You will also see how to create OData models and consume OData
services.
Lesson Objectives
After completing this lesson, students will be able to:
Create queryable actions that support OData query string options.
Create OData models and expose them from ASP.NET Web API services.
Consume OData services from client applications.
OData Query String Options
$orderby. You can use the $orderby query option to specify an expression that enables you to
control the order of the entries in the result feed returned by the OData service.
The following OData service uses the $orderby query option to order flights by their date.
$top. You can use the $top query option to specify the maximum number of entries to be returned
by your query.
The following OData service uses the $top query option to return the first 10 flights from the service.
4-14 Extending and Securing ASP.NET Web API Services
$skip. You can use the $skip query option to specify a number of entries to be omitted from
the beginning of the feed.
The following OData service uses the $skip query option to omit the first 10 flights from the service.
$filter. You can use the $filter query option to specify an expression containing logical operators. The
expression defines which subset of entries will be retrieved from the sum total of the service's entries.
The following code uses the $filter query option to select flights to New York.
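The four query listings referenced above were not preserved in this copy. Assuming a hypothetical flights feed exposed at /odata/Flights, with Departure and Destination properties, the corresponding requests might look like:

```
GET /odata/Flights?$orderby=Departure
GET /odata/Flights?$top=10
GET /odata/Flights?$skip=10
GET /odata/Flights?$filter=Destination eq 'New York'
```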
For more information about OData Query String Options, see the OData documentation, URI Conventions
section.
The following action provides support for OData query string options by returning an IQueryable<Flights>.
A Queryable Action
[Queryable]
public IQueryable<Flights> Get()
{
return FlightsRepository.Flights;
}
OData Models
OData query string options are powerful, but
OData is built to expose complete data models.
This includes hierarchies and the relations
between them using links, and exposing metadata
regarding the data model's structure. ASP.NET
Web API has built-in mechanisms for creating and
exposing OData models. Most of these
mechanisms are provided by the
Microsoft.AspNet.WebApi.OData NuGet
package.
OData Controllers
To deal with OData formatting, ASP.NET Web API introduces the ODataController base class. After
deriving from the ODataController class, you can implement OData actions.
public class FlightsController : ODataController
{
    [Queryable]
    public IQueryable<Flight> Get()
    {
        return context.Flights;
    }
}
A more convenient option for implementing OData controllers is deriving from the
EntitySetController<TEntity, TKey> class, which provides a set of virtual methods you can override for
exposing an entity set using OData.
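A minimal sketch of such a controller, assuming a Flight entity with an integer key and a static FlightsRepository (both names are illustrative):

```csharp
using System.Linq;
using System.Web.Http.OData;

public class FlightsController : EntitySetController<Flight, int>
{
    // Expose the Flights entity set as a queryable feed
    [Queryable]
    public override IQueryable<Flight> Get()
    {
        return FlightsRepository.Flights;
    }

    // Return a single entity by its key
    protected override Flight GetEntityByKey(int key)
    {
        return FlightsRepository.Flights.SingleOrDefault(f => f.FlightId == key);
    }
}
```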
OData Routes
After you have your OData controllers in place and a corresponding EDM, you need to provide a route to
your OData service, which will expose the different feeds as well as the service metadata document for
your model. To do so, you can use the MapODataRoute extension method.
The following code uses the MapODataRoute extension method to expose an OData service.
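The listing did not survive extraction; a sketch of the registration, in which the Flight entity, the "Flights" entity set name, and the route names are assumptions:

```csharp
using Microsoft.Data.Edm;
using System.Web.Http;
using System.Web.Http.OData.Builder;

public static class ODataConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Build an Entity Data Model (EDM) by convention
        ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
        builder.EntitySet<Flight>("Flights");
        IEdmModel model = builder.GetEdmModel();

        // Expose the feeds and the service metadata document under /odata
        config.Routes.MapODataRoute(
            routeName: "ODataRoute",
            routePrefix: "odata",
            model: model);
    }
}
```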
2. In the Add Service Reference dialog box, enter the address of the service, and then click Go. Visual
Studio 2012 will try to connect to the service and request the service's metadata document.
3. Optionally, you can replace the default namespace with your own namespace for the service's classes.
Using the Container Class and LINQ to Consume the OData Service
After Visual Studio 2012 generates the local classes, you can use the Container class to consume the
service. The Container class exposes properties representing the different feeds in the OData service. Each
property is of type DataServiceQuery<T> where T is the entity generated locally for the entity exposed
by the OData service. The DataServiceQuery<T> class provides a LINQ-based API for querying OData
feeds. You can use LINQ to query the property, and the DataServiceQuery<T> class will translate the
query to an HTTP request using OData query options.
The following code uses a LINQ query to search for the WCF course in an OData Courses feed.
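The query itself is missing from this copy. A sketch, assuming the generated namespace is OData, the service URL is local, and the Course entity has Name and CourseId properties:

```csharp
using System;
using System.Linq;

// Assumes "OData" is the namespace chosen when adding the service reference
OData.Container container =
    new OData.Container(new Uri("http://localhost/ODataService/odata"));

// The LINQ query is translated to an HTTP request with OData query
// options ($filter) when the query is enumerated
var wcfCourses = from course in container.Courses
                 where course.Name == "WCF"
                 select course;

foreach (var course in wcfCourses)
{
    Console.WriteLine("{0}: {1}", course.CourseId, course.Name);
}
```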
Demonstration Steps
1. Open the ConsumingODataService.sln solution from the
D:\Allfiles\Mod04\DemoFiles\ODataService\begin\ConsumingODataService folder.
2. Review the content of the CoursesController.cs file located in the Controllers folder in the
ConsumingODataService.Host project. Observe the Get action, which returns an
IQueryable<Course> and is also decorated with the [Queryable] attribute. This is done to enable
OData queries. Observe the CoursesController class declaration. Note that the CoursesController
derives from the ODataController base class, which handles the formatting.
3. Open the Global.asax file, and review the content of the SetupOData method. Observe the use of the
ODataConventionModelBuilder class to create an entity data model, which is used to create the
OData metadata, and the use of the MapODataRoute method to create routes for the OData
metadata as well as the various controllers in the model.
5. In the ODataService.Client project, use the Add Service Reference dialog box to add a service
reference to the OData model.
6. Create a new instance of the OData.Container class, and use a LINQ query to select the WCF course
from the container's Courses property. Print the name and ID of the course.
Lesson 3
Implementing Security in ASP.NET Web API Services
Many aspects of Web service security in ASP.NET Web API rely on the inherent security features of
HTTP, such as the HTTP Authorization header and HTTPS (HTTP Secure), and on how they are implemented
by Windows and IIS.
The three common aspects of security, which you need to handle when creating HTTP-based services with
ASP.NET Web API, are:
Securing the communication channel: Encrypting the data transferred from the client to the service
and from the service to the client is crucial to protect it from theft and alteration by
attackers. You can encrypt the HTTP communication channel by using HTTPS instead of HTTP.
Authenticating clients: If you do not want your service to be publicly accessible, and you only want
certain people or applications to access it, you will need to secure your service by authenticating your
clients. HTTP supports passing client credentials in the message headers, which can then be
authenticated by IIS and the Windows operating system.
Authorizing clients: If some of your authenticated clients have permissions to invoke actions that
other clients cannot invoke, you will also need to authorize your clients before allowing them to
invoke the actions in the controllers. HTTP does not support authorization of clients, but the ASP.NET
user identities infrastructure can assist you in accessing the user's information so you can authorize
them.
This lesson describes how HTTPS works, how you can use HTTP and IIS to authenticate and authorize your
clients, and how to implement custom user authentication and authorization in ASP.NET Web API.
Lesson Objectives
Secure service communication with HTTPS.
Authenticate clients.
Authorize clients.
1. Obtain and install a certificate for server authentication. You can create a self-signed certificate,
purchase a certificate from a known certificate authority (CA), or if you have a local certificate
authority, request it from your domain controller.
2. Configure your IIS Web site's bindings to HTTPS, and assign the server certificate to the SSL port (the
default port for HTTPS in IIS is 443).
3. Optionally, configure your Web application to require the client to authenticate with a client
certificate.
The following article describes how to apply the preceding steps for setting up SSL with IIS.
To start a secured session, the client sends a special request to the server, asking it to start a secured
session (step 1 in the diagram). In return, the server responds by sending its X.509 certificate, which
was issued by a CA (step 2 in the diagram).
The client receives the certificate, and uses it to validate the server (step 3 in the diagram). X.509
certificates hold information about the server's address, which the client can check to verify that the server
is authentic and not an impersonator. In addition, the certificate also holds information about the issuer of
the certificate, which the client can use to verify that the certificate was issued by a trusted CA, and is not
a fake.
After validating the certificate, the client generates a random symmetric encryption key, places it in a
message, encrypts the message using the server's public key that was supplied with the server's certificate,
and sends it to the server (step 4 in the diagram). Public key encryption can only be decrypted by using
the server's private key, which only the server has. After the server decrypts the message (step 5 in the
diagram), both sides have the symmetric key. They can now start exchanging encrypted messages using
the symmetric key to encrypt the message on one side and decrypt it on the receiving side (step 6 in the
diagram).
SSL uses symmetric encryption for message exchange instead of public key encryption, because public key
encryption is slower than symmetric encryption, and the resulting message is larger than one created
with symmetric key encryption.
Although not required, the SSL handshake can also require the client to authenticate with a certificate. If
you configure IIS to also require client authentication, the client will send the generated symmetric
encryption key along with its certificate to the server. After receiving the client's certificate, the server will
validate it and will continue with the handshake only if it is a valid certificate.
Note: Validation of the client certificate is part of the SSL handshake since Windows Server
2003 Service Pack 1. SSL does not perform authorization, and cannot be used to authenticate
other client credential types.
Authenticating Clients
When you create a service, you need to decide
whether you want clients to identify themselves
when they call the service, or whether the service
is accessible to any client, without requiring an
identity. Requiring a client to pass its identity to
the service has several uses.
Credential Types
Users can have multiple identities they can use to identify themselves. For example, a user might be
logged in to the local network and have a Windows identity, or they might have a smart card they use to
access restricted content (smart card chips have client certificates installed in them). If your service
requires authentication, you will need to find which types of identities your clients have, and configure the
service, or the hosting environment, accordingly.
The following table lists some of the known identity types, and how to authenticate them.
Windows. An identity used when working within a domain. The client sends a user token to the
service, and the token is authenticated against the domain controller. (Authorized by: IIS)
Basic (username + password). With this type of identity, the user's username and password are sent
as plain text to the service, and the server authenticates the identity against the local domain
controller. This type of identity is used when the client has a domain account, but is not connected to
the domain controller. When using this type of identity, it is advisable to use HTTPS to encrypt the
username and password. (Authorized by: IIS)
Certificate. Certificates hold information about the client and the certificate's issuer, making it simple
to verify the certificate's authenticity, without connecting to another server for authentication. With
IIS, you can also map certificates to Windows identities to provide the service additional information
about the user. Certificates are commonly used when your client is a service. (Authorized by: IIS)
Forms. With Forms authentication, the user signs in through a login page, and an authentication
ticket is then sent with the request, identifying the logged in user. ASP.NET has built-in support for
authentication against SQL Server or Active Directory. You can also create a custom Membership
provider if you store the usernames and passwords somewhere else. (Authorized by: ASP.NET)
Issued token. Issued tokens, such as OpenID, are mostly used on the Internet. With issued tokens, the
service informs the client which public identity providers it trusts, such as Twitter, Facebook, and
Windows Live ID. The client then authenticates against the identity provider, usually by using a
username and a password. After authenticating against the identity provider, the client sends the
authentication token to the service, which then verifies the token is authentic. To use issued tokens,
you need to register your service with the providers you wish to trust, and write the code to verify the
provider's token. Issued token implementation in ASP.NET Web API is covered in Module 11, Identity
Management and Access Control. (Authorized by: Identity provider + your code)
Custom. If you have your own authentication mechanism, you can use it instead of the standard
identities. To use your custom authentication, turn on anonymous authentication in IIS and disable all
other authentication types, to prevent IIS from authenticating the client. Then add your own
authentication code, as explained later on in this topic. (Authorized by: Your code)
Note: When using Windows credentials, the user sends a special token to the server, which
it receives from the domain controller. IIS then sends this token to the domain controller to
authenticate the client. Unlike basic authentication, the token does not include the user's
password.
If you want to use authentication types that are managed by IIS or ASP.NET, such as Windows
authentication or Forms authentication, you need to first configure IIS for those authentication types. For
a description of the steps for configuring the various authentication types, see:
If you want to use Forms authentication, you will also need to create a login page in your web application,
by using either ASP.NET Web Forms or ASP.NET MVC. For information on how to use Forms
authentication with ASP.NET Web API and ASP.NET MVC, see:
Forms Authentication
http://go.microsoft.com/fwlink/?LinkID=298768&clcid=0x409
Custom Authentication
If you have your own credential type that you wish to use in your service, you will need to write a custom
authenticator.
Note: Before you test your custom authentication code, make sure that IIS is configured for
anonymous authentication so it will pass through the request to your service, without
authenticating it.
To create your own custom authenticator in ASP.NET Web API, you need to create a new message handler
by deriving from the DelegatingHandler class (delegating handlers were discussed in Lesson 1, The
ASP.NET Web API Pipeline). In the SendAsync method, you will need to handle the following questions:
Should all requests be authenticated? If not, you can ignore requests that don't have
authentication information and send them directly to the controller's action. If all messages should be
authenticated, you will need to return an HTTP 401 (Unauthorized) response if the request does not
carry authentication headers.
Which authentication scheme are you using? If you are using a specific authentication scheme,
such as Basic, make sure you inform the client which scheme you expect by adding the
WWW-Authenticate header with the supported schemes when you send unauthorized responses.
How do you authenticate your clients? Add the code to retrieve the client's identity from the
request, and the custom code for the authentication process.
The following code demonstrates how to create a delegating handler for username/password
authentication.
// Inside a handler class deriving from DelegatingHandler
protected override async Task<HttpResponseMessage> SendAsync(
    HttpRequestMessage request, CancellationToken cancellationToken)
{
    AuthenticationHeaderValue authHeader = request.Headers.Authorization;
    if (authHeader != null && authHeader.Scheme == "Basic")
    {
        // Basic credentials are a Base64-encoded "username:password" string
        string credentials = Encoding.ASCII.GetString(
            Convert.FromBase64String(authHeader.Parameter));
        int separator = credentials.IndexOf(':');
        string username = credentials.Substring(0, separator);
        string password = credentials.Substring(separator + 1);
        if (!AuthenticateUser(username, password))
        {
            // Authentication failed
            HttpResponseMessage response =
                request.CreateResponse(System.Net.HttpStatusCode.Unauthorized);
            response.Headers.Add("WWW-Authenticate", "Basic");
            return response;
        }
    }
    return await base.SendAsync(request, cancellationToken);
}
Which authentication scheme are you using? This code sample uses the Basic authentication
scheme. Basic authentication means that the username and password are passed in the HTTP
Authorization header as plain text, delimited by a colon, and encoded into a Base64 string. The
code decodes the Base64 string and breaks it down into two strings: a username and a
password.
How do you authenticate your clients? The authentication shown in the AuthenticateUser method is a
simple authentication that only compares the username and the password. If the authentication
fails, the method returns false, causing the SendAsync method to return an unauthorized response,
which will require the user to enter their credentials again.
If the user is authenticated, the AuthenticateUser method sets the executing thread's principal to a
GenericPrincipal object and assigns the user to two roles, Users and Admins. The usage of roles for the
purpose of authorization will be discussed in the next topic.
After you create the delegating handler, you need to register it with the ASP.NET Web API configuration.
You can register the delegating handler for all the controllers by adding it to the configuration's
MessageHandlers collection, or you can register it for a specific route by setting the handler of the route.
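Registering the handler globally, for all controllers, can be sketched as follows (the handler class name matches the one used elsewhere in this module):

```csharp
// In WebApiConfig.Register; applies the handler to every route
config.MessageHandlers.Add(new AuthenticationMessageHandler());
```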
The following code demonstrates how to attach the delegating handler to a specific route.
config.Routes.MapHttpRoute(
    name: "SalariesRoute",    // route name assumed; the listing's opening lines were omitted
    routeTemplate: "api/salaries/{id}",
    defaults: new { controller = "salaries", id = RouteParameter.Optional },
    constraints: null,
    handler: new AuthenticationMessageHandler()
);
Authorizing Clients
If you do not use any authentication, any client
will be able to call the service's actions. On the
other hand, if you authenticate every client, only
authenticated clients can access your service. But
what if you want to have both? If some of your
controller's actions are public and others require
authentication, you will need to allow anonymous
users to access the public actions; but if those
users try to call private actions, they will be
required to authenticate.
To support both anonymous and authenticated
users, you need to do the following:
If you are using IIS or ASP.NET for authentication, turn on both anonymous authentication and the
required authentication type, such as Windows or Forms authentication.
If you are using a custom delegating handler for authentication, make sure you do not respond with
an HTTP 401 response for anonymous requests.
Apply the [Authorize] attribute to the component requiring authentication: an
action, a controller, or the entire application.
Note: Refer to the previous topic in this lesson for instructions on how to enable
authentication in IIS and ASP.NET.
The following code demonstrates how to apply the [Authorize] attribute to controllers and actions.
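The referenced listing is not reproduced in this copy; a sketch of both placements, with controller, action, and entity names assumed:

```csharp
// Applying [Authorize] at the controller level: every action requires
// an authenticated user
[Authorize]
public class SalariesController : ApiController
{
    public IEnumerable<Salary> Get() { ... }
    public void Post(Salary salary) { ... }
}

// Applying [Authorize] at the action level: only the marked action
// requires authentication
public class ProductsController : ApiController
{
    public IEnumerable<Product> Get() { ... }       // anonymous allowed

    [Authorize]
    public void Post(Product product) { ... }       // authentication required
}
```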
When you apply the [Authorize] attribute to a controller, every user invoking the controller's actions
must be authenticated. If an anonymous user tries to invoke one of the actions, an HTTP 401
(Unauthorized) response will be returned automatically, and the action will not be invoked. You can also
decorate action methods with the [Authorize] attribute instead of decorating the controller, to specify
that only the marked actions require authentication.
You can also use the [Authorize] attribute to specify which authenticated users can invoke the action,
whereas all other users will not be permitted to invoke it. To specify the authorized users, you can use the
[Authorize] attribute's Roles and Users parameters. Each of these parameters accepts a comma-separated
string that contains the list of authorized roles or users, respectively. Roles are a way to group
users that have the same set of permissions; this way you only need to specify the role name, instead of
hard-coding the names of all the authorized users.
Users' roles are retrieved after the user is authenticated. For example, when you use Windows
authentication in IIS, ASP.NET populates the user's roles from the user's Active Directory groups. If you use
custom authentication, you will need to populate the user's list of roles yourself, as shown in the sample
code for the AuthenticationMessageHandler class in the previous topic.
Note: If you use Forms authentication, you can control where roles are stored by
configuring the ASP.NET Role Manager. For example, you can authenticate your users with
ASP.NET Membership against Active Directory Domain Services (AD DS), and load the user's roles
from SQL Server. Role manager configuration is outside the scope of this course.
The following code demonstrates how to authorize specific roles with the [Authorize] attribute.
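The example discussed in the next paragraph is missing from this copy; it can be sketched as follows, with the controller and entity names assumed:

```csharp
public class SalariesController : ApiController
{
    [Authorize]
    public IEnumerable<Salary> Get() { ... }

    // Only authenticated users in the Admins role can delete
    [Authorize(Roles = "Admins")]
    public void Delete(int id) { ... }
}
```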
In the preceding example, the [Authorize] attribute is used on the Delete method to specify that only
the users belonging to the Admins role can invoke the method.
If your controller requires authorization, and you want to exclude an action from being authorized, you
can do so by decorating the method with the [AllowAnonymous] attribute. You can use this attribute to
override the use of the [Authorize] attribute, either when the [Authorize] attribute is used to decorate
the containing controller, or when it is added to the ASP.NET Web API filters list.
public class ProductController : ApiController
{
    [AllowAnonymous]
    public HttpResponseMessage GetSpecific(int id) { ... }
    [Authorize(Roles="Admins")]
    public HttpResponseMessage Delete() { ... }
}
In the preceding code, the ProductController class does not have the [Authorize] attribute. Instead, the
AuthorizeAttribute class is added to the ASP.NET Web API Filters collection, which applies it to all the
controllers in the application. The only method that permits anonymous access is the GetSpecific
method, which is decorated with the [AllowAnonymous] attribute. When adding the
AuthorizeAttribute class at the configuration level, you can also use the [AllowAnonymous] attribute
on a controller, to specify that all the controller's actions are accessible to anonymous users.
If you require a level of authorization that is not handled by the [Authorize] attribute, for example, if you
want to authorize according to roles, but also be able to specify users that are excluded from the role, you
can create your own authorization filter. To create your own authorization filter, derive from the
AuthorizationFilterAttribute class and override the OnAuthorization method with your custom
authorization code.
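The ExtendedAuthorize listing did not survive in this copy. A hedged reconstruction consistent with the description that follows; the Role and BlockedUsers properties and the blocked-user storage are assumptions:

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Security.Principal;
using System.Threading;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ExtendedAuthorize : AuthorizationFilterAttribute
{
    public string Role { get; set; }
    public string BlockedUsers { get; set; }

    public override void OnAuthorization(HttpActionContext actionContext)
    {
        IPrincipal principal = Thread.CurrentPrincipal;
        if (principal == null || !principal.Identity.IsAuthenticated)
        {
            // Anonymous user: HTTP 401 (Unauthorized)
            actionContext.Response = actionContext.Request.CreateResponse(
                HttpStatusCode.Unauthorized);
            return;
        }

        bool blocked = BlockedUsers != null &&
            BlockedUsers.Split(',').Contains(principal.Identity.Name);
        if (!principal.IsInRole(Role) || blocked)
        {
            // Authenticated but not permitted: HTTP 403 (Forbidden)
            actionContext.Response = actionContext.Request.CreateResponse(
                HttpStatusCode.Forbidden);
        }
    }
}
```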
The ExtendedAuthorize class implements authorization logic similar to that described before: if the user
belongs to the required role, and is not blocked, they will be authorized. The code returns one of two
security responses: either an HTTP 401 (Unauthorized) response, if the user is anonymous, or an HTTP
403 (Forbidden) response, if the user has been authenticated but is not permitted to execute the required code.
Note: As shown in Lesson 1, The ASP.NET Web API Pipeline, in the Filters Topic, you can
also implement custom authorization logic by implementing the IAuthorizationFilter interface.
The difference between the IAuthorizationFilter and AuthorizationFilterAttribute types is that
the attribute provides a synchronous authorization check, whereas the interface provides an
asynchronous authorization check. Asynchronous authorization checks are useful when the
authorization logic is more I/O-bound than CPU-bound, for example, if the code performs a
network call to retrieve user information required by the authorization process.
For additional information on authentication and authorization in ASP.NET Web API, see:
Demonstration Steps
1. Open the WebAPISecurity.sln solution from the D:\Allfiles\Mod04\DemoFiles\WebAPISecurity folder.
2. Open the AuthenticationMessageHandler.cs file from the WebAPISecurity project, and examine
the code in the SendAsync method. The code first checks if the request contains Basic authentication
information by checking the HttpRequestMessage.Headers.Authorization.Scheme property. If the
request does not contain the Authorization header, it is sent to the next handler without checking.
Note: If some of the actions are accessible by anonymous users, the authentication handler
should not require that every request contains authentication information. This is the case in this
demo.
3. Examine the code in the first if statement. If the request contains Basic authentication information,
the code retrieves the identity from the HTTP Authorization header, parses it into the username and
password, and then sends the identity to be verified in the AuthenticateUser method. If the
authentication fails, an Unauthorized response is sent back to the client.
Note: In Basic authentication, the username and password are encoded to a single Base64
string.
4. Examine the code in the last if statement in the SendAsync method. In ASP.NET Web API, an action
can return an unauthorized response if it requires authentication and the user did not supply it, or if it
requires the user to have a specific role which the user does not have. If an unauthorized response is
returned from the action, the code will add the Basic authentication type to notify the client of the
expected authentication type.
5. Locate the AuthenticateUser method, and examine its code. After the identity is authenticated, the
code creates GenericIdentity and GenericPrincipal objects to identify the user and their roles. The
principal is then attached to the Thread.CurrentPrincipal property to make it available for the
authorization process.
6. Open the ValuesControllers.cs file from the project's Controllers folder, and examine the use of the
[Authorize] and [AllowAnonymous] attributes. The [Authorize] attribute verifies that the user is
authenticated before invoking any action in the controller. The [AllowAnonymous] attribute
decorating the second Get method skips the authentication check, allowing anonymous users to
invoke the decorated action.
7. Open the WebApiConfig.cs file from the project's App_Start folder, and examine the call to the
MessageHandlers.Add method. This is how the authentication message handler is attached to the
message handling pipeline.
8. Run the web application and in the browser, append the suffix api/values/1 to the address. Verify
that you see XML with the response of the action.
9. Browse to api/values/, enter a non-matching username and password, and verify you are asked again
for credentials. Enter a matching username and password, and then verify you see XML with the
response of the action.
Lesson 4
Injecting Dependencies into Controllers
Most applications consist of several components that are dependent on each other. It is important
to be able to replace the implementation of a dependent module without having to change the code that
uses the dependency. To do this, you first need to decouple the software components from the other
components they are dependent on. This lesson describes how to decouple dependent components from
their dependencies. The lesson also explains how you can use the IDependencyResolver interface in
ASP.NET Web API to implement dependency injection.
Lesson Objectives
After completing this lesson, students will be able to:
Describe the Dependency Injection design pattern.
Use the IDependencyResolver interface in ASP.NET Web API to inject dependencies into controllers.
Dependency Injection
Modern software systems are built out of different
software components. For example, many
distributed applications use a layered architecture
that separates different responsibilities into
different components (logical layers of distributed
applications are discussed in Module 1, Overview
of Service and Cloud Technologies, Lesson 1, Key
Components of Distributed Applications).
Dependency Injection is a common software
design pattern that is used to decouple software
components from other components they are
dependent on. This is done so that dependencies
can be easily replaced if needed. For example, it is common to replace the dependencies during tests
with mock objects to control the results they return.
At the core of the Dependency Injection design pattern, there are three types of components:
Dependent component. The component that uses the dependencies to perform its work.
Dependencies. The software components that the dependent component depends on.
Injector. A component that obtains or creates instances of the dependencies and passes them to the
dependent component.
For the dependent component to be decoupled from its dependencies, it should define
them only as interfaces. The dependencies should be passed into the dependent component as method or
constructor parameters by the injector, allowing the injector to replace the dependencies' concrete
implementations at runtime.
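A minimal sketch of constructor injection; all type names here are illustrative, not taken from the course projects:

```csharp
public interface IEmailSender
{
    void Send(string to, string message);
}

public class SmtpEmailSender : IEmailSender
{
    public void Send(string to, string message)
    {
        /* send the message over SMTP */
    }
}

// The dependent component defines its dependency as an interface only
public class OrderProcessor
{
    private readonly IEmailSender emailSender;

    // The injector passes a concrete implementation through the constructor
    public OrderProcessor(IEmailSender emailSender)
    {
        this.emailSender = emailSender;
    }

    public void Process()
    {
        emailSender.Send("customer@example.com", "Your order was confirmed");
    }
}
```

At runtime the injector supplies the real implementation, new OrderProcessor(new SmtpEmailSender()), while a test can pass a mock IEmailSender instead.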
Demonstration: Creating a
Dependency Resolver
In this demonstration, you will use a dependency resolver that provides instances of controllers.
Demonstration Steps
1. Open the DependencyResolver.sln solution from D:\Allfiles\Mod04\DemoFiles\DependencyResolver.
2. Explore the code in the CoursesController class. Note that the ISchoolContext constructor
argument is supplied to the class by its caller and can therefore have different concrete
implementations.
3. Explore the code for the ManualDependencyResolver class. Note that the serviceType parameter
determines which concrete implementation is returned.
4. Explore the code for the WebApiConfig class. Note that the config.DependencyResolver property
specifies which dependency resolver ASP.NET Web API uses for matching concrete types with
interfaces.
5. Run the project without debugging, append api/courses to the address bar, and verify you can see
the list of courses.
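A dependency resolver of the kind used in this demonstration can be sketched along the following lines. This is an illustrative reconstruction, not the exact demo code; in ASP.NET Web API, a resolver implements the System.Web.Http.Dependencies.IDependencyResolver interface.

```csharp
using System;
using System.Collections.Generic;
using System.Web.Http.Dependencies;

public class ManualDependencyResolver : IDependencyResolver
{
    // Called by ASP.NET Web API to resolve controllers and their dependencies
    public object GetService(Type serviceType)
    {
        // The serviceType parameter determines which concrete type is returned
        if (serviceType == typeof(CoursesController))
            return new CoursesController(new SchoolContext());
        return null; // let Web API fall back to its default behavior
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return new List<object>();
    }

    public IDependencyScope BeginScope() => this;
    public void Dispose() { }
}
```

The resolver is then registered once, typically in WebApiConfig, by assigning config.DependencyResolver = new ManualDependencyResolver();, after which Web API asks it for every controller instance.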
Objectives
After completing this lab, students will be able to:
Lab Setup
Estimated Time: 75 Minutes.
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:
Password: Pa$$w0rd
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause as a breaking change in a NuGet package, you should uninstall the NuGet package and
instead install the older version by using Visual Studio's Package Manager Console window:
1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.
2. In Package Manager Console, type the following command and then press Enter.
install-package PackageName -version PackageVersion -ProjectName ProjectName
(ProjectName is the name of the Visual Studio project specified in the step where you were
instructed to add the NuGet package.)
3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:
Package name                     Version
Microsoft.AspNet.WebApi.OData    4.0.30506
In the following exercise, you will decouple the controller and the repository by using the dependency
injection technique to inject the repository interface as a parameter in the controller constructor.
You will start by creating a dependency resolver class that is responsible for creating the repositories. You
will then register the dependency resolver class in the HttpConfiguration to automatically create a
repository when a controller is used. Finally, you will use Microsoft Fakes and create a stub for a
repository, and then use it in a unit test project to test the location controller.
Note: The same pattern was already applied in the begin solution for the rest of the
controller classes (TravelersController, FlightsController, ReservationsController and
TripsController).
Open those classes to review the constructor definition.
Use the DependencyResolver property of the config object to set the dependency resolver.
2. Test the application and the DependencyResolver injection:
Return to Visual Studio 2012 and verify the code breaks on the breakpoint and that the constructor
parameter is initialized (not null).
4. Test the application using the Fakes mock framework, by running the test project.
On the Test menu, point to Run, and then click All Tests.
Results: You will be able to inject data repositories to the controllers instead of creating them explicitly
inside the controllers. This will decouple the controllers from the implementation of the repositories.
To add support for the OData protocol, you will install the Microsoft.AspNet.WebApi.OData NuGet
package, and then decorate the methods that you want to expose through OData with the [Queryable] attribute.
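For example, an action decorated this way might look like the following sketch. The controller and repository names are assumptions for illustration; only the [Queryable] attribute and the IQueryable return type are the point.

```csharp
using System.Linq;
using System.Web.Http;

public class LocationsController : ApiController
{
    private readonly ILocationRepository _repository; // assumed repository interface

    public LocationsController(ILocationRepository repository)
    {
        _repository = repository;
    }

    // [Queryable] lets clients append OData query options such as
    // $filter, $orderby, and $top to the request URL
    [Queryable]
    public IQueryable<Location> Get()
    {
        // Return IQueryable so the OData infrastructure can apply the query
        return _repository.GetAll();
    }
}
```

Returning IQueryable rather than IEnumerable is what allows the OData infrastructure to translate the query options into the underlying data query.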
2. Handle the search event in the client application and query the flight schedule service by using OData
filters.
3. Change the method implementation to use the repository's GetAll method and return an IQueryable
instead of IEnumerable.
Remove the parameters from the method declaration. You will not need them anymore because the
OData infrastructure will take care of the query filtering.
Task 2: Handle the search event in the client application and query the flight schedule
service by using OData filters
1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.
2. Open the BlueYonder.Companion.Client solution from the
D:\AllFiles\Mod04\LabFiles\begin\BlueYonder.Companion.Client folder.
3. Open the Addresses.cs file from the BlueYonder.Companion.Shared project and change the
GetLocationsWithQueryUri property to use OData querying instead of standard query string.
Replace the current query string with the $filter option using the expression
substringof(tolower('{0}'),tolower(City)).
The resulting code should resemble the following code.
GetLocationsUri + "?$filter=substringof(tolower('{0}'),tolower(City))";
Results: Your web application exposes the OData protocol, supporting GET requests for the locations data.
You will apply a validation rule to your server to verify that all required fields of a model are sent from the
client before handling the request.
You will decorate the Travel model with attributes that define the required fields, then you will derive from
the ActionFilter class and implement the validation of the model. Finally, you will add the validation to the
Post action.
3. Apply the custom attribute to the PUT and POST actions in the booking service
FirstName, LastName and HomeAddress should use the [Required] validation attribute.
MobilePhone should use the [Phone] validation attribute.
2. In the new class, override the OnActionExecuting method and implement it as follows:
Check if the model state is valid by using the actionContext.ModelState.IsValid property.
Note: The CreateErrorResponse is an extension method. To use it, add a using directive
for the System.Net.Http namespace.
For the error response, use the overload that expects an HttpStatusCode enum and an HttpError
object.
Use the HttpStatusCode.BadRequest, and initialize the HttpError object with the
actionContext.ModelState and the true Boolean value for the second constructor parameter.
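Putting the steps above together, the validation filter might be sketched as follows. This is an illustrative sketch, not the lab's exact class; it derives from ActionFilterAttribute so it can be applied as the [ModelValidation] attribute.

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ModelValidationAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // Reject the request before the action runs if the model failed validation
        if (!actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateErrorResponse(
                HttpStatusCode.BadRequest,
                new HttpError(actionContext.ModelState, includeErrorDetail: true));
        }
    }
}
```

Setting actionContext.Response short-circuits the pipeline, so invalid requests never reach the Put or Post action bodies.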
Task 3: Apply the custom attribute to the PUT and POST actions in the booking
service
1. In the BlueYonder.Companion.Controllers project, open the TravelersController class, and
decorate the Put and Post methods with the [ModelValidation] attribute.
Results: Your web application will verify that the minimum necessary information is sent by the client
before trying to handle it.
You need to add to the service support for HTTPS binding that uses a certificate and set the client to work
with the secured connection.
Note: The setup script creates a server certificate to be used for HTTPS communication.
2. Open IIS Manager. Open the Server Certificates feature from the SEA-DEV12-A (SEA-DEV12-
A\Administrator) features list.
Verify you see a certificate issued to SEA-DEV12-A. This certificate was created by the script you ran
in the previous task.
3. Select the Default Web Site from the Connections pane, open the Bindings list, and add an HTTPS
binding. Select the SEA-DEV12-A certificate that was created by the setup script.
Note: When you add an HTTPS binding to the Web site bindings, all web applications in
the Web site will support HTTPS.
In the project's Properties window, click the Web tab, and change the server from IIS Express to the
local IIS Web server.
Create the virtual directory and save the changes to the project.
2. Browse the web application using HTTP to get the location data.
Return to IIS Manager, refresh the Default Web Site, and then select the
BlueYonder.Companion.Host application.
Explore the contents of the file. It contains the Location database content in the JSON format.
3. Change the scheme of the URL from HTTP to HTTPS to get the location data again, this time through
a secured channel.
Make sure you also change the computer name in the URL from localhost to SEA-DEV12-A.
Explore the contents of the file. It should contain the same locations as before.
Note: If you use localhost instead of the computer's name, the browser will display a
certificate warning. This is because the certificate was issued to the SEA-DEV12-A domain, not
the localhost domain.
2. Run the client app, search for New, and purchase a flight from Seattle to New York. Provide an
incorrect value in the email field and verify you get a validation error message originating from the
service.
Results: The communication with your web application will be secured using a certificate.
Question: Why does using dependency injection make it easier to change your code?
Module 5
Creating WCF Services
Module Overview
The previous two modules are about ASP.NET Web API, the .NET technology for creating Hypertext
Transfer Protocol (HTTP)-based services. In the first module, we also introduced another type of service:
Simple Object Access Protocol (SOAP)-based services. In the first lesson of this module, you will learn
about the differences between HTTP-based and SOAP-based services.
The rest of this module will be about implementing SOAP-based services with the Windows
Communication Foundation (WCF) framework.
When you develop an application that has a client/server architecture, one of the technologies that you
are likely to use is the Windows Communication Foundation (WCF) framework. WCF is the most
up-to-date communication infrastructure made by Microsoft, and is designed for building distributed
applications that use service-oriented architecture (SOA).
WCF is a very flexible and extensible framework. You can customize and configure WCF to match different
application scenarios. You can control almost every aspect of client/server communication, either through
configuration or by implementing various extensions.
Hosting environment, such as Internet Information Services (IIS) and Windows Services.
This module describes how to create a simple WCF service, host it using self-hosting, and consume the
service from a client application. After you get familiar with WCF and its various configurations and
extensions, you will be able to build robust, flexible, and scalable services.
Objectives
After you complete this module, you will be able to:
Lesson 1
Advantages of Creating Services with WCF
SOAP is a protocol specification for exchange of structured information between peers in a decentralized,
distributed environment. SOAP uses Extensible Markup Language (XML) for its message formatting, and
usually relies on HTTP for message negotiation and transmission, although there are implementations of
SOAP over other transports, such as SOAP-over-UDP, which is used for Universal Plug and Play (UPnP).
SOAP-based services technology has existed for more than a decade. It was created by Microsoft and was
later adopted by W3C as a standard. SOAP is used as the underlying layer for ASP.NET Web Services
(ASMX) and for WCF. However, unlike ASP.NET Web Services, which is limited to SOAP over HTTP, WCF
can use SOAP over TCP and even over named pipes. WCF is also not limited to SOAP; it can be configured
to use plain XML and JavaScript Object Notation (JSON) as well.
Note: SOAP over TCP is a WCF proprietary implementation and is not part of the World
Wide Web Consortium (W3C) standards. Therefore, it can be used only between a WCF client and
a WCF service.
This lesson briefly reviews the history of SOAP-based services, starting with Web Services and ending with
WCF. It discusses the advantages WCF offers as a service infrastructure, and reviews some of the
features and characteristics of WCF, including:
Complex application scenarios, such as transactions, reliable messaging, and service discovery.
Finally, this lesson explains the features of WCF that are not supported by the ASP.NET Web API.
Lesson Objectives
After you complete this lesson, you will be able to:
Explain the benefits of using SOAP-based services.
List the features of WCF that are not supported by ASP.NET Web API.
Lightweight protocol
XML-based
SOAP is not the only RPC protocol. There are many others, such as Common Object Request Broker
Architecture (CORBA) and the Distributed Component Object Model (DCOM) RPC protocol. There are
several benefits of SOAP over these other protocols:
Versatility. DCOM and CORBA have not been adopted by many platforms. DCOM is supported only
by Windows and CORBA is mainly used in Java.
Security. CORBA, DCOM, and similar protocols are usually not firewall/proxy friendly and might be
blocked.
The following code presents a sample SOAP request message sent to a service, followed by a sample
SOAP response message returned by the service. The service multiplies any number sent in the request
message by two.
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<MultiplyByTwo xmlns="http://tempuri.org/">
<value>123</value>
</MultiplyByTwo>
</s:Body>
</s:Envelope>
Response
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<MultiplyByTwoResponse xmlns="http://tempuri.org/">
<MultiplyByTwoResult>246</MultiplyByTwoResult>
</MultiplyByTwoResponse>
</s:Body>
</s:Envelope>
SOAP is used as an underlying protocol in ASP.NET Web Services and in WCF services. Both technologies
are used to build RPC services.
But this is not to say that ASP.NET Web API is less useful than WCF. Services that you create with ASP.NET
Web API can take advantage of HTTP and create services that have cacheable responses in clients, built-in
concurrency mechanisms, and other features that are part of the HTTP application protocol. In addition,
ASP.NET Web API is supported by browsers, which do not support SOAP.
Question: When would you prefer using ASP.NET Web API over WCF?
Lesson 2
Creating and Implementing a Contract
When you create a service, you must answer the following questions:
This lesson explains the service contract, which provides the answer to the first question, What can the
service do?
The service contract is one of the fundamentals of WCF. It is a definition of the operations supported by
the service. Additionally, the service contract defines other aspects of the service and its operations, such
as error handling.
This lesson describes how to create and implement a service contract. It also explains how to control the
service behavior by using the [ServiceContract] attribute, the [OperationContract] attribute, and the
[FaultContract] attribute.
Lesson Objectives
After you complete this lesson, you will be able to:
Define the WCF service contract.
These attributes do more than merely mark the interface methods as service operations. They also control
various aspects of the service behavior and characteristics, such as session support, exception handling,
and callback contracts on duplex services.
Another important aspect of the service contract is the data contract. Data contracts define the data that
is exchanged between the service and the client. Data contracts define data that is either returned by a
service operation or received by a service operation as a parameter.
Similar to the service contract, the data contract is defined by using special attributes:
The [DataContract] attribute. Applied to the class to mark it as a data contract.
The [DataMember] attribute. Applied to those class properties that will be included in the data
contract.
The following code example depicts a service contract declaration alongside a data contract
(Reservation). The contract exposes basic hotel reservation operations.
[ServiceContract]
public interface IHotelBookingService
{
    [OperationContract]
    BookingResponse BookHotel(Reservation request);
}
[DataContract]
public class Reservation
{
    [DataMember]
    public DateTime CheckinDate { get; set; }
    [DataMember]
    public int NumberOfDays { get; set; }
    [DataMember]
    public string GuestFirstName { get; set; }
    [DataMember]
    public string GuestLastName { get; set; }
}
The [DataContract] and [DataMember] attributes are optional. If a class is not decorated with the
[DataContract] attribute, WCF will automatically serialize every public property and field that it
encounters in your class.
You can therefore choose whether to use an inclusive approach or an exclusive approach to mark
properties and fields to be serialized:
Inclusive approach. Use [DataContract], and apply [DataMember] to each member that needs to
be serialized.
Exclusive approach. Do not use [DataContract], and apply the [IgnoreDataMember] attribute to
each public property and field that you do not want to be serialized.
Choosing the approach to use depends on the characteristics of your class: how large it is, how many
serialized and non-serialized members it contains, and whether you can change how it is declared. It is
preferable to choose one technique and apply it to all of your classes to prevent confusion.
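The two approaches can be contrasted with a short sketch; the classes and members here are illustrative, not part of the course solution.

```csharp
using System.Runtime.Serialization;

// Inclusive approach: only members marked [DataMember] are serialized
[DataContract]
public class Guest
{
    [DataMember]
    public string Name { get; set; }

    public string InternalNotes { get; set; } // not serialized
}

// Exclusive approach: every public member is serialized unless excluded
public class Room
{
    public int Number { get; set; }               // serialized automatically

    [IgnoreDataMember]
    public string HousekeepingCode { get; set; }  // excluded from serialization
}
```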
The following code shows how a service class implements the service contract interface.
public class HotelBookingService : IHotelBookingService
{
    public BookingResponse BookHotel(Reservation request)
    {
        BookingResponse response = new BookingResponse();
        // ... apply the booking business logic and populate the response ...
        return response;
    }
    public Booking GetExistingReservation(string bookingReference)
    {
        Booking booking = null;
        // ... locate the existing booking ...
        return booking;
    }
}
As you can see, implementation of a service requires you to implement the service contract interface and
provide your business logic. Apart from providing concrete implementation of the service contract, you
can control other aspects of the service behavior, such as the instantiation and concurrency model of the
service:
Instantiation. When you send a request to a service, the request is executed in an instance of the
service class. The service instantiation controls when new instances of your service class are created.
Concurrency. Each request in WCF runs in its own thread. But when several requests running in
different threads are executing in parallel, they might attempt to use the same service instance
depending on the instantiation mode. The concurrency setting controls how many requests can use
the same service instance concurrently.
The following code example demonstrates how to set the instantiation and concurrency of a service.
The instancing and concurrency modes are controlled by the [ServiceBehavior] attribute. You can
control the instancing mode by adding the InstanceContextMode parameter, and the concurrency
mode by adding the ConcurrencyMode parameter.
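For example, the following attribute configures a per-session, single-threaded service; the class name and mode values are chosen for illustration.

```csharp
using System.ServiceModel;

[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.PerSession, // one instance per client session
    ConcurrencyMode = ConcurrencyMode.Single)]            // one request at a time per instance
public class HotelBookingService : IHotelBookingService
{
    // service operations ...
}
```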
Per Call. A new instance is created for each request, and is destroyed when the request completes.
Per Session. A new instance is created per client connection (session) and is destroyed when the
client disconnects or is idle for too long (the default idle time is 10 minutes). This is the default
setting.
Single. A single instance of the service class serves all requests for the lifetime of the service host.
When you use the Per Session or Single instancing modes, you can use one of the following concurrency
modes to control the number of requests the same service instance can use at the same time:
Single. Only a single request can execute in an instance at a time. This is the default setting.
Reentrant. As with Single, only a single request can execute in an instance at a time. However, if your
instance method calls another service, then instead of the instance being blocked while waiting for
the call to return, the instance is released, and a waiting request can start using it.
Note: When you use the Per Call instancing mode, each request executes in its own service
class instance, eliminating the need to manage concurrency issues.
WCF Sessions are not covered in depth in this course. For more information about instances, concurrency,
and sessions in WCF, see:
Handling Exceptions
Applications must handle errors. These can
include, for example, failed validation of user
input or exceptions that may be thrown when the
application tries to access a resource such as a file
or a database.
When an error occurs in a service, you should not simply let the .NET exception propagate to the client,
for the following reasons:
The client may not understand the exception. The client may be running on a different platform, on
which .NET Framework exceptions have no meaning.
A .NET Framework exception exposes implementation details (stack trace, error codes). This is a bad
practice because the client should not be familiar with the service implementation. For example,
hackers can take advantage of information included in exceptions, such as database name, and file
paths on your server.
If an exception cannot be thrown back to the client, what then is the solution? The answer is SOAP
fault messages. Fault messages are special messages, in XML format, that contain data about an error that
occurred on the service and where it originated. By default, any exception in WCF, whether unhandled or
thrown deliberately by the service developer, will result in the service returning a SOAP fault message that
contains a general error description.
Fault messages are part of the SOAP standard. They can be processed and handled by all compliant
platforms, making them interoperable.
The following message shows the default fault message that is returned for unknown errors.
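A typical default fault returned by WCF resembles the following; the listing is abbreviated and the exact wording can vary between .NET Framework versions.

```xml
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body>
    <s:Fault>
      <faultcode xmlns:a="http://schemas.microsoft.com/net/2005/12/windowscommunicationfoundation/dispatcher">a:InternalServiceFault</faultcode>
      <faultstring>The server was unable to process the request due to an internal error.
      For more information about the error, either turn on IncludeExceptionDetailInFaults
      (either from ServiceBehaviorAttribute or from the &lt;serviceDebug&gt; configuration
      behavior) on the server in order to send the exception information back to the client,
      or turn on tracing and inspect the server trace logs.</faultstring>
    </s:Fault>
  </s:Body>
</s:Envelope>
```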
If you read the message in the faultstring element, you will notice that WCF provides an option to
include the actual exception information in the message, if you set the IncludeExceptionDetailInFaults
parameter to true in the [ServiceBehavior] attribute. You also have the option of configuring this
behavior in the service configuration file. You will learn about service configurations in the next lesson,
and see how to use the serviceDebug configuration behavior in Lesson 3, "Configuring and Hosting WCF
Services" in Course 20487.
Note: Including the content of an exception in a SOAP fault message is recommended only
for debugging. As mentioned, exception messages can expose sensitive information about your
service implementation, and non-.NET platforms will not know how to handle this type of
content.
Instead of turning on the IncludeExceptionDetailInFaults flag, you can throw a FaultException that
describes the reason for the problem.
    if (!isRoomAvailable) // hypothetical availability check
    {
        throw new FaultException("No rooms are available for the requested dates.");
    }
    return response;
}
When you throw a FaultException instead of a standard exception, the service includes the exception
text in the fault section of the SOAP envelope without disclosing sensitive exception information such as
the stack trace of the exception. This will make the exception text available to your client, no matter which
platform it is written in.
However, sometimes just writing a fault string is not enough, and you will want the fault to contain more
information, such as the problematic data that caused the exception or a unique error ID number that
your customers can use to contact the service administrator for further investigation. In such a case, you
can use the FaultException<T> generic exception class. The generic argument T specifies the data
contract that holds extra information about the error.
The following code returns a fault with extra details contained in a ReservationFault object.
    if (!isRoomAvailable) // hypothetical availability check
    {
        throw new FaultException<ReservationFault>(
            new ReservationFault { ErrorCode = "RSV-1001" }, // illustrative error code
            "No rooms are available for the requested dates.");
    }
    return response;
}
The ReservationFault class used in the above example is a simple data contract, and does not derive
from any Exception class.
[DataContract]
public class ReservationFault
{
    [DataMember]
    public string ErrorCode { get; set; }
}
Because clients only recognize data contracts whose classes are used in the method signatures of
operations, they will not be aware of a data contract that is used only in a throw statement within the
code. Therefore, you must add the data contract of the fault to the service contract manually. You can
include information about fault contracts (the data contracts used in faults) by adding the
[FaultContract] attribute to the operations that can throw such faults.
The following code shows how to include a fault contract in a service contract.
[ServiceContract]
public interface IHotelBookingService
{
    [OperationContract]
    [FaultContract(typeof(ReservationFault))]
    BookingResponse BookHotel(Reservation request);
    [OperationContract]
    Booking GetExistingReservation(string bookingReference);
    [OperationContract]
    string CancelReservation(string bookingReference);
}
For an example of how to call the BookHotel service operation from a client application, and how to
handle the ReservationFault fault exception, refer to Lesson 4 in this module, "Consuming WCF Services,"
and look at the code example shown in the first topic, "Generating Service Proxies with Visual Studio
2012."
Best Practice: It is advised that all service operations use the [FaultContract] attribute to
inform the client of every kind of fault it can expect to receive.
Question: Why should you avoid returning the entire Exception object from a service?
Demonstration Steps
1. In Visual Studio 2012, open
D:\Allfiles\Mod05\DemoFiles\CreatingWCFService\begin\CreatingWCFService.sln. Explore the
code in the files of the Service project, and notice the use of service contract and data contract
attributes.
3. In the IHotelBookingService interface, decorate the interface with the [ServiceContract] attribute,
and decorate the BookHotel method with the [OperationContract] attribute.
4. Decorate the BookingResponse and Reservations classes with the [DataContract] attribute.
Decorate each of their properties with the [DataMember] attributes.
5. Run the service in debug and test it by using the built-in WCF Test Client. Run the BookHotel
operation with the HotelName set to HotelA, and verify that the response shows the booking
reference AR3254.
Lesson 3
Configuring and Hosting WCF Services
In the previous lesson, we discussed the logical aspects of WCF services: SOAP messaging, service
contracts, data contracts, and service implementation. However, these only explain what the service can
do. There are many other questions to be answered:
What are the security characteristics of the service (such as authentication and encryption)?
These are some of the questions answered in this lesson. The features of WCF that control these aspects
are the service host, service configuration, and endpoint configuration.
This lesson explains how to host a WCF service, how to configure and control the behavior of a service
host, and how to define and configure the service endpoints.
Lesson Objectives
After you complete this lesson, you will be able to:
Self-hosting. Create your own application, such as a Windows Presentation Foundation (WPF)
application or a Windows Service. The host will start after the application starts, and shutdown when
the application shuts down.
Web hosting. Hosts the service in IIS. The host will start after IIS receives the first request to the
service, and shut down when the web application shuts down.
This lesson focuses on the basics of self-hosting with a simple Console application. Module 6, "Hosting
Services," will explain in depth how you can self-host WCF services in Windows Services, and how you can
host WCF services with IIS.
The base class that manages WCF hosts is the ServiceHostBase type, but this class is an abstract class. The
concrete class that you will use to host your services is the ServiceHost type, which is declared in the
System.ServiceModel assembly.
Note: There are other technology-specific service host classes that derive from the
ServiceHostBase class, such as WorkflowServiceHost, which is used to host services that
execute Windows Workflow (WF) activities.
The following code demonstrates how to host a WCF service with the ServiceHost class.
ServiceHost hotelServiceHost = new ServiceHost(typeof(HotelBookingService));
hotelServiceHost.Open();
// ... the service is now listening for requests ...
hotelServiceHost.Close();
A ServiceHost instance can manage a single service type, but it can open many listeners for that service
type, each with a different configuration. For example, a single ServiceHost can listen to both HTTP and
UDP communication, and invoke a service method when a request is received on either of these
transports.
Note: If you have more than one service, you will need a different instance of ServiceHost
for each service.
Before the service is opened by using the Open method, the host requires configuration that instructs it
which communication transports it needs to listen to and at which addresses. You can set this
configuration either in the code itself, before calling the Open method, or in the application configuration
file (app.config). You will learn to create this configuration, both in code and in the configuration file, later
in this lesson.
After the service host has opened, you can close it at any time by calling the Close method. The Close
method will stop the host from listening to any communication, making the service unavailable for clients.
Best Practice: The Open and Close methods can throw an exception if the service host has
difficulties listening to ports, such as if a port is already opened by another application. Although
the code sample shown above does not demonstrate this, it is advised to wrap the Open method
and Close method calls with a try/catch block.
A service endpoint defines how the service is exposed to the clients. A service endpoint answers the
following three questions:
Address. Specifies where the service resides. The address is a Uniform Resource Locator (URL) that is
used by the client applications to locate the service.
Binding. Specifies how clients should communicate with the service. The binding specifies the
message encoding, transport type, security modes, session support, and other protocols.
Contract. Specifies the operations supported by the endpoint. The contract needs to match one of the
contract interfaces implemented by your service class.
These endpoint settings are called the ABCs of an endpoint.
ServiceHost hotelServiceHost = new ServiceHost(typeof(HotelBookingService));
hotelServiceHost.AddServiceEndpoint(
    typeof(Contracts.IHotelBookingService),
    new BasicHttpBinding(),
    "http://localhost:8080/booking/");
hotelServiceHost.Open();
The above code adds a single endpoint to the service with the following configuration:
Contract. The IHotelBookingService service contract. If a service has more than one contract, you
must create several endpoints, one for each contract.
Binding. The endpoint is configured to use BasicHttpBinding. This binding listens to HTTP
communication, and expects XML messages with SOAP envelopes. You can have multiple endpoints
with different bindings. Bindings will be explained in detail in the next topic.
Address. The address http://localhost:8080/booking is the listening address of the endpoint. When
a client sends a message to the service, it will send the message to
http://ServerName:8080/booking/, where ServerName is the DNS or IP address of the server
hosting the service.
The following XML configuration demonstrates how to add an endpoint in the application configuration
file.
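A minimal configuration of this kind might resemble the following sketch; the service name (Services.HotelBookingService) is an assumed fully qualified class name, chosen for illustration.

```xml
<configuration>
  <system.serviceModel>
    <services>
      <service name="Services.HotelBookingService">
        <endpoint
          address="http://localhost:8080/booking/"
          binding="basicHttpBinding"
          contract="Contracts.IHotelBookingService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>
```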
The WCF configuration shown in the above example is contained in the <system.serviceModel>
configuration section group. The <services> section contains the list of services, each in its own
<service> element. The name attribute in the <service> element is set to the fully qualified name of the
service implementation class. Each <service> element can contain <endpoint> elements, and each such
element contains its ABC settings (address, binding, and contract).
Port (optional)
In each case, you can specify the address both in code and in configuration.
The following example demonstrates how to use relative endpoint addresses in the configuration file.
<configuration>
  <system.serviceModel>
    <services>
      <service name="Services.HotelBookingService">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/reservations/" />
          </baseAddresses>
        </host>
        <endpoint
          address=""
          binding="basicHttpBinding"
          contract="Contracts.IHotelBookingService">
        </endpoint>
        <endpoint
          address="secured"
          binding="wsHttpBinding"
          contract="Contracts.IHotelBookingService">
        </endpoint>
        <endpoint
          address="net.tcp://localhost:8081/reservations/"
          binding="netTcpBinding"
          contract="Contracts.IHotelBookingService">
        </endpoint>
      </service>
    </services>
  </system.serviceModel>
</configuration>
In the above example, the service configuration has three endpoints for the IHotelBookingService
contract. The first two endpoints use the http://localhost:8080/reservations/ base address specified in
the <host> element. Of these, the first endpoint uses an empty relative address, so the actual address of
the endpoint is the same as the base address. The second uses the secured relative address, so its actual
address is http://localhost:8080/reservations/secured. The third endpoint uses the
net.tcp://localhost:8081/reservations/ address. The third address cannot use the HTTP base address,
since it uses a different binding (TCP rather than HTTP). You will learn about bindings in the next topic.
Note: The service host matches the address and the base address according to the binding
of the endpoint and the scheme of the base addresses. For example, both basicHttpBinding and
wsHttpBinding use HTTP communication, and therefore the service host will search for a base
address with the HTTP scheme. Therefore, you can have only one base address per URI scheme.
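For example, a host can declare at most one base address for each scheme (the addresses shown are illustrative):
<host>
  <baseAddresses>
    <add baseAddress="http://localhost:8080/reservations/" />
    <add baseAddress="net.tcp://localhost:8081/reservations/" />
  </baseAddresses>
</host>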
The http:// URI scheme used in endpoint addresses is a standard URI scheme, but most other endpoint
addresses use schemes that are WCF-proprietary and have no standard URI structure. This is why some
address schemes have the "net." prefix, such as net.tcp:// and net.pipe://. The soap.udp:// and ws://
schemes are different, because these schemes do have standard URI structures: soap.udp is used in UDP URIs,
and ws is used in WebSocket URIs.
You can also use relative endpoint addresses when you declare endpoints in code. You can specify the
base address of the service host in the ServiceHost constructor method.
The following code demonstrates how to use relative endpoint addresses in code.
// The service implementation class name is illustrative
ServiceHost hotelServiceHost = new ServiceHost(
typeof(HotelBookingService),
new Uri("http://localhost:8080/reservations/"));
hotelServiceHost.AddServiceEndpoint(
typeof(IHotelBookingService), new BasicHttpBinding(), "");
hotelServiceHost.AddServiceEndpoint(
typeof(IHotelBookingService), new WSHttpBinding(), "secured");
hotelServiceHost.Open();
The binding defines which transport is used to deliver messages.
The binding defines how to encode the messages onto the wire.
The binding defines which protocols, such as security and sessions, are required.
A binding in WCF is a combination of these three elements: transport, encoding, and protocols. In WCF,
you can create all the definitions and configurations of the binding in a single place, either in code or in
configuration files, which reduces the amount of work required.
Note: The binding also defines the properties of the communications channel and
messages, such as timeouts and maximum message size.
Predefined Bindings
Instead of setting the three elements of the binding each time you define an endpoint, WCF provides a
collection of predefined bindings for the most common combinations of binding elements. You can use
these bindings with their default values or fine-tune the bindings to your needs. The following are some
typically used predefined bindings:
o BasicHttpBinding
o WSHttpBinding
o NetHttpBinding
o NetTcpBinding
o WSDualHttpBinding
Although most bindings work even in scenarios for which they are not designed, it is a good practice to
choose the correct binding for a given endpoint. There are many considerations to take into account
when deciding which binding to use, and covering all of them is beyond the scope of this topic.
Configuring Bindings
Each binding has a set of configurable properties that you can change to modify the binding to your
needs. You can change the settings either through code or by using the configuration file.
The following example shows how you can configure the basic HTTP binding in the configuration file.
<system.serviceModel>
<bindings>
<basicHttpBinding>
<!-- the binding name is illustrative -->
<binding name="largeMessagesBinding"
sendTimeout="00:10:00"
receiveTimeout="00:30:00"
maxReceivedMessageSize="5000000"/>
</basicHttpBinding>
</bindings>
</system.serviceModel>
The <bindings> element needs to be placed inside the <system.serviceModel> element in the
configuration file.
Inside the <bindings> element, place an element by the name of the binding type that you wish to
change. Note that the name of the binding is in camel case (the first letter of the first word is in
lowercase, and each subsequent word is capitalized).
Inside this element, place a <binding> element and give it a name. Endpoints then reference this
name through their bindingConfiguration attribute in order to use the customized binding.
Inside the <binding> tag, add the attributes and elements that you wish to set.
In the previous example, three attributes were changed. The sendTimeout attribute, which sets the
maximum time a service waits for a message to be sent, was changed to 10 minutes. The receiveTimeout
attribute, which sets the maximum time a service waits until a message is fully received, was changed to
30 minutes. The maxReceivedMessageSize attribute, which sets the maximum allowable size of a
message sent to the service, was set to 5000000 bytes.
To apply this binding to an endpoint, you will need to set the binding configuration of the endpoint
accordingly.
The following example shows how to configure an endpoint with the new binding configuration.
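For example, assuming the binding configuration above was named largeMessagesBinding, the endpoint references it through the bindingConfiguration attribute:
<endpoint
  address=""
  binding="basicHttpBinding"
  bindingConfiguration="largeMessagesBinding"
  contract="Contracts.IHotelBookingService" />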
In addition to modifying the binding in configuration, you can also modify the binding in code. All the
predefined bindings are exposed as .NET classes. You can configure a binding by setting the properties of
the binding instance object.
The following code sets the same binding configuration as the previous code example, this time in code.
BasicHttpBinding basicHttpWithIncreasedSettings = new BasicHttpBinding
{
SendTimeout = TimeSpan.FromMinutes(10),
ReceiveTimeout = TimeSpan.FromMinutes(30),
MaxReceivedMessageSize = 5000000
};
hotelServiceHost.AddServiceEndpoint(
typeof(Contracts.IHotelBookingService),
basicHttpWithIncreasedSettings,
"http://localhost:8080/booking/");
5-22 Creating WCF Services
hotelServiceHost.Open();
Custom Binding
In addition to predefined bindings, you can create your own custom binding. When using a custom
binding, you can select the binding elements that compose the binding. You can define the transport
element, the message encoding, and other elements such as security and transaction support. You can
define custom bindings by adding a <customBinding> element to the <bindings> section in the
configuration file, or by creating a new instance from the CustomBinding class in code.
Note: To use the CustomBinding class in your code, add a using directive for the
System.ServiceModel.Channels namespace.
The following example demonstrates how to create a custom binding that uses HTTP and binary XML
encoding.
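A sketch of such a custom binding in configuration (the binding name is illustrative). Note that the order of the elements matters: protocol elements come first, followed by the encoding element, and the transport element comes last.
<bindings>
  <customBinding>
    <binding name="httpBinaryCompressedBinding">
      <reliableSession />
      <binaryMessageEncoding compressionFormat="GZip" />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>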
The custom binding created in the above example uses HTTP transport. Instead of encoding the message
as plain text it encodes the message by using a WCF-proprietary binary encoding and compresses the
content with GZIP so that it can be sent faster on slow networks. In addition, the binding uses the WS-
ReliableMessaging protocol, which helps to cope with network failures while sending messages from end
to end.
This course does not cover the reliable messaging support of WCF. For more information on reliable
messaging in WCF, see:
Introduction to Reliable Messaging with the Windows Communication Foundation
http://go.microsoft.com/fwlink/?LinkID=298776&clcid=0x409
Question: What are the advantages of using the built-in bindings rather than creating
custom bindings?
If you create different endpoints for different contracts but use the same binding and binding
configuration for all the endpoints, you can use the same address for all the endpoints. In the above
example, both addresses could be replaced with the same address, http://localhost:8080/reservations.
In such a case, WCF will automatically identify which endpoint was addressed by checking the message
headers. Every message sent to a service has an Action header that holds the name of the contract and
the name of the requested operation.
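For example, a message that invokes the BookHotel operation of the IHotelBookingService contract carries an Action header similar to the following sketch (http://tempuri.org is the default contract namespace and is used here for illustration):
<s:Header>
  <a:Action>http://tempuri.org/IHotelBookingService/BookHotel</a:Action>
</s:Header>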
Data contracts
Fault contracts
Service endpoint addresses
Metadata Exchange (MEX) endpoints. You can get the WSDL document by calling a special service
endpoint that uses SOAP messages with the WS-MetadataExchange protocol. If you expose service
metadata as an endpoint rather than over HTTP, you have more control over the type of transport
you use, and you can control other binding-related configurations. For example, you can decide to
expose the service metadata over TCP and prevent unauthorized clients from accessing
the metadata.
By default, for security reasons, WCF services do not expose their metadata. If you want to expose your
service metadata, you will need to change the behavior of your service. You will recall from earlier in this
module that you can control service behavior through the [ServiceBehavior] attribute. However,
exposing metadata is one of several service behaviors that cannot be controlled through the service code
with the [ServiceBehavior] attribute. Instead, you must configure it either through the ServiceHost class
or in the service configuration file.
Note: Some behaviors, such as concurrency and instancing, are more development-
oriented. Others, such as service metadata, are more deployment-and-hosting-oriented.
Development-related behaviors are set in the service implementation by using attributes, while
hosting-related behaviors are set in the host project (either in the configuration file or in the
service host code, in the ServiceHost instance).
To add a service behavior configuration to your configuration file, open your application configuration file
and perform the following steps:
1. In the <system.serviceModel> section, add a <behaviors> element, and inside it, add a
<serviceBehaviors> element.
Note: In addition to service behaviors, you can also use the <behaviors> section to
configure endpoint behaviors. Endpoint behaviors are beyond the scope of this course.
2. Add a new <behavior> element to the <serviceBehaviors> element, and set its name attribute to a
name describing the use of the behavior.
3. Add service behavior elements to the <behavior> element to configure various behaviors of your
service and your hosting environment.
4. After you create the service behavior configuration, set your service to use that configuration by
adding the behaviorConfiguration attribute to your <service> element and setting the value of the
attribute to the name of the service behavior. As long as you created the service behavior in the
configuration file first, Visual Studio 2012 will open a drop-down list showing the names of the
existing service behaviors when you add the behaviorConfiguration attribute.
You can change the behavior of your service so that it exposes metadata by creating a service behavior
element and adding the <serviceMetadata> element to it.
The following code demonstrates how to configure a service to expose metadata.
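A minimal sketch of such a configuration (the behavior name is illustrative):
<behaviors>
  <serviceBehaviors>
    <behavior name="metadataSupport">
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>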
The above example adds the <serviceMetadata> element, which changes the default behavior of the
service so that it exposes its metadata. You can also control how the service exposes metadata by adding
the httpGetEnabled attribute and setting it to true. This attribute configures the service to expose the
metadata with simple HTTP GET requests.
Note: You can set other attributes of the <serviceMetadata> element to control how the
metadata is exposed. For example, you can set the httpsGetEnabled attribute to true to expose
the metadata over HTTPS.
In addition to the <serviceMetadata> behavior, many more behaviors are available, such as the
<serviceDebug> behavior that was mentioned in Lesson 2, "Creating and Implementing a Contract".
Note: If you are going to host multiple services in the same hosting project, and you want
several services to use the same behavior configuration, you can omit the
behaviorConfiguration attribute from the <service> element and the name attribute from the
<behavior> element. Behaviors without a name will automatically apply to every service that
does not have a specific service behavior configuration.
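For example, the following sketch shows an unnamed behavior configuration that applies to every service in the host that does not specify a behaviorConfiguration attribute:
<serviceBehaviors>
  <behavior>
    <serviceMetadata httpGetEnabled="true" />
  </behavior>
</serviceBehaviors>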
You can also configure service behavior in code before opening your service host.
The following code demonstrates how to add service behaviors to the service host.
ServiceMetadataBehavior metadataBehavior = new ServiceMetadataBehavior { HttpGetEnabled = true };
hotelServiceHost.Description.Behaviors.Add(metadataBehavior);
// ServiceDebugBehavior is present by default, so modify the existing instance
ServiceDebugBehavior debugBehavior =
hotelServiceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
debugBehavior.IncludeExceptionDetailInFaults = true;
hotelServiceHost.Open();
The above example configures two behaviors, ServiceMetadataBehavior and ServiceDebugBehavior. You
can find these two behavior classes in the System.ServiceModel.Description namespace. You can mix adding
behaviors in code and in configuration, but make sure you do not add the same behavior twice.
If you prefer exposing your service metadata with a Metadata Exchange (MEX) endpoint, you can do so by
adding such an endpoint to your service endpoints list.
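A sketch of such a standard endpoint declaration (the relative address is illustrative):
<endpoint address="mex" kind="mexEndpoint" />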
As you can see, the new MEX endpoint has no binding and no contract, but instead has the kind attribute.
The kind attribute is used when creating standard endpoints, such as MEX endpoints and service
discovery endpoints. When you use the mexEndpoint kind, the endpoint is configured to use HTTP and
the IMetadataExchange contract.
Note: Instead of using the kind attribute, you can set the endpoint to use the
mexHttpBinding binding and the IMetadataExchange contract. This has the same result as
using the kind attribute.
You can change the binding attribute if you want to use non-HTTP bindings for the MEX
endpoint, such as mexHttpsBinding and mexTcpBinding, and you can provide additional
binding configuration if required.
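The equivalent explicit declaration would look similar to the following sketch:
<endpoint
  address="mex"
  binding="mexHttpBinding"
  contract="IMetadataExchange" />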
If you find yourself hosting more and more services in a single project, with more and more contracts,
you will end up with a very large configuration file to manage. Instead of writing the configuration file yourself,
you can use the Microsoft Service Configuration Editor tool (SvcConfigEditor.exe).
SvcConfigEditor.exe is a graphical utility that you can use to add new services and service endpoints to
your configuration file, and to edit WCF settings such as the binding configuration and service behaviors.
You can open the Service Configuration Editor in Visual Studio 2012 from the Tools menu (on the
Tools menu, click WCF Service Configuration Editor), or in Solution Explorer, by right-clicking
App.config and clicking Edit WCF Configuration.
Demonstration Steps
1. Open D:\Allfiles\Mod05\DemoFiles\DefineServiceEndpoints\begin\DefineServiceEndpoints.sln.
2. In the ServiceHost project, open the App.config file, locate the <serviceBehaviors> section, and
then add a <behavior> element with the serviceMetadata behavior.
Note: Refer to Topic 6, "Exposing Service Metadata" in this lesson for a code sample of how
to add the serviceMetadata behavior.
3. In the App.config, add a base address to the <service> element by using the address
http://localhost:8733/.
Note: Refer to Topic 3, "Defining a Service Endpoint Address" in this lesson for a code
sample of how to add a base address in configuration.
4. Save the changes you made to the App.config file, open the file with the WCF Configuration Editor,
and then add a new endpoint to the service.
To open the App.config file with the WCF Configuration Editor tool, right-click the App.config file
in Solution Explorer, and then click Edit WCF Configuration.
Property Value
Address booking
Binding basicHttpBinding
Contract HotelBooking.IHotelBookingService
Save the changes you have made to the configuration, and then close the Service Configuration Editor
window.
5. In the ServiceHost project, open the Program.cs, and add an endpoint that uses NetTcpBinding.
6. Run the service host in debug and test it by using the built-in WCF Test Client. Run the BookHotel
operation by using the TCP endpoint. Set the HotelName to HotelA, and then verify that the
response shows the booking reference AR3254.
7. Browse to the base address of the service, and then view the WSDL file with the service metadata.
8. Close the browser and the WCF Test Client. Return to Visual Studio, stop the debugger, and then
close Visual Studio 2012.
Question: Give several examples for creating service configuration from code instead of
using configuration files.
Lesson 4
Consuming WCF Services
The last step in developing a distributed application is consuming the service from a client. There are
several ways to consume a WCF service. The most productive way is to use WCF on
the client side as well. However, you can also consume a service from non-.NET clients.
WCF and Visual Studio 2012 provide different tools that can help you consume services easily. In this
lesson, you will learn how to use Visual Studio 2012 to generate a service proxy class in design-time, and
how to create a service proxy at run time with the ChannelFactory<T> generic class. You will also learn
how to use the proxy created with these two techniques to call your service.
Lesson Objectives
After you complete this lesson, you will be able to:
Generate a client proxy with the Add Service Reference dialog box of Visual Studio 2012.
1. Transform the method call and its parameters into a request message, according to the service
contract and data contracts.
2. Open a channel to the service and send the message to the service, according to the required
transport and other binding settings.
3. Wait for the service to respond to the request and send its response.
If you want to use the proxy pattern to consume a service, you need to build a class that implements the
service contract and that is responsible for all the transformations and communication with the service.
WCF can build those proxy classes by using the Add Service Reference dialog box of Visual Studio 2012.
This tool reads WSDL documents, extracts the service contract, and creates proxy classes that match the
service contracts. In addition, this tool also creates data classes according to the data contracts specified in
the WSDL file. To use the Add Service Reference dialog box:
1. In Solution Explorer, right-click your project, and then click Add Service Reference.
2. In the Add Service Reference dialog box, enter the WSDL file address of the service, and then click
Go. WCF will try to connect to the service and request the service's WSDL file.
Note: If you are trying to consume a WCF service that you have developed, make sure the
service has exposed its WSDL document. To expose the document, the service must use the
ServiceMetadata service behavior.
3. After Visual Studio finds the WSDL, the list of service contracts is displayed. Enter the name of the
namespace in which you want the proxy classes to be created, and then click OK. WCF will create the proxy
classes required for every service contract, and data classes for every data contract exposed in the WSDL.
Visual Studio will also place all service endpoint configurations in the configuration file of the client.
This allows you to use the proxy easily with any of the service's endpoints.
The following screenshot shows the Add Service Reference dialog box in Visual Studio 2012.
Each of the generated proxies is named according to the name of the contract, without the I prefix, and
will be appended with the suffix Client. For example, if the name of the service contract is
IHotelBookingService, then the generated proxy class will be named HotelBookingServiceClient.
To use the generated proxy, create an instance of it in your code, and use it to call the service.
The following example demonstrates how to use a generated proxy to call a service.
// The proxy and data classes are generated by the Add Service Reference dialog box
HotelBookingServiceClient proxy = new HotelBookingServiceClient();
Reservation reservation = new Reservation(); // set the reservation details here
try
{
proxy.BookHotel(reservation);
}
catch (FaultException<ReservationFault> reservationEx)
{
Console.WriteLine(reservationEx.Message + Environment.NewLine +
reservationEx.Detail.ErrorCode);
}
catch (FaultException faultEx)
{
Console.WriteLine(faultEx.Message);
}
catch (Exception ex)
{
Console.WriteLine("An unknown error has occurred: " + ex.Message);
}
The above example creates a proxy object from the generated HotelBookingServiceClient class, a
reservation object from the generated Reservation class, and then calls the BookHotel method of the
proxy. Calling this method will invoke the BookHotel operation in the service.
Note: One of the risks in working with proxies is that developers can forget that they are
using an object that crosses the boundary of the application, and possibly even the device. Be
aware that, though it might seem that you are working with a local object, there is an underlying
mechanism that adds overhead to each method call. Network communication, serialization, and
many other factors can impose a latency penalty.
The above code example also has a try/catch block to handle possible exceptions and service faults. The
code handles the fault exception that returns a ReservationFault object, a more general fault exception
for any other unknown service faults, and general exception handling in case of a more catastrophic
exception, such as a communication exception. You can see the server-side implementation of the
ReservationFault class in Lesson 2, "Creating and Implementing a Contract," in the "Handling Exceptions"
topic.
If the service contract changes over time and you want your proxies to reflect the new state of the
contract and/or data contract, you do not have to delete the service reference and start all over again. The
Add Service Reference dialog box also supports an update option. In Solution Explorer, expand the
Service References folder under your project, select the service reference that you want to update, right-
click it, and then click Update Service Reference.
If you want to create a cleaner configuration file, you can use the Svcutil.exe tool, which is similar to the
Add Service Reference dialog box. However, you can use the Svcutil.exe tool from a command prompt,
and it offers more options than the Add Service Reference dialog box, including metadata export and
service validation.
Adding a service reference requires the service metadata to generate a proxy, while
ChannelFactory<T> requires the service contract interface as a generic type parameter.
To use the ChannelFactory<T> generic class, your client needs to have access to the service contract
interface and the data contracts. You can achieve this through either a shared assembly or a shared, linked
C# file that contains the service interface.
There are two ways to use the ChannelFactory<T> generic class: you can call the static
ChannelFactory<T>.CreateChannel method with a binding and an endpoint address, or you can create a
ChannelFactory<T> instance and call its CreateChannel instance method.
The following code example demonstrates the different ways to use ChannelFactory<T>. The endpoint
address and configuration name are illustrative.
IHotelBookingService proxyA =
ChannelFactory<IHotelBookingService>.CreateChannel(
new BasicHttpBinding(),
new EndpointAddress("http://localhost:8080/reservations/"));
// "BookingHttpEndpoint" must match a <client> endpoint name in the configuration file
ChannelFactory<IHotelBookingService> factory =
new ChannelFactory<IHotelBookingService>("BookingHttpEndpoint");
IHotelBookingService proxyB = factory.CreateChannel();
(proxyA as ICommunicationObject).Open();
(proxyB as ICommunicationObject).Open();
proxyA.BookHotel(reservation);
proxyB.BookHotel(reservation);
(proxyA as ICommunicationObject).Close();
(proxyB as ICommunicationObject).Close();
Note: When you use the Add Service Reference dialog box, the generated proxy class
derives from the ClientBase<T> abstract class. Under the hood, this abstract class uses
ChannelFactory<T> to generate the inner proxy that is called by the generated proxy class.
The first CreateChannel method call receives two parameters: a binding object and the service endpoint
address. The third part of the service endpoint's ABC, the contract, is passed as the generic type of the
ChannelFactory<T> class. If you choose to use the second technique, by creating an instance of
ChannelFactory<T>, you can either call the constructor with a binding instance and a service address, as
you do with the static method, or pass a string representing the name of an already configured endpoint,
as shown in the example.
After creating the proxy objects, their channels are opened by calling the ICommunicationObject.Open
method. When you call the CreateChannel method, it dynamically creates a proxy object that
implements both the IHotelBookingService and the ICommunicationObject interfaces. You can use the
ICommunicationObject interface to open and close the communication channel manually, as well as
registering to channel-related events, such as Opening, Closing, and Faulted. If you do not open the
channel manually by calling the Open method, it will be opened when you send the first request to the
service.
Note: Opening the channel can take several seconds if there is a lengthy negotiation
process between the client and the service, for example when you call a secured service that
requires the client to authenticate. In such cases, opening the communication channel ahead of
time, when the application starts or the form is loaded, can save some time when calling the
service for the first time.
For the above example to work, you need to have a <client> endpoint configuration element in the
application configuration file of your client, and you need to set the name attribute of the element to the
name you used in the constructor.
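A sketch of such a client configuration, assuming the endpoint name passed to the constructor was BookingHttpEndpoint:
<system.serviceModel>
  <client>
    <endpoint
      name="BookingHttpEndpoint"
      address="http://localhost:8080/reservations/"
      binding="basicHttpBinding"
      contract="Contracts.IHotelBookingService" />
  </client>
</system.serviceModel>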
The advantage of using channel factories is in how they handle breaking changes in contracts. Breaking
changes are changes that force you to fix your proxy code, such as when another parameter is added to
an operation, or when the name of a data contract is changed. If you use channel factories, a breaking
change will stop your code from compiling, but if you use a generated proxy, you might end up having
exceptions at run time.
Demonstration Steps
1. Open D:\Allfiles\Mod05\DemoFiles\AddingServiceReference\begin\AddServiceReference.sln.
In the ServiceHost project, open the App.config file, and then view the service configuration,
including the endpoint configuration and service behaviors.
Question: What are the advantages and disadvantages of using the Add Service Reference
dialog box of Visual Studio 2012?
Demonstration Steps
1. Open D:\Allfiles\Mod05\DemoFiles\UsingChannelFactory\begin\UsingChannelFactory.sln, and
then view the service configuration, including the endpoint configuration and service behaviors.
2. In the ServiceClient project, add a reference to the Common project and the System.ServiceModel
assembly.
3. In the ServiceClient project, open the Program.cs file, and then add the following code before the
commented code.
ChannelFactory<IHotelBookingService> serviceFactory =
new ChannelFactory<IHotelBookingService>
(new BasicHttpBinding(),
"http://localhost:8733/HotelBooking/HotelBookingHttp");
IHotelBookingService proxy = serviceFactory.CreateChannel();
4. In the Main method, uncomment the code, run the service and the client, and then test whether the
client can connect to the service.
The console application should display the message: Booking response: Approved, booking reference:
AR3254.
Question: What are the requirements for using the ChannelFactory<T> generic class?
Objectives
After completing this lab, you will be able to:
Create service and data contracts, and implement the service contract.
Lab Setup
Estimated Time: 40 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. If you executed a later lab before this one, follow these instructions:
In the Snapshots pane, right-click the StartingImage snapshot and then click Apply.
In the Apply Snapshot dialog box, click Apply.
4. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
5. In the Action pane, click Connect. Wait until the virtual machine starts.
6. Sign in using the following credentials:
7. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
8. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
10. Verify that you received credentials to sign in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.
Set the access modifier of the class to public and decorate it with the [DataContract] attribute.
Name Type
FlightScheduleId int
Status FlightStatus
Class SeatClass
Name Type
TravelerId int
ReservationDate DateTime
DepartureFlight TripDto
ReturnFlight TripDto
Note: Review the ReservationCreationFault class in the Faults folder. The class will be
used later, as a data contract object to mark a fault reservation.
2. Decorate the interface with the [ServiceContract] attribute and set the Namespace parameter of the
attribute to http://blueyonder.server.interfaces/.
3. Add the CreateReservation method to the interface, and define it as an operation contract.
The method should receive a parameter named request of type ReservationDto, and return a string.
Decorate the class with the [ServiceBehavior] attribute and set the attribute's
InstanceContextMode parameter to InstanceContextMode.PerCall.
2. In the service implementation class, implement the service contract.
Create the CreateReservation method, but do not fill it with code yet.
Note: At this point, the class will not compile because no value is returned from the
method. Ignore this for now, as you will soon write the missing code.
3. Start implementing the CreateReservation method by verifying whether the request contains
information for the departure flight. If the departure flight information is missing, throw a fault
exception.
If the request.DepartureFlight property is null, throw a FaultException of type
ReservationCreationFault.
In the FaultException constructor, set the second constructor parameter to the reason string
"Invalid flight info".
4. Continue implementing the CreateReservation method by creating a new Reservation object. Set
its properties according to the following table.
Property Value
TravelerId request.TravelerId
ReservationDate request.ReservationDate
Initialize the reservation.DepartureFlight property with a new Trip object, and set its properties
according to the following table.
Property Value
Class request.DepartureFlight.Class
Status request.DepartureFlight.Status
FlightScheduleID request.DepartureFlight.FlightScheduleID
5. Continue implementing the CreateReservation method by checking whether the return flight is not
null. If the request.ReturnFlight is not null, add a trip to the reservation object you created.
Initialize the reservation.ReturnFlight property with a new Trip object, and set its properties
according to the following table.
Property Value
Class request.ReturnFlight.Class
FlightScheduleID request.ReturnFlight.FlightScheduleID
6. Continue implementing the CreateReservation method by adding the new reservation object to the
database:
Create a new ReservationRepository object and initialize it with the ConnectionName field.
Use the ReservationUtils.GenerateConfirmationCode static method to generate a confirmation
code and assign it to the reservation.ConfirmationCode property before saving the new
reservation.
Call the Add and then the Save methods of the repository to save the newly created reservation.
Note: To make sure the context and the database connection are disposed properly at the
end of the service operation, you should place the repository-related code in a using block.
Results: You will be able to test your results only at the end of the second exercise.
1. Configure the console project to host the WCF service with TCP endpoint
Task 1: Configure the console project to host the WCF service with TCP endpoint
1. In the BlueYonder.BookingService.Host project, add a reference to the System.ServiceModel
assembly.
Note: The begin solution already contains all the project references that are needed for the
project. This includes the BlueYonder.BookingService.Contracts,
BlueYonder.BookingService.Implementation, BlueYonder.DataAccess, and
BlueYonder.Entities projects, as well as the Entity Framework 5.0 package assembly.
Add the <system.serviceModel> element to the configuration, and in it add the <services>
element.
In the <services> element, add a <service> element, and then set its name attribute to
BlueYonder.BookingService.Implementation.BookingService.
4. Add an endpoint configuration to the service configuration you added in the previous step.
In the <service> element, add an <endpoint> element with the following attributes.
Attribute Value
name BookingTcp
address net.tcp://localhost:900/BlueYonder/Booking/
binding netTcpBinding
contract BlueYonder.BookingService.Contracts.IBookingService
<connectionStrings>
<add name="BlueYonderServer" connectionString="Data
Source=.\SQLEXPRESS;Database=BlueYonder.Server.Lab5;Integrated Security=SSPI"
providerName="System.Data.SqlClient" />
</connectionStrings>
Note: You can copy the connection string from the ASP.NET Web API services
configuration file in
D:\Allfiles\Mod05\Labfiles\begin\BlueYonder.Server\BlueYonder.Companion.Host\Web.c
onfig. Make sure you change the database parameter to BlueYonder.Server.Lab5.
2. In the Main method, add the following code to initialize the database.
3. Remaining in the Main method, add code to host the BookingService service class.
Create a new instance of the ServiceHost class for the BookingService service class.
Register to the service host's Opening and Opened events with the ServiceOpening and
ServiceOpened methods, respectively.
Open the service host, wait for user input, and then close the service host.
Note: Refer to Lesson 3, "Configuring and Hosting WCF Services", Topic 1, "Hosting WCF
Services", for an example of how to open the service host, wait for user input, and then close the
service host.
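The hosting steps above can be sketched as follows; the ServiceOpening and ServiceOpened event handler methods are assumed to already exist in the host project:

```csharp
// Create a service host for the BookingService service class
ServiceHost host = new ServiceHost(typeof(BookingService));

// Register to the host's Opening and Opened events
host.Opening += ServiceOpening;
host.Opened += ServiceOpened;

// Open the host, wait for user input, and then close the host
host.Open();
Console.WriteLine("Service is running. Press Enter to close the host.");
Console.ReadLine();
host.Close();
```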
4. Run the BlueYonder.BookingService.Host project in debug mode and verify it opens without
throwing exceptions. Keep the console window open, as you will need to use it later in the lab.
Results: You will be able to start the console application and open the service host.
Exercise 3: Consuming the WCF Service from the ASP.NET Web API
Booking Service
Scenario
After you create the WCF service, you can consume it from the ASP.NET Web API web application. In this
exercise, you will configure the client endpoint in the ASP.NET Web API web application, and use the
ChannelFactory<T> generic class to create a client proxy. You will then use the new proxy to call the
WCF service, create the reservation on the backend system, and get the reservation confirmation code in
return.
1. Add a reference to the service contract project in the ASP.NET Web API projects
Task 1: Add a reference to the service contract project in the ASP.NET Web API
projects
1. Open the D:\AllFiles\Mod05\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln
solution file in a new Visual Studio 2012 instance, and add the
BlueYonder.BookingService.Contracts project from the
D:\Allfiles\Mod05\Labfiles\begin\BlueYonder.Server\BlueYonder.BookingService.Contracts
folder to the solution.
2. Add the <system.serviceModel> element to the configuration, and in it add the <client> element.
3. In the <client> element, add an <endpoint> element with the following attributes.
Attribute   Value
address     net.tcp://localhost:900/BlueYonder/Booking
binding     netTcpBinding
contract    BlueYonder.BookingService.Contracts.IBookingService
name        BookingTcp
Note: Make sure you set the name attribute of the endpoint to BookingTcp, as you will
use this endpoint name in code to locate the endpoint configuration.
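The client-side configuration described above should resemble this sketch:

```xml
<system.serviceModel>
  <client>
    <!-- The endpoint name BookingTcp is used in code to locate this configuration -->
    <endpoint name="BookingTcp"
              address="net.tcp://localhost:900/BlueYonder/Booking"
              binding="netTcpBinding"
              contract="BlueYonder.BookingService.Contracts.IBookingService" />
  </client>
</system.serviceModel>
```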
1. Create a ChannelFactory<IBookingService> instance. In the channel factory constructor, use the BookingTcp endpoint configuration name.
2. In the CreateReservationOnBackendSystem method, uncomment the code that creates the
TripDto and ReservationDto objects.
3. Before the try block, create the channel by calling the factory.CreateChannel method.
Store the newly created channel in a variable of type IBookingService.
4. In the try block, call the CreateReservation operation of the Booking service.
The CreateReservation operation returns the confirmation code string. Store the returned string in a
local variable.
After calling the service, close the channel by casting the channel object to the
ICommunicationObject interface, and then call its Close method.
Return the confirmation code string and remove the return statement that is currently at the end of
the method.
Note: Refer to Lesson 4, "Consuming WCF Services", Topic 2, "Creating a Service Proxy with
ChannelFactory<T>", for an example on how to close a channel.
5. Change the first catch block to catch the fault exception type:
catch (FaultException<ReservationCreationFault> fault)
Create the HttpResponseMessage by using the Request.CreateResponse method. Set the status
code to BadRequest (HTTP 400), and the content of the message to the description of the fault.
Abort the connection in case of Exception, by calling the Abort method on the proxy object.
6. In the second catch block, abort the connection before calling the throw statement.
Abort the connection by casting the channel object to the ICommunicationObject interface, and
then calling its Abort method.
7. In the Post method, before adding the new reservation to the repository, call the Booking WCF
service and set the reservation's confirmation code.
Note: The reservation should be saved to the database with the confirmation code you got
from the WCF service, so make sure you set the confirmation code property before adding the
reservation to the repository.
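Taken together, the proxy-related steps can be sketched as follows. The reservationDto variable and the fault detail's Description property are assumptions based on the lab's conventions, not the exact begin-solution code:

```csharp
// Create a channel using the BookingTcp endpoint configuration
var factory = new ChannelFactory<IBookingService>("BookingTcp");
IBookingService proxy = factory.CreateChannel();
try
{
    // Call the WCF service and keep the returned confirmation code
    string confirmationCode = proxy.CreateReservation(reservationDto);
    ((ICommunicationObject)proxy).Close();
    return confirmationCode;
}
catch (FaultException<ReservationCreationFault> fault)
{
    // Abort the channel and translate the fault to an HTTP 400 response
    ((ICommunicationObject)proxy).Abort();
    throw new HttpResponseException(
        Request.CreateResponse(HttpStatusCode.BadRequest, fault.Detail.Description));
}
catch (Exception)
{
    // Abort the channel before rethrowing
    ((ICommunicationObject)proxy).Abort();
    throw;
}
```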
3. Search for New and purchase a new trip from Seattle to New York.
4. Go back to the 20487B-SEA-DEV-A virtual machine, and debug the BlueYonder.Companion and
BlueYonder.Server solutions. Verify the ASP.NET Web API service is able to call the WCF service.
Continue running both solutions and verify the client is showing the new reservation.
Results: After you complete this exercise, you will be able to run the Blue Yonder Companion client
application and purchase a trip.
Question: Why should you use the NetTcpBinding in your endpoints instead of
BasicHttpBinding or WSHttpBinding?
Question: Why did you create the service contract and service implementation in separate
C# projects?
You then learned the basics of WCF services: defining a service contract interface and data contracts,
implementing a service, and the proper way to handle exceptions. Next, you learned about hosting WCF
services; the responsibilities of the host, the options that are available in hosting, and the details of how to
self-host. You also learned how to configure service endpoints in code and in the configuration file of
your host. You then learned how to expose your service metadata, including information on the various
contracts and endpoints, by using service behaviors and MEX endpoints.
Finally, you learned how to create a proxy that can call a service by using the Add Service Reference
dialog box of Visual Studio 2012, or by using the generic ChannelFactory<T> class.
This module covered the fundamentals of creating WCF services. If you wish to go in depth into other
features of WCF, such as messaging patterns, WCF extensibility, and how to secure WCF services, refer to
Appendix 1, Designing and Extending WCF Services, and Appendix 2, Implementing Security in WCF
Services in Course 20487.
Review Question(s)
Question: When should you favor WCF over ASP.NET Web API?
Question: When should you use the Add Service Reference dialog box, and when should
you use the ChannelFactory<T> generic class?
Tools
WCF Test Client, Microsoft Service Configuration Editor, Svcutil.exe
Module 6
Hosting Services
Contents:
Module Overview 6-1
Module Overview
The most important aspect of implementing a service is hosting it so that clients can access it. For both
Windows Communication Foundation (WCF) and ASP.NET Web API, the host is responsible for allocating
all the resources required for the service. The host opens listening ports, creates an instance of a service
when a request arrives, and allocates memory and threads as required. If the host fails, the service fails.
There is a one-to-one dependency between the host and the service. The reliability and performance of
the host directly affects the quality of the service.
You can host WCF services in a self-hosted Windows application or in IIS, and you can also self-host
your ASP.NET Web API services. In this module, you will explore the various ways of
hosting your services on-premises, and the benefits each type of host provides, in relation to issues such
as reliability, performance, and durability.
Apart from deciding on the type of hosting service to use, Web-host or self-host, you also need to think
about the hosting environment for your services - whether it will be on-premises or in the cloud platform.
Considerations for deciding which environment to use include:
Specific hardware requirements. When hosting on-premises, you have more control over the
hardware of your server than in the cloud platform. In the latter case, you only know how many
Central Processing Units (CPUs), memory, and disk space your virtual machines have.
Scaling requirements. Hosting services on-premises requires predicting usage and provisioning
servers accordingly. In addition to the costs involved with over-provisioning, on-premises hosting can
also be affected by under-provisioning caused by rapid growth and an unpredictable increase in
demand. Hosting your services in the cloud lets you use the elasticity of the cloud platform to
scale out when more resources are required.
Legal requirements. In some countries, certain types of data, such as personal data, can only be
stored inside the boundaries of the country. For on-premises hosting, this is achieved easily, but when
you host your services and data in the cloud platform, your data might be copied between data
centers in different locations on the globe, for reasons such as availability and backup.
Your decisions related to hosting type and hosting environment, although seemingly independent of each
other, can affect each other. For example, if you choose to host your services in the Windows Azure cloud
environment, you need to choose between hosting your services in a web role, a worker role, or a Web
Site. If you choose to self-host your service, you only have the worker role option, but if you want to use
Web-hosting for your service, you need to choose between a web role and a Web Site. In this module,
you will learn more about hosting services in Windows Azure and supported hosting types.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After you complete this module, you will be able to:
Host services in the Windows Azure cloud environment by using Windows Azure Cloud Services and
Web Sites.
Lesson 1
Hosting Services On-Premises
When you want to host a web service on-premises, you can host it by using a Windows service or Internet
Information Services (IIS). A Windows service is a long-running application that runs in the background.
Windows services have no user interfaces, and do not produce any visual output. Services run in the
background while a user performs or executes any other task in the foreground, but they also run when a
user is not logged on. This makes Windows services a good candidate for classic server applications, such
as an email server or a File Transfer Protocol (FTP) server.
Running without a user interface poses a debugging and operations challenge, because the user is not
notified about warnings or errors. To overcome this, Windows services use the Windows Event Log and
other logging frameworks to record tracing information and notify the system administrator about error
conditions.
IIS is a Windows web server that hosts web applications.
This lesson explains how to host WCF services and ASP.NET Web API services in a Windows service, and
how to host WCF services in IIS.
Note: IIS can also host ASP.NET Web API services. This is covered in Module 3 "Creating
and Consuming ASP.NET Web API Services" in the Lesson "Hosting and Consuming ASP.NET Web
API Services" in Course 20487.
Lesson Objectives
After you complete this lesson, you will be able to:
Host WCF services in a Windows service.
The Windows operating system manages the loading and execution of Windows services. In addition, the
Microsoft Services Management Console is a UI tool that you can use for managing Windows Services and
their configuration settings. To open the Microsoft Services Management Console, open Control Panel
from the Start screen, open Administrative Tools, and then open Services.
You can configure a Windows service to start automatically when the system finishes booting, and to
restart in case a failure occurs. Both are useful features for WCF service hosts. Another difference between
a Windows service and a foreground application started by the user is that a Windows service runs within
a security context that is different from the security context of the user. By default, a specialized local
identity, such as Network Service or Local System, is used to run Windows Services, but you can change it
to suit your needs.
For more information about Service User Accounts, see
The following code example shows a Windows Service that hosts the BookingService WCF service.
It overrides the OnStart method to open the service host and the OnStop method to close it.
public class BookingWindowsService : ServiceBase
{
    private ServiceHost _serviceHost;

    public BookingWindowsService()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        _serviceHost = new ServiceHost(typeof(BookingService));
        try
        {
            // Open the host and start receiving incoming messages
            _serviceHost.Open();
        }
        catch (Exception e)
        {
            // If an error occurred while trying to open the service, set the field to null to prevent
            // it from being closed when the Windows Service stops.
            _serviceHost = null;
            Debug.WriteLine("Unable to open service.\n{0}", e.Message);
        }
    }

    protected override void OnStop()
    {
        if (_serviceHost != null)
        {
            _serviceHost.Close();
            _serviceHost = null;
        }
    }
}
Windows services also support the pause and resume actions. The WCF service host does not provide a
pause and resume feature. If you close the WCF service host, you cannot open it again, and you will need
to create a new ServiceHost object. Therefore, pausing and resuming services is not common when
hosting WCF services. However, you can create your own logic for pausing services, for example by
checking a global Boolean flag before executing your service operation's code, and by overriding the
OnPause and OnContinue methods of the ServiceBase class.
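A minimal sketch of this pattern, assuming each service operation checks the flag before doing work:

```csharp
public class BookingWindowsService : ServiceBase
{
    // Global flag checked by service operations before doing work
    public static volatile bool IsPaused;

    public BookingWindowsService()
    {
        // Required for OnPause/OnContinue to be invoked by the operating system
        CanPauseAndContinue = true;
    }

    protected override void OnPause()    { IsPaused = true; }
    protected override void OnContinue() { IsPaused = false; }
}

// Inside a service operation:
// if (BookingWindowsService.IsPaused)
//     throw new FaultException("The service is temporarily paused.");
```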
Note: Instead of creating a global Boolean flag, you can use the WCF Extensible objects
that were discussed in Lesson 3 "Extending the WCF Pipeline", in Appendix A, "Designing and
Extending WCF Services" in Course 20487.
Creating an Installer
After you create the Windows Service code, you need to create an installer class. The installer component
provides information about the Windows Service, such as its name, a short description, its start mode
(automatic/manual), and the Windows identity under which it runs.
To create an installer for your Windows Service, open the Windows Service class with the designer and in
the designer view, right-click, and click Add Installer. This will create a new class that derives from the
System.Configuration.Install.Installer class. In the new installer class, Visual Studio will create an
instance from each of the following classes:
ServiceProcessInstaller. Configures process-wide settings, such as the Windows identity under which the service runs.
ServiceInstaller. Configures the service name, description, and start mode.
You can configure each of these settings by opening the service installer in the designer, clicking either
the service process installer or the service installer, and setting their properties in the Properties window.
The following image is a screenshot of the service installer in Visual Studio 2012.
You should test your code carefully if you plan to execute it within a Windows service. The fundamental
difference between development and production environments is that a production Windows service runs
within a security context that is different from the development security context of the user. Issues such as
missing or incorrect permissions are common when switching between the security context of the user,
which is often an administrative account, and the security context of the Windows Service, which might be
a more restricted account.
For more information on creating Windows Service applications with Visual Studio 2012, see
When hosting services, IIS provides reliability features for the hosting process:
Recycles your service if it uses too much CPU or memory over time.
Protects your service with rapid fail protection if your service fails or is unresponsive for a long time.
Note: Module 3, "Creating and Consuming ASP.NET Web API Services", Lesson "Hosting
and Consuming ASP.NET Web API Services" in Course 20487 provides more explanation of IIS
core capabilities.
IIS running on Windows Server 2008 and later versions supports most of the built-in transport types of
WCF, including HTTP, HTTPS, TCP, Message Queuing and Named pipes. The new User Datagram Protocol
(UDP) transport released with WCF 4.5 is not yet supported by IIS. Earlier versions of IIS (6.0 and
earlier) support only bindings that use the HTTP and HTTPS transports.
By default, WCF services that are hosted in IIS only support HTTP and HTTPS transports. To support other
transports, such as TCP and Named Pipes, you need to configure your Web application.
For more information on Configuring the Windows Process Activation Service for Use with Windows
Communication Foundation, see:
Configuring the Windows Process Activation Service for Use with Windows Communication
Foundation
http://go.microsoft.com/fwlink/?LinkID=313734
IIS uses a hierarchical-style directory management, where each virtual directory maps to a folder in the file
system. This virtual directory contains static files such as images and webpages, in addition to web
applications such as ASP.NET Web API services and WCF services. Because of the ability of the IIS to host
multiple web applications on a single server, you can deploy several WCF services to IIS, each of them
running independent of each other.
Note: If different web applications share the same application pool, these applications also
share the same worker process. If one of the services causes its worker process to fail, for example
due to a critical exception, all the hosted applications in the worker process will also fail. To
prevent such a scenario, consider separating web applications to different application pools.
One of the differences between a self-hosted WCF service and an IIS-hosted WCF service is how you
construct the endpoint address in the service configuration, and how you configure it on the client-side.
On the client side, the address of a WCF service is constructed of two parts:
1. The virtual directory path in IIS. IIS has a set of hierarchical virtual directories. The path where you
place your web application will be the path used by clients to access your service.
2. The .svc file name of the service. When a client requests a resource from IIS, it uses a URL that
points to a file, such as a .html, a .aspx, or a .asmx file. IIS uses the file extension part of the URL to
determine which handler should handle the request. For IIS to identify a request as being sent to a
WCF service, you need to create a file with the .svc extension. The .svc file has specific content, which
identifies the service type that needs to be hosted and called.
For example, if the virtual directory path to the web application is /MyApps/Booking, and the .svc file
name of the service is bookingService.svc, the URL that the client uses will be
http://serverName/MyApps/Booking/bookingService.svc. The approach is similar to the one used
when your service uses other transport types, such as TCP or Named pipes.
When IIS hosts WCF services in a web application, it does not automatically load a service host for every
service. The first time IIS receives a request for a specific service, the service host is created and opened.
This is also referred to as message-based service activation. The .svc file contains the information required
by IIS to start the service host.
On the service side, IIS automatically sets the base address of your service to the virtual directory path, to
avoid using absolute addresses in your configuration. In addition, IIS does not require you to specify the
.svc file name in the address settings of the service, because IIS identifies the service it needs to call from
the content of the .svc file. Therefore, it only uses the configuration to identify which of the endpoints of
the service is being accessed. Because of this behavior of IIS, the address you set in your service endpoints
is considered as a suffix to the URL that points to the .svc file.
The following example demonstrates the content of a WCF service description file.
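A typical .svc file contains a single ServiceHost directive; the Booking service from this module is used here as an assumed example:

```
<%@ ServiceHost Language="C#"
    Service="BlueYonder.BookingService.Implementation.BookingService" %>
```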
The ServiceHost directive contains all the information IIS requires to create a service host for your service,
and includes the following attributes:
Service. This attribute points to the Common Language Runtime (CLR) type of the service to host.
Factory. Because IIS instantiates the service host on your behalf, this attribute provides a way to inject
a service host instance to use instead of the default service host created by IIS. To do this, create a
new class that derives from the ServiceHostFactory class, and specify its type name in the Factory
attribute.
The web.config file contains the same service configuration as an app.config file. The difference is that you
do not have to use a base address, because IIS always uses the URL of the .svc file as the base address for
the service.
Note: Because IIS does not require you to define base address for your service, and
because WCF supports automatic endpoint configuration, you might find that some web
applications do not have the <services> section in their WCF configuration.
Instead of creating a .svc file for each service, you can supply the same configuration in the web.config file
by adding the <serviceHostingEnvironment> section to the <system.serviceModel> section group.
The following configuration demonstrates how to describe a service without using a .svc file.
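A sketch of such a configuration, with an assumed relative address:

```xml
<system.serviceModel>
  <serviceHostingEnvironment>
    <serviceActivations>
      <!-- Activates the service at ~/Booking.svc without a physical .svc file;
           add a factory attribute here to use a custom service host factory -->
      <add relativeAddress="Booking.svc"
           service="BlueYonder.BookingService.Implementation.BookingService" />
    </serviceActivations>
  </serviceHostingEnvironment>
</system.serviceModel>
```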
If you choose to use the web.config file to configure the service activation, you do not need to create
matching .svc files for your services.
Like the .svc file, you can configure the <add> element of the <serviceActivations> element to use a
service host factory, by adding the factory attribute to the element and setting the attribute to the type
name of the service host factory derived class.
For more information about hosting a WCF service under IIS, see
The following code example uses the HttpSelfHostServer class to host an ASP.NET Web API service
inside a console application.
WebApiConfig.Configure(config);
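The fragment above fits into a host sketch similar to the following; the base address and the WebApiConfig helper class are assumptions based on the course's conventions:

```csharp
// Configure and open a self-hosted ASP.NET Web API server
var config = new HttpSelfHostConfiguration("http://localhost:8080");
WebApiConfig.Configure(config);   // add routing rules, media type formatters, etc.

using (var server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();
    Console.WriteLine("Server is running. Press Enter to exit.");
    Console.ReadLine();
    server.CloseAsync().Wait();
}
```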
Before you start the self-hosted server, you need to configure the ASP.NET Web API environment. To do
this, you need to create an instance of the HttpSelfHostConfiguration class, and apply the required
configuration, such as adding routing rules, and media type formatters.
After you create the configuration, you create an instance of the HttpSelfHostServer class, with the
configuration as a constructor parameter, and then call the OpenAsync method of the class to load the
service.
Note: The underlying implementation of the HttpSelfHostServer uses WCF for its
communication channel. However, the message does not flow through the entire WCF pipeline,
but is rather decoded in the channel stack, and then sent to the ASP.NET Web API routing
mechanism.
One of the scenarios for using self-hosting is running integration tests. To test an ASP.NET Web API
service, you start by testing the controller and its actions. If your message-handling pipeline includes
additional components that need to run as part of the test, such as message handlers and action filters,
you cannot test the controller end-to-end by simply instantiating it and calling one of its action methods.
You need to send an HTTP request to the service, so that the message will flow through the pipeline,
testing each part in the pipeline. Because test cases are independent of each other, this requires
running multiple service hosts, which you can accomplish by using the self-hosting feature. In addition, to
reduce the latency caused by sending HTTP messages over the network, your test client can send
messages to the self-hosted service in-memory. This makes it possible to test code that interacts with the
request and response pipeline, without actually sending an HTTP message over the network. In these
cases, ASP.NET Web API self-hosting can be very useful. For testing, create an instance of the
HttpSelfHostServer class and pass it to an HttpClient instance for it to consume in-memory messages
without creating a network connection.
The following code example creates a self-host and passes it as a constructor parameter to an HttpClient
object for testing.
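A sketch of the in-memory testing technique; the route URL is an assumption:

```csharp
var config = new HttpSelfHostConfiguration("http://localhost:8080");
WebApiConfig.Configure(config);
var server = new HttpSelfHostServer(config);

// HttpSelfHostServer is an HttpMessageHandler, so HttpClient sends requests
// directly into the Web API pipeline without opening a network connection
var client = new HttpClient(server);
HttpResponseMessage response =
    client.GetAsync("http://localhost:8080/api/reservations").Result;
```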
Demonstration Steps
1. Open Visual Studio 2012 and create a new WCF Service Application project named MyIISService.
2. Review the content of web.config file. Observe there is no <services> section in the
<system.serviceModel>. This is because IIS automatically defines the base address according to the
virtual directory where the services are hosted, and the default endpoint configuration of WCF
therefore does not require any specific configuration for each endpoint.
3. Review the content of the Service1.svc file by right-clicking it in Solution Explorer and then clicking
View Markup. When IIS receives a request addressed to the .svc file, it uses the value of the Service
attribute from the file to know which service it needs to call.
4. In Solution Explorer, select the MyIISService project, and then press F5 to debug the service. After
the browser opens, append the path Service1.svc?wsdl to the address and observe the WSDL file.
The WSDL is used for creating the client-side proxy code.
7. Browse to the IIS-hosted service and view its WSDL. Use the address
http://localhost/MyService/Service1.svc?wsdl.
Question: What is the difference in the way you create and configure your WCF service
when hosting in IIS?
The following comparison summarizes the differences between self-hosting and IIS hosting:

Lifetime
  Self-hosting: Service process lifetime is controlled by the operating system, and is not message-activated.
  IIS hosting: IIS shuts down idle services to improve resource management. The service is reactivated when a message is received.

Health
  Self-hosting: No health management.
  IIS hosting: Services are monitored and recycled when an error occurs.

Endpoint Address
  Self-hosting: Configured in the app.config file.
  IIS hosting: Bound to the IIS virtual directory path, which contains the .svc file.

Deployment
  Self-hosting: Requires installation by using installutil.exe.
  IIS hosting: Can be published from Visual Studio into IIS or into a package for future deployment.
Hosting Services
http://go.microsoft.com/fwlink/?LinkID=298814&clcid=0x409
Lesson 2
Hosting Services in Windows Azure
You can create highly scalable services that are easy to deploy and manage by using Windows Azure. When
hosting a service in Windows Azure, you provide the service application and its configuration. Windows Azure
handles hosting, lifecycle management, and the availability of your service.
Windows Azure provides several options for hosting services:
Worker role. Hosts a background process for long-running processing tasks, similar to a Windows service.
Web role. Hosts services that are used as a front-end to the Internet.
Web Site. Similar to web role with faster deployment but fewer features.
Virtual machines. Create virtual machines running Windows or Linux from existing templates or by
using an existing virtual hard disk (VHD) file. Virtual machines are not covered in this course.
This module explains the different options for hosting services in Windows Azure, the differences between
them, and how to use them to host your services.
Lesson Objectives
After you complete this lesson, you will be able to:
Run Windows Azure roles in a local compute emulator.
Explain the differences between Windows Azure Web Sites and Windows Azure web roles.
The following image is a snapshot of the Windows Azure Cloud Service dialog box that Visual Studio 2012
displays when you create a cloud project.
The emulator limits the number of roles and the number of compute instances you can emulate. The
maximum number of roles per deployment is 25 and the maximum core count is 20.
Services that run in the emulator have default administrator privileges, whereas services in Windows
Azure do not have default administrator privileges.
The emulator does not use Windows Azure load balancing capability.
By default, the emulator uses IIS Express. You can change this setting in the Visual Studio 2012
projects properties on the Web tab. Windows Azure uses IIS.
All roles in the emulator run on the same local computer, whereas in Windows Azure roles run on
separate virtual machines.
Web applications hosted with the Windows Azure Emulator can only be accessed from the local
computer running the emulator. Web applications deployed in Windows Azure can be accessed from
any computer on the Internet.
Note: In addition to the application you develop for the role, you can include built-in
modules that will deploy to each role instance. Modules can perform instance configuration, such
as setting remote desktop, or install background services that run in each instance, such as
diagnostics collection. You can select the modules to include in the role by using the role
properties window in Visual Studio 2012, or by manually editing the service definition file of the
cloud project. The Diagnostics module is discussed in depth in Module 10, "Monitoring and
Diagnostics" in Course 20487.
When deploying a cloud project to Windows Azure, all the deployed roles are packaged to a compressed
.cspkg file and the file is uploaded to Windows Azure Storage. After the package uploads, Windows Azure
locates a physical machine that can host roles. It then mounts a virtual machine, running Windows Server
2012 by default, and deploys each role to its assigned virtual machine by downloading the package from
the storage and deploying the required files to each instance. This procedure takes several minutes.
Easy to create and deploy. Cloud Services provide a Platform as a Service (PaaS) environment, where
you supply the application package, and the Windows Azure platform supplies the compute
instances.
Automatic Operating System updates. When you use Cloud Services, the platform is responsible
for keeping your instances up-to-date with the latest Windows updates.
Hardware monitoring. Cloud Services can detect faulty hardware and move your deployment to a
virtual machine on a different physical server.
All the compute instances of the role are accessible from the Internet through a single IP address. This is
achieved by using a load balancer. The load balancer is an important component of Windows Azure,
because it maintains the health of the role and the application deployed to it. Windows Azure uses a
software load balancer to distribute network traffic among role instances. Requests sent to a public
endpoint are distributed among live instances of the role. The load balancer probes each role for its
health at specified intervals and recycles unresponsive roles.
Note: Storing the deployment package in Windows Azure Storage is crucial for the role's
continuity. If any of the role instances breaks down and stops working, another instance is
automatically located instead of the failed one and is assigned to the role. After the new virtual
machine is turned on, the deployment package is downloaded from storage and deployed to the
new instance. Windows Azure Storage is explained in depth in Module 9, "Windows Azure
Storage" in Course 20487.
Note: The staging and production deployment environments have the same hardware
capabilities.
These two environments have the same functionality and support all role features. The difference between
them is in the virtual IP addresses by which the cloud service is accessed. The staging environment URL
contains a globally unique identifier (GUID) such as http://3D535850-3C00-4C8C-B89A-
795B9EF93038.cloudapp.net. In the production environment, the URL is based on a friendlier Domain
Name System (DNS) prefix assigned to the cloud service, such as: http://calculatorservice.cloudapp.net.
http://go.microsoft.com/fwlink/?LinkID=298816&clcid=0x409
Note: Windows Azure Storage will be discussed in detail in Module 9, "Windows Azure
Storage" in Course 20487.
By default, every Web application you add to your cloud project creates a new Web role, and each such
Web role will be deployed to a different set of compute instances. Windows Azure supports deploying
several Web applications to the same Web role by using IIS Web Sites; however, this feature is not
accessible through the Visual Studio 2012 user interface. Instead, you will need to manually edit the cloud
project's service definition file, ServiceDefinition.csdef. Due to the complexity of this process, it
will not be covered in this course.
Back-end processing. Runs a background process that receives processing instructions from a front-
end application, such as a Web application hosted in a Web role. Background processes hosted in
worker roles can receive messages by listening to queues, such as Windows Azure Queues or
Windows Azure Service Bus.
Windows Azure Queue storage. Used for reliable, persistent messaging between roles. Queues store
messages that may be read by any role that has access to the Windows Azure Storage account.
Windows Azure Queue storage is discussed in Module 9, "Windows Azure Storage" in Course 20487.
Windows Azure Service Bus. Offers simple and guaranteed message delivery. Service Bus Topics
deliver messages to multiple subscriptions and easily fan out message delivery at scale to downstream
systems. Windows Azure Service Bus is discussed in Module 7, "Windows Azure Service Bus" in Course
20487.
A typical operation scenario involves uploading a file to a web site for processing. The file is loaded by the
Web role, which saves it to a Windows Azure Storage Blob and sends a message to the worker role to
process it. The worker role retrieves the file from the Windows Azure Storage Blob and processes it.
Note: In the previous lesson, you compared hosting services in IIS and in self-hosted
environments. The choice you make on how to host your services will affect the type of role you
choose: use web roles to host your services in IIS, or worker roles if your services should be self-
hosted.
For more information about web roles, worker roles, and Cloud Services in general, see
6-18 Hosting Services
Question: When should you prefer using a worker role instead of a Web Role?
Note: Windows Azure Web Sites can access the Internet, so you can hand off asynchronous
background processing to a Windows Azure worker role, either by communicating with the
worker role directly, or by placing the information the worker role requires in a queue.
Windows Azure Web Sites can also access other Windows Azure components, such as SQL
Database and Windows Azure Storage.
Web Sites have different hosting modes that affect the cost of hosting your web site and resource
allocation:
Free Mode. Web Sites run alongside web sites of other users and share web server resources. The free
mode has hard quotas on CPU utilization and memory usage, and is limited to 165 megabytes (MB) of
outgoing data transfer per day.
Shared Mode. A Web Site running in shared mode is deployed in a shared/multitenant hosting
environment. The shared instance model improves on the free mode and provides support for custom
domain names, 1 gigabyte (GB) of storage, and unlimited outgoing data transfer that is charged
according to your Windows Azure subscription.
Reserved Mode. When running in reserved mode, your Web Sites are guaranteed to run in isolation
on dedicated virtual machines; no other customer's web sites share your machines.
Note: The quota restrictions specified for the Free, Shared, and Reserved modes are the
known restrictions and might be subject to change. Before choosing the mode for your Web
application, consult the Windows Azure pricing page.
Note: ASP.NET MVC 4 is covered in Module 3, "Creating and Consuming ASP.NET Web API
Services" in Course 20487.
Demonstration Steps
1. Open Visual Studio 2012 and create a new ASP.NET MVC 4 Web Application project named
MyWebSite by using the Web API template.
2. In the Windows Azure Management Portal, create a new Windows Azure Web Site.
3. In Visual Studio, open Server Explorer, right-click Windows Azure Web Sites, and then click Import
Subscriptions. Download the subscription file and import it into Visual Studio. You only need to
import your Windows Azure credentials once. After you do that, you can see the list of your Web Sites
and cloud service when you deploy your applications.
4. Right-click the project in Solution Explorer, and then click Publish. Import the deployment settings
for the newly created Windows Azure Web Site, and complete the publish process. Note that the
deployment process is quick, because the deployment process only copies the content of the
application to an existing virtual machine, and does not need to wait for a new virtual machine to be
created.
5. Return to Visual Studio 2012 and add a Windows Azure Cloud Service project for the Web application
by right-clicking the project, and then clicking Add Windows Azure Cloud Service Project.
6. To deploy the Web application to a Web role, you need to create or select a cloud service, and
choose whether to deploy the application to production or staging. Create a new cloud service for
your role and name it WebRoleDemoYourInitials (YourInitials contains your initials). Set the location
of the role to the region closest to you. Leave the other settings with their current values.
7. To deploy your Web application to a Web role, you also need to create a Windows Azure Storage
Account where the deployment package will be stored. Click the Advanced Settings tab, and then
create a new storage account named webroledemostorageyourinitials (yourinitials contains your
initials) in the same location you chose for your cloud service. Make sure all the letters in the storage
account name are in lowercase.
Note: If you get a message saying the service creation failed because you reached your
storage account limit, delete one of your existing storage accounts and retry the step. If you do
not know how to delete a storage account, consult the instructor.
8. Publish the application and wait for the Windows Azure Activity Log window to appear. Visual Studio
will start the deployment process, which includes packaging the application, uploading the package
to storage, creating the VM, and then deploying the package to the VM. This process will take several
minutes to complete.
Note: If time permits, wait for the deployment to complete, and then click the Website
URL link to open the Web application in a browser. If you receive an error message saying the
web browser cannot be started, open a browser manually and type the address.
Question: What is the difference in the deployment process between deploying to a Web
Site and to a web role?
Comparing Windows Azure Web Roles and Windows Azure Web Sites
To decide whether to use a Windows Azure web
role or a Windows Azure Web Site, you first have
to know the differences between the features
offered by these two environments, and then
compare your application requirements against
those features. The following table describes the
features that are unique to each environment and
are not supported by the other.
Hosting method
  Web role: A virtual machine for each instance.
  Web Site: Depends on the instance mode. Free and Shared modes share a virtual machine with
  other web sites; Reserved mode does not share virtual machines with web sites that are not in your
  account.
Scaling
  Web role: Long (minutes); requires creation of a virtual machine.
  Web Site: Short (seconds); uses already running virtual machines.
Configuration
  Web role: Every change to the Web.config file requires redeploying the application. To change
  configuration without redeploying, move the configuration settings to the service configuration file.
  Web Site: Supports editing parts of the Web.config without redeploying the entire application.
Windows Azure Virtual Network
  Web role: Supports Windows Azure Virtual Network to connect with other web roles and VMs.
  Web Site: You cannot join a Virtual Network.
Windows Azure Web Sites, Web Roles, and Virtual Machines: when to use which?
http://go.microsoft.com/fwlink/?LinkID=298818&clcid=0x409
In addition, Blue Yonder Airlines wants to separate the deployment of the flights management web
application from the Travel Companion back-end service. Because the web application is a small
application and does not require many resources, it was decided that the application should be deployed
to a Windows Azure Web Site. In this lab, you deploy the booking management web application to a
Windows Azure Web Site.
Objectives
After you complete this lab, you will be able to:
Host the booking management web application in a Windows Azure Web Site.
Lab Setup
Estimated Time: 45 minutes
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Log on to the virtual machine with the following credentials:
User name: Admin
Password: Pa$$w0rd
6. Verify that you received credentials to log into the Azure portal from your training provider; these
credentials and the Azure account will be used throughout the labs of this course.
In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead, install the older version by using Visual Studio's Package Manager Console window:
1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.
2. In Package Manager Console, enter the following command and then press Enter.
install-package PackageName -version PackageVersion -ProjectName ProjectName
(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).
3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:
Package name        Version
EntityFramework     5.0.0
3. To the BlueYonder.Server solution, add a new ASP.NET Empty Web Application project named
BlueYonder.Server.Booking.WebHost.
4. In the new web application project, add references to the following projects:
BlueYonder.BookingService.Contracts
BlueYonder.BookingService.Implementation
BlueYonder.DataAccess
BlueYonder.Entities
6. Copy the FlightScheduleDatabaseInitializer.cs file from the booking service self-host project to the
new web application project.
7. Add a Global.asax file to the new web application, and in the Application_Start method, add the
required database initialization code.
Initialize the database by using the database initializer you copied from the self-host project.
Refer to the database initialization code in the Program.cs file that is in the
BlueYonder.Server.Booking.Host project.
Inside the <serviceActivations> section, add an <add> tag with the following parameters.
Attribute Value
service BlueYonder.BookingService.Implementation.BookingService
relativeAddress Booking.svc
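Based on the attribute values in the preceding table, the resulting fragment in the Web.config file should resemble the following sketch:

```xml
<system.serviceModel>
  <serviceHostingEnvironment>
    <serviceActivations>
      <!-- Activates the WCF service at the relative address Booking.svc -->
      <add service="BlueYonder.BookingService.Implementation.BookingService"
           relativeAddress="Booking.svc" />
    </serviceActivations>
  </serviceHostingEnvironment>
</system.serviceModel>
```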
Note: You can also refer to Lesson 1, "Hosting Service On-Premises", Topic 2, "Hosting WCF
Service in IIS" in this module, for a code example.
Note: .NET 4 and 4.5 use the same .NET Framework version for the IIS application pool. If
you do not specify the target framework of your Web application in the Web.config file, the
default version will be .NET 4.
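For example, you can set the target framework explicitly in the Web.config file. The following sketch shows the standard ASP.NET elements for targeting .NET 4.5:

```xml
<system.web>
  <!-- Without the targetFramework attribute, the application runs with .NET 4 defaults -->
  <compilation targetFramework="4.5" />
  <httpRuntime targetFramework="4.5" />
</system.web>
```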
4. Remove the addresses used in the service metadata behavior and the service endpoint.
Note: IIS uses the address of the web application to create the service metadata address
and the service endpoint address.
Note: The site bindings configure which protocols are supported by the IIS Web Site and
which port, host name, and IP address are used with each protocol.
Note: In addition to adding net.tcp to the site bindings list, you also need to enable net.tcp
for each Web application you host in IIS. By enabling net.tcp, WCF will automatically create an
endpoint with NetTcpBinding.
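To illustrate the result of these two notes, the relevant IIS configuration in applicationHost.config resembles the following sketch; the site name, port, and application path are examples only:

```xml
<site name="Default Web Site" id="1">
  <bindings>
    <binding protocol="http" bindingInformation="*:80:" />
    <!-- The net.tcp site binding listens on port 808 -->
    <binding protocol="net.tcp" bindingInformation="808:*" />
  </bindings>
  <!-- net.tcp must also be enabled for each web application that uses it -->
  <application path="/BlueYonder.Server.Booking.WebHost"
               enabledProtocols="http,net.tcp" />
</site>
```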
3. Connect to the WCF service through the WCF Test Client application, and verify that the application is
able to retrieve metadata from both services.
Open the WCF Test Client application from D:\AllFiles.
Results: You will be able to run the WCF Test Client application and verify that the services are running
properly in IIS.
After you create the components in Windows Azure, you will create the cloud project in Visual Studio
2012, and configure it to deploy the ASP.NET Web API web application to a Windows Azure web role.
Before deploying the application to Windows Azure, you will test it locally with the Windows Azure
compute emulator, and after you verify it is running properly, you will deploy it to Windows Azure.
Task 1: Create a new SQL database server and a new cloud service
1. Open the Windows Azure Management Portal (http://manage.windowsazure.com)
Select a Region that is closest to your location and create the SQL Database Server.
3. Configure the SQL Database server to allow access from any IP address by creating a rule with the
following settings:
RULE NAME: OpenAllIPs
Note: As a best practice, you should allow only your IP address, or your organization's IP
address range to access the database server. However, in this course, you will use this database
server for future labs, and your IP address might change in the meanwhile, therefore you are
required to allow access from all IP addresses.
Open the new cloud service configuration and then open the CERTIFICATES tab.
Use the password 1 to open the certificate file.
Note: In this lab, the ASP.NET Web API services are accessible through HTTP and HTTPS. To
use HTTPS, you need to upload a certificate to the Windows Azure cloud service.
8. Locate the client endpoint configuration for the Booking service, and change its address to point to
the web-hosted service. Use the address
net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc.
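As a sketch, the updated client endpoint in the configuration file might look like the following; the endpoint name and contract name are assumptions for illustration and may differ in the lab files:

```xml
<client>
  <!-- Points the Booking service client at the web-hosted service over net.tcp -->
  <endpoint address="net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc"
            binding="netTcpBinding"
            contract="BlueYonder.BookingService.Contracts.IBookingService"
            name="BookingTcp" />
</client>
```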
Right-click the BlueYonder.Companion.Host project and select Add Windows Azure Cloud
Service Project.
Verify that the cloud project contains a web role named BlueYonder.Companion.Host.Azure.
Note: You can achieve the same result by adding a new Windows Azure Cloud Service
project to the solution, and then manually adding a Web Role Project from an existing project.
Use the Certificates tab in the role's Properties window, to add a certificate according to the
following settings.
Name Value
Name BlueYonderCompanionSSL
Store Name My
3. On the Certificates tab, change the Service Configuration to Local, and then change the
BlueYonderCompanionSSL certificate from BlueYonderSSLCloud to BlueYonderSSLDev. Change
the service configuration back to All Configurations when you are done.
Note: SSL certificates contain the name of the server so that clients can validate the
authenticity of the server. Therefore, there are different certificates for the local deployment, and
for the cloud deployment.
Use the Endpoints tab in the role's Properties window, to add an endpoint according to the
following settings:
Name Value
Name Endpoint2
Type Input
Protocol https
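Behind the scenes, the Certificates and Endpoints tabs edit the cloud project's service definition. A sketch of the resulting ServiceDefinition.csdef fragment, with an illustrative role name, might look like this:

```xml
<WebRole name="BlueYonder.Companion.Host.Azure" vmsize="Small">
  <Certificates>
    <!-- Store name "My" corresponds to the Personal certificate store -->
    <Certificate name="BlueYonderCompanionSSL"
                 storeLocation="LocalMachine" storeName="My" />
  </Certificates>
  <Endpoints>
    <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    <!-- The HTTPS input endpoint is secured with the certificate defined above -->
    <InputEndpoint name="Endpoint2" protocol="https" port="443"
                   certificate="BlueYonderCompanionSSL" />
  </Endpoints>
</WebRole>
```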
5. Run the ASP.NET Web API project with the Windows Azure compute emulator.
After the two web browsers open, verify they use the addresses http://127.0.0.1:81 and
https://127.0.0.1:444.
Note: The endpoint configuration of the role uses ports 80 and 443 for the HTTP and
HTTPS endpoints. However, the local IIS Web server already uses those ports, so the emulator
needs to use different ports.
6. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.
7. Open the BlueYonder.Companion.Client solution file from the
D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Companion.Client folder.
8. The client app is already configured to use the Windows Azure compute emulator. Run the client and
verify it can connect to the emulator by searching for a flight to New York and verifying you see a list
of flights.
Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For purposes of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.
2. Switch to the Comments view and search for TO-DO items. Double-click each comment and look at
the disabled WCF calls:
UpdateReservationOnBackendSystem
CreateReservationOnBackendSystem
Note: Prior to the deployment of the cloud project to Azure, all the on-premises WCF calls
were disabled.
These include calls from the Reservation Controller class and the Trips Controller class.
After you deploy the ASP.NET Web API project to Windows Azure, it cannot call the on-premises
WCF service, so for now, the WCF Service calls are disabled. In Module 7, "Windows Azure Service
Bus" in Course 20487, you will learn how a cloud application can connect to an on-premises
service.
3. Use Visual Studio 2012 to publish the BlueYonder.Companion.Host.Azure project to the cloud
service you created.
Open the Publish Windows Azure Application dialog box for the cloud project.
Save the publish settings file and return to Visual Studio 2012
Click Import, select the publish settings file you saved, and move to the next step of the publish
wizard.
If required, create a new Windows Azure Storage account named byclyourinitials (yourinitials
contains your name's initials, in lowercase) in a region closest to your location.
On the Advanced Settings tab, set the name of the new deployment to Lab6, and clear the Append
current date and time check box.
Note: The abbreviation bycl stands for Blue Yonder Companion Labs. An abbreviation is
used because storage account names are limited to 24 characters. The abbreviation is in lowercase
because storage account names must be lowercase. Windows Azure Storage is covered in
depth in Module 9, "Windows Azure Storage" in Course 20487.
5. Start the deployment process by clicking Publish. This might take several minutes to complete.
In the BlueYonder.Companion.Shared project, open the Addresses class and in the BaseUri
property, replace the address of the emulator with the cloud service address you created earlier.
3. Run the client app and search for flights to New York. Verify the client application is able to connect
to the ASP.NET Web API Web application hosted in Windows Azure and retrieve the list of flights.
Results: You will verify the application works locally in the Windows Azure compute emulator, and then
deploy it to Windows Azure and verify it works there too.
2. Upload the Flights Management web application to the new Web Site by using the Windows Azure
Management Portal
3. Open the new Web Site's Dashboard page, and download the publish profile file.
Note: The publishing profile file includes the information required to publish a Web
application to the Web Site. This is an alternative publish method to downloading the
subscription file, as shown in Lesson 2, "Hosting Services in Windows Azure", Demo 1, "Hosting in
Windows Azure" in Course 20487. The difference is that by importing the subscription file, you
can publish to any of the Web Sites managed by your Windows Azure subscription, whereas
importing the publish profile file of a Web Site will only allow you to publish to that specific Web
Site.
Task 2: Upload the Flights Management web application to the new Web Site by
using the Windows Azure Management Portal
1. Open the BlueYonder.Companion.FlightsManager solution file from the
D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server folder in a new Visual Studio 2012 instance.
2. In the BlueYonder.FlightsManager project, open the web.config file, and edit the <appSettings>
section to substitute the {YourInitials} placeholder with the initials you used when you created
the Windows Azure cloud service earlier.
Use the publish profile file you downloaded in the previous task.
After the deployment completes, a browser will automatically open, showing the deployed site.
4. Verify that you can see the flight schedule from Paris to Rome, indicating that the application was able
to retrieve information from the web role.
Results: After you publish the flights manager web application, you will open the web application in a
browser and verify that it is working properly and is able to communicate with the web role you deployed
in the previous exercise.
Question: In this lab, you used a Web Site and a web role, while the worker role was
mentioned earlier in the lesson. What criteria would you use to determine what roles your
application requires?
Review Question(s)
Question: What would you use to host a personal blog site in Windows Azure, and why?
Module 7
Windows Azure Service Bus
Contents:
Module Overview 7-1
Module Overview
Integration and collaboration are key requirements in many distributed applications. Windows Azure
Service Bus provides a variety of cloud-based infrastructures that you can use on-premises and with
cloud-based applications to interconnect and collaborate in a distributed, scalable, and hybrid
environment.
This module describes messaging patterns for distributed applications in hybrid environments, and the
infrastructures provided by Windows Azure Service Bus to support these patterns.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After completing this module, you will be able to:
Describe the purpose and functionality of relayed and buffered messaging.
Enhance the effectiveness of queue-based communications by using topics, subscriptions and filters.
Lesson 1
Windows Azure Service Bus Relays
The Service Bus Relay is a middle-tier service, running in Windows Azure, for connecting remote clients
and cloud-based applications with on-premises services. You can connect the clients and applications with
the on-premises services across firewalls and network address translation (NAT) over the Internet, without
using virtual private networks (VPNs). This lesson describes the Service Bus Relay capabilities and
programming model.
Lesson Objectives
After completing this lesson, you will be able to:
Note: NATs are components, hardware or software, responsible for translating IP addresses
for both outgoing and incoming messages. For example, a private network of 50 computers can
connect to the Internet through one computer, called a gateway, which has a public Internet IP
address and a NAT installed. When one of the computers sends a message to the Internet, the
message passes through the gateway, and the NAT replaces the origin IP with the IP of the
gateway. When a message is sent from the Internet to one of the computers, the message arrives
at the gateway computer, where the NAT replaces the public IP address of the gateway with the
internal network address of the target computer, and then sends it to the private network.
Windows Azure Service Bus Relay acts as a cloud-based mediator between networks, for example between
a corporate network and the cloud, or between two corporate networks. You can host a WCF service on
premises and delegate its endpoint to the Service Bus Relay by establishing a bidirectional communication
channel between the two. Remote applications that want to use your service, first, establish a connection
to the Service Bus Relay, which then routes the communication all the way to your local service. By using
Windows Azure Service Bus, you can expose your local services securely to customers outside of the
enterprise network without opening the corporate firewall.
Windows Azure Service Bus Relay uses the WCF programming model and supports various
communication patterns and protocols. You can use one-way messaging and route incoming messages to
one or more listeners. You can use the relay to establish a live socket between sender, the relay, and
receiver, and then upgrade the socket to a direct channel, if supported.
The Access Control Service (ACS) manages the authentication and authorization of Windows Azure Service
Bus. You can control who can access your services at a very granular level.
Note: One-way messaging and other messaging patterns you can use in WCF services are
discussed in Appendix 1, "Designing and Extending WCF Services". ACS is discussed in Module 11,
"Identity Management and Access Control" in Course 20487.
Windows Azure Service Bus Relay forwards messages and sessions between senders and receivers,
bypassing NAT and firewalls; however, it does not persist, filter, or process messages in any way.
Windows Azure Service Bus Relay provides new communication options to local services that were
traditionally exposed only to customers within the corporate network.
The Service Bus Relay was designed for scalability and availability. The Service Bus has a large number of
load-balanced front-end nodes through which the sender and receiver establish communication and a
back-end fabric, which executes the core relay functionality.
The relay is capable of establishing a firewall- and NAT-traversing connection because both the sender
and the receiver create outbound channels to the relay; outbound connections do not require NAT
translation of incoming traffic, and firewalls typically allow them.
One-Way Messaging
With one-way relay bindings, a client sends
messages to a service, without expecting any
response from it. Because the client cannot
communicate directly with the service, it uses the
Service Bus as the mediator.
4. The receiver creates another outbound connection to the front-end nodes, also referred to as a
rendezvous connection, which now acts as a socket-to-socket forwarder. This outbound connection
serves as a live socket that the sender and the receiver can use for communication. From the point of
view of the sender and the receiver, the socket is a simple point-to-point socket even though it is
implemented by two different sockets combined by a socket forwarder in the relay front-end node.
Socket forwarding looks and feels like a normal socket, but it is slower because every message passes
through the relay. Upgrading to a real point-to-point socket between the sender and receiver improves
performance, but it requires knowledge of the correct IP address of the receiver behind the NAT. Service
Bus Relay uses NAT traversal algorithms to try to determine the IP address to use by sending control
messages to the receiver. By looking at the source and
destination address, the relay can determine the behavior of the NAT translator and estimate the correct
IP address that will provide direct communication with the receiver through the NAT translator. If the
relay succeeds in determining the correct IP address, the channel can be upgraded, which leads to major
performance improvements.
To instruct the Service Bus Relay to try establishing a direct connection between sender and the receiver,
you need to change the connection mode of the binding from a relayed connection to a hybrid
connection.
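In configuration, this is a single attribute on the relay binding. The following sketch, with a placeholder binding name, shows a NetTcpRelayBinding configured for hybrid mode:

```xml
<bindings>
  <netTcpRelayBinding>
    <!-- Hybrid starts as a relayed connection and upgrades to a direct socket when NAT traversal succeeds -->
    <binding name="hybridBinding" connectionMode="Hybrid" />
  </netTcpRelayBinding>
</bindings>
```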
For more information about configuring a hybrid connection, see How to: Change the
Connection Mode
http://go.microsoft.com/fwlink/?LinkID=313736
To create a new Service Bus namespace, log on to the Windows Azure Management Portal, click SERVICE
BUS in the navigation pane, and then click Create. In the Create a Namespace dialog box, enter a name
for the namespace, a region, and then click the check mark to begin the provisioning process.
Note: The process for creating a new Service Bus namespace is demonstrated in Demo 1,
"Creating Service Bus Relays" of this lesson.
Service Bus resources, such as service endpoints and queues, are organized hierarchically in tree form,
with the namespace as the root of the tree. Each individual resource has a URI that is relative to the URI of
the Service Bus namespaces. For example, Blue Yonder Airlines can create a relay endpoint for the
Booking service under sb://blueyonder.servicebus.windows.net/booking.
Child resources in the namespace hierarchy inherit the permissions you assigned to their parent node. Configuring Service Bus with ACS will be discussed in
detail in Module 11, "Identity Management and Access Control".
The following table maps each standard WCF binding to its Service Bus relay counterpart:
WCF binding                        Relay binding
BasicHttpBinding                   BasicHttpRelayBinding
WebHttpBinding WebHttpRelayBinding
WSHttpBinding WSHttpRelayBinding
WS2007HttpBinding WS2007HttpRelayBinding
WSHttpContextBinding WSHttpRelayContextBinding
WS2007FederationHttpBinding WS2007FederationHttpRelayBinding
NetTcpBinding NetTcpRelayBinding
NetTcpContextBinding NetTcpRelayContextBinding
n/a NetOnewayRelayBinding
n/a NetEventRelayBinding
To configure your app with all of the necessary Service Bus dependencies, such as referenced assemblies
and WCF extension configuration, install the Windows Azure Service Bus NuGet package.
The WCF service itself contains no code that is relevant to the relaying operation. This means that you can
use the same WCF service to provide functionality inside your corporate network as well as a relay
receiver.
The following code shows how to build a WCF service that can act as a receiver in the relay.
The following code shows how to host a WCF service that acts as a relay receiver.
ServiceHost host = new ServiceHost(typeof(ChatService));
// relayCredentials is a TransportClientEndpointBehavior that holds the listener's token provider
ServiceEndpoint endpoint = host.AddServiceEndpoint(typeof(IChatContract),
    new NetEventRelayBinding(), "sb://myServiceBusNamespace.servicebus.windows.net/MyChatService");
endpoint.EndpointBehaviors.Add(relayCredentials);
host.Open();
In the preceding example, you create an endpoint with the NetEventRelayBinding binding, and attach a
security token to it to authenticate the service as a listener (receiver). The binding uses TCP
communication, therefore the URI is created with the sb scheme.
The preceding example creates the endpoint and endpoint behavior in code. However, you can also
create this configuration in the configuration file.
The following code shows how to declare a service with a relay endpoint by using XML configuration.
<behaviors>
  <endpointBehaviors>
    <behavior name="sbAuthentication">
      <transportClientEndpointBehavior>
        <tokenProvider>
          <sharedSecret issuerName="owner" issuerSecret="[your namespace access key]" />
        </tokenProvider>
      </transportClientEndpointBehavior>
    </behavior>
  </endpointBehaviors>
</behaviors>
<services>
<service name="Service.ChatService">
<endpoint contract="Service.IChatContract"
binding="netEventRelayBinding"
address="sb://myServiceBusNamespace.servicebus.windows.net/MyChatService"
behaviorConfiguration="sbAuthentication"/>
</service>
</services>
The owner user, used in the preceding example, is created automatically with the Service Bus namespace
and has full control over the namespace, including permissions to create new users and grant or deny
access to resources. To learn how to create users with restricted access, for example, users that can only
send messages to the relay, to be used in client applications, refer to Module 11, "Identity Management
and Access Control", Lesson 3, "Configuring Service to Use Federated Identities".
To send a message to the relay, you create a WCF proxy that uses a relay-binding and relay-endpoint
address. Similar to the receiver, the sender also has to authenticate with the relay before sending
messages. To implement authentication, attach a TransportClientEndpointBehavior endpoint behavior
to the proxy.
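For illustration, a matching client-side configuration might resemble the following sketch; the contract, address, and behavior name mirror the service example earlier in this topic, and the sbAuthentication behavior is assumed to contain the client's transportClientEndpointBehavior with its token provider:

```xml
<system.serviceModel>
  <client>
    <!-- The client endpoint uses the same relay binding and address as the service -->
    <endpoint contract="Service.IChatContract"
              binding="netEventRelayBinding"
              address="sb://myServiceBusNamespace.servicebus.windows.net/MyChatService"
              behaviorConfiguration="sbAuthentication" />
  </client>
</system.serviceModel>
```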
For additional information about the different relay bindings, see Service Bus Bindings
http://go.microsoft.com/fwlink/?LinkID=313737
Question: What are the three steps for adding relay endpoints in a WCF service?
Demonstration Steps
1. Open Visual Studio 2012, and then open the ServiceBusRelay.sln solution from the
D:\Allfiles\Mod07\DemoFiles\ServiceBusRelay\begin\ServiceBusRelay folder.
2. Observe the code in the Program.cs file of the ServiceBusRelay.Server project. The service endpoint
is configured to receive messages directly by using TCP on port 747.
3. Note the code in the HomeController.cs file of the ServiceBusRelay.WebClient project. The Web
client consumes the service by using the address and port of the server.
4. Configure the ServiceBusRelay solution to have multiple startup projects, starting both the
ServiceBusRelay.Server and ServiceBusRelay.WebClient projects.
5. Run the solution, and use the console writer web application to write text to the console window.
Close both windows after you finish testing the application.
6. Open the Windows Azure Management Portal (http://manage.windowsazure.com) and create a
new Windows Azure Service Bus namespace named ServiceBusDemo07YourInitials (Replace
YourInitials with your initials).
7. Select the newly created Service Bus namespace, click ACCESS KEY, and then copy the default key to
the clipboard.
8. Install the WindowsAzure.ServiceBus NuGet package in the ServiceBusRelay.Server and
ServiceBusRelay.WebClient projects.
9. Change the Program class in the ServiceBusRelay.Server project to expose a Service Bus endpoint
instead of a TCP endpoint. Change the base address of the host to
sb://ServiceBusDemo07YourInitials.servicebus.windows.net/console (Replace YourInitials with
your initials), and change the binding of the endpoint to NetTcpRelayBinding.
10. Change the HomeController class in the ServiceBusRelay.WebClient project to consume the
Service Bus endpoint. In the Write method, change the endpoint address to
sb://ServiceBusDemo07YourInitials.servicebus.windows.net/console (Replace YourInitials with
your initials), and change the binding to NetTcpRelayBinding.
11. Add a new TransportClientEndpointBehavior to the endpoint of the factory. Set the
TokenProvider property of the behavior by using the
TokenProvider.CreateSharedSecretTokenProvider method. Use the owner issuer name and the
access key you copied from the management portal as the issuer secret.
12. Create a Windows Azure Web Site named ServiceBusDemo07YourInitials (Replace YourInitials with
your initials) in the Windows Azure Management Portal. Create the new web site in the same region
as your Service Bus namespace.
13. Open the web site's Dashboard page and download the web site's publish profile file.
14. Publish the ServiceBusRelay.WebClient project to Windows Azure by using the publish settings file
you downloaded.
Developing Windows Azure and Web Services 7-11
15. Set the ServiceBusRelay.Server project as the startup project, and start it without debugging. Use
the console writer web application to write text to the console window. The web application running
in a web site will send the message to the console window hosted on your computer.
Question: If you need to implement the opposite scenario, where you have a single sender
with multiple receivers, which relay binding would you use?
Lesson 2
Windows Azure Service Bus Queues
Brokered messaging is the use of a central entity, referred to as a broker, which passes messages between
senders and receivers. Brokered messaging simplifies integration and improves scalability by providing
durable, enterprise-level messaging capabilities within the Windows Azure Service Bus. In contrast to
relays, brokered messaging is capable of decoupling senders and receivers, which means that a receiver
does not have to be online when a message is sent to it, and the sender does not have to be online when
the receiver picks up the message.
This lesson describes the brokered messaging pattern and the queuing infrastructure provided by the
Windows Azure Service Bus.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the concept of brokered messaging.
Describe how to create a Service Bus Queue by using the Management Portal, Visual Studio 2012, and
the .NET Framework.
Describe the functionality of the BrokeredMessage class.
Send messages to and receive messages from a Service Bus Queue.
The request-response messaging pattern assumes that the client and the server are online at the same
time, and that there are enough resources on the server side to process the request and create a response
message in a reasonable amount of time.
There are many scenarios where these assumptions are not valid. For example:
Burst of traffic. A large number of requests are sent simultaneously at a particular point in time, and
there might not be enough resources to handle all of them.
In such scenarios, brokered messaging might be the solution. In brokered messaging, producers (senders)
and consumers (receivers) do not have to be online at the same time. A producer sends messages over a
communication infrastructure that reliably stores these messages in a queue until the consumer is ready
to receive and process them. If the consumer is busy or unavailable, messages will just accumulate in the
queue and no error will be raised. When the consumer is ready to process these requests, it will extract the
messages from the queue and a response message might be sent back to the client.
In contrast to the request-response messaging pattern, brokered messaging is non-blocking. This means
that the client does not wait for an immediate response from the server; messages are sent one-way. This
might introduce some complexity if responses are required, because the server must be aware of the
address of the client, and the client must be reachable.
The core components of the Windows Azure Service Bus brokered messaging infrastructure are:
Service Bus Queues
Service Bus Queues are basic persistent communication channels. Queues offer First In, First Out (FIFO)
message delivery to one or more consumers. Messages are expected to be received and processed by the
receivers in the temporal order in which they were sent, and each message is received and processed by
exactly one message consumer.
Service Bus Topics are a special kind of queue in which, depending on subscriptions and filters, multiple
consumers can receive a single message (multicast). You can use Service Bus Topics to implement a
publish/subscribe pattern in which one or more consumers subscribe to a specific type of message.
Publishers send messages to the topic and the messages will be delivered to one or more associated
subscriptions, depending on the filter associated with each subscription. Messages are still expected to be
received and processed in the temporal order in which they were sent, but an individual message may be
received and processed by multiple consumers. Service Bus Topics will be discussed in Lesson 3, "Windows
Azure Service Bus Topics".
For additional information about brokered messaging, relayed messaging, and the Service
Bus infrastructure, see Relayed and Brokered Messaging
http://go.microsoft.com/fwlink/?LinkID=313738
Before creating a Service Bus Queue, you must first create a Service Bus namespace. A queue is a resource
in the namespace tree. Similar to all other resources, a queue has a URI that is relative to the URI of the
namespace, which you can secure by using ACS.
The following figure presents the NEW dialog box for Service Bus Queue creation.
You can create a connection to your Service Bus namespace in one of the following ways:
Import your Windows Azure subscription information. In the Add Connection dialog box, select the
Your subscription option, click the Download Publish Settings link, download your subscription's
publish settings file, and then click the Import button to import the file. After importing your
subscription settings file, select any of the Service Bus namespaces that exist in your subscription.
Manually enter namespace credentials. Type the namespace name, and provide an access key for the
namespace, which you can retrieve from the Management Portal. To retrieve the access key from the
Management Portal, log on to the Management Portal, click SERVICE BUS in the navigation pane,
click the namespace you wish to connect to, click ACCESS KEY, and then copy the Default Key to the
clipboard and paste it back in Visual Studio 2012, in the Issuer Key box.
This figure presents how to create a new connection to your Service Bus namespace in Visual Studio 2012.
This figure presents the dialog box for creating a new Service Bus Queue in Visual Studio 2012.
Note: When you create queues with Visual Studio 2012, you have more control over the
queue settings than when you create the queue through the Management Portal. For example, you
can control the default lock duration of messages before they are released, or the maximum size
of the queue. If you created the queue in the Management Portal, you can change these settings
in the queue's Configure page.
Create a Service Bus Queue with the Windows Azure Service Bus .NET Libraries
It is also possible to create a new queue by using the Windows Azure Service Bus libraries for .NET. To
configure your application with all of the necessary Service Bus dependencies, first you need to install the
Windows Azure Service Bus NuGet package. After you install the NuGet package, you will have
references to the Service Bus .NET libraries and will be able to create new queues in code.
The following code shows how to create a new queue by using the Service Bus .NET libraries. If the queue
already exists, the code will delete the existing queue before creating a new one.
NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);

// Delete the queue if it already exists
if (namespaceManager.QueueExists(queueName))
{
    namespaceManager.DeleteQueue(queueName);
}
namespaceManager.CreateQueue(queueName);
Instead of placing the Service Bus connection string inside your code, making it hard to update, you
should place it in the configuration file. If you create a cloud-based application, you should place it in the
web or worker role service configuration file, and access it by using the
CloudConfigurationManager.GetSetting method.
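For example, assuming an app setting named Microsoft.ServiceBus.ConnectionString (the setting name used in this module's demonstrations), reading the connection string might look like this:

```csharp
// Read the Service Bus connection string from the service configuration file
// instead of hard-coding it in the source
string connectionString =
    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);
```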
After you have access to your Service Bus namespace, you can access, create, and delete queues. When
creating a queue you can configure properties such as message Time to Live (TTL), message lock duration,
duplicate detection and session support, in addition to the queue name shown in the preceding example.
Duplicate Detection
You can use duplicate detection to ensure that messages will be sent only once.
If a client sends a message to a queue and a timeout exception is thrown, there is no way for the client to
know whether the message was sent successfully or failed. The client can choose to resend the message
to ensure at-least-once delivery, but the result can be the same message being sent twice. The client
can choose not to resend the message to ensure at-most-once delivery, but the result can be the
message not being sent at all. By configuring the queue to detect duplicate messages, the client can retry
sending the message and the queue will ignore any duplicates. This will ensure that the message is
delivered exactly once. To turn on duplicate detection, check the Requires Duplicate Detection option
when you create the queue with Visual Studio 2012. If you are creating the queue in code, create a
QueueDescription object for the new queue, set its RequiresDuplicateDetection property to true, and
pass the object to the CreateQueue method.
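A sketch of enabling duplicate detection in code, assuming a NamespaceManager named namespaceManager as in the earlier example (the queue name is illustrative):

```csharp
// Describe the queue with duplicate detection turned on before creating it
var queueDescription = new QueueDescription("ordersqueue")
{
    RequiresDuplicateDetection = true,
    // Time window in which message IDs are compared to detect duplicates
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};
namespaceManager.CreateQueue(queueDescription);
```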
A received message supports several operations, including:
Defer. Indicates that the receiver wants to defer the processing for this message.
RenewLock. Renews the lock on a message.
Complete. Completes the receive operation of a message and indicates that the message should be
marked as processed and deleted or archived.
A brokered message has three major parts:
Message Body. The message body contains the message payload, which is usually the serialized
representation of a business entity. The message body is transparent to the messaging infrastructure.
For queues and topics, the message body is plain data that should be transmitted from producer to
consumer.
Brokered message properties. Brokered message properties is a user-defined key-value collection
of data that is visible to the infrastructure and is used to route or handle messages while being
transmitted. By being able to access the message properties, the infrastructure can participate in the
message processing. Topic subscriptions and filters use the information in the brokered message
properties to route the message to the proper destination. Service Bus Topics will be discussed in the
next lesson, Windows Azure Service Bus Topics.
Other metadata. The message metadata contains information such as content-type, ID, sequence
number, size, expiry time, delivery count, and so on.
To learn more about the BrokeredMessage class, see The BrokeredMessage Class.
http://go.microsoft.com/fwlink/?LinkID=298819&clcid=0x409
Question: What is the difference between the body and properties parts of a brokered
message?
Create a QueueClient
// Create a queue client using a connection string
QueueClient client1 = QueueClient.CreateFromConnectionString(connectionString,
queueName);
The last line of code in the preceding example is a bit tricky. At first glance, instantiating a
QueueClient without specifying a connection string or using a MessagingFactory should not work;
however, this code executes successfully. When you instantiate a new MessagingFactory object, it is
automatically cached in memory. When you create new instances of the QueueClient class
without using a MessagingFactory or a connection string, the constructor uses the cached
MessagingFactory to initialize the QueueClient object. You can also use this constructor if you called the
NamespaceManager.CreateFromConnectionString method, because this method also creates a
MessagingFactory object internally.
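A sketch of the explicit MessagingFactory-based alternative, in which the factory is created directly and reused:

```csharp
// Create a messaging factory from the connection string, then create
// a queue client from it; the same factory can create more clients later
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
QueueClient client2 = factory.CreateQueueClient(queueName);
```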
Sending Messages
To create a message, you create an instance of BrokeredMessage and send it by using the queue client.
The following code shows how to create and send a brokered message by using a queue client.
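A sketch of such a message, consistent with the customerID property and ContentType metadata described below, assuming a queue client named client1 and a serializable Customer type (both illustrative):

```csharp
// Create a brokered message whose body is a business entity
var customer = new Customer { Id = 42, Name = "Contoso" };
var message = new BrokeredMessage(customer);

// Brokered message property, visible to the queue infrastructure
message.Properties["customerID"] = customer.Id;

// Metadata that hints to the consumer how to deserialize the body
message.ContentType = typeof(Customer).Name;

client1.Send(message);
```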
Brokered message properties. The customerID is a property that can be used by the consumer of
the message, and the queue infrastructure.
Metadata properties. The ContentType property contains a string intended to help the consumer
decide which processing logic to perform. For example, you can set the ContentType to the type of
the message body to inform the consumer how to deserialize the content.
Sessions
You can use sessions to group messages that belong to a certain logical group and receive them all on a
dedicated receiver. Senders can attach a SessionId to outgoing messages and the queue ensures that all
messages with the same SessionId are delivered to the same receiver. To group several messages in the
same session, set the SessionId property of all the BrokeredMessage objects to a unique string.
One common use case for sessions is to work around the 256-kilobyte (KB) message size limit of
Service Bus Queues. You can split a large message into a collection of small messages and send them all
as a group.
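A sketch of grouping message parts into one session, assuming a queue created with session support and a queue client named client1 (the chunking itself is illustrative):

```csharp
// All parts share the same SessionId, so the queue delivers them
// to the same session receiver, in order
string sessionId = Guid.NewGuid().ToString();
foreach (byte[] chunk in chunks)
{
    var part = new BrokeredMessage(chunk) { SessionId = sessionId };
    client1.Send(part);
}
```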
Receiving Messages
To receive a message, call the Receive method of the QueueClient class. Calling this method will create
internally an instance of a MessageReceiver object for receiving messages.
The following code shows how to receive a message by using a QueueClient object.
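A sketch of receiving and completing a message, consistent with the Peek-Lock flow described next (the Customer type and ProcessMessage method are illustrative):

```csharp
// Receive locks the message in the queue (Peek-Lock) rather than deleting it
BrokeredMessage received = client1.Receive();
if (received != null)
{
    ProcessMessage(received.GetBody<Customer>());
    // Mark the message as processed so the queue deletes it
    received.Complete();
}
```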
When you call the Receive method, the message is retrieved from the queue, but does not get removed
from the queue; instead, the message is marked as locked in the queue, to prevent other consumers from
retrieving it. Because the message is only locked and not deleted, if the consumer fails during message
processing, the message is not lost. This retrieval technique is also referred to as Peek-Lock. After you
finish processing the message, you call the Complete method to instruct the queue to release the lock
and delete the message from the queue.
Note: If the consumer fails to call the Complete method during the allotted lock time, the
message will unlock. You can set the lock duration, which is by default 1 minute, when you create
the queue.
Instead of using the Peek-Lock technique, you can use the Receive-and-Delete technique, in which a
received message is deleted immediately from the queue. This technique can improve the performance of
the queue when many messages are locked, but can be problematic if the receiver crashes during
processing of the message. To change the queue from Peek-Lock to Receive-and-Delete, use the
QueueClient.CreateFromConnectionString method overload that accepts the ReceiveMode parameter.
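For example, the overload might be used as follows to switch the client to Receive-and-Delete:

```csharp
// Messages are removed from the queue as soon as they are received;
// if the receiver crashes during processing, the message is lost
QueueClient client = QueueClient.CreateFromConnectionString(
    connectionString, queueName, ReceiveMode.ReceiveAndDelete);
```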
After you retrieve the message from the queue, you can deserialize the message body to an object by
calling the GetBody<T> generic method. You will need to set the generic type to the actual type of the
object placed in the queue. It is also possible to receive a message by creating a MessageReceiver
explicitly.
The following code shows how to receive a message by using a MessageReceiver object.
// Create the receiver explicitly (assuming a MessagingFactory named factory)
MessageReceiver myMessageReceiver = factory.CreateMessageReceiver(queueName);

BrokeredMessage message;
TimeSpan interval = new TimeSpan(hours: 0, minutes: 0, seconds: 5);
while ((message = myMessageReceiver.Receive(interval)) != null)
{
    ProcessMessage(message.MessageId, message.GetBody<Customer>());
    message.Complete();
}
Demonstration Steps
1. Open Visual Studio 2012, and then open the QueuesDemo.sln solution from the
D:\Allfiles\Mod07\DemoFiles\QueuesDemo\Begin folder.
3. Install the Windows Azure Service Bus NuGet package in the ServiceBusMessageSender project.
5. If you did not perform the demo in the previous lesson, create a new Windows Azure Service Bus
namespace named ServiceBusDemo07YourInitials (Replace YourInitials with your initials). Locate
the connection string of the namespace, and then copy it to clipboard.
6. In the ServiceBusMessageSender project, open the App.config file, and replace the value of the
Microsoft.ServiceBus.ConnectionString app setting with the connection string copied in the
previous step.
7. In the ServiceBusMessageReceiver project, open the App.config file, and replace the value of the
Microsoft.ServiceBus.ConnectionString app setting with the connection string copied in the
previous step.
9. Use the NamespaceManager object to check if a queue named servicebusqueue already exists, and
if not, create it.
11. Add code to continuously get text input from the user and send it to the queue. Create a
BrokeredMessage object to hold the string payload, and send it to the queue by using the
QueueClient.Send method.
12. Run the ServiceBusMessageSender project. Enter some text in the console window to send it to the
queue and then close the console window.
13. Observe the code in the Program.cs file, in the ServiceBusMessageReceiver project. This
application pulls messages from the queue and prints them to the console. Run the
ServiceBusMessageReceiver project and verify the text you entered before it is printed in the
console window. Close the console window after the text appears.
Question: Why do you need to call the Complete method when you finish processing the
message?
Lesson 3
Windows Azure Service Bus Topics
Topics are the infrastructure provided by Windows Azure Service Bus to implement Publish-Subscribe
persistent messaging. Publish-Subscribe persistent messaging is a messaging pattern in which a single
producer can send notifications to an unknown number of consumers, possibly even zero, without having
to be aware of the specific consumers.
This lesson describes Windows Azure Service Bus Topics, subscriptions, and filters.
Lesson Objectives
After completing this lesson, you will be able to:
Create topics and subscriptions by using the Management Portal, Visual Studio 2012, and the .NET
Framework.
Create subscription filters.
Subscription-Based Messaging
With queue-based brokered messaging, a single
queue can have multiple receivers, but all
receivers are expected to handle messages in the same way.
The purpose of multiple receivers is to provide
scalability when the queue receives many
messages. However, there are scenarios where
multiple receivers are required not for scalability
reasons, but because of the type of actions that
the receivers need to perform. Each receiver
performs different actions when it receives a
message. For example, you can have one receiver
that only logs the content of each message, and
another receiver that processes the message. If both receivers listen to the same queue, some of the
messages would reach one of the receivers, but not the other. An alternative solution for this scenario would
be to create two queues, and send the same message to both queues, but that would couple the sender
of the message to the number of receivers that currently exist in the system.
To handle such scenarios, you can implement subscription-based messaging in which multiple receivers
share the same queue but receive only the messages targeted for them. Receivers use subscriptions to
inform the queue infrastructure which messages they expect to receive, and the queue uses subscriptions to
forward incoming messages to the correct listeners. With subscription-based messaging, a single message
sent to a queue can reach multiple receivers, if the message matches their subscriptions, or none of the
receivers, if the message does not match any subscription.
Publishers and receivers are decoupled, which means that publishers do not know the
identity or the location of receivers.
You can also use subscription-based messaging to implement multicast, in which one message can be
forwarded to multiple receivers, if there are multiple subscriptions to the same topic. There are scenarios
where unicast messaging is required and only one subscriber will receive the message.
Windows Azure Service Bus implements subscription-based messaging by using Service Bus Topics. You
can use a Service Bus Topic as a shared channel to communicate with multiple receivers. Unlike Service
Bus Queues, where a single receiver processes a message, topics and subscriptions provide a one-to-many
form of communication in which each message will be forwarded to all subscriptions. You can register a
filter on each subscription to filter messages. This way the receiver receives only the messages it expects
from the subscription.
Just as messages in Service Bus Queues are persisted and first-in, first-out (FIFO) order is maintained,
subscriptions behave as virtual queues that receive copies of all messages that were sent to the topic.
For additional comparison of Service Bus Queues and Topics, see Service Bus Queues, Topics,
and Subscriptions.
http://go.microsoft.com/fwlink/?LinkID=313740
The following code shows how to create a new topic by using the NamespaceManager class. If the topic
already exists, the code will delete the existing topic before creating a new one.
NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);

// Delete the topic if it already exists
if (namespaceManager.TopicExists(topicName))
{
    namespaceManager.DeleteTopic(topicName);
}
namespaceManager.CreateTopic(topicName);
After you create a topic, you continue by creating subscriptions for your different consumers.
Log on to the Windows Azure Management Portal, click SERVICE BUS in the navigation pane, and then
click the namespace in which the topic is defined. Click the TOPICS tab, click the topic name for which
you want to create a subscription, and then click CREATE SUBSCRIPTION. Specify the subscription name,
set its properties, and finally create the subscription by clicking the check mark.
The following illustration presents the dialog box for creating a new subscription in the Windows Azure
Management Portal.
Note: To view the list of topics, first you need to import your Service Bus namespace
configuration to Visual Studio 2012. The instructions for adding the namespace to the Server
Explorer window are detailed in Lesson 2, "Windows Azure Service Bus Queues", Topic 2,
"Creating Windows Azure Queues".
The following illustration shows the dialog box for creating a new subscription in Visual Studio 2012.
Create a Subscription with the Windows Azure Service Bus .NET Libraries
It is possible to create a new subscription with the Windows Azure Service Bus libraries by using the
NamespaceManager class.
The following code shows how to create a subscription by using the NamespaceManager class.
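A sketch of creating subscriptions, consistent with the three subscriptions the next paragraph refers to; the topic name and the Shipping and Audit subscription names are illustrative, while Accounting is the name used in the text:

```csharp
// Each subscription is a virtual queue that receives copies of the
// messages sent to the topic
namespaceManager.CreateSubscription("ordertopic", "Accounting");
namespaceManager.CreateSubscription("ordertopic", "Shipping");
namespaceManager.CreateSubscription("ordertopic", "Audit");
```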
After you call the CreateSubscription method, the subscription is created without any filters, which
means that it will receive every message that is sent to the topic. In the preceding example, every message
will be forwarded to all three subscriptions. If you want a subscription to receive only some of the messages,
you need to create a filter for it. For example, you would create a filter if you
want the Accounting subscription to receive only messages for products that cost over $5,000.
Creating Filters
A subscription without a filter forwards all
incoming messages in a topic to the subscription's
consumer. However, most consumers will
probably want to receive only the messages that
are relevant for them. The Publish/Subscribe
pattern clearly defines that subscribers can choose
to subscribe to a specific subset of messages that
they wish to receive. Receiving only a subset of
message is done by creating filters for your
subscriptions. With filters, only the required
messages will be forwarded to each subscription.
The Windows Azure Service Bus supports three types of filters:
TrueFilter. Forwards every message sent to the topic to the subscription.
SqlFilter. Forwards messages based on an SQL-like expression that is evaluated by using values in the
message property dictionary.
CorrelationFilter. Forwards messages based on the value of the CorrelationId property of the
brokered message. You can use correlation filters to match a set of messages that relate to each
other.
The TrueFilter class will make the subscription receive any message sent to the topic. If you call the
CreateSubscription method without passing a filter, as shown in previous examples, the subscription is
created with the TrueFilter automatically.
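A sketch of creating a subscription with a SQL filter, using the ProductPrice property discussed next and illustrative topic and subscription names:

```csharp
// Only messages whose ProductPrice property exceeds 5000
// are forwarded to the Accounting subscription
namespaceManager.CreateSubscription(
    "ordertopic", "Accounting", new SqlFilter("ProductPrice > 5000"));
```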
The SqlFilter constructor takes a string parameter with an SQL-like expression. The properties you use for
comparison, such as the ProductPrice property in the preceding example, are expected to exist in the
message properties dictionary, which you set when sending a message.
The third type of filter, the correlation filter, does not use expressions, but rather checks the value of the
message's CorrelationId string property. You can use correlation filters to match and correlate between
sets of messages. Consider the following scenario:
1. The client sends a CreateOrder message to the topic.
2. The service receives the message and starts processing the order.
3. The service completes the order processing and wants to inform the client that the order was
approved.
The service cannot return a message to the client, because topics, as queues, provide one-way messaging,
not request-response messaging. If the client was also listening to the subscription, the service could send
a message to the client through the topic.
If the client wants to receive a message from the topic, it too needs to create a subscription, preferably
before the service sends the response (to prevent the response being sent before the subscription is
capable of receiving it). However, the client does not know the ID of the order before sending the
CreateOrder message, because the order was not created yet.
1. The client generates a unique ID before sending the CreateOrder message.
2. The client creates a subscription with a correlation filter that matches the unique ID.
3. The client creates a new CreateOrder message, sets the CorrelationId property of the message to the
unique ID, and then sends the message.
4. The service receives the message, processes it, and then sends an approval message to the topic with
the same CorrelationId property value.
5. The subscription in the client-side receives the message and informs the client that the order was
approved.
The following code shows how to create a subscription with a correlation filter.
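A sketch of such a client-side subscription, assuming illustrative topic and subscription names:

```csharp
// The subscription receives only messages whose CorrelationId
// equals the unique ID the client generated
string orderRequestId = Guid.NewGuid().ToString();
namespaceManager.CreateSubscription(
    "orderstopic", "client-" + orderRequestId,
    new CorrelationFilter(orderRequestId));
```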
If the subscription was created only for handling the response message, you can delete the subscription
after you receive the response. If you create a subscription with a SqlFilter, and your SQL expression
consists solely of an equality comparison on a single property, consider using a correlation filter
instead, because correlation filters do not use string expressions and therefore provide better
performance than SQL filters.
If you want to send messages with a correlation ID, set the BrokeredMessage.CorrelationId property to
the correlation ID.
To receive messages from a subscription, use the SubscriptionClient class.
The following code shows how to receive a message from a topic by using a SubscriptionClient class.
// Create the subscription client (topic and subscription names are
// assumed to be in topicName and subscriptionName)
SubscriptionClient subscriptionClient = SubscriptionClient.CreateFromConnectionString(
    connectionString, topicName, subscriptionName);

while (true)
{
    BrokeredMessage message = subscriptionClient.Receive();
    if (message != null)
    {
        try
        {
            ProcessMessage(message.GetBody<Product>());
            message.Complete();
        }
        catch
        {
            message.Abandon();
        }
    }
}
After you process the message from the subscription, you call the Complete method to have it removed
from the subscription. This is similar to how you receive messages from queues. If there is a
problem during processing, you can release the lock placed on the message by calling the
Abandon method of the BrokeredMessage class. Calling the Abandon method is preferable to letting
the lock time out, because by releasing the lock, other consumers can immediately receive the message
and process it again, instead of waiting for the lock to expire.
Demonstration Steps
1. Open Visual Studio 2012, and then open the TopicsDemo.sln solution from the
D:\Allfiles\Mod07\DemoFiles\TopicsDemo folder.
3. Explore the ServiceBusTopicPublisher project. The code in the Main method creates a topic by
using the NamespaceManager.CreateTopic method, then creates three subscriptions by using the
NamespaceManager.CreateSubscription method and the SqlFilter class, and then sends four
messages to the topic by using the BrokeredMessage and TopicClient classes.
4. Explore the ExpensivePurchasesSubscriber project. The code uses the SubscriptionClient class to
connect to the productsalestopic Service Bus topic with the ExpensivePurchases subscription. The
CheapPurchasesSubscriber and AuditSubscriber projects are used similarly with the two other
subscriptions.
5. Open the Windows Azure Management Portal (http://manage.windowsazure.com).
6. If you did not perform the demo in the previous lesson, create a new Windows Azure Service Bus
namespace named ServiceBusDemo07YourInitials (Replace YourInitials with your initials). Locate
the connection string of the namespace and copy it to clipboard.
7. With the connection string you copied from the Management Portal, replace the Service Bus
connection string defined in App.Config files in all projects in the solution.
8. Run the ServiceBusTopicPublisher project and wait until the application sends the four messages to
the topic.
9. Open the properties of the solution, and change the startup mode to multiple projects. Set the three
subscriber projects to start.
10. Run the projects and verify that the subscribers received the appropriate messages according to their
subscription filter (under $4000, over $4000, and all messages).
Question: Why is it better to use SQL filters than to create a single subscription and
check the value of the message's ContentType property?
In addition, Blue Yonder Airlines wishes to improve the service offered to Travel Companion users by
sending users updated information about changes made to their booked flights directly to their client
app. To provide immediate feedback to the end user who updates the flight schedules, it was decided that
the notifications will not be sent during the update process but rather be sent by a background process.
To interact with the background process, the ASP.NET Web API service will use Service Bus Queues, and
the background process itself will run in a Windows Azure Worker Role. For the notifications, the worker
role will use Windows Push Notification Services (WNS).
In this lab, you will update the ASP.NET Web API services to use Windows Azure Service Bus Queues, and
create a new Windows Azure Worker Role to perform background processing.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 60 Minutes.
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Log on to the virtual machine as Admin with the password Pa$$w0rd.
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
8. Log on to the virtual machine as Admin with the password Pa$$w0rd.
9. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs in this course.
In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
install the older version instead by using Visual Studio's Package Manager Console window:
1. In Visual Studio 2012, on the Tools menu, point to Library Package Manager, and then click
Package Manager Console.
2. In Package Manager Console, enter the following command and then press Enter.
(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).
3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:
Package: WindowsAzure.ServiceBus    Version: 2.1.0.0
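The command elided from step 2 above presumably follows the standard Package Manager Console syntax, for example (MyProject is a placeholder for the actual project name):

```powershell
# Hedged sketch: install the lab-compatible package version into one project.
Install-Package WindowsAzure.ServiceBus -Version 2.1.0.0 -ProjectName MyProject
```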
Exercise 1: Using a Service Bus Relay for the WCF Booking Service
Scenario
In this exercise, you will create a Service Bus namespace, configure the on-premises WCF Booking service
to use a Service Bus Relay, and then configure the ASP.NET Web API services running in Windows Azure
to communicate with the on-premises services by using the newly created relay.
1. Create the Service Bus namespace by using the Windows Azure Management Portal
2. Add a new WCF Endpoint with a relay binding
3. Configure the ASP.NET Web API back-end service to use the new relay endpoint
4. Test the WCF service
Task 1: Create the Service Bus namespace by using the Windows Azure Management
Portal
1. In the 20487B-SEA-DEV-A virtual machine, run the Setup.cmd file from
D:\AllFiles\Mod07\LabFiles\Setup, and write down the name of the cloud service created by the
script.
Note: You may see a warning saying that the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell cmdlets. If
this message is followed by an error message, inform the instructor; otherwise, you can ignore
the warning.
Copy the value from the DEFAULT KEY box to the clipboard.
Locate the endpoint of the service named BookingTcp, and change its binding attribute to
netTcpRelayBinding.
Add an address attribute with the following value:
sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking (Replace YourInitials
with your initials).
4. Add a new endpoint behavior named sbTokenProvider to the endpoint behaviors configuration.
Add a new <endpointBehaviors> element to the <behaviors> element that is under the
<system.serviceModel> section.
In the new <endpointBehaviors> element, add a new <behavior> element, and set its name
attribute to sbTokenProvider.
Note: Visual Studio IntelliSense uses built-in schemas to perform validations. Therefore, it
will not recognize the transportClientEndpointBehavior behavior extension, and will display a
warning. Disregard this warning.
5. Locate the endpoint of the service, and set it to use the new endpoint behavior.
Use the behaviorConfiguration attribute to connect the endpoint to the endpoint behavior named
sbTokenProvider.
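Taken together, the endpoint configuration described in the steps above might resemble the following fragment. The service and contract names are placeholders; only the binding, address, and behaviorConfiguration values come from the lab steps:

```xml
<system.serviceModel>
  <services>
    <service name="BlueYonder.BookingService.Implementation.BookingService">
      <endpoint name="BookingTcp"
                address="sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking"
                binding="netTcpRelayBinding"
                behaviorConfiguration="sbTokenProvider"
                contract="BlueYonder.BookingService.Contracts.IBookingService" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sbTokenProvider">
        <!-- the transportClientEndpointBehavior is added in a later task -->
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```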
Note: Application initialization automatically sends requests to specified addresses after the
Web application loads. Sending the request to the service will make the service host load and
initiate the Service Bus connection.
Developing Windows Azure and Web Services 7-35
7. Open IIS Manager, and then set the start mode of the DefaultAppPool:
Open the Connections pane and click the Application Pools node.
Note: Setting the start mode to AlwaysRunning will load the application pool
automatically after IIS loads. To use application initialization, the application pool must be
running.
Note: When preload is enabled, IIS will simulate requests after the application pool starts.
The list of requests is specified in the application initialization configuration that you already
created.
Task 3: Configure the ASP.NET Web API back-end service to use the new relay
endpoint
1. Open BlueYonder.Companion.sln from D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server in a new
Visual Studio 2012 instance.
2. Install the Windows Azure Service Bus NuGet package in the BlueYonder.Companion.Host project.
In the new <endpointBehaviors> element, add a new <behavior> element, and in it, add a
<transportClientEndpointBehavior> behavior element.
In the <behavior> element, add a <tokenProvider> element, and in it, add a <sharedSecret> element
with the issuerName attribute set to owner and the issuerSecret attribute set to the access key of
the new Service Bus namespace you created.
Note: Visual Studio IntelliSense uses built-in schemas to perform validations. Therefore, it
will not recognize the transportClientEndpointBehavior behavior extension, and will display a
warning. Disregard this warning.
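The resulting behavior configuration might resemble this fragment (the issuerSecret value is a placeholder for the key you copied from the portal):

```xml
<endpointBehaviors>
  <behavior name="sbTokenProvider">
    <transportClientEndpointBehavior>
      <tokenProvider>
        <sharedSecret issuerName="owner" issuerSecret="[your default key]" />
      </tokenProvider>
    </transportClientEndpointBehavior>
  </behavior>
</endpointBehaviors>
```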
2. Locate the BlueYonderServerLab07YourInitials (Replace YourInitials with your initials) Service Bus
namespace, and verify that it contains the booking relay.
3. In the BlueYonder.Companion solution, bring back the call to the WCF service from the reservation
controller.
// TODO: Lab 07, Exercise 1: Task 4.3: Bring back the call to the backend WCF service.
You can use the Task List window to locate the TODO comment.
Uncomment the call to the CreateReservationOnBackendSystem method.
4. Publish the BlueYonder.Companion.Host.Azure project. If you did not import your Windows Azure
subscription information yet, download your Windows Azure credentials, and import the downloaded
publish settings file in the Publish Windows Azure Application dialog box.
5. Select the cloud service that matches the cloud service name you wrote down at the beginning of the
lab, after running the setup script.
8. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.
9. Open the
D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
solution file.
10. In the BlueYonder.Companion.Shared project, open the Addresses class, and set the BaseUri
property to the Windows Azure Cloud Service name you wrote down at the beginning of this lab.
11. Start the client application without debugging, and purchase a new trip to New York.
Wait for the app to show the list of flights from Seattle to New York.
Fill in the reservation form and click Purchase.
12. Go back to the 20487B-SEA-DEV-A virtual machine to debug the WCF Web application.
Verify that you break on the WCF service code.
Continue running and verify the client is showing the new reservation.
Results: After you complete this exercise, you can run the client app and book a flight, and have the
ASP.NET Web API services running in the Windows Azure Web Role communicate with the on-premises
WCF services by using Windows Azure Service Bus Relays.
2. Create a Windows Azure Worker role that receives messages from a Service Bus Queue
1. Open the Service Bus namespace you created in the previous exercise and click CONNECTION
INFORMATION to open the ACCESS CONNECTION INFORMATION dialog box.
2. Return to the BlueYonder.Companion solution in Visual Studio 2012, and then add a string setting
to the web role to store the Service Bus connection string.
Name the new setting Microsoft.ServiceBus.ConnectionString, and then set its value to the
connection string you found in the previous step.
3. Open the ServiceBusQueueHelper class located in the BlueYonder.Companion.Controllers project.
Create a Service Bus namespace manager object by using the connection string of the Service Bus.
To create the namespace manager object, use the CreateFromConnectionString method of the
NamespaceManager class.
Check if the Queue exists and create it by using the CreateQueue API if necessary.
Return a new QueueClient object for the queue by using the CreateFromConnectionString method
of the QueueClient class.
Note: The Queue name is stored in a static variable called QueueName, and has the value
FlightUpdatesQueue.
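A possible sketch of the helper described in steps 3 and 4, assuming the WindowsAzure.ServiceBus package; the method and variable names other than QueueName are illustrative, not the lab's exact code:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Illustrative sketch of the queue helper; only QueueName comes from the lab.
public static class ServiceBusQueueHelper
{
    public static readonly string QueueName = "FlightUpdatesQueue";

    public static QueueClient CreateQueueClient(string connectionString)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // Create the queue on first use.
        if (!namespaceManager.QueueExists(QueueName))
        {
            namespaceManager.CreateQueue(QueueName);
        }

        return QueueClient.CreateFromConnectionString(connectionString, QueueName);
    }
}
```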
5. In the FlightsController class, add a static field for the QueueClient object.
Call the method you previously created in a static constructor to set the object.
6. In the Put method, after saving the changes made to the flight schedule, set the FlightId property of
the updatedSchedule variable to the id parameter containing the updated flight id.
Create a new BrokeredMessage object with the updated schedule as the message body, set the
ContentType property of the message to UpdatedSchedule, and send the message to the queue.
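The sending code described in step 6 might look like this sketch; updatedSchedule, id, and the static queueClient field follow the lab text, the rest is illustrative:

```csharp
// Sketch: inside the Put method, after saving the flight schedule changes.
updatedSchedule.FlightId = id;

var message = new BrokeredMessage(updatedSchedule)
{
    ContentType = "UpdatedSchedule"
};
queueClient.Send(message);
```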
7. Review the Register method of the NotificationsController class. The same pattern of creating a
QueueClient object in the static constructor and then sending the update messages by using the
BrokeredMessage is applied to this controller.
Note: The Register method subscribes clients to flight update notifications. When a flight
update message is sent to the queue, every subscribed client waiting for that flight will be
notified by using the Windows Push Notification Services (WNS).
Task 2: Create a Windows Azure Worker role that receives messages from a Service
Bus Queue
1. Create a new Worker Role named BlueYonder.Companion.WNS.WorkerRole in the
BlueYonder.Companion.Host.Azure project.
Use the Worker Role with Service Bus Queue template when creating the role.
You can find the above configuration in the WnsConfiguration.xml file, under the lab's Assets
folder.
Note: The ClientSecret and PackageSID settings were retrieved by the Windows 8 client
team during the upload process of the client app to the Windows Store.
BlueYonder.Companion.WNS
BlueYonder.Companion.Entities
BlueYonder.DataAccess.Interfaces
BlueYonder.DataAccess
BlueYonder.Entities
5. Add the MessageHandler.cs file from the lab's Assets folder to the
BlueYonder.Companion.WNS.WorkerRole project.
Note: The MessageHandler class contains the code to subscribe clients to WNS and send
notifications to clients when their flights are rescheduled.
7. In the WorkerRole class, change the value of the QueueName constant from ProcessingQueue to
FlightUpdatesQueue.
8. In the Run method, add code after the // Process the message comment, to handle received
messages, according to the value of the received message ContentType property:
Subscription: Use the receivedMessage.GetBody<T> generic method to retrieve the content of the
message as a RegisterNotificationsRequest object, and then call the MessageHandler.CreateSubscription method.
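One possible shape of the dispatch logic described in step 8. The Subscription branch follows the lab text; the UpdatedSchedule branch, including the FlightSchedule type and the MessageHandler.SendNotification method, is an assumption:

```csharp
// Sketch: after the // Process the message comment in the Run method.
// The "UpdatedSchedule" branch names are assumptions, not the lab's code.
switch (receivedMessage.ContentType)
{
    case "Subscription":
        var request = receivedMessage.GetBody<RegisterNotificationsRequest>();
        MessageHandler.CreateSubscription(request);
        break;

    case "UpdatedSchedule":
        var schedule = receivedMessage.GetBody<FlightSchedule>();
        MessageHandler.SendNotification(schedule);
        break;
}
receivedMessage.Complete();
```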
Task 4: Test the Service Bus Queue with flight update messages
1. Arrange the two virtual machine windows so that you can work in the 20487B-SEA-DEV-A virtual
machine and still see the right-hand side of 20487B-SEA-DEV-C.
2. Run the client app in the 20487B-SEA-DEV-C virtual machine. The trip you purchased in the previous
exercise will show in the Current Trip list. Write down the date of the trip.
3. Leave the client app running and return to the 20487B-SEA-DEV-A virtual machine. Open the
BlueYonder.Companion.FlightsManager solution file from the
D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server folder in a new Visual Studio 2012 instance.
4. Open the web.config file from the BlueYonder.FlightsManager project, and in the <appSettings>
section, locate the webapi:BlueYonderCompanionService key and update the {CloudService}
string to the Windows Azure Cloud Service name you wrote down at the beginning of the lab.
5. Run the BlueYonder.FlightsManager web application, find the flights from Seattle to New York, and
change the departure time of your purchased trip to 9:00 AM.
Verify that you see a toast notification in the client app in the 20487B-SEA-DEV-C virtual machine
(the notification might take a few seconds to appear).
Results: After you complete this exercise, you will be able to run the Flight Manager Web application,
update the flight departure time of a flight you booked in advance in your client app, and receive
Windows push notifications directly to your computer.
Question: In the lab, you used a worker role to send updates to customers. This worker role
runs as a separate component that is detached from the main application. What are some
benefits of using this architecture?
Review Question(s)
Question:
Module 8
Deploying Services
Contents:
Module Overview 8-1
Module Overview
The deployment of a web application to a remote server can sometimes be complex and involve several
steps, such as copying files, changing permissions, and configuring databases. Instead of performing these
steps manually, there are tools that can help automate this process and execute the deployment more
safely so that availability of your application is not affected. For example, some deployment tools can
automatically back up the web application before deploying the new version, and restore the backups if
any errors occur during the deployment.
In this module, you will learn how to use tools such as Web Deploy to improve the deployment process,
how to perform continuous delivery to automate the build and deployment process, and some best
practices for deployment in a production environment.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After you complete this module, you will be able to:
Deploy web applications with Visual Studio.
Apply best practices for deploying web applications on-premises and to Windows Azure.
Lesson 1
Web Deployment with Visual Studio 2012
One of the quickest ways to deploy a web application to a remote server is to deploy it with the Web
Deployment Framework, or Web Deploy. With Web Deploy, you can perform several tasks at one time,
such as copying files to remote servers, configuring IIS application pools, and applying permissions to the
file system. There are many ways to use Web Deploy, but one of the easiest is to use the publishing
feature of Visual Studio 2012.
In this lesson, you will learn about Web Deploy and how to deploy web applications by using Web Deploy
in Visual Studio.
Lesson Objectives
After you complete this lesson, you will be able to:
Create a Web Deploy package and perform a live deployment with Visual Studio 2012.
This is where Web Deploy, which was released in 2009, is most useful. Web Deploy was created to simplify
the deployment of web applications to servers. Web Deploy can do more than just copy files between
source and destination. It can perform additional tasks such as copying configuration from one IIS
server to another, writing to the registry, setting file system permissions, transforming
configuration files, and deploying databases.
Web Deploy is installed with Visual Studio 2012. If you have a computer that does not have Visual Studio
2012 installed on it, and you want to use Web Deploy, you have to install it manually.
You can use Web Deploy to publish and synchronize an existing web application on a remote server. You
can also use Web Deploy to create a deployment package from an existing web application, and publish
that package to a server later on. A deployment package, which is a standard compressed file, contains
both the content that you want to copy to the server, and an instruction file that contains the list of
actions to execute on the target server. The instructions, or Providers, as they are referred to in the Web
Deploy terminology, control the various resources that can be created or manipulated in the server, such
as files, IIS applications, databases, and registry. You can also create your own custom Web Deploy
provider if you have to perform a task that is not implemented by any of the existing providers, such as
attaching a .VHD file as a local hard drive.
You can use Web Deploy in various ways. For example, when you use Visual Studio 2012 to publish a web
application, you are actually using the Web Deployment Framework for the task. The same is true when
you export an application from IIS Manager, or when you use the MSDeploy command-line tool.
Note: If you are familiar with Windows PowerShell, there is also a Web Deploy snap-in for
PowerShell, which is discussed in Lesson 3, Command-Line Tools for Web Deploy, in this
module.
Web Deploy and Web Deploy Package. These two options use the Web Deployment Framework to
perform complex deployments. For example, if you decide to create a Web Deploy package, you can
create a package that includes the web application files and database scripts, which will be executed
after the application is deployed.
Whichever deployment technique you choose, you can control some basic settings through the properties
of the web application project. If you do not plan to use Web Deploy, you can only control a few settings,
such as whether to deploy files that are in the project folder but were not included in the project. If you
plan to use Web Deploy (either live or by creating a package), you can configure more settings, such as
copying local IIS application pool settings to the deployed server, and listing the SQL script files which will
execute as part of the deployment.
1. Right-click your web application project in the Solution Explorer window in Visual Studio 2012, and
then click Properties.
2. In the Properties window, there are two tabs for configuring publish settings, Package/Publish Web
and Package/Publish SQL.
The following illustration shows the location of the Package/Publish Web and Package/Publish SQL
tabs in the web application Properties window:
After you have configured the publish settings, you can publish the web application by right-clicking the
project in the Solution Explorer window, and then clicking Publish. This displays the Publish Web dialog
box, in which you can perform the following tasks:
Select which solution configuration you want to publish, such as debug or release.
Begin the publishing process.
If you select any of the Web Deploy techniques, you can also provide additional settings, such as a new
connection string that will replace the current connection string in the web.config file.
8-6 Deploying Services
Visual Studio 2012 stores all the publish settings in the project so that the next time you have to publish
the application, you can do a one-click publish instead of supplying all the information again.
Visual Studio 2012 supports storing more than one publishing profile so that you can create profiles for
different scenarios. For example, you can create different profiles for testing and production
environments, each with its own database connection strings.
For more information on how to use the Web Deploy dialog box, see:
How to: Deploy a Web Project by Using One-Click Publish in Visual Studio
http://go.microsoft.com/fwlink/?LinkID=298825&clcid=0x409
Note: If you create a Web Deploy package, you will find that in addition to the compressed
package file, a .cmd file is created, together with a readme.txt file that describes how to run
the .cmd file to deploy the package.
Demonstration Steps
1. Open Visual Studio 2012, and create a new ASP.NET 4 MVC Web Application project named MyApp
in the D:\Allfiles\Mod08\DemoFiles\DeployWebApp\begin folder.
2. Clear the Create directory for solution check box.
4. Publish the application by using the following settings, and verify that the JSON response contains
two items.
Profile name: RemoteServer
Server: http://10.10.0.11/msdeployagentservice
Site Name: Default Web Site/MyApp
Password: Pa$$w0rd
5. In the MyApp project, under the Controllers folder, open the ValuesController.cs file, and change
the implementation of the parameterless Get method.
Change the implementation of the Get method to return a collection of three items instead of two
items.
6. Publish the application again, wait until the publishing is complete and the browser is opened, and
verify the JSON response contains three values.
Lesson 2
Creating and Deploying Web Application Packages
Now that you have seen what Web Deploy can do, you can explore other ways of using Web Deploy, in
addition to Visual Studio 2012. In this lesson, you will learn how to create Web Deploy packages by using
IIS Manager to synchronize content such as web application files, IIS registry settings, and SSL certificates,
and to deploy them to other servers.
Lesson Objectives
After you complete this lesson, you will be able to:
Export/Import Server/Site Package: You can use this option when you select the root node (the
computer node) from the Connections pane. This option creates a deployment package for Web
Deploy that contains the configuration of the web server. This includes configuration from the
applicationHost.config configuration file, IIS registry settings, SSL certificates, and the content of all
the web applications hosted in the server.
Note: Whether you decide to export an application or the whole server, you can still
control which parts to export. For example, if you decide to export the server, you can exclude
some web sites from the package, or remove some configuration that you do not want to export.
Unlike Visual Studio 2012, where you can only use some Web Deploy Providers, such as IIS and database
providers, when you export a Web Deploy package from IIS, you can use any of the supported providers.
For example, when you use IIS Manager to export a WCF service web application that uses message
security, you can use the cert Web Deploy Provider to include the certificate that is used for service
authentication in the exported package.
In addition, when creating packages by using IIS, you can also use parameters. Parameters are used for
varying the way packages are deployed to different environments. For example, you probably want to use
the same database schema both in staging and in production. However, you probably want to update
different database servers in each environment when you deploy the package. By adding parameters, you
can do more than control the settings used by providers.
You can also change the content of your application's configuration files by using substitution tokens to
find sub-strings in text files and replace them with the value set in the parameter. Each parameter that
you create can have a default value, a name, and a description that is shown when importing the package
to help the person importing the package understand what value they have to enter.
Note: If you are familiar with web.config transformations, do not confuse it with Web
Deploy Parameters. Web Deploy parameterization is not limited to web.config files, and you can
use it on any XML or text file in the deployment package. web.config transformations will be
explained in Lesson 6, Best Practices for Production Deployment.
For more information about how to export Web Deploy packages by using IIS Manager, see:
For more information about how to use parameters with Web Deploy, see:
How to: Use Web Deploy Parameters in a Web Deployment Package
http://go.microsoft.com/fwlink/?LinkID=298827&clcid=0x409
1. Packages that contain a web application: If you exported a web application with IIS Manager, or
published a web application through Visual Studio 2012, you can import it by selecting the target
web site in the Connections pane, and clicking Import Application in the Actions pane.
Note: You can also import a package and deploy it under an existing web application
instead of under a web site. However, this scenario is less common.
2. Packages that contain a web server: If you exported a web server by using IIS Manager, you can
open IIS Manager on a different server, select the root node (machine node) from the Connections
pane, and then click Import Server or Site Package in the Actions pane.
Note: Exporting a web server with IIS Manager uses the Web Deploy webServer Provider.
You can use the same provider in the command-line with MSDeploy.exe or by using the
PowerShell Snap-in, and import the created package with IIS Manager to achieve the same result.
To start the import process, select the deployment compressed file that you created. Next, select the parts
of the application to deploy. (By default, the entire package will be deployed, but you can select which
providers will run and which will not.) Then provide values for the package parameters that you created
when exporting the package.
The following figure shows the list of parameters that are requested to deploy a sample package:
If you only want to see the list of actions that will be performed during deployment, you can use the
WhatIf flag. Performing an import with the WhatIf flag will not execute the deployment. It will only print
the actions that each provider will perform. To turn on the WhatIf option, click Advanced Settings in
the Import Application Package dialog box, and then change the WhatIf setting from False to True.
When the import is complete, you can check the Details tab for a list of actions that would have been
performed if the WhatIf flag was set to False.
Demonstration Steps
1. In the 20487B-SEA-DEV-B virtual machine, open IIS Manager, select the MyApp web application,
and then click Export Application.
2. In the Export Application Package dialog box, press Next until you reach the Save Package step,
enter the package path c:\MyApp.zip, and then complete the export process.
3. Copy the MyApp.zip file from the local C:\ root folder to the remote server at the UNC
\\10.10.0.10\c$\.
4. In the 20487B-SEA-DEV-A virtual machine, open IIS Manager, select the Default Web Site, and then
click Import Application.
5. In the Import Application Package dialog box, type the package path C:\MyApp.zip, and then
continue with the import.
6. Open a browser, and browse to the address http://localhost/MyApp/api/values. Verify that you see
the output of the service.
Lesson 3
Command-Line Tools for Web Deploy
The previous two lessons showed how to use Web Deploy with applications such as Visual Studio 2012
and IIS Manager, but Web Deploy can also be run from scripts that do not require human interaction.
Running such scripts is useful if you plan to automate the deployment process, for example, if you
want to deploy the web application to an integration environment each night.
In this lesson, you will learn how to use Web Deploy from the command line and from PowerShell to
automate the packaging of web applications and their deployment to remote servers.
Lesson Objectives
After you complete this lesson, you will be able to:
To start using the MSDeploy command line tool, you have to open a Command Prompt window, and run
the MSDeploy executable from the Microsoft Web Deploy folder in the %ProgramFiles%\IIS folder.
Note: After you install IIS 8 and Visual Studio 2012, you might have several Microsoft Web
Deploy folders for the different versions of Web Deploy installed on your computer. You should
select the most recent version of MSDeploy.
The following command line executes MSDeploy to create a package file for the MyApp web application.
Executing the MSDeploy tool from the command line to package the MyApp web application
msdeploy.exe -verb:sync -source:iisApp="Default Web Site/MyApp" -dest:package=c:\MyApp.zip
The -verb parameter specifies which operation is required. The sync operation instructs the tool to
synchronize source and destination. If you use the dump operation instead, the tool will only list the
information that it received from the source.
For example, the following command line executes MSDeploy, but only prints the list of files that will be
copied without actually copying them.
Executing the MSDeploy tool to dump the list of files to be copied from the MyApp web
application
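The command block itself is missing from this copy; based on the surrounding text and the package command shown earlier, it presumably resembles:

```
msdeploy.exe -verb:dump -source:iisApp="Default Web Site/MyApp"
```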
The -source operation parameter indicates the source of the data. In the previous example, the source is
the iisApp provider. The iisApp provider can synchronize the content of a website or a web application,
either by its physical or virtual path. The iisApp provider is actually constructed from four other providers:
contentPath, createApp, dirPath, and filePath.
You can also use MSDeploy for live server-to-server synchronization. The following command line
executes MSDeploy to synchronize the MyApp web application from the current server to a remote server
named Server2.
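The command block is missing here as well; judging from the -whatif variant that follows in the text, it presumably resembles:

```
msdeploy.exe -verb:sync -source:iisApp="Default Web Site/MyApp" -dest:iisApp="Default Web Site/MyApp",computerName=Server2
```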
MSDeploy can also use the WhatIf flag to print which actions will be taken, without actually
performing them. To use WhatIf, add the -whatif parameter to the end of the command line.
The following command line will print the MyApp web application files that should be deployed to the
remote server. However, it will not actually copy the files to the remote server.
Executing live server-to-server synchronization with the MSDeploy tool by using the -whatif
operation setting
msdeploy -verb:sync -source:iisApp="Default Web Site/MyApp" -dest:iisApp="Default Web Site/MyApp",computerName=Server2 -whatif
Note: When synchronizing web applications between servers, Web Deploy will check which
files already exist, unchanged, on the target server. These files will not be copied, therefore
improving the performance of the deployment process. If all the files on the target server are the
same as those in the source server, running the MSDeploy tool with the -whatif operation setting
will result in a message that states that 0 (zero) changes were made.
For a complete reference for the MSDeploy command-line tool, see:
To use the Web Deploy PowerShell Snap-in, open a PowerShell window, and then type the following
command: Add-PSSnapin WDeploySnapin3.0.
The Web Deploy PowerShell Snap-in has several cmdlets that are resource and action specific, such as the
following:
Backup-* / Restore-*: This set of cmdlets backs up resources to a packaged compressed file and
restores them to a selected server. You can back up and restore a web application, a web site, a web
server, a SQL Server database, or a MySQL database. For example, the Restore-WDApp cmdlet restores
packages that contain a backed-up web application.
Note: The package files that are created by the Backup-* cmdlets and used by the Restore-* cmdlets are standard Web Deploy packages. Therefore, you can use those packages with other tools that support packaged files, such as IIS Manager or MSDeploy. For example, you can use the Backup-WDServer cmdlet to create a Web Deploy package for a web server, and import it through IIS Manager.
Sync-*: This set of cmdlets synchronizes a resource between servers. Similar to the Backup-* and
Restore-* cmdlets, you can use the Sync-* cmdlets to synchronize web applications, web sites, web
servers, or databases such as MySQL and SQL Server databases.
Get/New-WDPublishSettings: By default, the Web Deploy cmdlets run on the local server. For
example, when you back up a web application, it is backed up from the local IIS. When you restore a
web application, it is restored to the local IIS. If you want to perform a backup/restore from/to a
different server, you have to first create a publish settings file for that server. A publish settings file
contains the address of the server and the credential information for that server. The New-WDPublishSettings cmdlet creates a publish settings file for a server, and the Get-WDPublishSettings cmdlet loads a publish settings file into an object that you can use with other cmdlets, such as Backup-*, Restore-*, and Sync-*.
Note: When you use the Sync-* cmdlet, you can choose between synchronizing resources
in the same server (for example, for duplicating a web site), from the local server to a remote
server, or between two remote servers. If you decide to synchronize between two servers, you will
need two publish settings files, one for the source and one for the destination.
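For example, a synchronization between two remote servers might look like the following sketch. The -SourcePublishSettings parameter name is an assumption made by symmetry with the -DestinationPublishSettings parameter shown later in this lesson, and Server1 and Server2 are placeholder server names:

```powershell
# Create publish settings files for both the source and the destination servers
$cred = Get-Credential
New-WDPublishSettings -ComputerName Server1 -Credentials $cred -AgentType MSDepSvc -FileName:"C:\Server1.publishsettings"
New-WDPublishSettings -ComputerName Server2 -Credentials $cred -AgentType MSDepSvc -FileName:"C:\Server2.publishsettings"

# Synchronize the MyApp web application between the two remote servers
Sync-WDApp "Default Web Site/MyApp" "Default Web Site/MyApp" -SourcePublishSettings "C:\Server1.publishsettings" -DestinationPublishSettings "C:\Server2.publishsettings"
```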
8-14 Deploying Services
The following script shows how to synchronize a web application named MyApp between the local server
and the remote server named Server2.
Synchronizing the MyApp web application from the local server to Server2
$cred = Get-Credential
New-WDPublishSettings -ComputerName Server2 -Credentials $cred -AgentType MSDepSvc -FileName:"C:\Server2.publishsettings"
Sync-WDApp "Default Web Site/MyApp" "Default Web Site/MyApp" -DestinationPublishSettings "C:\Server2.publishsettings"
The Get-Credential cmdlet shows a credentials request dialog box where the user must enter their credentials for accessing Server2. The New-WDPublishSettings cmdlet creates a publish settings file that has information on how to publish resources to Server2. The Sync-WDApp cmdlet synchronizes the MyApp web application from the local IIS to the same web site and web application name on Server2.
The Sync-WDManifest cmdlet is a bit different from the other cmdlets, because it can synchronize
multiple providers in a single command. To use this cmdlet, you must create two manifest files, one for
the source and another for the destination. The manifest file is an XML file that lists providers and their
parameters.
For a detailed list of the Web Deploy PowerShell cmdlets, see:
Demonstration Steps
1. In the 20487B-SEA-DEV-A virtual machine, open a PowerShell window, and then add the
WDeploySnapin3.0 PowerShell Snap-in.
Type the following command and press Enter.
Add-PSSnapin WDeploySnapin3.0
2. Request the credentials for the server that you will deploy to, and store them in a variable named $cred.
$cred = Get-Credential
For the credentials, use the username Administrator and the password Pa$$w0rd.
3. Create a publish settings file for the remote server by typing the following command.
4. Synchronize the web application to the remote server by typing the following command.
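Based on the script shown earlier in this lesson, the commands for steps 3 and 4 would resemble the following sketch:

```powershell
# Step 3: create a publish settings file for the remote server
New-WDPublishSettings -ComputerName Server2 -Credentials $cred -AgentType MSDepSvc -FileName:"C:\Server2.publishsettings"

# Step 4: synchronize the MyApp web application to the remote server
Sync-WDApp "Default Web Site/MyApp" "Default Web Site/MyApp" -DestinationPublishSettings "C:\Server2.publishsettings"
```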
Lesson 4
Deploying Web and Service Applications to Windows
Azure
In previous lessons, you learned how to deploy web applications by using Web Deploy. However, in Windows Azure you can host both web applications and service applications (background processes that run tasks). Therefore, Windows Azure offers additional techniques for software installation.
In this lesson you will learn about the different techniques that you can use to deploy web applications
and service applications to Windows Azure Cloud Services, and to Windows Azure Web Sites.
Lesson Objectives
After you complete this lesson, you will be able to:
When you use Visual Studio 2012 to perform the deployment, Visual Studio handles the entire process. When you publish a Cloud project, you provide Visual Studio 2012 with your Windows Azure account information. With that information, Visual Studio creates a Cloud Service, uploads the package and configuration, creates the deployment and instances, and deploys the package to the new instances.
If your Cloud project includes web roles, you can enable Web Deploy to improve the performance of
deploying web roles. With Web Deploy, Visual Studio will connect directly to the hosted instance and
update the files locally on the server. Because this requires connecting directly to the instance, using Web
Deploy requires enabling Remote Desktop, and is only applicable if you deploy the web role to a single
instance.
For more information about how to use Web Deploy with Windows Azure Roles, see:
Since Visual Studio 2012 can identify existing Cloud Services, you can decide whether to deploy the
service to a new Cloud Service or to an existing one. You can also decide whether to deploy to the
production or staging environment.
For more information about how to publish services to Windows Azure from Visual Studio, see:
Publishing Cloud Services to Windows Azure from Visual Studio
http://go.microsoft.com/fwlink/?LinkID=298833&clcid=0x409
In addition to publishing, Visual Studio 2012 provides the packaging option. When you package a Cloud
project, a Windows Azure package is created locally together with a configuration file. However, it will not
be deployed to Windows Azure. You can then take the files and deploy them manually to Windows Azure
either through the Windows Azure Management Portal or by using PowerShell cmdlets.
Note: The package file (with a .cspkg extension) and the configuration file (with a .cscfg
extension) are created under the Cloud projects folder, in the bin\configuration\app.publish
folder, where configuration is the build configuration that you choose when packaging, such as
debug or release. After the packaging is complete, Visual Studio 2012 opens the folder in File
Explorer.
After you create the deployment package, you can deploy it manually through the Windows Azure
Management Portal at http://manage.windowsazure.com. After you sign in to the portal, create a Cloud
Service or select an existing one, and then click Upload. In the dialog box, you can name the deployment,
and select the package and the configuration files that you want to deploy. The portal uploads the
package and configuration files, and processes the rest of the deployment steps.
Another option is to deploy the package by using PowerShell. Unlike other tools, such as Visual Studio 2012 or the Windows Azure Management Portal, where deployment is completed manually, PowerShell lets you automate the deployment process and include it in a script.
Note: The Windows PowerShell cmdlets for Windows Azure are not installed with the
Windows Azure SDK. You can download the cmdlets from:
http://go.microsoft.com/fwlink/?LinkID=298834&clcid=0x409
The following PowerShell script creates a Cloud service, and then deploys a Windows Azure package to its
production environment.
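A sketch of such a script, using the cmdlets described below, might look like the following. The subscription name, certificate thumbprint, subscription ID, service name, and file paths are placeholders that you must replace with your own values:

```powershell
# Configure the default subscription used by the rest of the script
$cert = Get-Item cert:\CurrentUser\My\YOUR_CERTIFICATE_THUMBPRINT
Set-AzureSubscription -SubscriptionName "MySubscription" -SubscriptionId "YOUR_SUBSCRIPTION_ID" -Certificate $cert

# Create a Cloud service in the north central US region
New-AzureService -ServiceName "MyUniqueCloudService" -Location "North Central US"

# Deploy the package and configuration files to the production environment
New-AzureDeployment -ServiceName "MyUniqueCloudService" -Slot Production -Package "C:\Deploy\MyApp.cspkg" -Configuration "C:\Deploy\ServiceConfiguration.Cloud.cscfg" -Label "Deployment1"
```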
The Set-AzureSubscription cmdlet configures the default subscription that is used by the rest of the script. The subscription information is also stored in a file so you can use it with other scripts. The New-AzureService cmdlet creates a Cloud service in the north central US region, and the New-AzureDeployment cmdlet deploys the package and the configuration files to the production environment in the new Cloud service.
Note: If you want to try this script, you must set the thumbprint of your Windows Azure
management certificate, your Windows Azure subscription ID, the Cloud Service name, which
must be unique, and the path and names of the package and configuration files.
To import a publish profile file that you downloaded from the Windows Azure Management Portal, proceed with the following steps:
1. In the Windows Azure Management Portal, download the publish profile file of the web site to which you want to deploy.
2. In Visual Studio 2012, publish a web application project, and in the Profile tab of the Publish Web dialog box, click Import.
3. In the Import Publish Profile dialog box, select the Import from a publish profile file option, click
Import, and then import the downloaded publish profile file.
4. Click OK, and then continue with Web Deploy publishing as you usually would.
To select a web site from your Windows Azure subscription, proceed with the following steps:
1. In Visual Studio 2012, publish a web application project, and in the Profile tab of the Publish Web
dialog box, click Import.
2. In the Import Publish Profile dialog box, if you have not already added your Windows Azure
subscription to Visual Studio 2012, click Add Windows Azure subscription, and then follow the
instructions to add your subscription to Visual Studio 2012.
3. Select the Import from a Windows Azure web site option, and then select the web site to which
you want to deploy.
4. Click OK, and then continue with Web Deploy publishing as you usually would.
Developing Windows Azure and Web Services 8-19
The following figure shows the location of the Download publish profile link in the Windows Azure
Management Portal.
The following figure shows the result of importing a publish profile file in Visual Studio 2012.
Note: If you examine the content of the publish profile file, you might notice that its content resembles that of the publish settings file that is created by the New-WDPublishSettings cmdlet. Although the files do look similar in structure, you cannot use a WAWS publish profile file with the Web Deploy PowerShell cmdlets, because the cmdlets expect a different format for the destination address.
Lesson 5
Continuous Delivery with TFS and Git
In previous lessons, you saw how to use web deployment techniques to deploy your application both on-premises and to Windows Azure. However, there are some questions you will probably want to answer before you start using these deployment techniques: When are you going to deploy your applications? Will you deploy after each check-in to source control, or on demand? Will you deploy only after the code passes unit tests? Will you deploy every couple of days, or deploy nightly to have an up-to-date testing environment the following day? And will you manually build, test, and deploy the application every time, or use automated, scheduled tasks? Some of these questions, if not all, are answered by a process called continuous delivery. If used correctly, it can help you increase the quality of your application.
In this lesson, you will learn the benefits of using continuous delivery, and how to use it with Windows Azure and with source control management (SCM) systems such as Git and Team Foundation Service (TFS).
Lesson Objectives
After you complete this lesson, you will be able to:
Describe the benefits of continuous delivery.
Increase the confidence of your development teams by constantly maintaining a high-quality product.
Reduce the overall risk in developing a complex software product by using automated tools.
Note: You can see an example of a deployment script in PowerShell in Lesson 4, Deploying
to Windows Azure.
If you are using automated builds in TFS, see the following for more information on how to deploy your web application to Windows Azure Cloud Services:
Instead of configuring the deployment step in the automated build process yourself, Windows Azure
provides continuous delivery for two well-known source control management systems:
Git: Windows Azure lets you create a Git repository for each Windows Azure Web Site. When you
finish creating and testing your web application, you can push it to the Git repository, which
automatically deploys it to your Windows Azure Web Site.
Team Foundation Service: With Windows Azure, you can create an automated build in your Team
Foundation Service project that automatically deploys your web application to a Windows Azure Web
Site or a Windows Azure Cloud Service when you check in your changes.
Note: When you use continuous delivery with either Git or Team Foundation Service,
Windows Azure only provides the deployment step of the automated build process. You must
create the tests that will execute during the automated build, and configure the build to execute
those tests.
For more information on publishing a web application to a Windows Azure Web Site with Git, see:
For more information on publishing a web application to a Windows Azure Cloud Service by using Team
Foundation Service, see:
Continuous delivery to Windows Azure by using Team Foundation Service
http://go.microsoft.com/fwlink/?LinkID=298838&clcid=0x409
Demonstration Steps
1. Sign in to http://go.microsoft.com/fwlink/?LinkID=313752, or sign up to create an account.
Note: If you previously created a TFS account by using your Windows Live ID, you cannot
create another account. If you have already created an account, use this step to show how to
create an account, and then browse to https://AccountName.visualstudio.com.
2. Create a new Team project named 20487B with the default process template, and locate the new
project.
3. Open a new instance of Visual Studio from the TFS project Web site, and then create a new ASP.NET
MVC 4 Web Application project named MyTFSWebApp with the Web API project template. Select
Add to source control when you create the project, and after the project is created, add the project
to the suggested location in the TFS.
5. Leave Visual Studio 2012 open, return to the browser, open the Windows Azure Management Portal
(https://manage.windowsazure.com), and then create a new Windows Azure Web Site by using
the Quick Create option.
Note: You can skip this step if you already have a Windows Azure Web Site that you want
to use.
6. Open the configuration of the newly created Windows Azure Web Site, and then click Set up TFS
publishing. Enter the name of your TFS account, and then click Authorize Now. Accept the
requested permissions, then select the 20487B project, and then complete the set up process.
7. Leave the browser open, return to Visual Studio 2012, check out the ValuesController.cs file, and change the parameterless Get method to return an array of three values instead of two. Save the file, and check the file back in to TFS.
8. In Team Explorer, open the Builds view, click the build under All Build Definitions, show the students the list of builds, and refresh the list every couple of seconds until the build process is completed.
9. Return to the browser to view the deployment history on the DEPLOYMENTS tab, and then click the
DASHBOARD tab to open the Web Sites dashboard, and then click the link under SITE URL. Append
the /api/values string to the URL in the address bar of the browser. Verify that the returned list
contains three values.
10. Clear the Add to source control check box on the New Project dialog box.
Note: If you do not clear the Add to source control check box, you will be prompted to
register new projects with a source control each time you add a new project.
Lesson 6
Best Practices for Production Deployment
By now, you have learned how to use Web Deploy and continuous delivery to automate the deployment process of your application, but there is more to deployment than just making sure the target server has the latest version of the application. For example, when you deploy more than one web application
to a web server, there are steps you can take to improve the way these applications run side-by-side. In
addition, when you deploy a new application to an existing environment, especially to production
environments, you have to consider how the deployment process itself will affect users that are currently
trying to use your application. Will the application still be able to respond to requests while being
updated? Will its throughput be affected when servers are down for deployment?
In this lesson, you will learn of additional tools and techniques that can assist you in deploying
applications to production environments.
Lesson Objectives
After you complete this lesson, you will be able to:
Transform web.config files when publishing web applications.
Web.config Transformations
One of the common tasks when you deploy web
applications to a different environment is
changing the content of the web.config file,
because some configurations differ between
environments. For example, different
environments usually use different database
connection strings; production environments
usually change the compilation mode from Debug
to Release; and in the development environment,
you would probably want to see the original
errors, whereas in other environments, you would
probably choose to hide them and show a custom
error page.
One of the options to change the content of the web.config file is to use Web Deploy parameters, as
explained in Lesson 1, Web Deployment with Visual Studio 2012, and Lesson 2, Creating and
Deploying Web Application Packages. For example, when publishing a web application through Visual
Studio 2012, you can set the database connection string that you want to use in the deployed
environment. This change is applied with the help of Web Deploy parameters. When you deploy through
IIS Manager, MSDeploy, and the Web Deploy snap-in for PowerShell, you have even more control over
file content with parameters that use XPath expressions to find and replace file content in an XML file.
However, creating XPath expressions to locate specific XML content can sometimes be complex. For example, the following XPath string represents a search pattern that matches a connection string named MyAppDb: //*[local-name()='connectionStrings']/*[local-name()='add'][@name='MyAppDb']/@connectionString.
This is where web.config transformations are useful. Web.config transformations are created specifically for locating configuration sections and XML elements inside the web.config file, changing them, adding new elements or attributes to them, and even removing elements and attributes from the configuration file. When you publish a web application by using Visual Studio 2012, the transformation is applied to the web.config file automatically during the build step. This results in a new web.config file, which is then used as a replacement for the original web.config file for the rest of the publishing process (it is either copied to the destination server or packaged).
Web.config transformation uses transformation files. These files use the naming convention of
web.configuration.config, where configuration is the solution configuration that you want to use the
transformation for, such as Debug or Release. For example, the file Web.Debug.config is the
transformation file that is used when publishing a web application that uses the Debug solution
configuration.
If you create a new web application project with Visual Studio 2012, two transformation files are created
automatically, one for Debug, and another for Release. If you delete these files, or add new solution
configurations, you can generate new transformation files by right-clicking the web.config file under
your web application project in the Solution Explorer window, and clicking Add Config Transform.
Clicking that option makes Visual Studio 2012 generate the transformation files for the missing solution
configurations automatically.
The content of a transformation file is XML-based, and is a mix of XML configuration structure and
transformation expressions.
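A minimal transformation file matching the description that follows might look like this sketch; the dbConnection name and the connection string value are illustrative:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the attributes of the connection string whose name matches dbConnection -->
    <add name="dbConnection"
         connectionString="Data Source=ProductionServer;Initial Catalog=MyAppDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Remove the debug attribute from the compilation element -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```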
The overall structure of the XML file resembles that of a web.config file. It has the configuration root
element, and a section structure with connectionStrings and system.web, similar to that of a typical
web.config file. As the original web.config is transformed, the transformation mechanism searches for
elements in the original web.config as they appear in the transformation file. For example, the
transformation looks for a compilation element under the system.web section in the original web.config,
because that is the XML structure in the transformation file.
After the transformation process locates the element, it uses the attributes set for the element in the
transformation file to process the original element. For example, in the compilation element, the
xdt:Transform="RemoveAttributes(debug)" attribute changes the original compilation element by
removing the debug attribute from it. In the add element that is under the connectionStrings element,
the xdt:Transform="SetAttributes" sets the values of the connectionString and name attributes of the
original web.config to the attributes supplied in the transformation file, only if the original name attribute
and the name attribute of the transformation file are a match (both have the value dbConnection).
The transformed web.config has a different dbConnection connection string, and the debug attribute is
removed from the compilation element.
After you create a transformation file and add the relevant transformations, you can preview the resulting web.config by right-clicking the transformation file in Solution Explorer, and then clicking Preview Transform. This opens a web.config preview page in the document area that shows the original file and the transformed file side by side, and highlights the content that was added or removed.
The following figure shows a preview of a web.config transformation:
For more information about the transformation syntax, see:
web.config Transformation Syntax for Web Project Deployment By Using Visual Studio
http://go.microsoft.com/fwlink/?LinkID=298839&clcid=0x409
Demonstration Steps
1. Open Visual Studio 2012, and create a new ASP.NET MVC 4 Web Application that uses the Web API
template.
2. Open the Web.config file, and show the current DefaultConnection connection string and
<compilation> section in the <system.web> configuration group.
3. Expand the Web.config file in Solution Explorer, and open the Web.Release.config file.
4. View the auto-generated transformation for the <compilation> section, and add a new
transformation for the DefaultConnection connection string, by using the following configuration.
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=ProductionSQLServer;Initial Catalog=MyAppProductionDB;Integrated Security=True"
       xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
</connectionStrings>
6. In Visual Studio 2012, open the MyProductionApp web site from the Local IIS.
To open a web site from Visual Studio 2012, on the File menu, point to Open, and then click Web
Site.
7. Open the Web.config file, and show the students the altered DefaultConnection connection string,
and the missing debug attribute in the <compilation> section.
Note: The default setting for IIS is to shut down the worker process automatically, if the
application is idle for more than 20 minutes. You can change this application pool setting
through IIS Manager.
To reduce assembly loading time, a shared assembly feature was introduced with Visual Studio 2012. Consider a scenario where you have multiple web applications on the same server, and these applications have the same set of assemblies, such as Entity Framework assemblies, JSON.NET assemblies, and ASP.NET Web API core assemblies, deployed to their Bin folders. By using the shared assembly feature, identical copies of an assembly are replaced with a single assembly file, and the remaining copies are replaced with symbolic links. Assembly files can then be loaded more quickly, reducing the startup time of web applications.
To enable this feature, you must run the aspnet_intern command-line tool. This tool scans the ASP.NET temporary files folder in search of reused assemblies, makes a single copy of these assemblies in a special folder that you specify in the command line, and replaces the original assemblies with symbolic links.
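A plausible form of the command is shown below; the -interndir parameter name for the target folder and the exact temporary files path are assumptions based on the surrounding description:

```
aspnet_intern -mode exec -sourcedir "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files" -interndir C:\CommonAssemblies
```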
This command, when it is executed from the Developer Command Prompt for Visual Studio 2012, scans
the Temporary ASP.NET Files folder for 64-bit ASP.NET web applications, copies a single file of each
reused assembly to the C:\CommonAssemblies folder, and replaces the original assembly files with a
symbolic link.
The -sourcedir parameter must point to the location of the temporary ASP.NET files folder, because
ASP.NET loads assemblies from a temporary folder and not from the Bin folder of the web application.
For more information about loading assemblies from temporary folders in ASP.NET, see:
If you change the -mode parameter from exec to analyze, you can view the list of reusable assemblies
without replacing them with a symbolic link.
Note: Because this command is a one-time operation that only processes the assemblies present when it runs, you will probably want to run the aspnet_intern command routinely, by using a scheduled task.
Note: You can change the maximum number of upgrade domains by editing the service
definition configuration file, and setting the upgradeDomainCount attribute in the
ServiceDefinition root element.
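For example, a sketch of the relevant part of the service definition file (the service and role names are placeholders):

```xml
<ServiceDefinition name="MyCloudService" upgradeDomainCount="5"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole" vmsize="Small">
    <!-- Sites, endpoints, and configuration settings omitted -->
  </WebRole>
</ServiceDefinition>
```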
The following figure shows how a deployment with 10 instances is divided among five upgrade domains.
Note: In the previous figure, there is also a column that displays the fault domain of each
instance. Fault domains are discussed in Lesson 2, Load Balancing, of Module 12, Scaling
Services.
When an in-place update occurs, upgrade domains take turns stopping, updating their instances, and bringing them back online. For example, if a deployment has 10 instances, then at first, instances 0 and 5 (which belong to upgrade domain 0) are stopped, updated, and brought online. Then, instances 1 and 6 (from upgrade domain 1) begin the same process, continuing until instances 4 and 9 (upgrade domain 4) are updated by using the same steps.
The use of upgrade domains leaves most of your instances running while only some of them are being updated. This keeps your web application available. However, if you have more than two upgrade domains, you will end up running two versions of your application at a time: the new version on the first upgrade domain, and the old version from the third upgrade domain onward, while the second upgrade domain is being updated. Having multiple versions of your web application under the same load balancer can cause clients to receive different responses from services, or experience faulty connections if the service contract has changed. Using in-place updates therefore frequently carries the burden of making your services backward compatible. To overcome this problem, you might consider a different update approach, VIP Swap, which is discussed next.
Another thing that you can use the staging environments for is VIP Swap. With VIP Swap, the virtual IP
and DNS address of your staging and production environments are swapped. This results in your
production environment having the address and VIP of the staging environment and vice-versa.
By creating a staging environment that has the same hardware and software configuration as your
production environment, you can use VIP Swap to upgrade your production environment quickly without
experiencing the downtime of upgrade domains.
Note: If you have a single instance in your production environment in Windows Azure,
performing an in-place upgrade disables the instance during the upgrade. Using multiple
instances, which is the recommendation for production environment to achieve 99.95%
availability, provides the required availability of your service, but reduces the throughput of the
service because of the downtime of instances in the upgrade domain.
1. Deploy the upgraded web application to the staging environment. Use the same VM size and number of instances as you use for your production environment.
2. Verify that your application works correctly in the staging environment. You might have to change
the service URL you are using in the client application to point to the staging environment instead of
the production environment.
Note: VIP Swap requires having both production and staging environments deployed. If
you only have the staging environment deployed, you will not be able to use VIP Swap.
3. In the Windows Azure Management Portal, click Cloud Services, select the service deployment, click
Staging from the dashboard, and then click Swap. In the Swap dialog box, verify that the number of
roles and instances is identical, and then click Yes.
Note: After you complete the VIP Swap, and no longer require the staging environment
instances, it is advised that you delete the staging deployment to conserve CPU hours.
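The swap can also be scripted instead of performed through the portal. The following is a sketch using the Windows Azure PowerShell cmdlets, assuming the Move-AzureDeployment cmdlet is available in your installed module and using a placeholder service name:

```powershell
# Swap the staging and production deployments of the Cloud service
Move-AzureDeployment -ServiceName "MyCloudService"
```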
http://go.microsoft.com/fwlink/?LinkID=298842&clcid=0x409
http://go.microsoft.com/fwlink/?LinkID=298843&clcid=0x409
In addition, Blue Yonder Airlines wants to scale its WCF booking service and frequent flyer service to
another server to increase its durability. In this lab, you will create an IIS deployment package and deploy
it to another server.
Objectives
After you complete this lab, you will be able to:
Deploy a web application to Windows Azure staging environment, and perform VIP swap.
Create an IIS deployment package, and install it on a different server.
Lab Setup
Estimated Time: 45 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-B, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials: User name: Administrator, Password: Pa$$w0rd
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-B, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
8. Sign in using the following credentials: User name: Administrator, Password: Pa$$w0rd
9. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
10. Verify that you received credentials to sign in to the Azure portal from your training provider. These credentials and the Azure account will be used throughout the labs of this course.
1. Add the new weather updates service to the ASP.NET Web API project
2. Deploy the updated project to staging by using the Windows Azure Management Portal
3. Test the client app with the production and staging deployments
4. Perform a VIP Swap by using the Windows Azure Management Portal and retest the client app
Task 1: Add the New Weather Updates Service to the ASP.NET Web API project
1. In the 20487B-SEA-DEV-A virtual machine, run the setup.cmd file from
D:\AllFiles\Mod08\LabFiles\Setup.
Write down the names of the Windows Azure Service Bus namespace and Windows Azure Cloud
Service.
Open the Windows Azure Management Portal and locate the cloud service that was created
during the setup process.
Select the production environment for the cloud service and then click Update or Upload at the
bottom of the page (only one of the buttons should be visible).
Use Lab08 as the deployment name, and select the path to the configured package file, which should
be located in the bin\debug folder of the begin solution. The file name is
BlueYonder.Companion.Host.Azure.cspkg.
Select the path for the Configuration section from the ServiceConfiguration.Cloud.cscfg file from
the same location.
Select the Deploy even if one or more roles contain a single instance check box and approve.
Use the Locations.GetSingle method to get the Location object according to the locationId
parameter.
Call the GetWeather method of the WeatherService class to get the WeatherForecast object.
Note: The Begin Solution already contains the WeatherService class. The class uses the
WeatherForecast class, and the WeatherCondition enum.
Expand the DataTransferObjects folder to review the files.
Key            Value
name           LocationWeatherApi
routeTemplate  locations/{locationId}/weather
defaults       new { controller = "locations", action = "GetWeather" }
constraints    new { httpMethod = new HttpMethodConstraint(HttpMethod.Get) }
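Based on the values above, the route registration would look roughly like the following. This is a sketch only; the exact configuration file and surrounding code in the lab solution may differ.

```csharp
// Registers the weather route described in the table above (sketch; "config" is
// the HttpConfiguration instance used by the ASP.NET Web API project)
config.Routes.MapHttpRoute(
    name: "LocationWeatherApi",
    routeTemplate: "locations/{locandId}/weather".Replace("locandId", "locationId"),
    defaults: new { controller = "locations", action = "GetWeather" },
    constraints: new { httpMethod = new HttpMethodConstraint(HttpMethod.Get) });
```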
Task 2: Deploy the updated project to staging by using the Windows Azure
Management Portal
1. Create a new package for the BlueYonder.Companion.Host.Azure project. Use the same procedure
as in the previous task.
Note: You are performing the exact same procedure as you did in Task 1 of this exercise,
with one difference: you are deploying to the Staging environment and not to the Production
environment.
Task 3: Test the client app with the production and staging deployments
1. In the 20487B-SEA-DEV-C virtual machine, open the client solution from
D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Companion.Client.
2. In the Addresses class of the BlueYonder.Companion.Shared project, set the BaseUri property to
the Windows Azure Cloud Service name you wrote down at the beginning of this lab.
3. Run the client app without debugging, purchase a trip from Seattle to New York, and verify that the
weather forecast for the current trip is missing the temperature.
The temperature text should only show the degrees Fahrenheit sign.
Close the client app after you verify the temperature is not shown.
4. In the Addresses class, change the BaseUri property to the staging deployment URL.
In the Management Portal, open the configuration of your cloud service, and then copy the staging
deployment URL to the BaseUri property.
Note: You will use the production environment address again shortly, so it is best that you
copy it aside, either to Notepad or as a comment in the Addresses class.
5. Run the client app again, verify that the weather forecast is shown for the current trip, and then close
the client app.
Note: The staging and the production deployments share the database, which is why the
current trip, which you created with the production deployment, is shown when connecting to
the staging deployment.
Task 4: Perform a VIP Swap by using the Windows Azure Management Portal and
retest the client app
1. Return to the Windows Azure Management Portal and perform a VIP Swap between the staging and
production deployments. Return to Visual Studio 2012 when the swap is complete.
2. Set the service URL in the BaseUri property back to the production deployment URL, run the client
app without debugging and verify that the weather forecast is shown.
3. Return to the Windows Azure Management Portal and delete the staging deployment.
Note: After the production deployment is running and has been tested, it is recommended
that you delete the staging deployment to reduce compute hour charges.
Results: After you complete this exercise, the client app will retrieve weather forecast information from
the production deployment in Windows Azure.
After you create the deployment package, you will copy it to the remote server, log on to that server, and
deploy the package locally.
1. Export the web applications containing the WCF booking and frequent flyer services
Task 1: Export the web applications containing the WCF booking and frequent flyer
services
1. In the 20487B-SEA-DEV-A virtual machine, open IIS Manager, select the Default Web Site, and
then open the Export Application Package dialog box.
2. In the Export Application Package dialog box, open the Management Components, and then
clear the list of components.
3. Add two appHostConfig providers to synchronize the Default Web
Site/BlueYonder.Server.Booking.WebHost and the Default Web Site/BlueYonder.Server.
FrequentFlyer.WebHost web applications.
5. Store the package in C:\backup.zip. Complete the package creation and close IIS Manager.
6. Copy the backup.zip file from C:\ to \\10.10.0.11\c$.
7. In the Management Portal, locate the Service Bus namespace you wrote down at the beginning of
this lab.
8. Open the Service Bus configuration, click the RELAYS tab, and verify that there are two listeners for
the booking relay.
Results: As soon as both servers are online, they will listen to the same Service Bus relay, and will be load
balanced. You will verify that both servers are listening by checking the Service Bus relay listeners
information supplied by Service Bus in the Windows Azure Management Portal.
Question: Why did you synchronize the certificates between the two servers instead of
creating new certificates in the additional server?
Question: What is the benefit of using VIP Swap instead of deploying directly to a
production deployment?
Use MSDeploy or the Web Deploy PowerShell snap-in when you deploy web applications through
scripts, instead of using tools such as XCopy.
Check whether your SCM supports automated builds and use them. If it does not provide automated
builds, investigate external third-party automated build tools or consider switching to an SCM system
that does have automated builds.
Deploy to staging deployments in Windows Azure before you deploy an updated version to your
production deployment.
Review Question(s)
Question: What are the tools that use the Web Deployment Framework?
Tools
Visual Studio 2012
IIS
Web Deploy
Windows PowerShell
Windows Azure
Module 9
Windows Azure Storage
Contents:
Module Overview 9-1
Module Overview
Storage services are an important concept in cloud computing. Due to the volatile nature of cloud
computing, a single source of truth is needed to maintain consistency of application data and static
resources. For this reason, most (if not all) cloud platforms have a storage solution providing a persistence
store in the cloud.
Windows Azure provides three different storage services for various purposes:
Windows Azure Blob Storage. This provides a file-based persistence store, ideal for saving files and
static content.
Windows Azure Table Storage. This provides a simple key/value store.
Windows Azure Queue Storage. This provides a cloud-based persisted queuing mechanism.
You can access all storage services through the various client SDKs or directly by using their HTTP-based
APIs. These storage services provide an out-of-the-box solution for common data storage challenges such
as securing and transferring a large amount of data.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After completing this module, you will be able to:
Use Windows Azure Queues as a communication mechanism between different parts of your
application
Control access to your storage items.
Lesson 1
Introduction to Windows Azure Storage
Windows Azure Storage is a core pillar of the Windows Azure Platform. Built on top of the platform fabric,
Windows Azure Storage represents the power and flexibility of the cloud environment by offering three
types of storage solutions.
In this lesson, you will explore the different types of storage solutions and their major advantages and
strengths.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the meaning and the capabilities of a Windows Azure Storage Account.
Using a persistent store in the cloud is not only necessary for application data; it is also used by the
Windows Azure fabric for persisting data. For example, deployment packages that are used to create
instances of roles are stored in Windows Azure storage and are used when deploying new instances, such
as during the initial deployment, during scale-up, and when recovering from a failure. Performance metrics
and diagnostics data are also stored in Windows Azure storage.
Windows Azure Storage is an elaborate storage system. While the operating system provides a file system
mechanism, Windows Azure Storage provides additional features. In particular, Windows Azure Storage
provides you the choice of storing your data according to its essential nature. You can use a powerful API
to handle the data efficiently and have more flexibility than Relational Database Management System
(RDBMS) storage.
Windows Azure Storage ensures that all your stored data is replicated on multiple machines. Data can
further be geo-replicated to another data center in the same region (USA, Europe, Asia, etc.) for disaster
recovery scenarios.
Note: Despite its name, Table storage is not a relational table store. Windows Azure
provides a relational database as a service solution called SQL Database which is not covered in
this course.
Storage type    Data              Access mechanism                    Transactional         Size
Table storage   Key-Entity pairs  OData with Windows Azure Storage    At the table level    100 terabytes per table
                                  Client abstraction
Queue storage   Messages          HTTP-based API with Windows Azure   At the message level  64 kilobytes (KB) per message
                                  Storage Client abstraction
By comparing all storage options, you can see that Windows Azure Storage offers a great deal of flexibility
with regards to sizes and access mechanisms.
Despite the differences, all storage options have built-in synchronous replication to other machines within
the same Windows Azure data center.
Blob storage and Table storage further offer a geo-replication feature, which copies data to a second
datacenter in the same region (North America, Europe or Asia). This option is enabled by default and
offers better protection in the case of an entire datacenter going offline.
Choosing the right solution depends on the type of application and how the application works with the
data in the cloud. When choosing a solution, you need to take the following into consideration:
1. Size of data.
3. Potential cost
It might not be possible to arrive at a single solution for these questions. Applications use different types
of data and work in different ways. Because of this diversity, Windows Azure Storage offers a variety of
options for data management. Using each option efficiently requires a good understanding of the
scenarios in which it makes the most sense.
Note: This module concentrates on storage solutions. However, Windows Azure has a
variety of database solutions, which are not covered in this course.
3. Affinity groups that you can use to collocate storage accounts and compute resources in the same
cluster in the data center.
4. Each account has two independent 512-bit shared-secret keys for authenticating clients. These keys
can be regenerated.
Accessing a storage account requires credentials to be supplied. Credentials are built from the account
name, which is unique across all data centers, and the 512-bit shared key. A model similar to database
connection strings is used: the account name and key are written into a connection string, which is then
used to create the connection to the storage account.
Note: You should change the access keys to your storage account periodically to help keep
your storage connections more secure. Two access keys are assigned to enable you to maintain
connections to the storage account using one access key while you regenerate the other access
key.
Note: Remember that having access to the 512-bit shared key allows unrestricted access to
your storage account. Ensure you keep it safe.
As part of its installation, the Windows Azure SDK installs the local Windows Azure Storage Emulator. The
storage emulator is ideal for testing applications locally. The Storage Emulator provides local blob, table,
and queue storage, and does not require you to have an active Windows Azure account. There are several
limitations that apply when you use the Storage Emulator. For a complete list of limitations, refer to the
following MSDN article.
Differences Between the Storage Emulator and Windows Azure Storage Services
http://go.microsoft.com/fwlink/?LinkID=298845&clcid=0x409
Microsoft.WindowsAzure.Storage.dll
Windows Azure Storage exposes its functionality via HTTP-based APIs, but working with it directly can be
time consuming. Instead, you can use the Windows Azure Storage Client Library for .NET in your
applications for an easier programming model.
You can obtain the assembly in multiple ways; the easiest one is with NuGet. Simply add the
WindowsAzure.Storage package to your project from the NuGet Official Packages Source.
The Windows Azure Storage account is represented by the
Microsoft.WindowsAzure.Storage.CloudStorageAccount class. You can create an instance by using a
connection string that represents the storage account name and private access key. Like all connection
strings, you can put your storage connection string in a configuration file, rather than hard-coding it.
Best Practice: Make sure that you store the connection string in the most appropriate
configuration file. For example, when using the connection string from Windows Azure web or
worker roles, it is recommended that you store your connection string using the Windows Azure
service configuration system (*.csdef and *.cscfg files). For other .NET applications, it is
recommended that you store your connection string using the .NET configuration system, such as
the web.config or app.config file.
If you wish to use the Storage Emulator instead of a Windows Azure Storage account, set the value of the
connection string to UseDevelopmentStorage=true. In this case, there is no need to specify the
protocol, account name, and account key.
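For reference, the two forms of the connection string look roughly like the following; the account name and key here are placeholders, not real values.

```
DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<base64-account-key>
UseDevelopmentStorage=true
```

The first form targets a real storage account; the second targets the local Storage Emulator.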
The recommended way to set the connection string is by using the settings tab in the roles properties
window in Visual Studio.
Note: The default connection string points to the Storage Emulator. Keep in mind that the
Storage Emulator cannot be used by a deployed application; therefore you must set the actual
storage account before deploying your application.
After a connection string is configured, you can use it to create an instance of the
Microsoft.WindowsAzure.Storage.CloudStorageAccount class. You can create a CloudStorageAccount
instance by using the CloudStorageAccount.Parse static method, which receives a connection string. To
get the storage connection string from the deployment package's .cscfg file, use the
CloudConfigurationManager.GetSetting method.
The following code illustrates how to create an instance of the CloudStorageAccount class.
Creating a CloudStorageAccount
var account = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("DataConnectionString"));
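Once you have a CloudStorageAccount instance, you can create a client object for each storage service from it. A minimal sketch, assuming account is the instance created above:

```csharp
// Create service-specific clients from the storage account
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudTableClient tableClient = account.CreateCloudTableClient();
CloudQueueClient queueClient = account.CreateCloudQueueClient();
```

Each client exposes the operations of one service (blobs, tables, or queues) and reuses the credentials stored in the account object.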
Demonstration Steps
1. Open the Windows Azure Management Portal website (http://manage.windowsazure.com)
2. Start creating a new storage account. In the URL text box, enter demostorageaccountyourinitials
(where yourinitials is your name's initials, in lowercase). The URL you set will be used to access blob,
queue, and table resources for the account.
3. In the REGION box, select the region closest to your location. To reduce the communication latency,
it is better to create the storage account in the same region as your application deployment. Click
CREATE STORAGE ACCOUNT at the lower right corner of the portal and wait until the storage
account is created.
Note: If you get a message saying the storage account creation failed because you reached
your storage account limit, delete one of your existing storage accounts and retry the step. If you
do not know how to delete a storage account, consult the instructor.
4. Wait for the storage account to be created, and click the storage account name to review the storage
DASHBOARD and CONFIGURE tabs.
5. Open the storage account keys, and notice the account has two keys, primary and secondary. The
secondary key is intended to be used when renewing the primary key, for example, if the primary key
is compromised.
Lesson 2
Windows Azure Blob Storage
Windows Azure Storage introduces the Blob storage service for storing files in a scalable and durable
manner.
In this lesson, you will explore the Windows Azure Blob Storage features and learn how to use them.
Lesson Objectives
After completing this lesson, you will be able to:
Data in blobs can be exposed publicly to anyone with Internet access, or kept private for your own application.
You can find Blob storage useful for the following scenarios:
This is not an exhaustive list, and there are many more scenarios that can benefit from the use of blobs.
However, having such a large number of objects requires some type of organization.
Blob storage is also used extensively throughout the Windows Azure platform. For example, the
Windows Azure deployment mechanism saves the deployment packages to Blob storage. These packages
are also used by the auto scaling mechanism. Diagnostics logs are also saved to cloud storage and
Windows Azure virtual machines disks are persisted to Blob storage as well.
Storage account. Storage accounts are the root entities of the blob service. Every access to Windows
Azure Storage must be done through a storage account.
Container. Containers are the sub-entities of the storage accounts. Each container can contain blobs.
An account can contain an unlimited number of containers. A container can store an unlimited
number of blobs.
Blob. Blobs are the leaves of the hierarchy, and represent a file of any type. There are two types of
blobs: block blobs and page blobs. The differences between block blobs and page blobs are covered
later in this lesson.
Note: The Windows Azure Client SDK contains a class called CloudBlobDirectory; however,
directories are not part of the hierarchy and simply represent prefixes of the blob's name
separated by /.
Using this schema, each blob can be addressed by using the following URL format:
http://<storage account>.blob.core.windows.net/<container>/<blob>
Block Blobs
Block blobs are designed for streaming workloads
where the entire blob is uploaded or downloaded
as a stream of blocks.
The maximum size for a block blob is 200 GB, and
it can include up to 50,000 blocks.
To upload a block blob you must first upload a collection of blocks and then commit them by their
BlockID.
Block blobs simplify large file upload over the network by introducing the following features:
It is possible to create a new version of an existing blob by uploading new blocks or deleting existing ones
and committing all BlockIDs of the blob in a single commit operation.
The following code shows how to split a file into blocks and upload them to a block blob.
Uploading blocks to a block blob
// blob is an existing CloudBlockBlob reference
var blockList = new List<string>();
using (var fs = File.OpenRead("MyFile.txt"))
{
    byte[] data = new byte[100];
    int id = 0, bytesRead;
    while ((bytesRead = fs.Read(data, 0, data.Length)) != 0)
    {
        // Block IDs must be Base64-encoded and of equal length, hence the fixed-width counter
        string blockID =
            Convert.ToBase64String(Encoding.UTF8.GetBytes((id++).ToString("d6")));
        using (var stream = new MemoryStream(data, 0, bytesRead))
        {
            blob.PutBlock(blockID, stream, null);   // Upload one block
        }
        blockList.Add(blockID);
    }
}
blob.PutBlockList(blockList);   // Commit all uploaded blocks in a single operation
When a block blob upload is larger than the value specified in the SingleBlobUploadThresholdInBytes
property, the storage client breaks the file into blocks.
You can set the number of threads used to upload the blocks in parallel using the
ParallelOperationThreadCount property.
The following code shows how to upload a large file to a block blob using multiple threads
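A minimal sketch of such an upload, assuming blobClient is an existing CloudBlobClient and that the container and file names are illustrative:

```csharp
// Split uploads larger than 4 MB into blocks, and upload up to four blocks in parallel
blobClient.SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024;
blobClient.ParallelOperationThreadCount = 4;

CloudBlockBlob largeBlob = blobClient
    .GetContainerReference("uploads")
    .GetBlockBlobReference("large-file.bin");

using (var fileStream = File.OpenRead("large-file.bin"))
{
    // UploadFromStream applies the threshold and thread count configured above
    largeBlob.UploadFromStream(fileStream);
}
```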
Page Blobs
Page blobs are designed for random-access workloads in which clients execute random read and write
operations in different parts of the blob.
Page blobs can be treated much like an array of bytes structured as a collection of 512-bytes pages.
Handling a page blob is similar to handling a byte array:
Read and write operations are executed by specifying an offset and a range (that align to 512-byte
page boundaries)
Unlike block blobs, page blobs do not introduce a separate commit phase meaning that writes to page
blobs happen in-place and are immediately committed to the blob.
Reading data from page blobs can be done by using the OpenRead method that lets you stream the full
blob or a range of pages from any offset in the blob, or by using the GetPageRanges method for getting
an enumeration over PageRange objects.
The following code shows how to read from page blob by using OpenRead.
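A minimal sketch, assuming myPageBlob is an existing CloudPageBlob reference:

```csharp
// OpenRead returns a stream over the blob; you can seek to any offset
using (Stream pageStream = myPageBlob.OpenRead())
{
    pageStream.Seek(512, SeekOrigin.Begin);   // jump to the second 512-byte page
    byte[] buffer = new byte[1024];
    int bytesRead = pageStream.Read(buffer, 0, buffer.Length);
}
```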
Unlike block blobs, page blobs are not contiguous, so when reading over pages without any data stored in
them, the blob service will return 0s for those pages. You can use the GetPageRanges method to get a
list of the ranges in the blob that contain valid data. You can then enumerate the list and download the
data from each page range.
The following code shows how to read from page blob by using GetPageRanges.
Using GetPageRanges
CloudBlobContainer myContainer = blobClient.GetContainerReference("mycontainer");
CloudPageBlob myPageBlob = myContainer.GetPageBlobReference("myPageBlob");
foreach (PageRange range in myPageBlob.GetPageRanges())
    Console.WriteLine("Valid data from {0} to {1}", range.StartOffset, range.EndOffset);
Like any other storage task, it is possible to create a container using the Windows Azure Storage HTTP
API.
To create a container you can send an HTTP PUT request according to the following pattern:
http://myaccount.blob.core.windows.net/mycontainer?restype=container
The request must be authorized by setting the Authorization header. The Date and x-ms-version
headers must also be provided.
The following HTTP request shows how to create a new container by using the Windows Azure Storage
HTTP API.
Create a new container using the Windows Azure Storage HTTP API
PUT http://myaccount.blob.core.windows.net/mycontainer?restype=container HTTP/1.1
Request Headers:
x-ms-version: 2011-08-18
x-ms-date: Sun, 25 Sep 2012 10:12:32 GMT
x-ms-meta-Name: CreateContainerSample
Authorization: SharedKey myaccount:Z5HJWFKK978NFRsKNh0PNtksNc9nbXSSqGHueE00JdjidOQ=
It is possible to create a container using storage management tools such as Visual Studio.
The following figure shows how to create a new container by using Visual Studio.
Note: After deleting the container, a container with the same name cannot be created for
at least 35 seconds.
The following code shows how to create and delete a container by using the C# storage API.
Creating and deleting a container
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
container.CreateIfNotExists();
container.Delete();
Like any other storage task, it is possible to delete a container using the Windows Azure Storage REST API.
To delete a container you can send an HTTP DELETE request according to the following pattern:
http://myaccount.blob.core.windows.net/mycontainer?restype=container
The request must be authorized by setting the Authorization header.
The following HTTP request shows how to delete a container by using the Windows Azure Storage REST API.
Request Headers:
x-ms-version: 2011-08-18
x-ms-date: Sun, 25 Sep 2012 10:15:32 GMT
x-ms-meta-Name: DeleteContainerSample
Authorization: SharedKey myaccount:Z5HJWFKK978NFRsKNh0PNtksNc9nbXSSqGHueE00JdjidOQ=
Download. ICloudBlob contains four synchronous methods for downloading data from a blob:
DownloadByteArray, DownloadText, DownloadToFile, and DownloadToStream each specifically
designed for particular types of input. BeginDownloadToStream can be used to download data
from a blob asynchronously.
Delete. ICloudBlob contains two methods for deleting a blob: Delete and
DeleteIfExists. BeginDelete and BeginDeleteIfExists can be used to delete a blob asynchronously.
Note: In newer versions of the SDK, Task-based async methods were added to the ICloudBlob interface.
The following code shows how to execute basic blob operations by using ICloudBlob methods.
myBlob.UploadFile("MyPicture.jpg");
byte[] buffer = myBlob.DownloadByteArray();
myBlob.Delete();
CloudBlobDirectory simulates the notion of directories for Blob storage. A blob container contains a flat
list of blobs yet it is possible to simulate file system directories by naming blobs with names that include
the \ character.
CloudBlobDirectory is a client-side artifact that you can use to list blobs that have a name that starts
with a directory name followed by either a \ or / delimiter. For example both myDir\MyBlob1 and
myDir/MyBlob2 can be associated with the CloudBlobDirectory myDir.
You can use the ListBlobs method of CloudBlobContainer and CloudBlobDirectory to list the blobs they
reference.
Blobs and containers can contain metadata information stored in a collection of key-value pairs.
You can use metadata to store user-defined information about the blob such as the creation time of the
blob or the identity of the user who created it.
The following code shows how to store metadata on a blob.
myBlob.Metadata["creationTime"] = DateTime.UtcNow.ToShortDateString();
myBlob.Metadata["owner"] = Thread.CurrentPrincipal.Identity.Name;
myBlob.SetMetadata();
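To read metadata back, the blob's attributes must first be fetched from the service. A short sketch, assuming myBlob is the same blob reference:

```csharp
// Downloads the blob's metadata and properties, but not its content
myBlob.FetchAttributes();
string owner = myBlob.Metadata["owner"];   // read a user-defined metadata value
```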
Some information is provided automatically in a collection of built-in properties. A container has only
read-only properties, while a blob has both read-only and read-write properties. You can use properties
to read information about blobs and containers such as ETags and length.
The following code shows how to access blob and container properties.
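A minimal sketch, assuming myBlob is an existing ICloudBlob reference and container is an existing CloudBlobContainer:

```csharp
// FetchAttributes populates the Properties collection from the service
myBlob.FetchAttributes();
Console.WriteLine("Length: {0}, ETag: {1}",
    myBlob.Properties.Length, myBlob.Properties.ETag);

container.FetchAttributes();
Console.WriteLine("Last modified: {0}", container.Properties.LastModified);
```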
Demonstration Steps
1. Open the BlobsStorageEmulator.sln solution located in
D:\Allfiles\Mod09\DemoFiles\BlobsStorageEmulator.
2. Open the properties of the BlobStorage.Web Web Role located in the BlobStorageEmulator
project, and review the settings on the Settings tab. The PhotosStorage connection string points to
the storage emulator, which runs on the local computer.
3. In the BlobStorage.Web project, locate the ContainerHelper class and review the code in the
GetContainer method. The method uses the connection string from the Web Role settings to
connect to the storage account, verifies that the blob container named files exists, and creates the
container if it does not exist. The method also verifies that the container's permission level is set to
public.
Note: The name of the container that is passed into the GetContainerReference method
must be in lowercase. The default permissions for a container are private, which means the
container is not publicly accessible from the Internet, and you can only access it by using the
storage account access key.
4. In the BlobStorage.Web project, open the HomeController.cs file and review the contents of the
Index method. The method uses the ListBlobs method to get a list of blob items from the container,
similar to a flat list of files.
5. Still in the HomeController class, review the contents of the UploadFile method. The method
uploads a file to the blob container by using the GetBlockBlobReference method to get a reference
to the new block blob and the UploadFromStream method to upload the file to the new blob.
6. In the BlobStorage.Web project, open the BlobsController.cs file and view the code in the Get
method. The method uses the GetBlockBlobReference method to get a reference to an existing block
blob, and then uses the OpenRead method to download the content of the blob.
7. Run the BlobStorageEmulator project and use the file upload area to upload the
EmpireStateBuilding.jpg and StatueOfLiberty.jpg files from D:\Allfiles\Mod09\LabFiles\Assets
folder. After uploading the files you will see the list of the blobs in the container.
8. Use the Direct Download link to download one of the photos directly from the storage account by
using an HTTP GET request, and use the Download link to download the other photo by using the
Windows Azure Storage API.
9. Return to Visual Studio 2012 and use the Server Explorer window to view the list of blobs in the files
blob container of the local (development) storage account.
There are three built-in RetryPolicies within the Storage Client Library:
RetryPolicies.NoRetry. No retry is executed
RetryPolicies.Retry. Retries N number of times with the same backoff interval between each attempt.
RetryPolicies.RetryExponential (Default). Retries N number of times with an exponentially
increasing backoff interval between each attempt.
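For example, the default policy can be replaced with one of the built-in policies. This is a sketch in the older Storage Client library's style named in this topic, assuming blobClient is an existing client object; the retry count and backoff values here are illustrative:

```csharp
// Retry up to 3 times with an exponentially increasing backoff starting at 2 seconds
blobClient.RetryPolicy = RetryPolicies.RetryExponential(3, TimeSpan.FromSeconds(2));
```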
Not all exceptions will cause the storage client to initiate a retry. Exceptions are classified as retryable or
non-retryable. For example, all HTTP status codes >=400 and <500 are non-retryable exception statuses,
which imply the service's inability to process the client's request due to the request itself.
All other exceptions are retryable. For example, if a client-side timeout was triggered, then it makes sense
to initiate a retry.
After retryable exceptions are caught, the Storage Client Library evaluates the RetryPolicy and decides
whether to initiate a retry. The exception will be presented to the client only if the RetryPolicy determines that
there is no need to retry the operation. For example, if the RetryPolicy was configured to execute 3 retry
attempts, the exception is rethrown to the client only when the third attempt fails.
It is possible to construct custom retry policies and customize the retry algorithm to fit your specific
scenario. For example, you can set a retry algorithm per exception type.
The following code shows how to create and use a custom retry policy.
Creating a custom retry policy
// A sketch in the older Storage Client library's delegate style (retries up to
// three times with a linearly increasing backoff); adapt to your SDK version
RetryPolicy customPolicy = () =>
{
    return (int retryCount, Exception lastException, out TimeSpan delay) =>
    {
        delay = TimeSpan.FromSeconds(retryCount);
        return retryCount < 3;
    };
};
blobClient.RetryPolicy = customPolicy;
Lesson 3
Windows Azure Table Storage
Windows Azure Table storage introduces a scalable key-value store designed for storing entities. Table
storage is useful for storing entities in simple yet scalable scenarios, when advanced scenarios, such as
joining and advanced filtering are not needed.
In this lesson, you will explore the Windows Azure Table storage features and learn how to use them.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the basic Table Storage features and compare Table storage with a relational database.
Windows Azure Table Storage is a key-value store. Key-value stores are designed to store simple data in a
scalable manner. You can use Table storage to store a large set of structured entities at a low cost and
issue simple queries to retrieve entities when required.
Similar to other key-value stores, Windows Azure Table storage was designed for linear scale and enforces
no schema on the entities stored in the table. This means you can store different types of entities in the
same table.
Windows Azure Table storage does not provide any way to represent relationships between entities and
thus does not support join operations.
Tables, like all other storage offerings, are accessed via a URI. Table URIs are formed according to the
following format: http://<storage account name>.table.core.windows.net/<table name>
Windows Azure Table storage can store simple entities that can be easily mapped to .NET objects with
properties of the following types: byte[], bool, DateTime, double, Guid, Int32/int, Int64/long and String.
An entity can contain a maximum of 255 properties and is limited to 1 MB in size, yet the only limitation
on table size is the 100 terabytes allowed per storage account.
Every entity must contain three basic properties: partition key, row key, and a timestamp.
Entities with the same partition key can be queried more efficiently, and inserted/updated in atomic
operations. An entity's row key is its unique identifier within a partition.
The following code shows how to delete a Windows Azure Storage table by using the Windows Azure
.NET SDK.
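A minimal sketch, assuming tableClient is an existing CloudTableClient and that the table name is illustrative:

```csharp
// Get a reference to the table and delete it if it exists
CloudTable table = tableClient.GetTableReference("customers");
table.DeleteIfExists();
```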
Like any other storage task, it is possible to create a table using the Windows Azure Storage HTTP-based
API.
To create a table, you can send an HTTP POST request according to the following pattern:
http://myaccount.table.core.windows.net/Tables
The request must be authorized by setting the Authorization header. Additionally, the Content-Type,
Content-Length and Date headers must be provided.
The following HTTP request body shows how to create a Windows Azure Storage table by using HTTP-
based API.
A request body for creating a Windows Azure Storage table using HTTP-based API
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
xmlns="http://www.w3.org/2005/Atom">
<title />
<updated>2013-01-10T12:48:31.1230639+02:00</updated>
<author>
<name/>
</author>
<id/>
<content type="application/xml">
<m:properties>
<d:TableName>customers</d:TableName>
</m:properties>
</content>
</entry>
To delete a table, you can send an HTTP DELETE request according to the following pattern:
http://127.0.0.1:10002/devstoreaccount1/Tables('mytable')
The request must be authorized by setting the Authorization header. Additionally, the Content-Type,
Content-Length and Date headers must be provided.
Creating and deleting tables is possible using a variety of cloud storage management tools.
Deriving from TableEntity might be problematic when using other class topologies or when using
DataContract serialization because TableEntity is not DataContract-serializable. In such scenarios, it is
possible to create an entity as a simple object that contains the PartitionKey, RowKey, and Timestamp
properties and decorate the type with the DataServiceKey attribute.
The following code shows how to create a table entity as a simple object.
Create a Table Entity as a Simple Object
[DataServiceKey("PartitionKey", "RowKey")]
public class Employee
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    // Additional entity-related properties
    public Employee() { }
}
The above code sample contains the PartitionKey and RowKey properties, among other entity-related
properties. However, the code does not include the Timestamp property. The Timestamp property exists
for every entity, and its value is automatically updated in the table to the last modification date of the
entity. If you include the property in the class, it will be populated with the value from the table. You can
also update the Timestamp property manually to any DateTime value.
To retrieve entities, use the CreateQuery method, which accepts the table name. The CreateQuery
method returns an IQueryable implementation, which can be queried using LINQ.
The following code shows how to create a query and retrieve an entity.
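The sample itself is missing from this copy; a minimal sketch, assuming the same tableClient variable and Person entity class used in the samples below, would be:

```csharp
// Sketch: create a LINQ query over the "customers" table and
// retrieve the first matching entity.
var context = tableClient.GetDataServiceContext();
var kids = context.CreateQuery<Person>("customers").Where(p => p.Age < 18);
var firstKid = kids.FirstOrDefault(); // null if no entity matches
```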
Update an Entity
var context = tableClient.GetDataServiceContext();
var kids = context.CreateQuery<Person>("customers").Where(p => p.Age < 18);
var firstKid = kids.FirstOrDefault();
if (firstKid != null)
{
    firstKid.Address = "London";
    context.UpdateObject(firstKid);
    context.SaveChanges();
}
Delete an Entity
var context = tableClient.GetDataServiceContext();
var kids = context.CreateQuery<Person>("customers").Where(p => p.Age < 18);
var firstKid = kids.FirstOrDefault();
if (firstKid != null)
{
    context.DeleteObject(firstKid);
    context.SaveChanges();
}
Demonstration Steps
1. Open the Windows Azure Management Portal (http://manage.windowsazure.com)
2. If you did not perform the demo in the first lesson, create a new Windows Azure Storage account
named demostorageaccountyourinitials (yourinitials contains your name's initials, in lower-case).
Locate the storage account's primary key and copy it to the clipboard.
4. Open the Web.config file, locate the StorageAccount application setting, and replace the
placeholders in the text with the storage account name and the account access key you copied to the
clipboard.
5. Open the Country.cs file from the project's Models folder. The Country class derives from
TableServiceEntity, so it can be added to the Table storage. The RowKey property contains the
name of the country as the unique identifier of the entity, and the PartitionKey property, which is
used for partitioning and scalability, contains the continent name of the country.
6. Open the CountriesController.cs file from the project's Controllers folder, and examine the content
of the GetTableContext method. The method calls the CreateIfNotExists method to verify that the
table exists, create it if it does not exist, and finally, return a TableServiceContext object which is
used for querying and adding entities to the table.
7. Examine the content of the Index method. The method uses the CreateQuery<T> generic method
to create Table storage queries. The first query retrieves all the countries in the Table, and the second
query filters the list of entities according to the PartitionKey property, by using a LINQ statement.
8. Examine the content of the Add method. The method uses the AddObject method to add a new
country to the local context, and then uses the SaveChanges method to persist the changes to the
Table storage.
9. Run the web application and add a country with the following information:
Language: Italian
Continent: Europe
Name: Italy
Verify you see the new country in the list of countries shown at the top of the page.
10. Add another country with the following information:
Language: Chinese
Continent: Asia
Name: China
Verify you see both added countries in the list of countries shown at the top of the page.
11. Append countries?continent=Europe to the browser's address bar and verify you see only European
countries.
12. In Visual Studio 2012, add your Windows Azure Storage account to the list of storage accounts in the
Server Explorer window. Use the account name and key you used in the Web.config file.
13. Open the Countries table from the storage account you added and verify you see the PartitionKey,
RowKey, Timestamp, and Language columns.
14. Open the Country.cs file from the project's Models folder, and add a new int property named
Population to the Country class.
15. Open the CountriesController.cs file from the project's Controllers folder, locate the Add method,
and set the Population property of the country object by getting the value of the Population key
from the collection object. You will need to parse the value to an int.
16. Run the web application and add a country with the following information:
Population: 65350000
Language: French
Continent: Europe
Name: France
Verify you see the new country in the list of countries shown at the top of the page.
17. Return to Visual Studio 2012, refresh the contents of the Countries table, and verify that the
Population column was added to the table.
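The Add method modified in steps 8 and 15 might look roughly like the following hypothetical sketch; the action signature and member names are assumptions based on the steps above, not the actual demonstration code:

```csharp
// Hypothetical sketch of the Add action after step 15.
public ActionResult Add(FormCollection collection)
{
    TableServiceContext context = GetTableContext();
    Country country = new Country
    {
        PartitionKey = collection["Continent"], // continent name
        RowKey = collection["Name"],            // country name
        Language = collection["Language"],
        Population = int.Parse(collection["Population"]) // added in step 15
    };
    context.AddObject("Countries", country);
    context.SaveChanges(); // persists the new entity to Table storage
    return RedirectToAction("Index");
}
```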
Lesson 4
Windows Azure Queue Storage
Queues are important for implementing high-scale web applications. Windows Azure Storage introduces
a simple yet scalable queuing system that you can leverage for communication between your
application's roles.
In this lesson, you will explore the Windows Azure Queue Storage features and learn how to use them.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the difference between Windows Azure Storage queues and Service Bus queues.
Windows Azure Queue storage is a simple queuing infrastructure. It is not transactional, but the queue
guarantees that each message is delivered at least once. Message size is limited to 64 kilobytes (KB);
there are no dead-letter queues, but poison messages are supported (a message's dequeue count can be
inspected to detect them). Windows Azure queues can contain millions of messages - up to the 100
terabytes total capacity limit of a storage account - and retain them for up to seven days.
The security model of Azure Queues is designed for Windows Azure roles to use. To access a queue, you
must supply the Windows Azure storage credentials. Such credentials cannot be distributed to clients for
obvious reasons.
Service Bus Queues
It is important to know that Windows Azure Queues are not the only queue offering in Windows Azure.
The Service Bus infrastructure provided by Windows Azure was designed for collaboration and integration
of applications at large scale. As such, its security model was designed for customers to use. Like all
Service Bus resources, queues can be authenticated with the Access Control Service (ACS) and leverage
federated authentication. The Service Bus queue message size limit is larger than that of an Azure queue:
256 KB. The Service Bus queue is transactional and messages are delivered exactly once. The Service Bus
queue supports batching and locking at the queue level and provides built-in WCF integration.
The following table contains a feature comparison between Azure Queues and the Service Bus Queues.
For more information about comparison between Azure Queues and the Service Bus Queues, consult
MSDN documentation:
http://go.microsoft.com/fwlink/?LinkID=298848&clcid=0x409
The following code shows how to delete an existing queue using the Windows Azure .NET SDK.
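The sample itself is missing from this copy; a minimal sketch, assuming a connectionString variable with the account credentials, would be:

```csharp
// Sketch: deleting a queue with the Windows Azure .NET SDK.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudQueueClient queueClient = account.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");
queue.DeleteIfExists(); // no-op if the queue does not exist
```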
Like any other storage tasks, it is possible to create a queue by using the Windows Azure Storage HTTP-
based API.
To create a queue, you can send an HTTP PUT request according to the following pattern:
http://myaccount.queue.core.windows.net/myqueue
To delete a queue, you have to send an HTTP DELETE request according to the same pattern:
http://myaccount.queue.core.windows.net/myqueue
The request must be authorized by setting the Authorization and the Date headers. The x-ms-version
header is optional.
It is possible to create and delete queues with all Azure Storage management tools and with Visual Studio.
The following figure shows how to create a new queue using Visual Studio Server Explorer.
The message content must be in a format that can be encoded with UTF-8. The request must be
authorized by setting the Authorization and the Date headers. The x-ms-version header is optional.
The following snippet shows how to add a new message into an existing queue by using HTTP.
Headers:
x-ms-version: 2011-08-18
x-ms-date: Thu, 30 Aug 2012 04:03:21 GMT
Authorization: SharedKey myaccount:sr8rIheJmCd6npMSx7DfAY3L//V3uWvSXOzUBCV9wnk=
Content-Length: 100
Body:
<QueueMessage>
<MessageText>Hello World</MessageText>
</QueueMessage>
There are several patterns for reading messages from a queue:
1. Two-phase dequeue
2. Peek messages
3. Batch dequeue / Batch peek
The two-phase dequeue pattern ensures that if your code fails to process a message, another instance can
get the same message and try again. When a message becomes visible again and is read by another
instance, the FIFO ordering is disrupted. You have to design your code and messages in such a way that
they support multiple processing of the same message.
The following code shows how to dequeue a message from a queue.
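The sample itself is missing from this copy; the two-phase pattern can be sketched as follows, assuming myQueue is a CloudQueue and ProcessMessage is a hypothetical processing method:

```csharp
// Phase 1: GetMessage dequeues the message and makes it invisible
// to other consumers for the duration of the visibility timeout.
CloudQueueMessage message = myQueue.GetMessage();
if (message != null)
{
    ProcessMessage(message); // hypothetical processing logic
    // Phase 2: delete the message only after successful processing;
    // if processing fails, the message becomes visible again.
    myQueue.DeleteMessage(message);
}
```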
Using the peek pattern, you call the PeekMessage method of the CloudQueue object to peek at the
message in the front of a queue without removing it from the queue. The method is non-blocking,
meaning that it returns even if there is no message in the queue. A message returned from
PeekMessage remains visible to any other client that tries to read messages from the queue.
The following code shows how to peek at a message in the front of a queue.
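The sample itself is missing from this copy; a minimal sketch, assuming myQueue is a CloudQueue, would be:

```csharp
// Peek at the front message without removing it from the queue.
CloudQueueMessage message = myQueue.PeekMessage();
if (message != null) // PeekMessage returns null on an empty queue
{
    Console.WriteLine(message.AsString);
}
```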
It is possible to dequeue or peek at messages from the queue in a batch operation to reduce the number
of network calls and improve performance.
The following code shows how to dequeue and peek messages in a batch.
var messages = myQueue.GetMessages(32);
foreach (var cloudQueueMessage in messages)
{
    //Process message
    myQueue.DeleteMessage(cloudQueueMessage);
}
messages = myQueue.PeekMessages(32);
foreach (var cloudQueueMessage in messages)
{
    //Process message without removing it from the queue
}
Demonstration Steps
1. Open the Windows Azure Management Portal (http://manage.windowsazure.com)
2. If you did not perform the demo in the first lesson, create a new Windows Azure Storage account
named demostorageaccountyourinitials (yourinitials contains your name's initials, in lower-case).
Locate the storage account's primary key and copy it to the clipboard.
4. Open the App.config file located in the WorkingWithAzureQueues.Sender project. Locate the
StorageConnectionString connection string and replace the placeholders in the text with the
storage account name and the account access key you copied to the clipboard.
5. Open the App.config file located in the WorkingWithAzureQueues.Receiver project. Locate the
StorageConnectionString connection string and replace the placeholders in the text with the
storage account name and the account access key you copied to the clipboard.
6. Open the Program.cs file from the WorkingWithAzureQueues.Sender project, and review the
Main method. The code in the method creates a new Windows Azure Queue named messagequeue
by calling the GetQueueReference and CreateIfNotExists methods, and then sends messages to the
queue by creating new CloudQueueMessage objects and sending them to the queue by calling the
AddMessage method.
7. Open the Program.cs file from the WorkingWithAzureQueues.Receiver project, and review the
Main method. Focus on the use of the GetMessage and DeleteMessage methods of the
CloudQueue class.
8. Configure the WorkingWithAzureQueues.sln to have multiple startup projects, starting both the
WorkingWithAzureQueues.Sender and WorkingWithAzureQueues.Receiver projects.
9. Run the projects without debugging and view how each message sent to the queue in the Sender
console window is retrieved from queue in the Receiver console window.
10. Close the Sender console window. Wait for the Receiver application to finish handling the queued
messages, and then close the Receiver console window.
Lesson 5
Restricting Access to Windows Azure Storage
Information in Azure Storage can be private or sensitive. You can define elaborate access control policies
to ensure privacy while granting access to information owners and allowing them to perform actions.
In this lesson, you will explore Windows Azure Storage data access capabilities and learn how to define
access policies.
Lesson Objectives
After completing this lesson, you will be able to:
Blob containers support the following levels of public read access:
1. Full public read access. Container and blob data can be read via anonymous requests,
but enumeration of containers in the storage account is blocked. Enumeration of blobs inside a
container, however, is permitted.
2. Public read access for blobs only. Blob data can be read via anonymous requests, but
enumeration of blobs in a container is blocked.
3. No public read access. Container and blob data can be read by the account owner only.
Your client can call your web service, which will return a Shared Access Signature for a specific resource.
The client then has a short window of time in which it can perform the operations you allow on that
specific resource.
The access rights granted in a Shared Access Signature define which operations can be performed on the
resource.
For blobs:
Reading and writing blob content, block lists, properties, and metadata
For queues:
Adding a queue message
For tables:
Querying, adding, updating, and deleting table entities
All the information about the granted access levels, the specific resource and the allotted time frame is
incorporated within the Shared Access Signature URL as query parameters. In addition, the Shared Access
Signature URL contains a signature that the storage services use to validate the request.
It is possible to specify all access control information in the URL or to embed a reference to an access
policy. With access policies, you can modify or revoke access to the resource if necessary.
For more information about the structure of the Shared Access Signature URL, consult MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298849&clcid=0x409
To create a Shared Access Signature for a blob that references a container-level access policy:
Set the container's PublicAccess property to BlobContainerPublicAccessType.Off.
Create a SharedAccessPolicy object for the container and add it to the container's
SharedAccessPolicies collection.
Create a SharedAccessPolicy object and set the appropriate access rights for the blob.
Finally, create a Shared Access Signature URL by calling GetSharedAccessSignature on the CloudBlob
object with the blob SharedAccessPolicy object and the key of the container policy in the container's
SharedAccessPolicies collection as parameters.
The following code shows how to create a Shared Access Signature for a blob using a reference to a
container policy.
Create a Shared Access Signature for a Blob Using a Reference to a Container Policy
static string CreateReferencedSharedAccessSignature(string blobName, string path)
{
    var storageClient = CloudStorageAccount.Parse(connectionString);
    var blobClient = storageClient.CreateCloudBlobClient();
    var container = blobClient.GetContainerReference("mycontainer");
    var blob = container.GetBlobReference(blobName);
    blob.UploadFile(path);

    BlobContainerPermissions permissions = new BlobContainerPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Off;
    SharedAccessPolicy containerPolicy = new SharedAccessPolicy()
    {
        Permissions = SharedAccessPermissions.Read
    };
    permissions.SharedAccessPolicies.Clear();
    permissions.SharedAccessPolicies.Add("MyReadOnlyPolicy", containerPolicy);
    container.SetPermissions(permissions);

    SharedAccessPolicy blobPolicy = new SharedAccessPolicy()
    {
        Permissions = SharedAccessPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddDays(1d)
    };
    var sas = blob.GetSharedAccessSignature(blobPolicy, "MyReadOnlyPolicy");
    return (blob.Uri.AbsoluteUri + sas);
}
Demonstration Steps
1. Open the solution D:\Allfiles\Mod09\DemoFiles\SharedAccessSignature\SharedAccessSignature.sln in
Visual Studio 2012.
2. Open the AzureHelper.cs file from the SASDemo project, under the Model folder, and review the
code in the GetBlobContainer method. After the method creates the blob container, it sets its
permissions to prevent public access by setting the BlobContainerPermissions.PublicAccess
property to BlobContainerPublicAccessType.Off. In addition, the method creates a
SharedAccessBlobPolicy object and sets the shared access policy to be valid for one minute from
the time the container is created. Finally, the method applies the permissions to the blob container by
calling the SetPermissions method.
3. Locate the GetPicturesReferences method and view its content. The method iterates the list of blobs
and returns information about each blob. The information contains the public URL, which is not
accessible, and an accessible URL which is created with a shared access key. To create the shared
access key, the method calls the CloudBlobContainer.GetSharedAccessSignature method with the
name of the access policy which was created for the blob container.
4. Locate the ExtendPolicy method and view its content. The method updates the expiration time of
the access policy by updating the SharedAccessExpiryTime property.
5. Run the SharedAccessSignatureDemo project and use the file upload area to upload the
EmpireStateBuilding.jpg file from D:\Allfiles\Mod09\LabFiles\Assets folder. After the file is
uploaded to the blob, click the Will not work link and verify the public access to the blob is not
available.
6. Return to the home page. If the expiration time has passed, click Extend Policy. Click the Will Work
until link, and verify you see the uploaded photo.
7. Observe the address in the address bar. The query string contains several parameters which make the
shared access signature:
sv: Signed version. The version of the Windows Azure storage service.
sr: Signed resource. Specifies whether the signature is for a single blob (b) or the entire container (c).
si: Signed identifier. The name of the shared access policy used for this signature.
sig: Signature. The hashed authentication signature.
8. Return to the home page, wait for the expiration time to pass, and then click the Will work until link.
Verify you receive an error message saying the authentication has failed.
9. Return to the home page, click Extend Policy, wait for the page to refresh the expiration time, and
then click the Will Work until link, and verify you see the uploaded photo.
In addition, the app should support two types of upload: public upload, which makes the photo available
publicly for everyone to see, and private upload, which only permits the owner of the photo to view it. In
this lab, you will create shared access signatures for the end users' blob containers to allow them to view
their private content.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 60 minutes
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
9. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.
Note: Windows 8 Store Apps can write directly to Windows Azure Storage. However, due
to the business logic of how data is stored in Blob storage, and in Table storage, which will be
used in the next exercise, it was decided that these features will be implemented on the server
side.
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor; otherwise, you can
ignore the warning.
3. Click STORAGE in the left navigation pane and create a new Windows Azure Storage account named
blueyonderlab09yourinitials (yourinitials contains your name's initials, in lower-case).
Use the NEW and then the QUICK CREATE button.
Note: If you get a message saying the storage account creation failed because you reached
your storage account limit, delete one of your existing storage accounts and retry the step. If you
do not know how to delete a storage account, consult the instructor.
4. Open the configuration for the newly created account, click the MANAGE ACCESS KEYS button, and
copy the PRIMARY ACCESS KEY.
Key Value
Name BlueYonderStore
Note: To create the connection string for the storage account, click the ellipsis in the Value
box.
Use the CreateCloudBlobClient method of the _account member, and then use the
GetContainerReference method to get the container reference.
Create the container if it does not exist by using the CreateIfNotExists method of the container, and
then return the container.
3. In the GetBlob method, add code to get the container and return a block blob reference.
Get the container using the GetContainer method you implemented before.
Check against the isPublic field and if it is true, use the container's SetPermissions method to set the
access type to Blob.
Return a reference to the blob by using the container's GetBlockBlobReference. Use the fileName
parameter as the blob's name.
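Assembled from the steps above, a hypothetical GetBlob implementation might look like this; the member names are taken from the lab instructions, and everything else is an assumption:

```csharp
// Hypothetical sketch of the GetBlob method described in step 3.
// Assumes isPublic is a field, as described in the lab instructions.
private CloudBlockBlob GetBlob(string fileName)
{
    CloudBlobContainer container = GetContainer();
    if (isPublic)
    {
        // Allow anonymous read access to blobs in the container.
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
    }
    return container.GetBlockBlobReference(fileName);
}
```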
4. Explore the implementation of the UploadStreamAsync method. The method uses the previous
methods to retrieve a reference to the new blob, and then uploads the stream to it.
The method calls the UploadStreamAsync method asynchronously, and after the upload completes,
returns the response.
2. Explore the Public and Private methods of the FilesController class.
The methods use the UploadFile function with a Boolean value indicating whether the blob container is
public or private.
Note: The client app calls these service actions to upload files as either public or private.
Public files can be viewed by any user, whereas private files can only be viewed by the user who
uploaded them.
Results: You can test your changes at the end of the lab.
Use the table client object to retrieve the table by calling the GetTableReference method. Use the
table name stored in the MetadataTable static field of the class.
Create the table if it does not exist using the CreateIfNotExists method of the CloudTable class.
Use the CloudTableClient.GetTableServiceContext method to return a new table service context.
Note: You should make sure the table exists before you return a context for it, otherwise
the code will fail when running queries on the table. If you already created the table, you can skip
calling the GetTableReference and CreateIfNotExists methods.
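A hypothetical sketch of the GetTableContext method described above (the _account and MetadataTable names are taken from the lab instructions; the rest is an assumption):

```csharp
// Hypothetical sketch of GetTableContext.
private TableServiceContext GetTableContext()
{
    CloudTableClient tableClient = _account.CreateCloudTableClient();
    // Make sure the table exists before returning a context for it;
    // otherwise queries against the context will fail.
    CloudTable table = tableClient.GetTableReference(MetadataTable);
    table.CreateIfNotExists();
    return tableClient.GetTableServiceContext();
}
```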
3. Add the file data to the table's context in the SaveMetadataAsync method.
Use the TableServiceContext.AddObject method and supply it with the target table name and
object to add. Use the table name stored in the MetadataTable static field of the class.
Add the code before calling the asynchronous save changes method.
Set the entity's partition key to the locationID parameter. Convert the locationID from int to string.
Set the entity's row key to a URI encoded value of the entity's Uri property. Use the
HttpUtility.UrlEncode static method to encode the URI.
Note: The RowKey property is set to the file's URL, because it has a unique value. The URL is
encoded because the forward slash (/) character is not valid in row keys. The PartitionKey property is
set to the locationID property, because the partition key groups all the files from a single location in
the same partition. By using the location's ID as the partition key, you can query the table and get all
the files uploaded for a specific location.
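The key assignments described above can be sketched as follows; the entity variable name and surrounding method body are assumptions:

```csharp
// Group files by location, and identify each file by its encoded URL.
entity.PartitionKey = locationID.ToString();
entity.RowKey = HttpUtility.UrlEncode(entity.Uri);
context.AddObject(MetadataTable, entity);
// ... followed by the asynchronous call that saves the changes
```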
5. Explore the code in the Metadata method. The method creates the FileEntity object and saves it to
the table.
Note: The client app calls this service action after it uploads the new file to Blob storage. By
storing the list of files in Table storage, the client app can use queries to find specific images,
either by trip or location.
Get the table service context by using the GetTableContext method you implemented in the
previous task.
Use the context's CreateQuery<T> generic method to create a data source for a LINQ query. Use the
table name stored in the MetadataTable static field of the class.
Create a LINQ query that searches for files with a partition key identical to the locationId parameter.
Note: Recall that the location ID was used as the entity's partition key.
Note: The method queries the table for each row key and returns the matching FileEntity
object by using the yield return statement.
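Based on the steps above, a hypothetical query method might look like this; the method signature and local names are assumptions:

```csharp
// Hypothetical sketch: return the files uploaded for a location.
public IEnumerable<FileEntity> GetFilesMetadata(int locationId)
{
    TableServiceContext context = GetTableContext();
    // Compute the key outside the predicate so the LINQ provider
    // can translate the comparison into a table query.
    string partitionKey = locationId.ToString();
    var query = context.CreateQuery<FileEntity>(MetadataTable)
        .Where(f => f.PartitionKey == partitionKey);
    foreach (FileEntity file in query)
    {
        yield return file; // stream results back to the caller
    }
}
```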
3. In the BlueYonder.Companion.Controllers project, open the FilesController class, and review the
implementation of the LocationMetadata method.
Note: The method retrieves the list of files in the trips public blob container, and then uses
the GetFilesMetadata method of the AsyncStorageManager class to get the FileEntity object
for each of the files. The client app calls this service action to get a list of all files related to a
specific trip. Currently the code retrieves only the public files. In the next exercise you will add the
code to retrieve both public and private files.
Results: You can test your changes at the end of the lab.
Set the policy's permission to Read and the expiration time to one hour from the current time. To
calculate future time, use the DateTime.UtcNow.AddHours method.
3. Use the container's GetSharedAccessSignature method to return a shared access signature string for
the new policy. Pass the policy you created a few steps back to the method as a parameter.
Note: The shared access key signature is a URL query string that you append to blob URLs.
Without the query string, you cannot access private blobs.
4. In the BlueYonder.Companion.Controllers project, open the FilesController class, and update the
TripMetadata method to retrieve a list of private trip files in addition to the public trip files.
To get private files, duplicate the call to the GetFileUris and set the Boolean parameter to false. Store
the result in a variable named privateUris.
Use the Union extension method to combine the private and public collections to a single collection.
Store the collection in a variable named allUris.
Change the code so the allKeys collection will use the allUris collection instead of just the publicUris
collection.
5. Locate the ToFileDto method and explore its code. If the requested file is private, you create a shared
access key for the blob's container, and then set the Uri property of the file to a URL containing the
shared access key.
6. Use Visual Studio 2012 to publish the BlueYonder.Companion.Host.Azure project. If you did not
import your Windows Azure subscription information yet, download your Windows Azure credentials,
and import the downloaded publish settings file in the Publish Windows Azure Application dialog
box.
7. Select the cloud service that matches the cloud service name you wrote down in the beginning of the
lab, while running the setup script.
8. Finish the deployment process by clicking Publish.
3. In the Addresses class of the BlueYonder.Companion.Shared project, set the BaseUri property to
the Windows Azure Cloud Service name you wrote down in the beginning of this lab.
4. Run the client app, search for New, and purchase a flight from Seattle to New York.
5. Select the current trip from Seattle to New York, and then select Media from the app bar.
6. In the Media page, use the app bar to add the StatueOfLiberty.jpg file from the
D:\Allfiles\Mod09\LabFiles\Assets folder. Use the app bar to upload the file to the public storage.
7. In the Media page, use the app bar to add the EmpireStateBuilding.jpg file from the
D:\Allfiles\Mod09\LabFiles\Assets folder. Use the app bar to upload the file to the private storage.
8. Return to the Current Trip page and then enter the Media page again. Wait for a few seconds until
the photos are downloaded from storage, and verify you see both the private and public photos.
9. Return to the Blue Yonder Companion page (the main page). Under New York at a Glance, verify
you see the photo of the Statue of Liberty you uploaded to the public container.
2. In Server Explorer, expand the Blobs node and inspect the two folders which were created, one for
private photos and one for public photos.
3. Open the public blob container, copy the photo's URL, and browse to the copied address. Verify you
see the photo.
4. Open the private blob container, copy the photo's URL, and browse to the copied address.
The private photos cannot be accessed by a direct URL; therefore, an HTTP 404 (The webpage cannot
be found) page is shown.
Note: The client app is able to show the private photo because it uses a URL that contains a
shared access permission key.
5. In Server Explorer, open the contents of the FilesMetadata table. The table contains metadata for
both public and private photos.
Results: After you complete the exercise, you will be able to use the client App to upload photos to the
private and public blob containers. You will also be able to view the content of the Blob and Table storage
by using Visual Studio 2012.
Review Question(s)
Question: You have been approached by a local high school and asked to design an
application for tracking student grades. How would you use Windows Azure Storage for this
task?
Module 10
Monitoring and Diagnostics
Contents:
Module Overview 10-1
Module Overview
In the real world, most application failures occur only in production environments and not on the
developer's machine. Understanding why applications fail, and obtaining as much information as possible
from the runtime environment, is of paramount importance to operations engineers and developers
looking to resolve bugs or understand application performance. Additionally, security concerns frequently
require collecting audit information from production machines for accountability and analysis purposes.
This module discusses tracing, with a focus on web service tracing and on auditing technologies provided
by Windows Azure. The module begins with tracing in the .NET Framework by using System.Diagnostics,
and then describes tracing in web service infrastructures such as Windows Communication Foundation
(WCF) and ASP.NET Web API. Finally, it explains the information you can get from the host with Microsoft
Internet Information Services (IIS), as well as Windows Azure monitoring and diagnostics.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After completing this module, you will be able to:
Lesson 1
Performing Diagnostics by Using Tracing
This lesson provides an overview of the .NET Framework and the application programming interface (API)
provided by the System.Diagnostics namespace. You use tracing to instrument applications and produce
informative messages that can help diagnose problems or analyze performance. If your application emits
trace messages, you can record logs to a variety of destinations, including files. With the proper
instrumentation in place, IT professionals can look at the trace message logs and point out problems,
identify their sources, and suggest solutions.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the role of tracing and the tracing infrastructure provided by the .NET Framework.
Write trace messages by using the Trace class.
You can associate each trace message with a trace level, which describes the importance and urgency of
the trace information. For example, the Error trace level is reserved for error messages, whereas the Info
level describes a message that is not of critical importance. When you configure tracing, you can define
which trace levels should be recorded to persistent storage for later analysis. In a typical production
environment, only a minimal set of information is stored, but when something goes wrong, the trace level
can be adjusted so that more information is collected.
Trace messages that the code emits are received by trace listeners. The role of a listener is to collect, store,
and route tracing messages to the appropriate target. The information can be presented in the user
interface or in Microsoft Visual Studio and at the same time saved in a text file or database. You can
configure trace listeners to add additional formatting and attach more information such as the time,
module, and executing method that emitted the trace. Trace listeners are one of the extensibility points of
the diagnostic infrastructure in the .NET Framework. There are a number of built-in listeners available, and
it is also easy to build a custom listener and attach it to the diagnostic pipeline.
Application-level frameworks, such as ASP.NET and WCF, use tracing extensively. By attaching a trace
listener, you can access internal information, which will simplify diagnostics and facilitate monitoring of
the execution path at critical points.
Simple Tracing
try
{
    // Code that may throw an exception
}
catch (Exception ex)
{
    Trace.WriteLine("There is a problem: " + ex.ToString(), "Error");
}
You use the Write, WriteIf, WriteLine, and WriteLineIf methods to write simple trace messages. The
Trace.WriteLine method, similar to the Console.WriteLine method, adds a new line at the end of the
output. The Trace.Write method does not add a new line at the end of the output. The WriteIf and
WriteLineIf methods provide conditional tracing capabilities, and will output the trace message only if
the condition evaluates to true.
The Trace class provides some additional methods:
The Fail method produces error messages to simplify error handling.
The Flush method immediately forces buffered data to be sent to the Listeners collection. This
method is described in topic 4, "Configuring Trace Listeners".
The Close method implicitly calls the Flush method and then closes the listeners.
The Assert method evaluates a condition, and displays a diagnostic message that shows the call stack
if the condition is false.
You can provide the various Write methods with a category string that will be used later to filter
messages by trace listeners. You can also use built-in categories and call the TraceInformation,
TraceWarning, or TraceError methods.
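As a short sketch, a few calls to these methods might look like the following (the message strings and the custom category name are illustrative, not from the course code):

```csharp
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Built-in category methods
        Trace.TraceInformation("Service started");
        Trace.TraceWarning("Cache miss rate is high");
        Trace.TraceError("Failed to connect to the database");

        // A custom category string, which trace listeners can later filter on
        Trace.WriteLine("Retrying connection", "Info");
    }
}
```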
The Trace class defines tracing methods. It does not provide any mechanism for persisting or handling
trace messages. Persisting and handling of messages is the task of trace listeners, which are specified in
the application configuration file. Trace listeners are explained in topic 4, "Configuring Trace Listeners".
Using the SourceSwitch class in an application configuration file, you can dynamically turn off tracing or
change the level at which tracing occurs. Trace listeners can introduce an additional layer of filtering
through trace filters, as will be described in the next topic.
The following code example shows how to configure a tracing policy and attach it to a trace source.
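A minimal application configuration file consistent with the description that follows might look like this sketch (element values taken from the description; treat the exact layout as an assumption):

```xml
<configuration>
  <system.diagnostics>
    <sources>
      <source name="MyTraceSource" switchName="SourceSwitch">
        <listeners>
          <add name="console"
               type="System.Diagnostics.ConsoleTraceListener" />
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="SourceSwitch" value="Warning" />
    </switches>
  </system.diagnostics>
</configuration>
```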
In the preceding configuration file, the <sources> element contains a <source> element that defines a
source named MyTraceSource, associated with a switch named SourceSwitch. Additionally, the trace
source is associated with a trace listener named console. The <switches> element declares the
SourceSwitch switch, and configures its trace level to Warning. As a result, any Warning or Error
messages emitted through the MyTraceSource trace source will be passed through the trace listeners.
The following code example presents some simple operations with a TraceSource class.
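A small sketch of typical TraceSource operations, consistent with the explanation that follows (the source name, event IDs, and logged object are illustrative):

```csharp
using System.Diagnostics;

class Program
{
    static void Main()
    {
        var ts = new TraceSource("MyTraceSource");

        // Logs an Information-level event description
        ts.TraceInformation("Application started");

        // Logs an event description with an explicit level and event ID
        ts.TraceEvent(TraceEventType.Warning, 2, "Low disk space");

        // Logs the contents of an object
        ts.TraceData(TraceEventType.Error, 3, new { Code = 42 });

        ts.Flush();
        ts.Close();
    }
}
```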
The difference between the TraceData and TraceEvent methods is that the TraceData method is
intended for logging the contents of an object, whereas the TraceEvent logs an event description,
supplied as a string. The TraceInformation method is the same as calling the TraceEvent method with
the first parameter being TraceEventType.Information.
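The listener configuration described in the next paragraph would resemble the following sketch (declared as a shared listener so that trace sources can reference it by name):

```xml
<configuration>
  <system.diagnostics>
    <sharedListeners>
      <add name="myListener"
           type="System.Diagnostics.TextWriterTraceListener"
           initializeData="TextWriterOutput.log" />
    </sharedListeners>
  </system.diagnostics>
</configuration>
```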
The preceding configuration declares a trace listener named myListener of the type
TextWriterTraceListener, which writes to a file named TextWriterOutput.log. A trace source can now
reference this listener, as explained in the previous topic, "Writing Trace Messages with
System.Diagnostics.TraceSource".
TraceListener Class
http://go.microsoft.com/fwlink/?LinkID=298850&clcid=0x409
You can add filters to listeners by using code or in the application configuration file. Filters introduce
message filtering at the listener level, in contrast to the message filtering at the TraceSource level that is
introduced by a SourceSwitch. For example, an EventTypeFilter class can be added to a trace listener to
control the event types that are output by the listener.
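The configuration described in the next paragraph might be sketched as follows (switch details are omitted for brevity; the source, listener, and filter names follow the description):

```xml
<configuration>
  <system.diagnostics>
    <sources>
      <source name="TraceSourceApp" switchName="sourceSwitch">
        <listeners>
          <add name="console" />
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add name="console"
           type="System.Diagnostics.ConsoleTraceListener">
        <filter type="System.Diagnostics.EventTypeFilter"
                initializeData="Warning" />
      </add>
    </sharedListeners>
  </system.diagnostics>
</configuration>
```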
The configuration section for the source named TraceSourceApp declares a trace listener named console
of type ConsoleTraceListener, which outputs trace information to the console. It also associates this
listener with a filter of type EventTypeFilter, which will only log messages with a Warning or Error trace
level. The <sharedListeners> element illustrates that you can declare trace listeners in a single section of
your configuration file, and then reference them by name from your trace source configuration, instead of
specifying their type and filter for each trace source.
Question: Why should you use the TraceSource class and not the Trace class?
Lesson 2
Configuring Service Diagnostics
Web services implemented by using WCF or ASP.NET Web API pass every incoming message through a long
messaging pipeline before the actual service method executes. Diagnosing errors along this pipeline can
be difficult. You can use tracing to obtain internal information about the execution of the messaging
pipeline. This information can help you solve problems and improve the performance of the application.
Lesson Objectives
After completing this lesson, you will be able to:
A complete ITraceWriter implementation consistent with this listing might look like the following sketch (the class name is illustrative):
public class SystemDiagnosticsTraceWriter : ITraceWriter
{
    public void Trace(HttpRequestMessage request, string category,
        TraceLevel level, Action<TraceRecord> traceAction)
    {
        TraceRecord rec = new TraceRecord(request, category, level);
        traceAction(rec);
        string message = string.Format("{0};{1};{2}",
            rec.Operator, rec.Operation, rec.Message);
        System.Diagnostics.Trace.WriteLine(message, rec.Category);
    }
}
Note that the preceding code uses the Trace.WriteLine method to emit trace messages from the
ASP.NET Web API pipeline. As a result, you can use trace listeners and filters in the application
configuration file to specify what to do with the emitted messages.
To attach your ITraceWriter implementation to the ASP.NET Web API pipeline, you use the
HttpConfiguration object.
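For example, assuming a custom trace writer class named SystemDiagnosticsTraceWriter, the registration might be sketched as:

```csharp
using System.Web.Http;
using System.Web.Http.Tracing;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Replace the default (no-op) trace writer with the custom implementation
        config.Services.Replace(typeof(ITraceWriter),
            new SystemDiagnosticsTraceWriter());
    }
}
```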
To write a trace message, you can access the trace writer by using the
HttpConfiguration.Services.GetTraceWriter method. However, it is more convenient to use the
extension methods provided by the ITraceWriterExtensions class. For example, the Error method creates
a trace with trace level Error.
The following code shows how to write trace messages in ASP.NET Web API.
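A sketch of writing a trace from a controller action (the controller, repository, and category names are illustrative):

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Tracing;

public class ItemsController : ApiController
{
    public HttpResponseMessage Get(int id)
    {
        Item item = repository.Get(id);   // repository is assumed to exist
        if (item == null)
        {
            // Emits an Error-level trace through the registered ITraceWriter
            Configuration.Services.GetTraceWriter()
                .Error(Request, "ItemsController", "Item {0} not found", id);
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return Request.CreateResponse(HttpStatusCode.OK, item);
    }
}
```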
The message you emit using the Error method in the preceding code example is passed to the
ITraceWriter interface implementation you saw earlier. In addition to the messages you write, ASP.NET
Web API also writes trace messages for specific events in the message handling pipeline, such as before
and after invoking the controller's action.
For additional information regarding the ASP.NET Web API trace writer, refer to the following article.
Note: WCF fault messages are covered in, Module 5, "Creating WCF Services", Lesson 2,
"Creating and Implementing a Contract" in Course 20487.
Turning on tracing and message logging requires a simple configuration update, which you can perform
by using the Visual Studio 2012 WCF Service Configuration Editor tool. The tool adds dedicated trace
listeners to the <system.diagnostics> configuration section in the configuration file, which you can take
advantage of by using trace sources in your code. To open the WCF Service Configuration Editor tool, in
Solution Explorer, right-click the configuration file in your WCF project, and then click Edit WCF
Configuration. After the tool opens, click the Diagnostics node in the Configuration pane, and then
click the Enable Tracing link.
The following image illustrates how to turn on tracing using the WCF Service Configuration Editor.
When you enable activity propagation, an ActivityID header is added to every message on the wire. This
facilitates correlation between different trace files collected on different machines and helps the viewer
present the messages in the correct order.
Additionally, every message can be correlated with tracing information related to it. If an error occurred
while handling a message, it will be highlighted in red.
The SvcTraceViewer.exe tool has two display modes for message traces:
Activity (detail) view, which shows every message and its corresponding tracing information emitted
by the WCF messaging pipeline.
Graph view, which shows how messages are delivered from one machine to the other, and how they
are processed by the WCF messaging pipeline.
The following image illustrates the graph view in the SvcTraceViewer.exe tool.
In addition to tracing, you can use message logging to record every message delivered to and from your
WCF service. Monitoring communication on the wire not only helps solve problems, it will also give you a
greater understanding of WS-* protocols, such as WS-Trust and WS-ReliableMessaging. Network sniffers
are popular tools that inspect messages on the wire, but WCF message logging makes communication
monitoring much simpler. Unlike common sniffers, WCF message logging can decrypt and decode
messages, providing a clear view of their content.
Similar to tracing, you enable message logging through the application's configuration file. You can use
the interface provided by the WCF Service Configuration Editor to update the configuration. After the tool
opens, click the Diagnostics node in the Configuration pane, and then click the Enable
MessageLogging link. When you turn on message logging, WCF stores all messages in files for
subsequent inspection.
Note: The default setting of message logging is to log only message headers. This is due to
the security risk involved in logging possibly sensitive content from the message body.
The following image illustrates a message logging report, opened in the SvcTraceViewer.exe tool.
WCF performance counters are located under the performance categories ServiceModelService,
ServiceModelOperation, and ServiceModelEndpoint. ASP.NET performance counters are located under
the performance categories ASP.NET and ASP.NET Applications.
Note: There are two sets of performance categories for WCF, according to the .NET
Framework version. For WCF 3.5, use the categories ServiceModelService 3.0.0.0,
ServiceModelOperation 3.0.0.0, and ServiceModelEndpoint 3.0.0.0. For WCF 4 and on, use
the categories ServiceModelService 4.0.0.0, ServiceModelOperation 4.0.0.0, and
ServiceModelEndpoint 4.0.0.0. The categories for WCF 4 have several counters that did not
exist in prior versions of WCF.
Counter measurements in WCF can be scoped according to service, endpoint, or operation. Similarly,
ASP.NET classifies its performance counters according to system and application. Unlike ASP.NET counters,
WCF counters are instance-based, meaning that they must be associated with a running process before
they can be presented by Performance Monitor or any other sampling tool. By default, only service
counters are enabled in WCF. You can enable other WCF counters by using the WCF Service Configuration
Editor.
To view performance counters using Performance Monitor, run Performance Monitor (perfmon.exe) and
then add the WCF performance counters in which you are interested. Performance Monitor can display
counter information as a graph, a histogram, or a line-by-line numeric report.
The following image illustrates WCF performance counters in Performance Monitor.
You can define custom performance counters, which augment the information provided by WCF,
ASP.NET, and other sources. For example, in a flight booking system, you might implement a performance
counter that displays the number of active bookings, the number of cancelled flights, and the number of
airlines for which there are outstanding bookings. To define custom performance counters, you use the
PerformanceCounterCategory and PerformanceCounter classes from the System.Diagnostics
namespace.
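A sketch of creating and updating such a counter, using hypothetical category and counter names for the flight booking example:

```csharp
using System.Diagnostics;

class BookingCounters
{
    const string Category = "Blue Yonder Bookings";   // hypothetical names
    const string Counter = "Active Bookings";

    static void Main()
    {
        // Creating a performance counter category requires administrative privileges
        if (!PerformanceCounterCategory.Exists(Category))
        {
            PerformanceCounterCategory.Create(Category,
                "Flight booking counters",
                PerformanceCounterCategoryType.SingleInstance,
                Counter, "Number of bookings currently in progress");
        }

        using (var counter = new PerformanceCounter(Category, Counter, readOnly: false))
        {
            counter.Increment();   // a booking was created
            counter.Decrement();   // a booking completed
        }
    }
}
```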
http://go.microsoft.com/fwlink/?LinkID=298853&clcid=0x409
Question: How can you combine WCF tracing information with performance counters to
diagnose errors?
Demonstration Steps
1. Open the D:\Allfiles\Mod10\DemoFiles\TracingWCFServices\CalcService\CalcService.sln solution in
Visual Studio 2012.
Note: You can open the configuration editor from the Tools menu in Visual Studio 2012,
or by right-clicking the configuration file in Solution Explorer and then clicking Edit WCF
Configuration.
4. In the Diagnostics configuration, click Enable MessageLogging, and then configure the message
logging to log the entire message.
Note: To change message logging settings, expand the Diagnostics configuration node,
and click Message Logging.
5. Run the solution, start the WCF Test Client, and call the Div operation twice, once with valid values,
and again with the b property value set to 0.
6. Open the trace and message logs in the same service trace viewer window. The logs are located in the
D:\Allfiles\Mod10\DemoFiles\TracingWCFServices\CalcService\CalcService folder.
Note: Double-clicking the log file in File Explorer will open the service trace viewer tool.
After the tool opens, you can either drag-and-drop the second log file to the application window,
or from the File menu, click Add, and then add the second file.
7. Click the faulted request on the Activity tab, and observe the details of the exception on the pane to
the right. Click the Formatted tab to see the exception information.
8. On the pane to the right, click the message log trace and view the message content for the faulted
request.
Question: When should you use WCF message logging in addition to tracing?
Logging
Logs provide information about client requests
and their matching responses, and are recorded to
text files. Each record of a request and its response
contains metadata about the structure of the
request and the status of the response. Logs provide a general picture of what the web server is doing.
The following is a sample IIS log file.
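An illustrative excerpt in the W3C extended log file format (the field list and entries below are a sketch, not an actual log from the course environment):

```
#Software: Microsoft Internet Information Services 8.0
#Version: 1.0
#Fields: date time s-ip cs-method cs-uri-stem sc-status time-taken
2013-05-01 08:30:12 10.0.0.4 GET /api/flights 200 31
2013-05-01 08:30:15 10.0.0.4 POST /api/bookings 401 8
```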
By inspecting log files, you can detect issues such as failed authentications and server errors. You can also
use the log files to calculate statistics, such as number of requests per service, client global distribution
(according to the client IP address), and daily bandwidth. You can use the Log Parser tool to parse the log
files and query them for the information you seek. You can download the Log Parser tool from the
following link.
For specific steps on how to enable logs and configure the logging options, refer to the TechNet
documentation.
Failed request tracing is an IIS feature that traces request-processing events, and then logs the event
information to disk. After you turn on the failed request tracing feature, you configure rules that
determine when request-related traces are saved to disk. For example, you can configure
failed request tracing to save the traces of all the requests that got an unauthorized response (HTTP 401),
or all the traces of requests made to a specific service URL, whether the request processing succeeded or
failed.
Note: The term failed request tracing is somewhat misleading, because you can trace both
failed requests and successful requests. For example, you can trace all the requests that got an OK
response (HTTP 200).
When you configure the trace rule, you also configure which trace events you want to trace. For managed
code, you can log trace events from ASP.NET, as well as trace events that are emitted by IIS modules such
as the authentication, caching, compression, and URL rewrite modules.
Failed request tracing logs are XML-based files. However, IIS ships with an XSL (stylesheet) file that shows
the XML content in an easy-to-use UI. To use the XSL file, open the XML file with Internet Explorer.
Note: Only Internet Explorer will use the XSL file to render the output of the XML file.
Opening the XML file in other browsers will show the content of the XML file "as-is".
The following image shows the failed request tracing of a response with a 401 Unauthorized HTTP status
code. The client does not receive any additional data about the causes of this response, but failed request
tracing indicates that the user is disabled.
Performance counters that were widely used with IIS 6.0, such as the Web Service and
Web Service Cache counters, are still available, and you can access them through the Internet
Information Services Global category.
Lesson 3
Monitoring Services Using Windows Azure Diagnostics
In the cloud, services run in a multi-server environment located in a remote data center with no physical
access. Servers in such environments are based on volatile virtual machines that can be replaced in
runtime. This scenario introduces a complicated management challenge. Often, you cannot configure
tracing and logs in the same way you do on an on-premises server machine. Extracting diagnostic
information may be difficult as well, especially considering the typical sizes of log files on a heavily loaded
production system.
In this lesson, you will explore the infrastructure provided by Windows Azure to simplify the process of
collecting diagnostic information when hosting services in the cloud.
Lesson Objectives
After completing this lesson, you will be able to:
By using Windows Azure Diagnostics, you can collect diagnostic information from a range of data sources
located on different servers, store that information in a single location in Windows Azure Storage, and
produce an aggregated view that depicts the behavior and state of the whole environment.
With Windows Azure Diagnostics in place, you can continue to use the same logging and auditing code
and infrastructure that you use in your on-premises deployments. The logging and auditing infrastructure
will continue to write information locally for each running instance, and Windows Azure Diagnostics will
collect and aggregate that information, storing it in Windows Azure Storage.
You can then use a variety of third-party tools to generate reports based on the diagnostic information
that is stored in Windows Azure Storage. You can use these reports for debugging, troubleshooting,
measuring performance, monitoring resource usage, analyzing traffic, and planning capacity.
Diagnostic information is also useful when implementing cloud elasticity. You can use a dedicated process
to automatically scale up or down the number of role instances in your application according to the
application's overall performance. The Microsoft Patterns and Practices (P&P) team released the Windows
Azure Auto Scaling Application Block (WASABi) for this purpose. For more information about the
Windows Azure Auto Scaling Application Block, see the MSDN documentation at:
If you enable Windows Azure Diagnostics, every role instance will have a dedicated process running the
diagnostic monitor. The diagnostic monitor collects diagnostic information from a collection of local data
sources and captures it in local storage buffers. When you deploy the service, you configure the type of
information that is captured. Windows Azure diagnostics also supports changing these settings after the
service has been deployed, without needing to redeploy it. Windows Azure Diagnostics can store
performance counters values, Windows event log entries, internal Windows Azure infrastructure logs, IIS
logs, failed request tracing logs, and any other log file your application creates. After the diagnostic
information is captured, it can be transferred either on a schedule or on demand to be persisted in
Windows Azure Storage.
For more information regarding Windows Azure Diagnostics, see the MSDN documentation at:
Windows Azure Diagnostics
http://go.microsoft.com/fwlink/?LinkID=313745
The following code shows how to configure the .NET tracing infrastructure to use
DiagnosticMonitorTraceListener as a trace listener.
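A typical <system.diagnostics> section for this, as a sketch (the assembly version depends on the installed Windows Azure SDK and is an assumption here):

```xml
<system.diagnostics>
  <trace>
    <listeners>
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener,
                 Microsoft.WindowsAzure.Diagnostics, Version=2.0.0.0,
                 Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </listeners>
  </trace>
</system.diagnostics>
```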
In addition to adding the DiagnosticMonitorTraceListener to the list of trace listeners, you also need to
configure Windows Azure Diagnostics to transfer the collected data to storage. This will be covered in the
next topic.
Configuring Diagnostics
When you create a new cloud project in Visual
Studio 2012, each role is configured to use
diagnostics. Diagnostics is also automatically
enabled for every role you add to an existing
cloud project.
By default, the diagnostics level for a role is set to
collect only errors and critical events from .NET
tracing and Windows event logs. To open the
diagnostics configuration, right-click the role in
Solution Explorer, click Properties, and then click
the Configuration tab. On the Configuration tab
you can set the level of collected traces, create a
custom collection plan, or disable diagnostics entirely for the specific role.
Note: Because the default level of diagnostics is set to collect errors only, other data that is
informative, such as performance counters and IIS logs, will not be collected. By default, only the
Application events are collected from the Windows event logs.
The following screenshot shows the diagnostics section on the Configuration tab of a role.
Buffer size. The size of the locally allocated buffer that stores the source output before it is uploaded
to storage. The total buffer size of all collected sources cannot exceed the overall quota, which is 4 GB
by default. If a collected source exceeds its buffer size before being uploaded, older items are
removed from the buffer.
Transfer period. The interval for uploading the collected information to storage.
For each of the sources, you can specify additional settings, depending on the source type.
The following screenshot shows the Performance counters tab in the Diagnostics configuration dialog
box.
To configure performance counters collection, select the counters you want to collect and specify the
sampling rate for each counter. Then, configure the buffer size and transfer period, as explained before.
Whether you choose to use the Errors, All information, or Custom plan option, the diagnostics
configuration is written to an XML configuration file named diagnostics.wadcfg. This file is located in
Solution Explorer under the role node, and is included as part of your deployment.
Note: The Windows Azure Storage account connection string where the diagnostic data is
collected is not located in the diagnostics.wadcfg file. The connection string is located in the
ServiceConfiguration.*.cscfg file, and may have different values, depending on whether you
deploy to a cloud environment or to the local Windows Azure Emulator.
The following code is an example of the Errors only diagnostics configuration in a diagnostics.wadcfg file.
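A reconstruction consistent with the description that follows might look like this (buffer sizes and container names are illustrative; note that only the trace logs and Windows event logs carry the scheduledTransferPeriod attribute):

```xml
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M" overallQuotaInMB="4096">
  <DiagnosticInfrastructureLogs bufferQuotaInMB="1024" />
  <Directories bufferQuotaInMB="1024">
    <IISLogs container="wad-iis-logfiles" />
    <CrashDumps container="wad-crash-dumps" />
  </Directories>
  <Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Error" />
  <WindowsEventLog bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M"
                   scheduledTransferLogLevelFilter="Error">
    <DataSource name="Application!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>
```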
The preceding example contains configuration for infrastructure logs, IIS logs, crash dumps, trace logs,
and Windows event logs. However, only the trace logs and Windows event logs have the
scheduledTransferPeriod attribute, meaning only these two sources will be collected.
The <DiagnosticMonitorConfiguration> root element has two important attributes that cannot be
edited from the configuration UI:
overallQuotaInMB. Defines the maximum size of all buffers.
configurationChangePollInterval. Defines the interval at which the diagnostics monitor polls for
configuration changes. The diagnostics.wadcfg file is also uploaded to a blob container named
wad-control-container. If you want to change the diagnostics configuration of a running role, you only
need to replace the configuration file in the container, and wait for the diagnostics monitor process to
reload the configuration.
You will also need to manually edit the configuration file if you want to collect IIS failed request tracing
files or custom log files that you create on the server, because the UI has no option to control these settings.
The following sketch completes such a manually edited configuration for collecting custom log files (the container name and path are illustrative):
<DiagnosticMonitorConfiguration
xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
configurationChangePollInterval="PT1M" overallQuotaInMB="4096">
  <Directories bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M">
    <DataSources>
      <DirectoryConfiguration container="wad-custom-logs" directoryQuotaInMB="128">
        <Absolute path="C:\Logs" expandEnvironment="false" />
      </DirectoryConfiguration>
    </DataSources>
  </Directories>
</DiagnosticMonitorConfiguration>
The <Absolute> element configures the path from which log files are collected. Every log file you create
in that path will be uploaded to storage.
Note: You can also collect local storage resources, which are files you create from code in a
dedicated local folder. Local storage resources are useful when you want to store data locally, but
you do not know the disks and folder structure in the hosting VM, or you are not sure if you have
permissions to write to a specific folder. Creating local storage resources and collecting them
with the Windows Azure Diagnostics is outside the scope of this course.
For more information about the Windows Azure Diagnostics configuration file schema, consult the MSDN
documentation:
Windows Azure Diagnostics Configuration File Schema
http://go.microsoft.com/fwlink/?LinkID=298858&clcid=0x409
There are two other ways to configure diagnostics, without using the diagnostics.wadcfg file:
Local configuration with the Windows Azure Diagnostics API. You can initialize and configure
Windows Azure Diagnostics in code by using the Windows Azure Diagnostics API in the role startup
execution. For more information about the Windows Azure Diagnostics API, consult the MSDN
documentation.
Windows Azure Diagnostics API
http://go.microsoft.com/fwlink/?LinkID=298859&clcid=0x409
Note: The default container name for IIS logs is wad-iis-logfiles. If you used a different
container name to store IIS log files, you will need to look for that blob container in the list of
containers shown in Server Explorer. Using Visual Studio 2012 to browse Windows Azure Storage
resources, such as tables and blobs, was demonstrated in Module 9, "Windows Azure Storage", in
Course 20487.
You can also view other sources, such as Windows event logs and IIS failed request trace files, by opening
the matching tables and blob containers. However, you cannot easily view performance counters, because this
type of data source contains a large amount of data collected over time, and Visual Studio 2012 can only
show this data in its raw form, rather than in a graph, as you would usually choose to view this type
of collected data. You can create graphs based on the raw data by copying the tabular content to
Microsoft Office Excel, and then creating a graph based on the data.
Instead of using Visual Studio 2012, you can use third-party tools that were designed especially for
showing all types of collected sources, including performance counters. For a partial list of such tools,
refer to the MSDN documentation.
In addition to your collected counters, the Management Portal is also capable of monitoring your cloud
service instances for CPU, network, and disk throughput. Cloud service monitoring is described later in this
module in Lesson 5, "Collecting Windows Azure Metrics".
The following image illustrates the diagnostic view in the Windows Azure Management Portal.
Note: Changing the monitoring level to verbose increases the amount of data collected by
the cloud service monitoring infrastructure, and therefore may increase the billing for your
storage account.
Demonstration Steps
1. Open the
D:\Allfiles\Mod10\DemoFiles\AzureDiagnostics\Begin\AzureDiagnostics\AzureDiagnostics.sln solution
using Visual Studio 2012.
2. Open the DiagnosticsWebRole role's Properties window. On the Configuration tab, select Custom
plan, and then click Edit.
3. On the Application logs tab, change the Log level from Error to Verbose.
4. On the Log directories tab, set the Transfer period to 1 minute, the Buffer size to 1024MB, and the IIS
logs to a 1024MB quota.
5. Click OK to close the dialog box, and save the changes you made to the role configuration.
6. Run the cloud project without debugging and use the HTML form to write data to the logs.
7. In Storage Explorer, expand the Windows Azure Storage node, and then expand the Development
node. Explore the logs in the WADLogsTable table. It might take a minute or two for the logs to
upload.
Note: Log data will be transferred to the Windows Azure storage emulator once a minute.
If you cannot see the Log data, please wait one minute before checking again.
8. In Server Explorer, under the Development storage account, expand Blobs, and then open the
wad-iis-logfiles blob container. Explore the list of log files and open one of them using Notepad.
You can enable Windows Azure Storage Analytics by using the CONFIGURE page of your storage
account, in the Windows Azure Management Portal.
The following image illustrates how to enable storage logging.
You configure logging separately for each service you want to monitor, which can include blobs, tables,
and queues. If there is no activity in the services for which you enabled logging, no logs will be created.
You can find the collected logs under a special blob container named $logs.
For more information about Windows Azure Storage Analytics, consult the MSDN documentation:
Windows Azure Storage Analytics
http://go.microsoft.com/fwlink/?LinkID=298860&clcid=0x409
IntelliTrace records information in a special log file that you can open in Visual Studio. With IntelliTrace
logs, you can perform post-failure debugging or diagnose problems at production sites that are difficult
to reproduce in the development environment.
You can enable IntelliTrace in Windows Azure and collect valuable execution information while the
application is running on the cloud. After you download the IntelliTrace logs, you step through your code
from Visual Studio as if it were running in Windows Azure. IntelliTrace affects the application's
performance, and you should refrain from using it in production environments.
To enable IntelliTrace, you have to create a dedicated deployment package, make sure it is built in Debug
mode, enable IntelliTrace, and publish it using Visual Studio. A running application that was published
without IntelliTrace enabled has to be published again from Visual Studio.
The following image shows how to enable IntelliTrace when publishing an application to Windows Azure.
10-28 Monitoring and Diagnostics
IntelliTrace log files are stored on the file system of your role's virtual machine. When you request to
download the log files, a snapshot is taken at that point in time and downloaded to your local machine.
To download IntelliTrace logs for a role instance, open the Windows Azure Compute node in Visual
Studio Server Explorer, right-click the instance of interest, and then click View IntelliTrace Logs.
For more information about IntelliTrace in Windows Azure, consult the MSDN documentation:
Question: When should you use each of the diagnostic mechanisms explained in this lesson?
Developing Windows Azure and Web Services 10-29
Lesson 4
Collecting Windows Azure Metrics
Windows Azure presents an accessible view of performance counters in the Windows Azure Management Portal. This graph-based view is known as Windows Azure Metrics. This lesson describes how to view metrics for Windows Azure components, such as cloud services, web sites, and storage services.
Lesson Objectives
After completing this lesson, you will be able to:
Note: The information presented in the Windows Azure Management Portal is based on
performance counters only. Other types of diagnostics information, such as log files and
Windows Event Log data, cannot be presented by the Windows Azure Management Portal and
require a different tool, such as Visual Studio 2012. Viewing collected diagnostics data was
discussed in the previous lesson.
The monitoring displayed in the Management Portal is configurable. You can choose the metrics you
want to monitor and these metrics will be displayed in the Monitor and Dashboard tabs. You can switch
between displaying relative and absolute values, and change the time range of the metrics charts.
Note: Using absolute values will display a y-axis in the bar graph. If you show only one set of metrics, such as CPU usage percentage or disk throughput, the graph will display normally.
However, if you use the absolute value display with different types of metrics, such as percentage
and KB/sec, the results will be aligned to the same y-axis and may be harder to interpret. To view
different types of metrics in the same graph, you should use the relative value display.
To configure the monitoring mode, click the Configure tab and choose between the Minimal and
Verbose modes. To use verbose mode, you must enable diagnostics in your cloud service deployment.
Metrics are stored in six storage tables per role, using the storage connection string you provided for the deployment.
The six tables comprise two tables for each aggregation interval: 5 minutes, 1 hour, and 12 hours. For each aggregation interval, one table stores role-level aggregations and the other table stores role-instance aggregations. The tables are named according to the following format:
WAD*deploymentID*PT*aggregation_interval*[R|RI]Table
In the table name format, deploymentID is the deployment ID of the cloud service, aggregation_interval is the aggregation interval (5M, 1H, or 12H), and the R or RI suffix indicates whether the table stores role-level (R) or role-instance (RI) aggregations.
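As a hedged illustration of this naming scheme (Python is used here purely as neutral pseudocode; the course itself is .NET-based), the following sketch generates the six table names for a given deployment ID:

```python
def metrics_table_names(deployment_id):
    """Build the six diagnostics metrics table names for one role.

    Illustrative only -- follows the format
    WAD<deploymentID>PT<interval>[R|RI]Table, where R tables hold
    role-level aggregations and RI tables hold role-instance ones.
    """
    intervals = ["5M", "1H", "12H"]  # 5 minutes, 1 hour, 12 hours
    return ["WAD{0}PT{1}{2}Table".format(deployment_id, interval, suffix)
            for interval in intervals
            for suffix in ("R", "RI")]  # R = role, RI = role instance
```

For a hypothetical deployment ID such as 0123abcd, this yields names like WAD0123abcdPT5MRTable and WAD0123abcdPT12HRITable.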
Question: What are the benefits of using metrics for cloud services?
Viewing metrics is not the only way to diagnose Web Sites. You can also collect IIS logs, failed request
tracing logs, and trace logs for your web site. For more information on Windows Azure Web Sites
monitoring, consult the MSDN documentation.
Web Site Monitoring
http://go.microsoft.com/fwlink/?LinkID=298863&clcid=0x409
For more information about Windows Azure Storage monitoring, consult the MSDN documentation:
Windows Azure Storage Monitoring
http://go.microsoft.com/fwlink/?LinkID=298862&clcid=0x409
Demonstration Steps
1. Open the Windows Azure Management Portal (https://manage.windowsazure.com/).
2. Create a new Windows Azure Web Site named metricsdemoYourInitials (Replace YourInitials with
your initials). Select the region closest to your location for the web site.
3. Browse to the new web site, open the DASHBOARD tab, and download the web site's publish profile
file.
4. On the MONITOR tab, configure monitoring to collect only the Http Successes metric. Make sure
the metric is shown in the graph.
5. Open the
D:\Allfiles\Mod10\DemoFiles\WebSiteMonitoring\SimpleWebApplication\SimpleWebApplication.sln
solution in Visual Studio 2012.
6. Publish the SimpleWebApplication web site to Windows Azure, using the publish profile file you
downloaded. Browse to the web site and submit HTTP requests using the ClickHere button.
7. Return to the Management Portal, refresh the graph, and verify that the value of the Http Successes metric increased.
Question: What are some uses of the Windows Azure Web Sites metrics?
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated Time: 45 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
9. Verify that you received credentials to log in to the Azure portal from your training provider. These credentials and the Azure account will be used throughout the labs in this course.
log messages at the service level, which will log the messages in their decrypted form, instead of at the
transport level, where messages are logged in the encrypted form.
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.
Note: To open the WCF Configuration Editor, in Solution Explorer, right-click the
Web.config file, and then click Edit WCF Configuration.
3. Save the changes to the configuration and close the Service Configuration Editor window.
Results: You can test your changes at the end of the lab.
2. Open the TraceWriter.cs file from the BlueYonder.Companion.Host project and implement the
Trace method. Use .NET Diagnostics tracing to write the trace messages.
Note: You can see an example for implementing the ITraceWriter interface in lesson 2,
"Configuring Service Diagnostics".
Note: You can see an example for registering the TraceWriter class in lesson 2,
"Configuring Service Diagnostics".
Note: You can see an example for tracing messages with the TraceWriter class in lesson 2,
"Configuring Service Diagnostics".
2. On the Application logs tab, change the Log level from Error to Verbose.
3. On the Log directories tab, set the Transfer period to 1 minute, the Buffer size to 1024MB, and set the
IIS logs to a 1024MB quota.
4. Click OK to close the dialog box, and save the changes to the role's properties.
If you have not yet imported your Windows Azure subscription information, download your Windows Azure credentials, and import the downloaded publish settings file in the Publish Windows Azure Application dialog box.
2. Select the cloud service that matches the cloud service name you wrote down at the beginning of the lab, while running the setup script.
3. Finish the deployment process by clicking Publish. The publish process might take several minutes to
complete.
Note: In addition to the trace message your code writes to the log, ASP.NET Web API
writes several other infrastructure trace messages.
3. Open the wad-iis-logfiles blob container in Server Explorer and explore the log files. Verify that you see the requests for the Travelers, Locations, Flights, and Reservations controllers.
Note: It might take more than a minute from the time a request is sent until IIS logs it. If you do not yet see any logs, or if requests are missing from the log, wait another minute, refresh the blob container, and then download the log again.
Note: You can view the messages by clicking the Message tab in the left pane, selecting the
message to view (either the
http://blueyonder.server.interfaces/IBookingService/CreateReservation or
http://blueyonder.server.interfaces/IBookingService/CreateReservationResponse message), and
then clicking the Message tab in the bottom-right pane.
Results: After you complete the exercise, you will be able to use the client App to purchase a trip, and
then view the created log files, for both the Windows Azure deployment and the on-premises WCF
service.
Best Practice: Invest considerable time in instrumenting your application with tracing and
performance counters. Make sure you can successfully monitor the application in the
development environment. This will make it easier to monitor in Windows Azure, and guarantee
that you can diagnose problems that occur only in the production environment, such as under
heavy load.
Review Question(s)
Question: How can you monitor applications running in Windows Azure?
Tools
Visual Studio 2012
WCF Service Configuration Editor
SvcTraceViewer.exe
Module 11
Identity Management and Access Control
Contents:
Module Overview 11-1
Module Overview
Managing identities in distributed systems can be challenging. Identities are often shared across
application and organization boundaries. Claims-based identity is a modern approach designed to
overcome these challenges in distributed systems. This module describes the basic principles of modern
identity handling. The module also demonstrates how to use infrastructures such as Windows Azure
Access Control Service (ACS) to implement authentication and authorization with claims-based identity in
ASP.NET Web API services and Windows Azure Service Bus brokered messaging.
By applying the concepts and technologies covered in this module, you can simplify authentication and authorization in your distributed applications by integrating with modern identity providers.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After completing this module, you will be able to:
Lesson 1
Claims-based Identity Concepts
Identity is a set of information that describes an entity. Identities are used to authorize user access to resources and services, to personalize applications, and to provide a better experience for both the client and the IT professional.
Windows Identity Foundation (WIF) is the Microsoft .NET Framework infrastructure that you can use to
create, validate, and handle claims-based identities. This lesson will provide a high-level overview of WIF
and its capabilities.
Lesson Objectives
After completing this lesson, you will be able to:
Describe WIF.
Introduction to Identities
In the physical world, the term identity is used to describe a set of attributes that belongs to a specific entity. For example, people have names, passport numbers, and musical tastes. Buildings have addresses, a number of stories, and other attributes. In the virtual world, different entities might also have identities. Users often have user names, email addresses, and roles. Computers also have names, as well as a network address, an operating system, and so on. In Windows (and most other operating systems), different processes, and even threads, run under a specific identity.
Identity is an important concept that is used to authorize access to resources and services, personalize applications, and provide a better experience for both the client and the IT professional.
Systems often need to store identity information in identity stores such as Active Directory Domain Services (AD DS) or a Microsoft SQL Server database. Applications query these stores for identity information, which they use to authorize the entity, personalize applications, and provide a better experience.
Identity information is both personal and sensitive, and therefore should be carefully managed. Some
information should be kept private and some can be shared based on a variety of security policies. An
identity store should properly secure the identity and provide identity information to trusted entities only
according to predefined security and privacy policies.
Due to the sensitive nature of identities, a verification mechanism is needed. Such mechanisms rely on credentials, a set of attributes that are used to verify an identity. Some of the common credential types in modern systems include user names and passwords, smart cards, and certificates. The process of verifying credentials is called authentication.
Organizations and applications often need to share identities, which can complicate common identity
related scenarios. For example, if an identity is revoked in one organization or application, such as when
an employee is fired, all partners must revoke the identity as well. Synchronizing identity across
organizations and applications can be a complex task and therefore, a single identity view for all
organizations and applications can help simplify identity management.
Question: Why would you need different types of credentials, such as smart card, or
username and password?
Traditional applications often base their own identity store on a custom database. The most popular credentials that applications use to authenticate their clients are user names and passwords. The result is that clients own a large set of username-password credentials for all the applications to which they subscribe. Because it is difficult to remember and handle a large set of credentials, people tend to reuse them. This compromises their security: if one application is hacked and their credentials are stolen, unauthorized individuals could use those credentials to log on to other applications.
Another disadvantage of this approach appears when identity is shared across applications and organizations. To share identity across applications, the applications must expose their identity stores, creating tight coupling and potentially undesired overhead.
In conclusion, identity management is a complex process that should not be taken lightly. When using a custom identity store, applications have to secure the identities and handle all the appropriate scenarios, such as registration, credential reset, detail updates, identity revocation, and more. Traditional identity management has many disadvantages for the user as well as for the application. Therefore, another approach is required.
Claims-based identity is a modern approach in which the client provides a token containing a set of attributes (claims) that describe its identity. The token is signed by an issuer, a service in charge of authenticating the user and providing the token.
Using a token presented by the client decouples the service or application from the identity provider. The service or application relies on verifying the authenticity of the token rather than maintaining an actual connection to the identity provider. Services and applications can support multiple identity providers by using standards, such as WS-Federation and OAuth2, for defining such tokens.
0. In federated identity, before a service or application uses an identity provider, it must establish a trust relationship with that identity provider. To do this, the application and the identity provider exchange public keys: the application sends its public key to the identity provider, and the identity provider sends its public key to the application. Once a trust relationship is established, claims-based identity can be used. The identity provider uses the application's public key to encrypt the token, if required. The identity provider's public key is used to validate the digital signature that the identity provider places in the token.
1. The client sends a request to the service or application. The service or application checks the request
for a token. Since this is the first time the client accesses the application, the token is not present, and
the client is required to access the identity provider.
2. The client calls the identity provider for authentication. The client presents its credentials and is authenticated.
3. Once the client is authenticated by the identity provider, the identity provider returns a token containing the different claims. The identity provider signs the token with its private key and then encrypts the token with the application's public key. This prevents the client and any unauthorized parties from reading or altering the content of the token.
4. The client sends a request to the service or application for the second time, this time providing the token. The service or application opens the token by using its private key and verifies the token's signature by using the identity provider's public key. After verifying the token, the service or application can extract the claims from it.
5. Based on the claims, the service or application can now authorize the client, and decide whether to
process the request and/or return a response.
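The issue-and-verify flow above can be sketched in a much-simplified form. The following Python sketch (illustrative only, not the actual WIF implementation) replaces the public/private key pair described above with a single shared HMAC key and omits token encryption entirely, loosely following the style of the Simple Web Token (SWT) format:

```python
import base64
import hashlib
import hmac

def issue_token(claims, issuer_key):
    # The issuer serializes the claims as name=value pairs and appends
    # an HMAC-SHA256 signature computed over the claim body.
    body = "&".join("{0}={1}".format(k, v) for k, v in sorted(claims.items()))
    sig = base64.b64encode(
        hmac.new(issuer_key, body.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    return body + "&HMACSHA256=" + sig

def validate_token(token, issuer_key):
    # The relying party recomputes the signature over the claim body;
    # a mismatch means the token was altered or signed with another key.
    body, _, sig = token.rpartition("&HMACSHA256=")
    expected = base64.b64encode(
        hmac.new(issuer_key, body.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token signature")
    return dict(pair.split("=", 1) for pair in body.split("&"))
```

A relying party calling validate_token with a tampered token gets an error instead of claims, which is the property the signature step in the flow above provides.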
For more information about digital signatures, consult the MSDN documentation:
Digital Signatures
http://go.microsoft.com/fwlink/?LinkID=298864&clcid=0x409
In a federation scenario, a client authenticates against an identity provider by using credentials and
receives a token. The token is then forwarded to another identity provider, which generates a new token.
The client then uses the new token to access the application. Both identity providers must have mutual
trust. Similar to the basic scenario, the application opens the token and uses the claims to authorize the
client before granting access.
Introduction to WIF
Claims-based identity standards are complex and involve advanced cryptography. To simplify the development of applications that use claims-based identity, you need an infrastructure that does not require you to implement the standards yourself or to understand advanced cryptography.
In WIF terminology, an identity provider is referred to as a Security Token Service (STS). The application
has to trust and rely on the STS, and is therefore referred to as a Relying Party (RP). WIF provides the
necessary classes and tools for all claims-based identity tasks, such as creating a custom STS, configuring
an RP to require a token generated by a specific STS, and creating the appropriate client configuration.
WIF uses the WS-Federation standard to implement the communication between all the parties. To learn
more about WS-Federation, refer to the MSDN documentation:
11-6 Identity Management and Access Control
WS-Federation
http://go.microsoft.com/fwlink/?LinkID=298865&clcid=0x409
WIF simplifies the process of creating a claims-based Web application. STS and RPs exchange metadata
documents to establish the initial trust. Metadata documents adhere to the WS-MetadataExchange
standard and contain details about the location of the STS and the RP, the required claims, and the
signing and encryption keys. WIF provides tools integrated with Visual Studio 2012 to generate metadata
documents and configuration files (either App.config or Web.config) for the RP. These tools simplify the
integration of Web applications with an existing STS.
Typically, you will use a well-known STS infrastructure, such as Active Directory Federation Services (AD
FS) 2.0. However, sometimes you may be required to create a custom STS. WIF provides the infrastructure
to do that. WIF handles all the cryptographic heavy-lifting for creating, encrypting, and signing tokens.
The WIF API has simple abstractions that can be used as an entry point to create a custom STS that supports
the WS-Trust protocol.
WIF also handles all the cryptographic work for dispatching the tokens provided to the RP and creating
the appropriate security context. WIF decrypts the token, validates its signature, and finally dispatches all
its claims. WIF introduces the ClaimsIdentity class, which implements the IIdentity interface and also
provides access to the full list of incoming claims. The ClaimsIdentity object is placed into a
ClaimsPrincipal object and attached to the current security context. Finally, WIF provides a rich set of
APIs to help the user make authorization decisions based on the incoming claims.
The following code demonstrates how to access claims by using the ClaimsIdentity class in an ASP.NET
Web API service.
When Microsoft introduced WIF, they provided WIF as both a separate download and an SDK. Since .NET
Framework 4.5, WIF is integrated into the core of the .NET Framework and is now part of the mscorlib.dll
and System.IdentityModel assemblies. WIF has close integration with WCF and provides relevant classes
in the System.IdentityModel.Services and System.ServiceModel assemblies.
WIF API
http://go.microsoft.com/fwlink/?LinkID=298867&clcid=0x409
WIF is fully compatible with other Microsoft claims-based infrastructures such as AD FS 2.0, Windows
Azure Active Directory, and Windows Azure ACS. WIF simplifies the integration of those infrastructures
into .NET Framework-based applications and helps developers build claims-aware solutions.
Demonstration Steps
1. Create a new ASP.NET MVC 4 Web Application project, using the Internet Application template.
2. Run the application without debugging. Observe the Log in link on the upper-right corner of the
web page.
Note: The default login functionality is implemented by using ASP.NET Membership, which
uses a local identity store. ASP.NET Membership has a default implementation using SQL Server
as its identity store.
3. Open the Identity and Access dialog box by right-clicking the project, and then clicking Identity and Access. Configure the project to use the Local Development STS, and review the test claims that will be used during development. Click OK to save the changes.
4. Add code to the Index method of the HomeController to pass the user's claims to the HTML page.
Retrieve the identity object stored in the Thread.CurrentPrincipal.Identity property.
Retrieve the claims from the identity's Claims property.
Store the claims in the controller's ViewBag dynamic property, under a new property named Claims.
5. In the ClaimsApp project, open the Index view for the Home controller and output the claims you
stored in the ViewBag.
6. Run the application and verify that the STS is running. Observe the value of the name claim, which now appears in the upper-right corner of the web page, and the list of claims that is output in the HTML.
Question: What are the advantages of claims-based identity and STS in comparison to
traditional methods of managing identities, such as using a custom store of usernames and
passwords?
Lesson 2
Using the Windows Azure Access Control Service
Windows Azure ACS is a cloud-based STS created to provide federated claims-based identities. ACS simplifies identity federation because it integrates with standards-based identity providers, such as AD FS 2.0, and with web identities such as Microsoft Live ID, Google, Yahoo!, Facebook, and OpenID providers.
Note: Windows Azure also provides an STS through the Windows Azure Active Directory
service, which is not covered in this course.
For more information about Windows Azure Active Directory, please see the following documentation:
This lesson describes how to use and manage Windows Azure ACS.
Lesson Objectives
After completing this lesson, you will be able to:
Describe Windows Azure ACS functionality.
Interfacing with so many identity providers is not an easy task, because each provider can use a different protocol and expose different claims. ACS is a cloud-based service that provides a reliable and available STS. ACS is designed for federation: it has no identity store of its own. Instead, it acts as a shim over other identity providers, forwards authentication requests to them, and maps tokens from different providers to a unified standard, based on the relying party's choice. ACS also supports various types of tokens to match your application's
needs, such as SAML, SWT (Simple Web Token), and JWT (JSON Web Token).
It should be signed by ACS and encrypted with the RP's public key.
Identity Providers
The first step for mapping tokens is to select which identity providers the service will support. You can do
this on the Identity providers configuration page, in the ACS portal. You can choose from a list of well-known web identity providers such as Windows Live ID, Google, and Yahoo!, or provide the address of the
metadata document of a custom identity provider such as AD FS 2.0. Metadata documents contain all the
information required to establish trust with an identity provider. Additionally, you can configure the ACS
namespace as a Facebook application to integrate with Facebook and use it as an identity provider.
Note: All the functionality of the ACS portal is also available in an HTTP-based API, which is
beyond the scope of this module.
Mapping Rules
Mapping rules are the ACS mechanism for claim forwarding: ACS analyzes incoming claims, transforms them (if needed), and writes them as output claims. ACS can produce multiple types of tokens, and mapping rules let ACS act as an adapter that transforms one token type to another. For example, you can use ACS to transform a SAML 2.0 token to a lightweight SWT and use it to call HTTP-based web services.
After the transformation is complete, a new token, signed by ACS, is passed to the RP.
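The claim-transformation idea behind mapping rules can be sketched as a small function. The rule shape used here (in_type, in_value, out_type, out_value) is a hypothetical simplification for illustration, not the actual ACS rule schema:

```python
def apply_mapping_rules(input_claims, rules):
    """Transform incoming claims into output claims, ACS-style.

    Each rule matches an incoming claim by type (and optionally by
    value) and emits an output claim; a None out_value passes the
    incoming value through unchanged.
    """
    output = []
    for claim_type, claim_value in input_claims:
        for rule in rules:
            if rule["in_type"] != claim_type:
                continue  # rule does not apply to this claim type
            if rule["in_value"] not in (None, claim_value):
                continue  # rule requires a specific value
            value = rule["out_value"] if rule["out_value"] is not None else claim_value
            output.append((rule["out_type"], value))
    return output

# Example rules: pass an email claim through as a name claim, and map
# an identity provider's "admin" group claim to an app-specific role.
rules = [
    {"in_type": "email", "in_value": None, "out_type": "name", "out_value": None},
    {"in_type": "group", "in_value": "admin", "out_type": "role", "out_value": "Manager"},
]
```

Claims with no matching rule simply do not appear in the output token, which mirrors how unconfigured incoming claims are dropped.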
Each identity you create has a name, a description, and a credential. ACS supports several types of
credentials that you can attach to the identity, such as symmetric key, password, and X.509 certificates.
For additional information about ACS and Service Identities, refer to MSDN documentation.
Service Identities
http://go.microsoft.com/fwlink/?LinkID=313749
In comparison to other identity stores, such as Active Directory Domain Services (AD DS), service identities provide less functionality and robustness. If you need to manage a large list of user identities, consider using Windows Azure Active Directory, which is beyond the scope of this course.
Demonstration Steps
1. Open the Windows Azure Management Portal website (http://manage.windowsazure.com).
You can view the list of ACS namespaces by clicking ACTIVE DIRECTORY in the navigation pane, and then clicking the ACCESS CONTROL NAMESPACES tab.
4. Open the list of identity providers and add Google as a trusted identity provider for your namespace.
Realm: http://localhost/WebApplication/
7. Locate and copy the WS-Federation Metadata endpoint address in the Application integration
page.
10. Open the Web.config file and explore the configuration added to the <system.identityModel> and
<system.web> configuration sections.
11. Run the application without debugging and verify you are redirected to ACS for authentication.
Question: Why should you use the Windows Azure ACS?
Lesson 3
Configuring Services to Use Federated Identities
Services that use claims-based identity need to be properly configured for the correct address of the
identity provider, the required claims, and keys to sign and encrypt the token.
This lesson describes how to configure a service application, such as a WCF or ASP.NET Web API service, to use federated claims-based identities.
Lesson Objectives
After completing this lesson, you will be able to:
1. The client submits a simple HTTP request to the RP, which responds with an HTTP 302 redirect message because the original request has no token. The redirect message contains all the information the identity provider needs about the relying party, such as its realm (the unique identifier of the application), the required claims, and the return address.
2. To accept the redirected request, the identity provider asks the user to authenticate by submitting
credentials (which is done using an HTML form, since you are in a browser-based environment) or
attaching an authentication cookie, which might be available from previous requests.
3. The identity provider generates a token, attaches it to the response as a cookie, and redirects the client's request back to the RP.
Note: Mobile and desktop-based client applications often use browser-based controls,
such as the Web authentication broker for Windows Store apps (which is shown later in the lab)
to enable passive federation.
To execute passive federation, the client must know how to handle HTTP redirect messages. Browsers can
handle redirect messages but smart clients and mobile applications usually do not have redirection
capabilities. This is why web applications targeted to serve clients running on a browser usually use
passive federation and applications targeted for other clients use active federation. For example, a service
that needs to authenticate against another service, using claims-based authentication, will use active
federation instead of passive federation.
2. The client then attaches the token to the request before sending it to the RP. For example, in HTTP-
based web services, the client attaches the token in the HTTP Authorization header. In WCF SOAP-
based services, the token is sent to the service on top of a WS-Trust conversation before establishing
a secure channel with the service.
Some technologies, such as WCF and ASP.NET MVC, support automatic handling of tokens with the help of WIF. ASP.NET MVC supports passive federation, whereas WCF uses active federation. In
contrast to WCF and ASP.NET MVC, in ASP.NET Web API, WIF is not automatically integrated into the
message handling pipeline. To validate tokens, you have to write the code manually and integrate it into
the request pipeline by implementing a delegating handler. Delegating handlers are Web API extensibility
points that process all incoming and outgoing requests. Delegating handlers are covered in Course 20487,
Module 4, "Extending and Securing ASP.NET Web API Services" Lesson 1 "The ASP.NET Web API Pipeline".
The SendAsync method of the DelegatingHandler class sends an HTTP request to the inner handler as
an asynchronous operation. This is where you implement token processing and authorization using the
following logic:
1. Inspect the incoming request. For requests that contain an Authorization header, try to authenticate
the token presented by the client:
a. If authentication is successful, create a claims principal, and call the next handler.
b. If authentication fails, return a 401 (Unauthorized) HTTP status code, and set the WWW-Authenticate header to the required token type.
2. After the next handler has processed the request, inspect the outgoing response. If the status code is 401, set the WWW-Authenticate header to the required token type.
Note: Although the process of authenticating tokens is explained in this topic, it is not
recommended to implement your own delegating handler to do so. There are several open
source projects that implement token validation and claim parsing. Later, in the lab, you will use
the Thinktecture.IdentityModel open source project to implement token validation and claim
parsing.
The following code is an example of how to implement an authentication delegating handler with ASP.NET Web API.
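The handler's control flow can also be sketched in a language-neutral way. In the following Python outline (purely for illustration), the request/response dictionaries and the authenticate_token callback are hypothetical stand-ins for ASP.NET Web API's HttpRequestMessage, HttpResponseMessage, and your token-validation code:

```python
def handle_request(request, inner_handler, authenticate_token):
    """Sketch of the delegating-handler logic described above."""
    auth = request.get("headers", {}).get("Authorization")
    if auth:
        principal = authenticate_token(auth)
        if principal is None:
            # Step 1b: authentication failed -- short-circuit with 401
            # and challenge the client with the required token type.
            return {"status": 401, "headers": {"WWW-Authenticate": "Bearer"}}
        # Step 1a: authentication succeeded -- attach the claims principal.
        request["principal"] = principal
    # Call the next (inner) handler in the pipeline.
    response = inner_handler(request)
    # Step 2: if the pipeline rejected the request, add the challenge.
    if response["status"] == 401:
        response.setdefault("headers", {})["WWW-Authenticate"] = "Bearer"
    return response
```

In the real C# implementation this logic lives in an override of DelegatingHandler.SendAsync, and the 401 path sets the WWW-Authenticate header on the HttpResponseMessage.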
Although WIF has no automatic support for ASP.NET Web API, you can use WIF in your implementation
to validate credentials and turn security tokens into claims.
Developing Windows Azure and Web Services 11-15
ACS authorizes access to Service Bus entities by issuing the net.windows.servicebus.action claim, which can have one of three values:
Listen. Clients with this claim value can listen on nodes, such as queues, topics, or relays.
Send. Clients with this claim value can send messages.
Manage. Clients with this claim value can create and delete nodes, as well as manage the permissions of
other users.
As you saw in the demos and lab in Module 7, "Windows Azure Service Bus" of Course 20487,
authentication against Service Bus relays, queues, and topics is performed with an identity named owner.
The owner identity is the default identity created by ACS, and it has the Manage permission. The owner
identity is therefore capable of doing more than just sending and receiving messages, so you
should refrain from using it. Instead, create two identities: one for the service, with the Listen permission,
and another for the client, with the Send permission.
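The separation can be sketched as follows; a minimal sketch assuming the Windows Azure Service Bus SDK of this era, with placeholder identity names, keys, and namespace:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Each side uses its own ACS identity, scoped to the permission it needs.
// "QueueService" and "QueueClient" are placeholder identities you create in ACS.
TokenProvider listenProvider = TokenProvider.CreateSharedSecretTokenProvider(
    "QueueService", "{service-identity-key}");   // identity with Listen permission
TokenProvider sendProvider = TokenProvider.CreateSharedSecretTokenProvider(
    "QueueClient", "{client-identity-key}");     // identity with Send permission

// The listening (service) side and the sending (client) side create separate
// messaging factories, each authenticated with its own identity.
MessagingFactory serviceFactory = MessagingFactory.Create(
    ServiceBusEnvironment.CreateServiceUri("sb", "YourNamespace", string.Empty),
    listenProvider);
MessagingFactory clientFactory = MessagingFactory.Create(
    ServiceBusEnvironment.CreateServiceUri("sb", "YourNamespace", string.Empty),
    sendProvider);
```

With this arrangement, a compromised client key can only be used to send messages, never to listen or to manage the namespace.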
For more information on securing Service Bus endpoints, refer to the MSDN documentation.
Service Bus Authentication and Authorization with the Access Control Service.
http://go.microsoft.com/fwlink/?LinkID=313750
The Service Bus authentication support is one of the major differences between Windows Azure Service
Bus Queues and Azure Storage Queues. In Azure Storage Queues, you need the primary storage key to
authenticate against the storage account.
Question: Why would you use different identities for sending and receiving messages with
Service Bus Queues?
Demonstration Steps
1. Open the Windows Azure Management Portal website (http://manage.windowsazure.com).
2. Create a new Service Bus Queue named BlueYonderQueue in a new Service Bus namespace named
BlueYonderServerDemo11YourInitials (YourInitials will contain your initials).
3. Open the Service Bus ACS website for the Service Bus namespace you created in the preparation step.
Use the address https://BlueYonderServerDemo11YourInitials-sb.accesscontrol.windows.net
(YourInitials will contain your initials).
Note: The browser should log you on automatically to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.
4. Create a new Relying Party Application with the following settings:
Name: BlueYonderQueue
Realm: http://blueyonderserverdemo11YourInitials.servicebus.windows.net/blueyonderqueue
(YourInitials will contain your initials in lowercase).
Token lifetime: 1200
Identity providers: none (uncheck Windows Live ID)
Rule groups: check both Create new Rule Group and Default Rule Group for ServiceBus.
5. Create a new Service Identity. Set the service identity name to QueueClient and generate a
symmetric key for the service identity. After you generate the symmetric key, copy its value to the
clipboard.
6. Add a password credential for the new service identity, and set the password to be the same as the
symmetric key you created before.
7. Edit the default rule group for BlueYonderQueue and add a claim rule with the following settings:
8. Open the
D:\AllFiles\Mod11\DemoFiles\ACSForServiceBus\begin\ServiceBusQueue\ServiceBusQueue.sln
solution file.
9. Examine the Main method in the Program class, and observe the use of the two identities: the
QueueClient is used for sending messages to the queue, and the owner is used for listening to the
queue and receiving messages.
10. Replace the service bus namespace with the service bus namespace that you created in step 2 of this
demonstration.
11. Replace the {Password} values with the passwords of each of the service identities. You can find the
password in the ACS portal, under the Service identities link on the pane to the left.
12. Run the application and verify you are able to send messages to the queue and read from the queue.
The last step should throw an unauthorized exception, because the QueueClient identity is not
permitted to listen to the queue.
Question: What is passive and active federation, and when should you use them?
To address this issue, you decided that users should be able to log on to the app, so that the booking is
saved for a user and not for a device. Users would then be able to log on to the app from other devices
and still see their future and past flights.
To reduce the amount of work required to manage user identities and passwords, you decided that the
authentication process will be accomplished by using known identity providers, such as Windows Live ID,
with the help of Windows Azure ACS. In this lab, you will configure ACS to support user authentication
with Windows Live ID, and configure the ASP.NET Web API service and client app to support this
authentication process.
Objectives
After completing this lab, you will be able to:
Configure Windows Azure ACS.
Lab Setup
Estimated Time: 60 Minutes.
For this lab, you will use the available virtual machine environment. Before you begin the lab, you
must complete the following steps:
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
9. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.
In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead, install the old version by using Visual Studio's Package Manager Console window:
1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.
2. In Package Manager Console, enter the following command and then press Enter.
3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:
Wif.Swt 0.0.1.4
Thinktecture.IdentityModel 2.2.1
You can view the list of ACS namespaces by clicking ACTIVE DIRECTORY in the navigation pane, and
then clicking the ACCESS CONTROL NAMESPACES tab.
Name: BlueYonderCloud
Realm: urn:blueyonder.cloud
To generate a rule group, in the ACS portal, open the Rule Groups page, select the default rule
group for BlueYonderCloud, and then click the Generate link.
Make sure you generate the rules for the Windows Live ID identity provider.
Results: After you complete this exercise, you will have created a new ACS namespace and configured
an RP for the ASP.NET Web API services. You will test the RP configuration at the end of the lab.
To install a specific version of a NuGet package, first open the Package Manager Console window
(on the View menu, under Other Windows).
In Package Manager Console, type the following command and then press Enter:
install-package ThinkTecture.IdentityModel -version 2.2.1 -ProjectName BlueYonder.Companion.Host
Note: The last known version of the ThinkTecture.IdentityModel NuGet package that
supports the SWT token is 2.2.1. Therefore, you need to use the Package Manager Console to
install this NuGet package, rather than using the Manage NuGet Packages dialog box.
2. Add another string setting to the web role to store the realm of the relying party.
Name the new setting ACS.Realm, and set its value to urn:blueyonder.cloud.
3. Add another string setting to the web role to store the token signing key of the relying party you
used. Name the new setting ACS.SigningKey.
To find the token signing key, in the ACS portal, open the Certificates and Keys page, click
BlueYonderCloud, and then click Show Key.
5. Create a new class named AuthenticationConfig in the new folder you created.
6. In the AuthenticationConfig class, create a static method named CreateConfiguration, which
returns an object of type AuthenticationConfiguration, and start implementing the method.
Make sure you add the code before the return statement.
8. Continue implementing the method by adding a new SWT token to the configuration.
Call the config's AddSimpleWebToken method to add a definition for the SWT token.
Set the method's issuer and signingKey parameters to the strings you retrieved from the role's
settings.
Set the method's audience parameter to the ACS.Realm string you retrieved from the role's settings.
Note: Realm is the unique identifier of your RP. Audience refers to the realm of the RP that
redirected the client to the STS. In most cases the realm and audience are the same, because you
are redirected back to the application you came from. There are scenarios where the RP that got
the token is not the same RP that requested the token.
Set the method's options parameter to AuthenticationOptions.ForAuthorizationHeader("OAuth").
Make sure you add the simple web token configuration before the return statement.
9. Set the default authentication scheme to OAuth, and enable the use of session tokens.
Set the config's DefaultAuthenticationScheme property to OAuth. This setting will define the
authentication scheme returned for requests without any HTTP Authorization header.
Set the config's EnableSessionToken property to true to support client requests for session tokens.
Clients can use a session token instead of including the SWT token in each request. Session tokens are
usually stored in cookies.
Make sure you set the two properties before the return statement.
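Putting steps 6 through 9 together, the completed method might look like this. This is a sketch assuming the Thinktecture.IdentityModel 2.2.1 API; the ACS.IssuerName setting name is hypothetical, and the realm and signing-key setting names follow the earlier steps.

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;
using Thinktecture.IdentityModel.Tokens.Http;

public static class AuthenticationConfig
{
    public static AuthenticationConfiguration CreateConfiguration()
    {
        var config = new AuthenticationConfiguration
        {
            // Scheme returned for requests without an Authorization header.
            DefaultAuthenticationScheme = "OAuth",

            // Let clients request session tokens instead of sending the
            // SWT token on every request.
            EnableSessionToken = true
        };

        // Register the SWT token definition: issuer and signing key come
        // from the role's settings; audience is the relying party realm.
        config.AddSimpleWebToken(
            issuer: RoleEnvironment.GetConfigurationSettingValue("ACS.IssuerName"),
            audience: RoleEnvironment.GetConfigurationSettingValue("ACS.Realm"),
            signingKey: RoleEnvironment.GetConfigurationSettingValue("ACS.SigningKey"),
            options: AuthenticationOptions.ForAuthorizationHeader("OAuth"));

        return config;
    }
}
```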
3. In the FederationCallbackController, add a method to handle POST requests which hold the
authentication token.
Name the method Post, and set its return type to HttpResponseMessage.
Replace the {0} placeholder with the token passed in the request. Retrieve the token by using the
ClaimsPrincipalExtensions.BootstrapToken extension method.
Note: The client application extracts the token from the response and uses it to
authenticate against the service in future requests.
The special redirect to FederationCallback/end indicates to the client that the authentication
process has completed successfully. This flow is part of the passive federation process.
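The Post method described above might be sketched as follows. The BootstrapToken extension method and the exact response body the client expects come from the lab's starter solution, so this shape is illustrative only.

```csharp
using System.Net;
using System.Net.Http;
using System.Security.Claims;
using System.Web.Http;

public class FederationCallbackController : ApiController
{
    public HttpResponseMessage Post()
    {
        // Retrieve the bootstrap (SWT) token attached to the current principal.
        string token = ClaimsPrincipal.Current.BootstrapToken();

        // Return the token in the response body; the client extracts it and
        // then follows the redirect to FederationCallback/end to finish the
        // passive federation flow.
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(
                string.Format("<html><body>{0}</body></html>", token))
        };
    }
}
```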
4. In the BlueYonder.Companion.Host project, open the Web.config file, and in the <appSettings>
section set the SwtSigningKey application setting to the relying party token signing key you
generated in the first exercise.
You can either locate the signing key in the Relying party configuration in the ACS portal, or copy it
from the role's settings, from the ACS.SigningKey setting.
5. In the Web.config file, locate the <microsoft.identityModel> section and in the <audienceUris>
element, replace the [yourrealm] placeholder with urn:blueyonder.cloud.
Note: WIF 4.5 uses the <system.identityModel> section. However, the WIF.SWT NuGet
package you installed still uses WIF 4, which uses the <microsoft.identityModel> section.
6. Locate the <trustedIssuers> element and replace the [youracsnamespace] placeholder with your
ACS namespace. Type the namespace in lowercase letters.
7. Add the following federated authentication configuration to the <service> element under the
<microsoft.identityModel> section.
<federatedAuthentication>
<wsFederation passiveRedirectEnabled="false" issuer="urn:unused" realm="urn:unused"
requireHttps="false" />
</federatedAuthentication>
Click the Microsoft.IdentityModel reference, and in the Properties window, change Copy Local to
True.
Note: WIF 4 is not installed by default in Windows Azure VMs. Therefore you need to make
sure the assembly is included in the deployed package.
Parameter       Value
name            callback
routeTemplate   FederationCallback
defaults        a new anonymous type, with a Controller property set to the string FederationCallback
Make sure you add the route before any other call to the MapHttpRoute method.
Note: The order of routes is important; you must add the federation callback route before adding
the default route ({controller}/{id}), which handles all the other calls to the controllers. If you add the
default route first, it will be used even when you use a URL that ends with FederationCallback.
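The registration described in the table and note above could be sketched like this; a minimal sketch in which the default route shown is an assumption based on the note's {controller}/{id} template:

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // The federation callback route must be registered first, so that
        // URLs ending with FederationCallback are not swallowed by the
        // default route below.
        config.Routes.MapHttpRoute(
            name: "callback",
            routeTemplate: "FederationCallback",
            defaults: new { Controller = "FederationCallback" });

        // The default route handles all other calls to the controllers.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```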
The config parameter of the Register method contains the global configuration object.
Note: The authentication handler is not used for the first two routes you just added, because the
requests to the FederationCallback controller are sent before the client is authenticated. The
authentication handler is not used for the location's weather route because the GetWeather action is
public and does not require any authentication.
Results: After completing this exercise, you will have configured your ASP.NET Web API services to use
claims-based identities, authenticate users, and authorize users. You will test this configuration at the end
of the lab.
In this exercise, you will deploy your ASP.NET Web API services to Windows Azure, and then configure the
client to the location of your ACS and cloud service. The client-side code for active federation is already
written in the client app. In this exercise, you will also examine the client-side code to understand the
process of active federation.
If you did not import your Windows Azure subscription information yet, download your Windows
Azure credentials, and import the downloaded publish settings file in the Publish Windows Azure
Application dialog box.
2. Select the cloud service that matches the cloud service name you wrote down in the beginning of the
lab, after running the setup script.
3. Finish the deployment process by clicking Publish.
2. In the BlueYonder.Companion.Shared project, open the Addresses class, and in the BaseUri
property replace the {CloudService} placeholder with the Windows Azure Cloud Service name you
wrote down at the beginning of this lab.
3. In the BlueYonder.Companion.Client project, expand the Helpers folder, and open the
DataManager class. Set the ACS namespace constant to the namespace you created in Exercise 1,
"Configuring Windows Azure ACS".
Task 3: Examine the Client Code That Manages the Authentication Process
1. In the DataManager class, locate the GetLiveIdUri method, and examine its code. Observe how the
method retrieves the address of the identity provider logon page.
2. Locate the AuthenticateAsync method, and examine its code. Observe how the method uses the
WebAuthenticationBroker class to handle the authentication process.
3. Locate the GetSessionToken method and examine its code. Observe how the method uses the SWT
it received from the federation callback to start a secure session with the ASP.NET Web API service.
4. Locate the CreateHttpClient method, and examine its code. Observe how the method creates a new
HTTP request object with the HTTP Authorization header.
5. Run the client app and verify the client app requests a Windows Live ID identity. Enter your Windows
Live ID credentials and verify you see the main windows of the app.
6. Display the app bar, log out from the client app, and then close the app.
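The pattern examined in steps 3 and 4 can be sketched as follows. The class and method names here are illustrative stand-ins for the lab's DataManager code, and the "Session" scheme assumes the Thinktecture session-token convention used by the service.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

public static class CompanionHttpClientFactory
{
    // Create an HTTP client that attaches the session token, obtained from
    // the federation callback, to every request via the Authorization header.
    public static HttpClient CreateHttpClient(string sessionToken)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Session", sessionToken);
        return client;
    }
}
```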
Results: After completing this exercise, you will be able to run the client app, and log in using your
Windows Live ID credentials.
Question: In this lab, you used the ASP.NET Web API and authenticated with a SWT. On the
other hand, you could use WCF service over any of the WS-Federation bindings. When
should you use each of these technologies?
Question: What types of token can you use when calling a REST-based web service?
Next, you saw how to configure WCF and ASP.NET Web API services to use federated identities. You
integrated WIF into the WCF and ASP.NET Web API pipelines using configuration and code, and saw how to
configure ACS for Service Bus endpoints.
Finally, you learned about configuring federated identities on the client side, and how to call REST-based
services with the Authorization HTTP header. In the lab, you added federated identity capabilities to the
ASP.NET Web API project for the booking service, and to the companion Windows Store app.
Best Practices:
Use passive federation for websites and active federation for web services.
Use well-known infrastructures such as ADFS 2.0, ACS, WAAD, and WIF.
Review Question(s)
Question: What are the advantages of using claims-based identity?
Tools
Visual Studio 2012
Windows Azure Management Portal
Module 12
Scaling Services
Contents:
Module Overview 12-1
Module Overview
Services that are successful in providing business value are likely to experience growth in the number of
users and the amount of data that they need to handle. Developers should know how to make sure that
their services can handle the increasing workload while still maintaining a high level of performance and
good user experience. You will learn about the need for scalable services and how to handle increasing
workloads using load balancing and distributed caching.
You will learn about scaling services in both on-premises and cloud deployments, along with the
challenges that such services face while they are growing.
Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.
Objectives
After completing this module, you will be able to:
Explain the need for scalability.
Describe how to use distributed caching for on-premises as well as Windows Azure services.
Describe how to use Windows Azure caching.
Lesson 1
Introduction to Scalability
Scalability is a critical aspect of any service-oriented software. It has a direct impact on how users view the
reliability and trustworthiness of a service and therefore has a bearing on the business.
In this lesson, you will be introduced to the two approaches for scaling large applications and understand
the required components.
Lesson Objectives
After completing this lesson, students will be able to:
A scalable system can handle such peaks and spikes in demand without any degradation in service quality,
as experienced by customers. This is very important from a business perspective, as it has direct impact on
how customers perceive the reliability and trustworthiness of the service.
Scaling Approaches
There are two different approaches for scaling
services:
Scaling Out
To scale out, you add additional nodes to an
existing system. With the increased computing
power and decreased cost of commodity hardware (hardware that is easily available to
consumers), adding more processing and storage capacity to a distributed application is a very simple
undertaking. Modern distributed applications often run on large clusters of low-cost computers that are
interconnected to form a single cluster. Such applications need to be aware of the fact that they run in a
clustered environment.
Scaling Up
To scale up, you add additional resources (processing or storage) to a single node of the system. This is
often the easiest option to apply, but it has inherent limitations, such as the maximum memory capacity or
the number of network cards that can be installed in a single computer. At that point, there is no choice
but to replace the node with a better and more capable one. Scaling up might also require the
application to scale up along with the hardware; for instance, the application must be able to take
advantage of multiple cores in a single CPU.
Shared Configuration. Shared Configuration is used to store and administer configuration settings in
a single location. This location can then be used to automatically configure server software on
multiple nodes, such as Internet Information Services (IIS).
Centralized SSL Certificate Support. Centralized SSL Certificate Support, a new feature in IIS 8, allows
the storage of SSL certificates in a single location. Multiple IIS-hosting nodes can then use this
location for gaining access to the certificates. This makes the administration and management of
secure distributed applications much easier than was previously possible.
Performing an automated scale out. Once a system has identified the need to scale out, it must then
be able to provision additional machines and nodes in a fully automatic manner and add them to the
pool of resources available to the application. Failure to do so may mean that increasing load
demands cannot be met in time, which can have adverse effects on the business. This problem is
often simplified considerably when running on a cloud platform, as the platform will usually contain
APIs and services that are specifically designed for such purposes.
Dealing with failure. A distributed system, by its very nature, runs on multiple hardware elements. This
means that the probability of a hardware failure increases proportionally with the number of physical
machines that are used. Hence, a hardware failure, and its associated effect on the
application, becomes a very real possibility that must be expected and planned for. In such cases, the
system needs to take steps to isolate the problem and restore full service as soon as possible. Here,
too, cloud platforms are useful, as hardware failures are detected and new virtual machines are
provisioned automatically.
Lesson 2
Load Balancing
Load balancing is a technique that enables applications to scale and be more resilient to failure. For
large-scale, distributed applications, this is an extremely important issue.
In this lesson, you will learn about the different ways in which you can perform load balancing and how to
load-balance your Windows Azure application.
Lesson Objectives
After completing this lesson, students will be able to:
DNS Round Robin is an additional method of load balancing that does not require dedicated software or
hardware. When clients make calls to a domain such as www.blueyonder.com, the domain name is
resolved into a numerical IP address through the use of a Domain Name System (DNS) server. When using
DNS Round Robin, the DNS server resolves the domain name into a different IP address for each
individual request. The major disadvantage of this technique is that clients become aware of the existence
of multiple machines.
You can also implement load balancing by using Web Farm Framework (WFF) for IIS. The WFF provides
load balancing, scaling, management, and provisioning solutions for IIS-based Web farms. The WFF also
supports application-related solutions, such as connection stickiness, and central output caching.
For additional information about the WFF, refer to the IIS documentation.
Note that in a load-balanced scenario, each request may arrive at a different instance. This means that any
common data, such as session information in an ASP.NET application, needs to be accessible to all
instances. You can use a database or a distributed cache for this purpose.
A further approach for load balancing is through the use of message queues: either Windows Azure
Service Bus Queues or Windows Azure Queue Storage. In this scenario, you bring up multiple worker roles
that read from a single queue. Because each message is received by only one instance, the processing load is
distributed across those workers. For further details on queues, see Module 7, "Windows Azure Service
Bus", and Module 9, "Windows Azure Storage".
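The competing-consumers pattern described above can be sketched as follows; a sketch assuming the Windows Azure Service Bus SDK of this era, with a placeholder connection string and queue name. Each worker role instance runs the same loop, so the load spreads automatically across instances.

```csharp
using Microsoft.ServiceBus.Messaging;

public class QueueWorker
{
    public static void ProcessMessages()
    {
        QueueClient client = QueueClient.CreateFromConnectionString(
            "{service-bus-connection-string}", "jobs");

        while (true)
        {
            // Receive blocks until a message arrives or the call times out.
            BrokeredMessage message = client.Receive();
            if (message == null) continue;   // receive timed out, try again

            try
            {
                // Process the message here, then remove it from the queue.
                message.Complete();
            }
            catch
            {
                // Return the message to the queue so another worker can retry.
                message.Abandon();
            }
        }
    }
}
```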
Demonstration Steps
1. Open D:\AllFiles\Mod12\DemoFiles\ScalingWebApplications\ScalingWebApplications.sln in Visual
Studio.
4. Use Visual Studio 2012 to publish the WebApplication.Azure project to the cloud service you
created.
5. Open the Web Role properties and note that the Web application is deployed to three instances.
6. Open the HomeController class and observe the Index method. The method prints the name of the
role instance, which contains the instance number.
8. Return to the management portal and delete the production deployment you created in this demo.
Lesson 3
Scaling On-Premises Services with Distributed Cache
Distributed cache is a basic component for implementing high-scale distributed applications. Application
servers can store a large set of information in a collection of servers forming a cache cluster. The
information is stored in-memory across the cluster to provide low latency and high throughput.
This lesson describes Windows Server AppFabric Cache and the API for executing data access operations.
Lesson Objectives
After completing this lesson, students will be able to:
In high-scale scenarios, you have to store data in an independent data store that is accessible to all
execution machines. One option is to store the data in a database, but each data access will then suffer from
long delays. Another solution is to create a dedicated server that stores data in-memory for all other
execution machines. However, a single server is limited in its memory capacity and is unreliable by design.
Highly scalable applications often store much more data than a single machine can handle and cannot afford
a single point of failure.
The solution is a distributed cache that spans over multiple servers. Data is stored in-memory on multiple
machines so the cache can grow in size and in transactional capacity. However, clients work against a
single logical cache without knowing where data is actually stored.
A cache is also useful for storing temporary data. All data items in the cache are automatically removed
according to expiry periods and the cleanup policy. This frees the developer from handling garbage collection of
unnecessary data stored in the cache. Applications can store intermediate data in the cache, use it in their
calculations, and then forget about it; it will be cleaned up automatically.
A distributed cache simplifies the execution of parallel tasks across servers in high-performance computing or
map-reduce applications. A complex job can be divided into simpler tasks, distributed across servers, and
executed in parallel. Intermediate results produced by such tasks can be stored in the cache before being
used by other tasks in the execution flow.
If data reliability is required, you can use replication and store the same data on multiple cache servers. If
one server fails, the data will still be available.
With a distributed cache, you can improve the performance of high-scale applications that span multiple
servers. A distributed cache is as simple to use as a traditional in-memory cache, but it can grow in size
according to demand and can serve multiple applications simultaneously.
Applications such as ASP.NET web sites deployed on a web farm with multiple servers can store their
session state in a distributed cache and gain fast data access across the web farm as well as automatic
cleanup.
Caching services
Cache client and API
Administration tools
Named Caches
A named cache is the basic caching unit that applications use to store their data.
You can create one or more named caches for each of your applications.
Each named cache is independent of the others, which lets you optimize the policies of each cache for your
applications. A named cache spans all cache hosts in the cluster, meaning that objects from the same
named cache can be stored on different servers; this way, resources are evenly distributed between the
servers.
You do not have to create named caches, because a default cache is provisioned for you on startup.
Regions
Regions are an additional data container, or subgroup, for cached items. Your applications can use regions to
store and retrieve cached objects by using descriptive strings, called tags, that you can associate with each
cached object. The ability to search all cached objects in a region dictates that objects in a region are
limited to a single cache host.
A named cache is a collection of key-value pairs stored in-memory across the cluster, whereas a region is a
collection of key-value pairs constrained to a single server, but with search-by-tag capabilities. When choosing
between storing data in a region or in a named cache, you have to choose between functionality and scalability.
High Availability
When creating a named cache or region, you can configure it to run with high availability enabled.
All cached objects are copied to a secondary cache host on creation. The secondary copy of a cached
object is acknowledged on every change, to maintain consistency. If the primary cache host fails on a request
for a cached object, the cache cluster re-routes the request to the cache host that maintains the
secondary copy of the object. The secondary copy of the object is elevated to become the new primary
object, and a new secondary copy is created on another healthy cache host. This is why the cache cluster
must contain at least three cache hosts for high availability to function.
Caching API
When using a distributed cache, objects have to be communicated between the application server and the
cache cluster. The Windows Server AppFabric Cache API abstracts the communication with the cluster and
simplifies the interaction with the cache. You can use the large number of methods and interfaces provided
by the Windows Server AppFabric API to configure the cache and access the data it contains.
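The basic data access pattern could look like the following sketch, assuming the Windows Server AppFabric client assemblies and a named cache called "CatalogCache" created by an administrator:

```csharp
using Microsoft.ApplicationServer.Caching;

// The factory reads the client configuration (cache hosts, local cache
// settings, and so on) and hands out cache client objects.
DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetCache("CatalogCache");

// Add or replace an item; the cluster decides which cache host stores it.
cache.Put("flight:BY001", "Seattle to Paris");

// Read it back; Get returns null if the item expired or was evicted.
string flight = (string)cache.Get("flight:BY001");
```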
Cache Notifications
With Windows Server AppFabric cache notifications, you can receive asynchronous notifications when a
variety of cache-related events occur on the cache cluster. You can use cache notifications for automatic
invalidation of locally cached objects. To receive asynchronous cache notifications, enable notifications at
the cache level and register a cache notification callback.
Notifications can be grouped as follows:
Region Operations:
Item Operations:
To simplify notification handling, it is possible to narrow the scope of cache notifications from the cache
level down to the region level and item level.
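Registering a cache-level callback might be sketched as follows; this assumes notifications are enabled on the named cache, and "CatalogCache" is a placeholder name:

```csharp
using Microsoft.ApplicationServer.Caching;

DataCache cache = new DataCacheFactory().GetCache("CatalogCache");

// Ask to be notified when items are added, replaced, or removed anywhere
// in the cache. The callback receives the cache, region, and key involved.
DataCacheNotificationDescriptor descriptor = cache.AddCacheLevelCallback(
    DataCacheOperations.AddItem |
    DataCacheOperations.ReplaceItem |
    DataCacheOperations.RemoveItem,
    (cacheName, regionName, key, version, cacheOperation, nd) =>
    {
        // For example, invalidate the locally cached copy of the changed item.
    });

// Later, when notifications are no longer needed:
cache.RemoveCallback(descriptor);
```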
Administration Tools
Windows Server AppFabric caching administration is provided on top of Windows PowerShell.
There are more than 130 standard commands that you can use for managing your distributed
cache environment. Scripting on top of Windows PowerShell is simple and powerful, but if required, you
can use the Windows Server AppFabric Caching API to build your own custom administration tool.
To learn more about Windows Server AppFabric Cache, refer to the following MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298873&clcid=0x534
Cache host
Cache cluster
Cache client
Local cache
Cache Host
The cache host is a Windows service that runs on one or more servers and hosts the WCF endpoint that
cache clients use to access cached objects. The cache host is the process that stores the cached objects
in-memory.
Cache Cluster
The cache cluster is a collection of cache host instances running on cache servers. The cluster is managed
by a cluster management role, which is responsible for keeping the cache cluster running, monitoring the
availability of all cache hosts in the cache cluster, and adding new cache hosts to the cluster. The cluster
management role can run on designated lead hosts, or on the SQL Server instance that stores the cluster
configuration.
Cluster Configuration
You can store the cluster configuration in a SQL Server database, in an XML file on a shared network location,
or in custom storage. The cluster configuration contains the following information:
Cluster settings: Configuration settings concerning the cache cluster such as the cluster size.
Cache settings: Configuration settings concerning each of the caches instances such as name, type
and timeouts.
Host settings: Configuration settings concerning each of the cache host services such as server name,
port numbers and watermarks.
Cache Client
Any application that uses the Windows Server AppFabric Cache is considered a cache client.
To access data in the cache, applications use the Windows Server AppFabric Cache Client API provided in
the AppFabric caching assemblies, together with the appropriate configuration. All the communication and
serialization details are abstracted from the client to keep the data access code as simple as possible.
Local Cache
Distributed Cache stores serialized information on a set of dedicated servers referred to as the cache cluster,
meaning that each data access to the cache involves communication and serialization. To speed up the
data access process, an additional read-through caching layer can be introduced in the application
process. This is known as the local cache.
When the local cache is enabled, the application stores references to cached objects retrieved from the
cache cluster in a dedicated in-memory collection. This keeps the object active in the memory space of the
client application. When the application requests the object, the cache client first checks whether the object
resides in the local cache. Only if the object does not exist there does the cache client establish
communication with the cache server to retrieve the object and store it in the local cache.
The lifetime of objects in the local cache is controlled by an invalidation policy based on the number of
objects in the local cache, timeouts, and invalidation notifications.
DataCacheItem. A wrapper around the cached object, which contains information such as the key, tags,
version, and the name of the cache and region it is stored in.
DataCacheTag. A string identifier for searching cached objects in regions.
To access data in the cache, you begin by creating a DataCache object, and then use it
to insert, retrieve, or delete objects in the cache. You can use the following DataCache methods to
execute basic data access operations on a named cache.
Add. Adds a new object to the cache. If an item with the same key already exists, an exception is thrown.
Put. Adds or replaces an object in the cache.
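The following sketch shows these basic operations. It assumes the AppFabric caching assemblies are referenced and a client configuration is in place; the cache name and keys used here are illustrative.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class BasicCacheOperations
{
    static void Main()
    {
        // Create a factory (reads the dataCacheClient configuration) and get a named cache
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetCache("default");

        // Add throws a DataCacheException if an item with the same key already exists
        cache.Add("flight:101", "Seattle-Paris");

        // Put adds the object, or replaces it if the key already exists
        cache.Put("flight:101", "Seattle-Rome");

        // Get returns null if the key is not found
        string destination = (string)cache.Get("flight:101");
        Console.WriteLine(destination);

        // Remove deletes the object from the cache
        cache.Remove("flight:101");
    }
}
```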
You can use optimistic concurrency as well as pessimistic concurrency when updating cached objects.
To ensure consistency, the cache client sends the version information together with the updated object.
The server executes the update only if the version it receives matches the current version of the object.
Finally, the server updates the cached object version number.
In the pessimistic concurrency model, the client explicitly locks objects before running updates.
Other operations that request locks are rejected until the locks are released.
When locking a cached object, the cache returns a lock handle that is later used to release the object.
You can use the following DataCache methods to lock and release objects.
GetAndLock. Returns and locks the cached object. The method returns a lock handle (as an output
parameter) that is later used for releasing the object.
PutAndUnlock. Updates and releases the locked object by using a lock handle.
Unlock. Explicitly unlocks a cached object by using a lock handle.
The following code is an example of how to lock and release a cached object.
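The sketch below shows pessimistic locking with the lock-related methods listed above. It assumes cache is an existing DataCache instance; the key, timeout value, and the Order type are illustrative.

```csharp
// Lock the object for 30 seconds; the lock handle is returned as an output parameter
DataCacheLockHandle lockHandle;
Order order = (Order)cache.GetAndLock("order:42", TimeSpan.FromSeconds(30), out lockHandle);

// Update the object and release the lock in a single call
order.Status = OrderStatus.Shipped;
cache.PutAndUnlock("order:42", order, lockHandle);

// Alternatively, release the lock without updating the object:
// cache.Unlock("order:42", lockHandle);
```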
GetObjectsByAnyTag. Returns objects in a region that have tags matching any of the tags provided.
GetObjectsByAllTags. Returns objects in a region that have tags matching all of the tags provided.
GetCacheItem. Returns a DataCacheItem object. One of this method's overloads includes the
associated tags.
Add. Adds an object to the cache. One of this method's overloads supports associating tags.
The following code example shows how to attach tags to a cached object, and how to retrieve objects
according to tag values.
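A sketch of tagging follows. Tags can only be applied to objects stored in a region, so the code creates a region first; the region name, keys, and the Flight type are illustrative, and cache is an existing DataCache instance.

```csharp
// Tags can only be used on objects stored in a region
cache.CreateRegion("flights");

// Attach tags to the cached object when adding it to the region
DataCacheTag[] tags = { new DataCacheTag("europe"), new DataCacheTag("direct") };
cache.Add("flight:101", new Flight("Seattle", "Rome"), tags, "flights");

// Retrieve all objects in the region that carry at least one of the given tags
foreach (KeyValuePair<string, object> item in
         cache.GetObjectsByAnyTag(new[] { new DataCacheTag("europe") }, "flights"))
{
    Console.WriteLine("{0}: {1}", item.Key, item.Value);
}
```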
Lesson 4
Windows Azure Caching
Windows Azure applications store state information in independent data stores. You can use a distributed
cache to store and share information between role instances while maintaining low data access latencies.
Lesson Objectives
After completing this lesson, students will be able to:
In-Role Caching
Distributed caching can improve the performance and scalability of Windows Azure applications by
caching state and other information originating in slow data stores such as Windows Azure SQL Database
and Windows Azure Storage. This caching enables role instances to become stateless, which simplifies the
execution of multiple instances.
Provisioning a distributed cache is simple in Windows Azure. You can host caching nodes within your
Windows Azure roles or define a worker role that is dedicated to Caching.
In contrast to Windows Server AppFabric Cache, all caching administration in Windows Azure is
performed automatically for you, so there is no need to use administration tools such as Windows
PowerShell. The API for data access is similar to that of Windows Server AppFabric Cache, so you can
apply your existing knowledge when working with a distributed cache in the cloud.
Shared Caching
Apart from dedicated caching running on your roles, Windows Azure provides a separate service called
Windows Azure Shared Caching. With shared caching, you can register a cache through the Windows
Azure Management Portal. Similar to other Windows Azure services, shared caching is hosted on a shared
infrastructure in a multitenant environment. After provisioning your cache, you can access it by
using a service URL and an authentication token.
Windows Azure Shared Caching supports the same programming model as in-role caching, which is
hosted in your application or dedicated roles, but lacks some of the features that in-role caching offers,
such as high availability, regions and tagging, and cache notifications.
To learn more about Caching in Windows Azure, refer to the following MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298875&clcid=0x536
To enable caching, open the Caching tab in the role properties dialog box, apply the appropriate settings,
and select the Enable Caching check box. If the cache is co-located on an application role, set the
percentage of physical resources that should be reserved for the cache.
Finally, you have to supply a valid connection string to a storage account that will be used to persist the
cluster configuration in Windows Azure Storage.
You can create one or more named caches and set their properties. To enable high availability on a
named cache, the role instance count must be greater than 1.
The following screenshot shows a co-located cache configuration with High Availability and
Notifications enabled.
The following configuration shows how to configure a Windows Azure Cache client.
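A minimal client configuration for in-role caching might look like the following sketch. The role name CacheWorkerRole is an assumption and should match the name of the role hosting the cache; the local cache settings shown are optional and illustrative.

```xml
<dataCacheClients>
  <dataCacheClient name="default">
    <!-- Discover the cache cluster hosted in the specified role -->
    <autoDiscover isEnabled="true" identifier="CacheWorkerRole" />
    <!-- Optionally enable a local cache in the client process -->
    <localCache isEnabled="true" ttlValue="300" sync="TimeoutBased" />
  </dataCacheClient>
</dataCacheClients>
```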
After the client cache configuration is in place, it is possible to create an instance of a DataCache and
interact with the cache.
The following code shows how to call basic cache operations on Windows Azure in-role Cache using the
DataCache class.
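The sketch below shows basic in-role cache operations using the cache-aside pattern. The Location type, the cache key format, and the data-access helper are illustrative; the "default" cache is resolved from the dataCacheClients configuration.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class LocationCache
{
    // The default cache is resolved from the dataCacheClients configuration
    static readonly DataCache Cache = new DataCacheFactory().GetDefaultCache();

    public Location GetLocation(int id)
    {
        string key = string.Format("location_{0}", id);

        // Try the distributed cache first
        Location location = Cache.Get(key) as Location;
        if (location == null)
        {
            // Cache miss: load from the database and store for subsequent requests
            location = LoadLocationFromDatabase(id);
            Cache.Put(key, location);
        }
        return location;
    }

    // Placeholder for the actual data-access code
    Location LoadLocationFromDatabase(int id) { /* ... */ return null; }
}
```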
Demonstration Steps
1. Open the
D:\Allfiles\Mod12\DemoFiles\WindowsAzureCaching\begin\WindowsAzureCaching\Windows
AzureCaching.sln solution file.
2. Open the LocationsController and examine the implementation of the Get method. The method
currently retrieves the Location entity from the database, but you will change this code to try and
retrieve the entity from a distributed cache, before attempting to call the database.
3. Add a Cache Worker Role to the MvcApplication1.Azure cloud project, and build the solution.
Name the worker role CacheWorkerRole.
4. Add the Windows Azure Caching NuGet package to the MvcApplication1 web application project.
5. Open the web application's Web.config file, and in the <dataCacheClients> section, replace the
[cache cluster role name] string with CacheWorkerRole (the name of the cache worker role you
created).
6. In the MvcApplication1 project, open the LocationsController class, and locate the Get method. In
the method locate the comment // TODO: Place cache initialization here, and after it add the code to
create a DataCache object for the default cache. Use the DataCacheFactory.GetDefaultCache
method to get the data cache object.
7. In the Get method, locate the comment // TODO: Find the location entity in the cache, and after it
add the code to check whether the cache already contains the location. Use the DataCache.Get
method to get the cached item, and store the result in the location variable. For the key, use a string
of the format location_{0}, where {0} is the value of the id parameter.
8. In the Get method, locate the comment // TODO: Add the location to the cache. Add code between it
and the end of the using statement, to store the location variable in the cache. Use the
DataCache.Put method with the cache key as you created it before for the DataCache.Get method.
Putting the item in the cache will make it available the next time the user requests the same location
entity.
9. Place a breakpoint at the beginning of the Get method, set the MvcApplication1.Azure project as
the startup project, and run the project in debug mode.
10. After the browser opens, browse to the locations controller to get the location with ID 1, and debug
the code to verify that in the first request for the entity, it is retrieved from the database. Use the URL
suffix api/locations/1.
11. Return to the browser and refresh the page. In Visual Studio 2012, debug the code and verify that this
time, the entity is retrieved from the cache.
To learn more about the difference between in-role caching and shared caching in Windows
Azure, refer to the following MSDN documentation:
http://go.microsoft.com/fwlink/?LinkID=298877&clcid=0x538
To register for a shared cache service, click Service Bus, Access Control, & Caching in the Management
Portal. In the left pane, click Cache and create a new namespace. Verify that the Cache check box is
selected under Available Services.
Note: At the time of writing, Windows Azure Shared cache can only be provisioned in the
previous management portal (Silverlight).
This screenshot shows how to create a shared cache namespace in the Windows Azure Management Portal.
The first step for creating a cache client is to provide the necessary configuration.
You can get the configuration required to access the shared cache by selecting the namespace in the
management portal and clicking the View Client Configuration button. The client configuration
provided by the management portal contains the configuration required to create a DataCacheFactory
and a DataCache and the configuration for an ASP.NET session provider implemented on top of the
cache.
This screenshot presents the client configuration provided by the management portal.
To reference the Windows Azure Caching API assemblies, you can download the Windows Azure Shared
Caching NuGet package. After installing the NuGet package, you will find the cache client configuration
in your application configuration file, in which you have to provide the details of your shared cache
subscription. You can copy the dataCacheClients configuration section from the configuration you
received in the management portal and paste it into the application configuration file.
The following configuration shows how to configure a client of Windows Azure Shared Caching.
<dataCacheClients>
  <dataCacheClient name="default">
    <hosts>
      <host name="sharedcache.cache.windows.net" cachePort="22233" />
    </hosts>
    <securityProperties mode="Message">
      <messageSecurity authorizationInfo="YWNzOmh0dHBzOi8vc2hhcmVkY2FjaGUtY2FjaGUuYWNjZXNzY29udHJvbC53aW5kb3dzLm5ldC9XUkFQdjAuOS8mb3duZXImVy9HRjV4R3RsSnRqd2tGNncxUU1RRng0SjJwUzExN2ZUbkEyVHUvaGxtbz0maHR0cDovL3NoYXJlZGNhY2hlLmNhY2hlLndpbmRvd3MubmV0">
      </messageSecurity>
    </securityProperties>
  </dataCacheClient>
  <dataCacheClient name="SslEndpoint">
    <hosts>
      <host name="sharedcache.cache.windows.net" cachePort="22243" />
    </hosts>
    <securityProperties mode="Message" sslEnabled="true">
      <messageSecurity authorizationInfo="YWNzOmh0dHBzOi8vc2hhcmVkY2FjaGUtY2FjaGUuYWNjZXNzY29udHJvbC53aW5kb3dzLm5ldC9XUkFQdjAuOS8mb3duZXImVy9HRjV4R3RsSnRqd2tGNncxUU1RRng0SjJwUzExN2ZUbkEyVHUvaGxtbz0maHR0cDovL3NoYXJlZGNhY2hlLmNhY2hlLndpbmRvd3MubmV0">
      </messageSecurity>
    </securityProperties>
  </dataCacheClient>
</dataCacheClients>
Once the client cache configuration is in place, it is possible to create an instance of a DataCache and
interact with the cache.
The following code shows how to call basic cache operations on Windows Azure Shared Cache by using
the DataCache class.
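The sketch below shows how a client might use one of the named client configurations from the dataCacheClients section. The client name SslEndpoint comes from that configuration; the key and value used here are illustrative.

```csharp
using Microsoft.ApplicationServer.Caching;

class SharedCacheClient
{
    static void Main()
    {
        // Create a factory for a specific named client configuration
        DataCacheFactoryConfiguration config =
            new DataCacheFactoryConfiguration("SslEndpoint");
        DataCacheFactory factory = new DataCacheFactory(config);

        // Shared Caching exposes a single default cache
        DataCache cache = factory.GetDefaultCache();

        cache.Put("greeting", "Hello from the shared cache");
        string greeting = (string)cache.Get("greeting");
    }
}
```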
Lesson 5
Scaling Globally
Scaling services so that they operate at their optimal level for users in different countries or
continents can be a challenge. Cloud platforms such as Windows Azure provide several features to
simplify the process of scaling.
In this lesson, you will learn about the issues that apply to services that need to scale globally and
how Windows Azure can help.
Lesson Objectives
After completing this lesson, students will be able to:
Describe how to load balance resources by using Content Delivery Networks (CDNs).
Describe the Windows Azure options for load balancing applications across data centers.
For users. Static content is delivered quickly and the user experience is enhanced. Long round-trip times
are required only for accessing the actual dynamic portions of the application.
For developers. Traffic to the application servers is reduced to only those requests that require
dynamic content. Scalability is enhanced and costs are lowered, because it is the CDN that bears most of
the traffic for the application.
United States
Holland
Ireland
United Kingdom
Russia
France
Sweden
Austria
Switzerland
Hong Kong
Brazil
South Korea
Singapore
Australia
Taiwan
Japan
Qatar
The Windows Azure CDN may be enabled on a Windows Azure Storage account or hosted service. The
first time a specific object is requested from the CDN, it is retrieved from its blob container or service and
cached at the CDN endpoint. It will subsequently be served directly from the CDN. Note that only publicly
available blobs are cached in the CDN. Similarly, a service must be hosted in the production
environment, provide content on port 80 using HTTP, and place the relevant content in the /cdn folder.
Also note that query strings are ignored when caching blobs in the CDN, but not when caching
service content.
Performance. Traffic is routed to the service that is closest to the user's geographic location.
Round Robin. Traffic is routed to each service in turn. If your services are hosted in different data
centers, users may be routed to a service that is geographically distant from them.
Failover. Services are ordered in a list, with traffic being routed to the service at the top of the list. If
this service goes offline for any reason, traffic will be routed to the second service on the list, and so
on.
To use the Traffic Manager, you define one or more policies based on the above criteria. Multiple hosted
services are then assigned to each policy. When the Windows Azure load balancer is queried with the
appropriate policy, it replies with the address of the hosted service that matches the policy.
Lab: Scalability
Scenario
The final task that you need to perform in the Blue Yonder Airlines application is to reduce the ASP.NET
Web API back-end service database load by storing the static data that was fetched from the database in
a distributed cache. In this lab, you will add a distributed caching mechanism to the ASP.NET Web API
service.
Objectives
After completing this lab, you will be able to:
Lab Setup
Estimated time: 30 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
9. Verify that you received credentials to log in to the Windows Azure portal from your training provider.
These credentials and the Windows Azure account will be used throughout the labs of this course.
In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead, install the old version by using Visual Studio's Package Manager Console window:
1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.
2. In Package Manager Console, enter the following command and then press Enter.
(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).
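The command has the following form. The project name MyProject shown here is a placeholder; substitute the project name from the step where you were instructed to add the NuGet package, and use the package name and version from the table in this section.

```powershell
Install-Package Microsoft.WindowsAzure.Caching -Version 2.0.0.0 -ProjectName MyProject
```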
3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:
Microsoft.WindowsAzure.Caching 2.0.0.0
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell cmdlets. If
this message is followed by an error message, inform the instructor; otherwise, you can
ignore the warning.
Task 2: Add the Windows Azure Caching NuGet Package to the ASP.NET Web API
Project
1. Add the Windows Azure Caching NuGet package to the BlueYonder.Companion.Controllers project.
2. Add the Windows Azure Caching NuGet package to the BlueYonder.Companion.Host project.
3. Open the Web.config file from the BlueYonder.Companion.Host project, and in the
<dataCacheClients> section, replace the [cache cluster role name] string with
BlueYonder.Companion.CacheWorkerRole.
2. Still in the Get method, locate the comment // TODO: Place cache check here, and after it add the
code to check whether the cache already contains the requested list of flight schedules. Use the
DataCache.Get method to get the cached item, and store the result in the routesWithSchedules
variable. For the key, use a semicolon-separated string containing the source, destination and date
parameters of the method.
Note: The date parameter of the Get method is a nullable DateTime. You should make
sure the cache key you create will not get set to null if the date parameter is null.
3. Still in the Get method, locate the comment // TODO: Insert into cache here, and after it add the code
to store the routesWithSchedules variable in the cache by using the DataCache.Put method. Use
the cache key as you created it before for the DataCache.Get method call.
4. Run the client without debugging, search for a destination that contains the letter N, and go back to
the 20487B-SEA-DEV-A virtual machine to see where code execution breaks. Debug the code in the
Flights controller to verify that it retrieves data from the database.
Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For the purpose of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.
5. Close the client app, re-open it without debugging, search again for a destination that contains the
letter N, and verify that the Flights controller retrieves data from the cache.
Results: You will have added a caching worker role to the Cloud project, and implemented other
Windows Azure caching features.
Question: What are the options for adding a distributed cache in Windows Azure?
Appendix A
Designing and Extending WCF Services
Contents:
Module Overview 13-1
Module Overview
Windows Communication Foundation (WCF) is the framework for developing Simple Object Access
Protocol (SOAP)-based services. In previous modules, you learned how to create a WCF service by creating
a service contract, implementing it, and hosting it.
When you create services in WCF, there are additional design principles and techniques that you can
apply to your code to enhance the reliability and performance of your service. For example, you can
create asynchronous service operations to better utilize how WCF uses the managed Thread Pool. You can
also create services that can be a part of a distributed transaction that spans over several services and
databases. Or you can create your own custom error handlers that log any unhandled exception thrown in
your service code.
This module describes how to design service contracts with various patterns, such as one-way operations
and asynchronous operations, and then explains how to implement those contracts in your service
implementation. You will also learn about distributed transactions, what they are and how you can create
a WCF service that supports distributed transactions. The last lesson in this module is about the WCF
pipeline, where you will learn how you can extend the message handling pipeline by creating your own
custom runtime components and extensible objects, and then apply them to services and operations.
Objectives
After completing this module, you will be able to:
Design and create services and clients to use different kinds of message patterns.
Lesson 1
Applying Design Principles to Service Contracts
Message patterns describe how clients and services exchange data between each other. The most
common message pattern in WCF is the Request-Response pattern, where the client sends a request and
waits until a response returns from the service. However, WCF supports several other kinds of message
patterns that you can use in your service contract. For example, you can define a service contract with
one-way operations, which clients can call without waiting for any response (even if it is a fault response).
In this lesson, you will learn about the different message patterns that you can choose from when you
design your service contract.
Lesson Objectives
After completing this lesson, you will be able to:
One-Way Operations
Defining an operation as one-way means that the operation is not expected to return a value. Therefore,
clients can send a request to the service and continue to execute without waiting for a response. This
pattern is also known as fire-and-forget. Because one-way operations require that no value is returned,
operations that are marked as one-way must have void as their return type.
This nonblocking behavior of the client differs from the behavior when the client calls an operation that
uses the request-response pattern. This is because in request-response patterns, the client blocks until
the service responds, even if the operation returns void.
You can use one-way operations for any of the following scenarios:
When the client does not require the operation to return a result, whether it succeeded or failed.
When the operation is long-running and you do not want to block your client's execution.
When the service has a different way of notifying the client of the operation's result, such as
responding with an email message.
For example, a client tracking system can send one-way messages to a service with the Global Positioning
System (GPS) location of the client every second. The service operation only logs the coordinates and
therefore does not send any response back to the client. The business logic of the application handles the
possibility that some calls fail, because if a call fails, a new location is sent shortly afterward. In this case, it
is also expected that the business logic handles the scenario where the client loses connectivity and does
not send updates for long periods of time.
Note: The client and service implementation of one-way operations differs between
transports. When using a bidirectional transport such as Hypertext Transfer Protocol (HTTP) or
Transmission Control Protocol (TCP), the client sends a request and waits for an
acknowledgement from the service. When the service receives the request, it immediately
responds with an acknowledgement and only then handles the request. For example, when using
HTTP transports with one-way messaging, the service immediately responds with an HTTP 202
(Accepted) response, and only then handles the request. If the service is unavailable to receive
the request, the client fails and throws an exception. On the other hand, when using a
unidirectional transport such as User Datagram Protocol (UDP), the client sends a request and
does not wait for any service acknowledgement. Therefore, the client cannot verify whether the
service is online and available.
To mark an operation as one-way, add the IsOneWay = true initialization to the [OperationContract]
attribute decorating the operation in the service contract.
The following code example demonstrates how to create a one-way operation in a service contract.
// The contract and operation names here are illustrative; only GetLog appears in the original excerpt
[ServiceContract]
public interface ILoggingService
{
    // One-way operation: clients do not wait for a response
    [OperationContract(IsOneWay = true)]
    void Log(LogLine line);

    [OperationContract]
    List<LogLine> GetLog();
}
Note: If the IsOneWay parameter is set to false or omitted from the attribute, the default
messaging pattern becomes request-response. If a one-way operation does not return void, an
exception is thrown at runtime when opening the service host. You can have both one-way and
request-response operations in the same service contract.
When you use one-way messages, the client is unaware of what has occurred during the execution of the
service operation. Therefore, the client does not know whether the service operation completed execution
or failed because of an exception. If the client requires information on the operation's result, the service
has to send the information to the client. The service can send this information by using either out-of-band
communication, such as an email or a text message (SMS), or by sending a message to the client
application, for example, by hosting a WCF service in the client application and sending it a message that
contains the result.
One-Way Services
http://go.microsoft.com/fwlink/?LinkID=298778&clcid=0x409
Note: To prevent services from failing due to large memory allocation attempts, the default
configuration of WCF limits the maximum size of a received message to 64 kilobytes (KB). An
additional benefit of this restriction is that it helps prevent denial of service attacks. You can
change this limit by setting the maxReceivedMessageSize and maxBufferSize attributes in the
binding configuration. This configuration also exists on the client-side, and affects the size of
responses that clients can receive from services.
In addition, buffered transfers require the receiving side to wait until the whole message arrives before it
can be read and used. When you send large messages on slow networks, it can cause your service and
client to stop responding for a long time, and even time out if the message takes too long to be received.
If you have to send large amounts of data from the client to the service or from the service to the client
such as when uploading or downloading a file, you might consider sending the data using a streamed
transfer instead of using the standard buffered transfer.
In streamed transfers, both the receiving and sending sides work with streams. When you use streams, you
can send small amounts of data, which are known as chunks, from one end to the other without having to
allocate memory for the whole message in advance. This is possible because the allocated memory is only
as big as a single chunk. In addition, the receiving end can handle each chunk, while it is being received,
without having to wait for the whole message to arrive. WCF can use streamed transfers in most of its
bindings, including HTTP, TCP, and Named pipes.
Streaming is supported on both ends of a WCF service call: an operation can receive a stream, return a
stream, or do both. The first part in creating an operation that supports streaming is to define the
operation contract to use stream, either as an input, output, or both.
The following code example demonstrates how to define operations that work with stream content.
[OperationContract]
Guid UploadVideo(Stream video);
[OperationContract]
Stream DecodeVideo(Stream videoInput);
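A service implementation of such an operation might consume the incoming stream in small chunks, so memory use stays bounded by the buffer size regardless of the message size. The sketch below assumes the UploadVideo operation from the contract above; the target path and buffer size are illustrative.

```csharp
public Guid UploadVideo(Stream video)
{
    Guid id = Guid.NewGuid();

    // Copy the incoming stream in chunks; only one buffer is allocated at a time
    using (FileStream target = File.Create(Path.Combine(@"C:\Videos", id + ".mp4")))
    {
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = video.Read(buffer, 0, buffer.Length)) > 0)
        {
            target.Write(buffer, 0, bytesRead);
        }
    }
    return id;
}
```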
Note: If your operation receives a stream, it cannot receive any other parameter other than
that stream. Instead of using the Stream class for the input/output, you can also use the
Message class or any other type that implements the IXmlSerializable interface. If you plan on
passing very large data structures, consider working with the Message class or implementing the
IXmlSerializable interface, because these types provide streamed access to Extensible Markup
Language (XML) content by using the XmlReader abstract classes for reading XML and
XmlWriter abstract classes for writing XML.
For a demonstration on how to use the Message class for streamed transfers, see:
Setting an operation to accept or return a Stream class is not enough. You also have to change the
binding configuration of your endpoint to use streamed transfers. Each part of the communication can be
streamed: the request (StreamedRequest), the response (StreamedResponse), or both (Streamed).
The following code example demonstrates setting the binding configuration to use streaming on both
request and response.
<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding
          name="HttpStreaming"
          maxReceivedMessageSize="2147483647"
          transferMode="Streamed"/>
      </basicHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
In this example, the maxReceivedMessageSize is changed from the default size of 64 KB to
Int32.MaxValue, to enable the service to receive large files.
When you change the transferMode of the binding to any of the streamed options, it affects all the
operations in the contract that you defined for the endpoint. This includes operations that do not use
streams for either input or output. In this case, the streamed content is buffered before
serializing/deserializing it. If your contract has a mix of operations that are more suitable for buffering and
operations that you want to stream, consider splitting the contract to two contracts. You can then expose
each contract through a different endpoint with a different binding configuration: one that uses buffering
and the other that uses streaming.
After you configure your service contract for streamed transfers, you also have to make sure that your
client-side configuration is configured for streamed transfers. For TCP and Named pipes, WCF includes
policy settings in the service metadata so clients can create matching binding configuration. If you use the
Add Service Reference dialog box of Visual Studio 2012, your client-side configuration can be
configured for streamed transfer. However, for the HTTP transport, WCF does not include policy settings
in the service metadata, and after you use the Add Service Reference dialog box, you have to manually
edit the client configuration file and change the binding configuration to set which part is streamed: the
request (StreamedRequest), the response (StreamedResponse), or both (Streamed).
Duplex Services
The WCF duplex messaging pattern implements the callback design pattern. In this pattern, a callback
function is invoked after a certain function finishes its work. The callback pattern is used widely in the
.NET Framework; for example, the .NET Asynchronous Programming Model uses callbacks (BeginInvoke
and EndInvoke).
You can view the duplex channel in WCF as the implementation of the callback pattern in the distributed
world.
For a service to use a callback that resides on the client-side, the service has to open a connection to the
client. To make this possible, the client must perform two actions:
1. Create a service, and host it inside the client application. The remote service can open a channel to
the client-side service and send a message to it.
2. Send information about the client's address to the service so that the service knows how to call the
client.
When you use a duplex channel, such as a TCP channel, WCF automatically performs the above two
actions.
To call a client's callback from the service-side, the service must perform the following steps:
1. When the client calls the service, save the callback channel until it is needed.
2. When the service is ready to send the message, open the saved channel, and then call the callback
operation.
To create a duplex service, start by creating the client contract to implement in the client. Your service
uses the client contract when calling the client.
After you create the client contract, create the service contract, which is implemented on the service-side.
[ServiceContract(CallbackContract = typeof(IStockCallback))]
public interface IStock
{
    [OperationContract(IsOneWay = true)]
    void UpdateStockQuote(string stockId, float newValue); // other operations omitted
}
You can correlate the service and client contracts in the service contract by adding the CallbackContract
parameter to the [ServiceContract] attribute, and setting it to the type of the client contract interface.
There are two contracts that have to be implemented: the service contract and the client contract (also
known as the callback contract). Both contracts are designed when the service is built, but only the service
contract is implemented by the service itself.
Note: The callback and service contracts use only one-way operations, but you can also
declare operations that use the request-response messaging pattern in these contracts.
When you add a reference to the service on the client-side, you have access to both the service and
callback contracts. You use the service contract when you create a proxy for calling the service and you
implement the callback contract on the client-side.
The following code example demonstrates how the service can call the client by using the callback when it
has information for it.
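The body of this example is elided in this copy. Based on the description that follows (RegisterForQuote, UpdateStockQuote, and GetCallbackChannel&lt;T&gt; appear in the surrounding text; the callback operation name OnQuoteUpdate and the exact signatures are assumptions), the pattern might look like:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class StockService : IStock
{
    // A single callback per stock ID, to keep the example short
    private readonly Dictionary<string, IStockCallback> _callbacks =
        new Dictionary<string, IStockCallback>();

    public void RegisterForQuote(string stockId)
    {
        // Retrieve the callback channel of the calling client and store it
        IStockCallback callback =
            OperationContext.Current.GetCallbackChannel<IStockCallback>();
        lock (_callbacks)
        {
            _callbacks[stockId] = callback;
        }
    }

    public void UpdateStockQuote(string stockId, float newValue)
    {
        IStockCallback callback;
        lock (_callbacks)
        {
            if (!_callbacks.TryGetValue(stockId, out callback)) return;
        }
        // Call back into the registered client through the stored channel
        callback.OnQuoteUpdate(stockId, newValue);
    }
}
```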
When the client sends a request to the RegisterForQuote operation method, the method retrieves the
callback channel from the WCF context by calling the GetCallbackChannel<T> generic method, and
stores it in a generic dictionary. When the UpdateStockQuote method is called by a different client (or
even a different service), the callback channel is retrieved from the dictionary, and a message is sent to the
selected client.
Note: The preceding code example registers a single callback for every stock ID to keep the
code short. In real-world scenarios, you should support multiple registrations per stock, as shown
in the demonstration that follows this topic.
To handle the message sent from the service, the client must implement the callback contract.
The following code example demonstrates how to implement the IStockCallback callback contract.
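The implementation itself does not survive in this copy; a minimal sketch (the callback operation name is an assumption; the printed format matches the demonstration output shown later in this topic) might be:

```csharp
using System;

public class StockCallback : IStockCallback
{
    // Invoked by the service over the duplex channel when a quote changes
    public void OnQuoteUpdate(string stockId, float newValue)
    {
        Console.WriteLine("{0} = {1}", stockId, newValue);
    }
}
```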
Implementing the callback contract is not enough. You also have to host this implementation in the client
before calling the service for the first time. However, you do not have to manually create a service host
and host the implementation, because WCF can automatically handle the hosting for you if you create the
client proxy with duplex communication support.
The following code example demonstrates how to create a duplex channel for the IStock service contract
with the StockCallback client callback class.
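The example is elided in this copy; a sketch of creating the duplex channel, based on the description that follows (the endpoint address is hypothetical), might be:

```csharp
using System.ServiceModel;

// Host the callback implementation in an InstanceContext and
// create a duplex channel to the service
var context = new InstanceContext(new StockCallback());
var factory = new DuplexChannelFactory<IStock>(
    context,
    new NetTcpBinding(),
    new EndpointAddress("net.tcp://localhost:8080/stock")); // hypothetical address

IStock proxy = factory.CreateChannel();
proxy.RegisterForQuote("MSFT");
```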
When you have to call a duplex service, use the DuplexChannelFactory<T> generic class instead of the
standard ChannelFactory<T> generic class. The DuplexChannelFactory<T> has a constructor that
expects an InstanceContext parameter. The InstanceContext class manages the local service hosting
context. You call the InstanceContext constructor with an instance of the class that you want to host,
which in this case is the StockCallback class.
If you use the Add Service Reference dialog box of Visual Studio 2012 to add a reference to a duplex
service, the generated proxy includes a constructor method that expects an InstanceContext
object.
The only part that remains is to verify that you are using a binding that supports duplex communication.
TCP and named pipes support duplex communication, but the HTTP and UDP transports cannot handle
duplex communication. If you want to use duplex communication with HTTP, you have to use either the
WSDualHttpBinding or the NetHttpBinding.
Developing Windows Azure and Web Services 13-9
Note: The specification of HTTP, which is defined by the World Wide Web Consortium
(W3C), prevents the usage of HTTP as a duplex channel. Therefore, you cannot use standard
HTTP channels, such as those used with BasicHttpBinding and WSHttpBinding, for duplex
communication. You can use the WSDualHttpBinding with duplex communication, because this
binding creates two request-response channels: one for calling the service, and another for
receiving calls from the service. You can also use the NetHttpBinding for duplex
communication, because this binding can upgrade its HTTP channel to use WebSockets, which is
a new protocol that supports bidirectional duplex communication.
Duplex Services
http://go.microsoft.com/fwlink/?LinkID=298781&clcid=0x409
Demonstration Steps
1. Open the D:\Allfiles\Apx01\DemoFiles\DuplexServices\DuplexServices.sln solution file.
2. View the callback contract and service contract. Observe how the correlation is made between the
two contracts.
The contracts are located under the Contracts project, in the IStockCallback.cs and IStock.cs files.
3. View the service implementation. The instancing mode of the service is set to Single because the
service host directly calls the UpdateStockQuote method, and the concurrency mode is set to
Multiple to support multiple concurrent registration calls.
The service implementation is located in the StockService.cs file, under the Service project.
4. Observe the use of the GetCallbackChannel<T> generic method in the RegisterForQuote method.
The GetCallbackChannel<T> generic method retrieves the callback channel to the client and stores
the channel in the dictionary.
The lock around the dictionary access is there to prevent multiple requests from writing the same key
to the dictionary.
5. View the code in the UpdateStockQuote method, and observe how the callback channel is cast to
the ICommunicationObject interface.
6. View the service host code, and observe that it uses TCP-based endpoints because TCP supports
duplex communication.
The service host code is located in the Program.cs file under the Service project.
7. View the implementation of the callback contract in the Client project, in the StockCallback.cs file.
8. View the client proxy creation. Note the use of the DuplexChannelFactory<T> generic class
instead of the ChannelFactory<T> generic class.
9. Run both projects without debugging and wait until both console windows open.
Wait until the message "Enter stock name followed by new price. Enter Q to quit." appears in the
Service Host console window, and until the message "Press Enter when the service is ready" appears
in the Client console window.
Move to the Client console window and press Enter. Wait until the following message appears in the
window: "Waiting for stock updates. Press Enter to stop".
10. Enter stock changes in the Service Host console window, and view how the updates appear in the
Client console window.
In the Service Host console window, type MSFT 1200, and then press Enter. Verify the message
"MSFT = 1200" appears in the Client console window.
In the Service Host console window, type MSFT 1400, and then press Enter. Verify the message
"MSFT = 1400" appears in the Client console window.
Asynchronous Service Operations
If you have an operation that does blocking I/O work, you can switch it to an asynchronous service
operation. Asynchronous service operations act as follows:
1. The operation starts to execute in the context of a thread from the thread pool.
2. When the operation requires some I/O-bound work, it executes an asynchronous I/O call instead of a
synchronous I/O call.
3. As soon as the I/O call starts, the thread is released back to the thread pool during the asynchronous
I/O call. As soon as the thread is back in the thread pool, the thread can be assigned to other
incoming requests.
4. When the I/O call completes and the control is returned to the operation, a thread is requested from
the thread pool, assigned to the operation, and the operation continues its execution.
When you use the asynchronous operation call pattern with I/O-bound operations, you make better use of
the thread pool and decrease the memory consumption of your service.
You can change your service operation from synchronous to asynchronous by changing the way
the operation method is declared in the service contract and the way that you implement it in your
code.
The following code example demonstrates how to declare a service contract with an asynchronous service
operation.
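The contract declaration is elided in this copy. Based on the description that follows (IFileHandler and Task&lt;int&gt; are named later in this topic; the operation name and parameter are assumptions), it might look like:

```csharp
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IFileHandler
{
    // Returning Task<int> marks the operation as asynchronous on the
    // service-side; the WSDL still exposes it as a synchronous int operation
    [OperationContract]
    Task<int> ReadFileAsync(string fileName);
}
```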
In this example, the return type of the operation method is the Task<T> generic class. When the service
host sees that the operation uses a Task, it treats this operation as asynchronous.
Note: Although the operation contract's signature uses the Task<T> type, the contract
exposed by the service, by using the WSDL file, shows this operation as synchronous. This is
because the asynchronicity of the operation is internal to the service implementation and is
transparent to clients.
The following code example demonstrates how to implement the IFileHandler service contract.
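Only the tail of this example survives in this copy. A sketch consistent with the description below (FileStream.ReadAsync is the call named in the text; the class name, buffer handling, and the meaning of the returned value are assumptions) might be:

```csharp
using System.IO;
using System.Threading.Tasks;

public class FileHandlerService : IFileHandler
{
    public async Task<int> ReadFileAsync(string fileName)
    {
        byte[] buffer = new byte[4096];
        int result = 0; // hypothetical: total number of bytes read
        using (var stream = new FileStream(fileName, FileMode.Open,
            FileAccess.Read, FileShare.Read, 4096, useAsync: true))
        {
            int bytesRead;
            // The thread is released back to the thread pool while awaiting the I/O
            while ((bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                result += bytesRead;
            }
        }
        return result;
    }
}
```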
To implement the asynchronous service operation, you use the async keyword in the signature of the
method, and in the implementation, you mark each asynchronous call with the await keyword.
When the await keyword is encountered during execution, the thread is released and returned to the
thread pool, until the asynchronous operation is complete. In this sample, the FileStream.ReadAsync
method starts the asynchronous I/O call and returns an instance of the Task<T> class.
Note: Before WCF 4.5, asynchronous service operations were implemented by creating two
methods: BeginXXX and EndXXX, marking the BeginXXX method with the
[OperationContract] attribute, and setting the AsyncPattern parameter of the attribute to true.
This declaration technique, although not obsolete, is more difficult to implement.
For additional information about the different asynchronous programming patterns that you can use in
the .NET Framework, see:
All the built-in Stream types in the .NET Framework, such as FileStream and NetworkStream, expose
asynchronous methods for both reading and writing that return a Task object. ADO.NET also provides
asynchronous operations with the DbCommand class. For example, although the
DbCommand.ExecuteNonQuery method is a synchronous blocking call, the
DbCommand.ExecuteNonQueryAsync method provides a nonblocking asynchronous call that you can
use to run lengthy SQL statements in the database.
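As a minimal sketch (the helper method is hypothetical; DbCommand and ExecuteNonQueryAsync are the types and methods named above), a non-blocking command execution might look like:

```csharp
using System.Data.Common;
using System.Threading.Tasks;

public static class CommandRunner
{
    // Executes a lengthy SQL statement without blocking the calling
    // thread while the database does the work
    public static async Task<int> ExecuteAsync(DbCommand command)
    {
        return await command.ExecuteNonQueryAsync();
    }
}
```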
If your service operation has to call another WCF service, and you do not want to block the thread while
you wait for the other service to respond, you can use asynchronous client calls. You can generate a client
proxy with asynchronous methods with the Add Service Reference dialog box of Visual Studio 2012.
When you add a service reference in the Add Service Reference dialog box, click Advanced, and then
select the Allow generation of asynchronous operations check box.
The following code example demonstrates how to call a WCF service by using an asynchronous WCF
client call.
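The call itself is elided in this copy. Assuming a generated proxy class (the class and variable names are hypothetical; ProcessDataAsync is the generated method mentioned below), the asynchronous client call might look like:

```csharp
using System;
using System.Threading.Tasks;

public static async Task CallServiceAsync(ProcessingServiceClient client, string data)
{
    // ProcessDataAsync returns a Task<T>, so it can be awaited; the calling
    // thread is released while waiting for the service response
    var result = await client.ProcessDataAsync(data);
    Console.WriteLine(result);
}
```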
In this example, the asynchronous ProcessDataAsync method was generated in addition to the
synchronous ProcessData method. You can use the same async/await pattern here as you would use it
when working with streams, because asynchronous WCF client calls also return a Task<T> instance.
For additional information about the async and await keywords, see:
For additional information about how to create asynchronous service and client operations, see:
If you choose to use channel factories instead of generating proxy classes, and the service contract
interface that you have contains only synchronous operations, you have to create a new service contract
interface that returns Task<T>, replacing the T generic type with the current return type of the
method. If the method returns void, substitute it with the Task return type. The IFileHandler service
contract shown at the beginning of this topic is the result of replacing a synchronous method signature
returning an int value with an asynchronous method signature returning Task<int>.
Note: When you use client-side asynchronous operations, the operations on the service-
side remain synchronous, unless they are also replaced with asynchronous operations
(which requires changing the implementation of the service as well).
Lesson 2
Handling Distributed Transactions
The most commonly used type of transaction is the database transaction. When you use a database, you can start a
transaction, execute several insert/update/delete commands, and then choose whether to commit the
transaction or roll it back, canceling the changes that you made. Although you might think
of transactions as being relevant only to databases, there are other transactional resources that you can
manage, such as the file system of your computer and registry settings.
A single transaction spans only one resource (for example, a specific database), whereas a distributed transaction
can span multiple resources, such as two databases, or a database and the file system of a machine.
In distributed transactions, the scope of the transaction is not limited to a single resource or a chain of resources.
Instead, the distributed transaction spans multiple systems, coordinating the work that is performed on
the client-side and service-side, to create a transaction that can even span multiple databases in different
networks. For example, a client can start a distributed transaction and call two WCF services, each working
in an internal transaction. If the second service call fails, the distributed transaction rolls back, causing
the transaction in the first service to roll back as well.
In this lesson, you will learn how to configure your service to support distributed transactions, and how to
use transactions in your service implementation. You will also learn how to call a service from the client
while in a distributed transaction.
Lesson Objectives
After completing this lesson, you will be able to:
Describe what a distributed transaction is, and how the two-phase commit protocol works.
Note: The System.Transactions.TransactionScope class is responsible for locally managing
the transaction and communicating with the DTC. You have already used this class to create
local transactions with Entity Framework in Module 2, Querying and Manipulating Data Using
Entity Framework, Lesson 4, Manipulating Data, in Course 20487. This class can also handle
distributed transactions.
When the computer that started the distributed transaction finishes its work, its coordinator starts the
distributed transaction commit process by using a two-phase commit protocol.
1. The client application asks its transaction coordinator to commit the distributed transaction.
2. The coordinator of the client sends a commit preparation request to the coordinator of the service.
3. The coordinator of the service instructs the service to prepare itself to commit.
4. If the service can commit the transaction, an approval message is sent to the coordinator of the client.
5. After the coordinator of the client makes sure both parties are prepared to commit, it sends the client
application a request to commit the transaction.
6. In addition to sending the client a request, the coordinator also sends a request to commit the
transaction to the coordinator of the service.
7. The coordinator of the service passes the request to commit the transaction to the service
implementation.
For a distributed transaction to work, both computers must have a DTC installed. In Windows operating
systems, the Microsoft DTC (MSDTC) service processes distributed transactions. Therefore, you have to
make sure that the DTC service is installed and is in the Started state. To verify this, open the services list
by clicking the Administrative Tools tile on the Start screen, and then opening Services.
WCF supports distributed transactions in the following modes:
OLE Transactions (OleTx). The OleTx protocol is a Microsoft transaction protocol that is supported
by clients and services developed in the .NET Framework, and by MSDTC. OleTx uses Remote
Procedure Calls (RPC) to coordinate the transaction between the client and the service and therefore
is not interoperable. OleTx is the default transaction mode of WCF when using non-interoperable
bindings such as the NetTcpBinding and NetNamedPipeBinding.
WS-Atomic Transaction (WS-AT). The WS-AT protocol is an interoperable transaction protocol
based on SOAP messages, so it can coordinate transactions with parties that are not implemented
with the .NET Framework and MSDTC. WCF uses WS-AT with interoperable bindings such as the
WSHttpBinding.
Configuring non-interoperable bindings to use WS-AT is useful (for example, when OleTx RPC calls
are blocked because of firewall rules).
If you plan on using WS-AT with MSDTC, you have to configure MSDTC. For the MSDTC configuration
steps required for using WS-AT, see:
Configuring WS-Atomic Transaction Support
http://go.microsoft.com/fwlink/?LinkID=298786&clcid=0x409
WCF prefers to use OleTx when possible, because it provides better performance than WS-AT and requires
less configuration in MSDTC. Even if your binding is configured for WS-AT, WCF tries to upgrade the
transaction flow to OleTx, if possible.
You can change the way WCF coordinates transactions by changing the binding configuration of your
service endpoints. This setting will be discussed later in this lesson. Before you decide to use distributed
transactions in your services, remember that transactions have coupling and performance costs caused by
resource locks and additional message traffic. One way to avoid this coupling is to use mechanisms other
than transactions for undoing changes, such as compensations, as explained in the following link.
Compensation vs. Transactions
http://go.microsoft.com/fwlink/?LinkID=298787&clcid=0x409
[ServiceContract]
public interface ITransferService // hypothetical contract name
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    bool Transfer2(Account from, Account to, decimal amount);
}
You can use the TransactionFlowOption enumeration to determine whether a transaction can flow from
the client to the service. The possible values for this enumeration are as follows:
NotAllowed. The transactions of the client may not flow to the service. If the client sends a request
with transaction information, the request is rejected with a protocol exception. This is the default
setting for the [TransactionFlow] attribute.
Allowed. The transactions of the client may flow to the service. If the client sends a request with
transaction information, the service can use the transaction; however, the client can also call the
service without a transaction.
Mandatory. The transactions of the client must flow to the service. If a request is made from a client
without transaction information, the request is rejected with a protocol exception.
The [TransactionFlow] attribute controls only whether transactions can flow into the service. It does
not state anything about whether and how the service uses the transaction. That depends on the service
implementation.
To enable transaction flow through your service, you have to set the TransactionFlow setting of your
binding to true. The TransactionFlow setting can be applied to any binding that supports SOAP headers.
For example, you can set it with WSHttpBinding and NetTcpBinding, but you cannot set
TransactionFlow in BasicHttpBinding.
The following code example demonstrates how you can configure a NetTcpBinding to enable
transaction flow.
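The configuration example does not survive in this copy; a sketch of a NetTcpBinding configured for transaction flow (the binding name is arbitrary) might be:

```xml
<bindings>
  <netTcpBinding>
    <!-- transactionFlow="true" enables flowing the client's transaction
         into the service over this binding -->
    <binding name="TransactionalTcp"
             transactionFlow="true"
             transactionProtocol="OleTransactions"/>
  </netTcpBinding>
</bindings>
```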
In some supporting bindings, such as NetTcpBinding, you can also set the transactionProtocol attribute
to configure which protocol WCF uses to coordinate the transaction: OleTransactions,
WSAtomicTransactionOctober2004, or WSAtomicTransaction11. The first protocol uses the OleTx
transaction protocol, and the latter two protocols use different versions of the WS-AT standard.
WSAtomicTransactionOctober2004 uses version 1.0 from October 2004 and WSAtomicTransaction11
uses version 1.1 from July 2007.
The following code example demonstrates how you can use the <transactionFlow> binding element if
you build your own custom binding.
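A sketch of such a custom binding (the element order follows the usual custom binding layout; the surrounding encoding and transport elements are illustrative) might be:

```xml
<bindings>
  <customBinding>
    <binding name="transactionalBinding">
      <!-- The transactionFlow binding element enables transaction flow
           and selects the coordination protocol -->
      <transactionFlow transactionProtocol="WSAtomicTransaction11"/>
      <binaryMessageEncoding/>
      <tcpTransport/>
    </binding>
  </customBinding>
</bindings>
```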
The transaction flow setting should also appear in your client-side binding configuration. If you use
the Add Service Reference dialog box of Visual Studio 2012, this setting is added automatically to the
client configuration. If you use the ChannelFactory<T> generic class and manually create the client
configuration file, you have to add this setting yourself.
Note: If you recall the DTC two-phase commit diagram, when the coordinator for the
service checks that the service can complete its transaction, it actually checks that the transaction
scope was marked as completed.
When you use transactions in your service implementation, you have to specify how scopes are created
and used, and when a scope is marked as complete. You can control these settings by using the
[ServiceBehavior] attribute and the [OperationBehavior] attribute.
The following code example demonstrates how you can set transaction-related settings by using the
[ServiceBehavior] attribute.
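The example itself is elided in this copy; a sketch of transaction-related settings on the [ServiceBehavior] attribute (the property values shown are illustrative, not recommendations) might be:

```csharp
using System.ServiceModel;
using System.Transactions;

[ServiceBehavior(
    TransactionIsolationLevel = IsolationLevel.Serializable, // isolation for service-created scopes
    TransactionTimeout = "00:00:30",                          // abort transactions that run longer
    TransactionAutoCompleteOnSessionClose = false)]
public class TransferService : ITransferService
{
    // Service implementation goes here
}
```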
In addition to setting your service behavior, you can configure each of your operations to define how they
handle transactions.
The following code example demonstrates how you can use the [OperationBehavior] attribute to
configure transactions in service operations.
[OperationBehavior(TransactionScopeRequired = true,
    TransactionAutoComplete = false)]
public bool Transfer1(Account from, Account to, decimal amount)
{
bool result = true;
// Transactional database code goes here
OperationContext.Current.SetTransactionComplete();
return result;
}
Setting the TransactionScopeRequired property to true indicates that the operation requires a
transaction for its work. If a transaction does not flow from the client, the service creates its own
scope. Whether the service joins the client's transaction or creates a new scope depends on how you
have set the [TransactionFlow] attribute in your contract, and on whether the client flows a transaction
into the service. The default value of the TransactionScopeRequired property is false.
The following table describes the different scenarios of configuring the TransactionScopeRequired
property, and how it is affected by transactions that flow from the client.
TransactionScopeRequired | TransactionFlow | Client passes transaction | Result
Note: Incorrect settings that may throw exceptions are omitted from the table. For
example, a service throws an exception if a transaction scope is required and the transaction flow
is mandatory but the client did not pass a transaction to the service.
When an operation uses a transaction scope, you can mark it as complete in one of two ways:
Set the transaction completion in code. If the TransactionAutoComplete property is set to false, you
can set the transaction to complete by calling the SetTransactionComplete method, as shown in the
earlier example. You have to manually complete a transaction when certain execution paths require
the transaction to be aborted.
Let the transaction complete automatically. If the TransactionAutoComplete property is set to true
(the default), the transaction scope is automatically marked as complete when the operation returns
successfully. If the operation throws an unhandled exception, the transaction is aborted.
To call a WCF service within a transaction, you have to wrap your code in a transaction scope, and then
set the transaction to a completed state before ending it.
The following code example demonstrates an example of how to use a distributed transaction on the
client.
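The client-side example does not survive in this copy. A sketch of the pattern described (the proxy and account variables are hypothetical; Transfer2 is the operation from the contract shown earlier; TransactionScope is from System.Transactions) might be:

```csharp
using System.Transactions;

using (var scope = new TransactionScope())
{
    // Both service calls enlist in the same distributed transaction
    proxy.Transfer2(accountA, accountB, 100m);
    proxy.Transfer2(accountB, accountC, 50m);

    // Mark the client's part of the transaction as complete. If Complete
    // is not called, the transaction rolls back when the scope is disposed
    scope.Complete();
}
```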
When you create a transaction scope, you can involve several services in one distributed transaction.
Because all participating parties in a distributed transaction have to mark the transaction as completed,
the client also must do so by calling the TransactionScope.Complete method.
Demonstration Steps
1. Open the D:\Allfiles\Apx01\DemoFiles\Transactions\Transactions.sln solution file and observe the
client-side code.
Open the Program.cs file from the Client project and examine the contents of the Main method.
Observe how the client application first tests a successful transaction, and then tests an unsuccessful
transaction.
2. View the service contract and the use of the [TransactionFlow] attribute. Observe the use of the
different flow levels (Mandatory and Allowed).
3. In the Service project, open the App.config file, and view the binding configuration where the
transactionFlow attribute is set to true.
4. Open the TransferService.cs from the Service project and inspect the service implementation.
View the [OperationBehavior] attribute decorating the Transfer method. Note how the
TransactionScopeRequired and TransactionAutoComplete parameters are set to true.
5. Observe how Entity Framework is used in the Transfer method. The transaction started by the
SaveChanges method is promoted to a distributed transaction.
6. In the Client project, open the Program.cs file. View how the channel factory is created and how the
binding is configured to flow the transaction from the client to the service. Note that in the first
transaction scope block, the Complete method is called, but in the second transaction scope block it
is not called.
7. Run both client and service projects, and view the printouts in the Service console window. The first
distributed transaction is committed, and the second distributed transaction rolls back.
Lesson 3
Extending the WCF Pipeline
When a WCF service receives a message, the message travels a long way through various parts of the WCF
infrastructure until it reaches the service operation to which it is addressed. Most of the work performed
on the message, such as inspecting it, extracting information from it, and deserializing it, is performed
by built-in mechanisms of WCF. Nevertheless, you can add custom message handling implementation to
different parts of the infrastructure. As soon as you gain access to the message, WCF offers you an easy
application programming interface (API) to examine and change the message contents, and to store relevant
message information for you to use later when processing the message in the operations of the service.
In this lesson, you will learn how WCF controls the behavior of the service and its components, and how
you can customize this behavior to suit your needs. You will also learn how you can inspect messages and
save state information by using various techniques.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the architecture of the WCF pipeline.
Apply custom runtime components to the WCF pipeline by using custom behaviors.
Attach custom behaviors to services, endpoints, contracts, and operations.
Create and use extensions objects.
After all the protocols in the channel stack have verified the message, the message has several more steps
to pass before reaching the service implementation. For example, some part of the pipeline has to
deserialize the message to the appropriate common language runtime (CLR) types, and inspect it to
determine which service method to invoke and which service instance to use.
These decisions and actions are performed by the dispatchers, which are the components responsible for
translating the content of the message to a method call. The way the dispatcher is constructed is defined
by behaviors: service behaviors, endpoint behaviors, contract behaviors, and operation behaviors.
Instancing, serialization, throttling, and authorization are only a few examples of the different behaviors
that control dispatcher operations.
The throttling behavior is not covered in this course. However, you should become familiar with it as it can
help you control your service's performance.
For more information about this service behavior and how to configure it, see:
You define and configure behaviors by using code or configuration. For example, adding the
<serviceMetadata> element under the <serviceBehavior> element in the service's configuration file
adds the metadata publishing behavior to your service. The [OperationBehavior] attribute decorates
your service operations and configures how transaction scopes are created. Each of these behaviors changes
the operation of the dispatcher.
In addition to using the behaviors that are built into WCF, you can write custom behaviors. With custom
behaviors, you can add message handling logic, and change the way messages are processed in the dispatcher.
For example, you can change the way errors are handled and logged, or provide custom message
validation.
You can apply custom behaviors, just like standard behaviors, to your services, endpoints, contracts, and
operations, by using either code or configuration. The ability to extend dispatchers by using custom
behaviors is the most important extensibility point in WCF.
The transport channel, which is responsible for taking the message and transmitting it to the client,
contains both the encoder binding element and the transport binding element. For example, if the
binding elements are HTTP transport, text encoding, and security and reliability protocols, the channels
that are used are as follows: protocol channel (reliability element), protocol channel (security element),
and transport channel (HTTP and text elements). When a message is sent, it passes from the outermost
channel to the transport channel. The binding you select for your endpoint is a representation of a
composition of channels.
In WCF, you can extend both the Channel Stack and the Dispatchers by creating new channels and
runtime components. For example, you can create new transport channels for protocols that are not
currently supported by WCF, such as the Pragmatic General Multicast (PGM) protocol.
Creating new channels is an advanced topic and will not be covered in this course. However, the other
topics in this lesson will explain in depth how to extend the Dispatchers.
Channel Dispatcher
The ChannelDispatcher, which is created by the service host, listens to messages on a specific channel,
and associates messages from the channel with specific endpoints by using the appropriate endpoint
dispatcher.
When a service host opens, it creates a ChannelDispatcher object for every combination of Uniform
Resource Identifier (URI) and binding elements, such as transport, encoding, and protocols, to which it
listens. For example, if a host is listening on its base address for service metadata requests and has two
more endpoints with WSHttpBinding and NetTcpBinding bindings, then it has three channel
dispatchers: one channel dispatcher for the base address, one for the WSHttpBinding binding, and
another for the NetTcpBinding binding.
A single ChannelDispatcher can be created for more than one endpoint. For example, if your service has
endpoints for two contracts of the service, and both endpoints use the same binding and same address,
then your host creates a single ChannelDispatcher for both endpoints. When a message is received by
the ChannelDispatcher, it checks the message to see which endpoint should receive the message.
Each channel dispatcher contains information about the URI on which it listens, its channel stack, and the
endpoints (represented by EndpointDispatcher objects) that are used by this channel. When the channel
dispatcher receives a message, the dispatcher passes the message to the endpoint dispatchers that it
contains, to see which of them can handle the message. When the matching endpoint dispatcher
is found, the channel dispatcher passes the message to it for processing.
The channel dispatcher is also responsible for various behaviors of the service, such as error handling,
throttling, and time-out settings for receiving and sending messages. For example, you can extend the
channel dispatcher by creating a new error handler that logs all uncaught exceptions to a log file.
13-24 Appendix A: Designing and Extending WCF Services
Endpoint Dispatcher
The EndpointDispatcher class is responsible for receiving messages that are sent to a specific endpoint,
and then passing them to the appropriate service operation by using a DispatchOperation object. The
endpoint dispatcher receives a message from the channel dispatcher, and then checks whether the
message was sent to it by examining the To address and the Action element specified in the message
headers. The check is performed by using the AddressFilter and the ContractFilter properties. After the
endpoint dispatcher accepts a message, it passes it to its DispatchRuntime object, which in turn passes
the message to the relevant DispatchOperation object, which is responsible for invoking the service
instance method.
You can extend the endpoint dispatcher by supplying your own implementations of the address and
contract filters. For example, you can create a new address filter that ignores the host name in the
address, to support services that are hosted behind load balancers.
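A minimal sketch of replacing the address filter from inside a custom endpoint behavior (the behavior class name is illustrative; the endpoint behavior mechanism itself is described later in this lesson):

```csharp
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class IgnoreHostNameBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
        EndpointDispatcher endpointDispatcher)
    {
        // MatchAllMessageFilter accepts any To address, so the host name the
        // client used (for example, a load balancer's address) is ignored
        endpointDispatcher.AddressFilter = new MatchAllMessageFilter();
    }

    // The remaining interface members are not needed for this scenario
    public void AddBindingParameters(ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint,
        ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}
```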
Dispatch Runtime
Each endpoint dispatcher holds a DispatchRuntime object, which is responsible for selecting the most
suitable DispatchOperation object according to the content of the message. In addition to selecting the
DispatchOperation object, the dispatch runtime is also responsible for the following tasks:
Performing message inspection by using custom message inspectors.
Applying the role provider and authorization manager on the service instance.
Note: The role provider and authorization manager will be discussed in Appendix B,
"Implementing Security in WCF Services," in Course 20487.
You can extend the dispatch runtime and add your own custom processing by adding message inspectors
and instancing providers to the dispatcher. The extensions that you add to the dispatch runtime affect all
the messages that relate to the endpoint that contains the dispatch runtime. For example, you can use the
dispatch runtime to add a custom message validation that guarantees that messages directed to a specific
endpoint contain a custom SOAP header. Or you can add a special service instance provider that supplies
a pool of service instances instead of using the default service instance creation techniques that the
dispatch runtime offers (per-call, per-session, and single).
Dispatch Operation
The DispatchOperation object is responsible for invoking the service method that is specified in the
message, and passing the parameters contained in the message to it. To do this, the DispatchOperation
object performs the following tasks:
1. Obtains the service implementation instance from the dispatch runtime.
2. Deserializes the message to the matching Common Language Runtime (CLR) types.
You can customize and extend each of these tasks. For example, you can add a custom parameter
inspector that validates CLR data after it is deserialized, to check for irregular values such as excessive
string lengths or out-of-range integer values.
Note: You can also use message inspectors to validate data. However, it is more difficult
than using the parameter inspector, because message inspectors are called when the message is
still in XML format. The XML format requires more work to parse and validate specific parts of the
message.
By extending the dispatch operation, you can add custom processing at the operation level. This affects
all the messages sent to a specific operation that is specified in the contract, regardless of the endpoint
through which the message was received.
Client-Side Dispatchers
The dispatcher components that were mentioned earlier are used on the service side. On the client side,
other components are responsible for creating and handling messages that are sent to the service. The
proxies, generated in WCF clients by the Add Service Reference dialog box of Visual Studio 2012 and by
the ChannelFactory<T> generic class, are responsible for converting method calls and parameters to
outgoing messages, and for converting response messages back to return values.
When the client calls an operation, such as a proxy method, the ClientOperation object, which is a part
of the proxy, takes the parameters that are sent to the method, inspects them, and then serializes the
parameters to a message. After the client operation finishes building the message, it sends it to the
ClientRuntime object. The ClientRuntime object then inspects the message, creates the outgoing
channel, and passes the message to the channel and from there to the service.
You can use the ClientRuntime and the ClientOperation objects to customize the client-side behavior of
your proxy. You can customize the ClientRuntime to change the behavior of all the operations in the
contract that are exposed by the proxy, whereas the ClientOperation offers extensibility for a specific
operation. For example, you can build a message inspector that adds a custom SOAP header to all
outgoing messages, and then put this inspector in the MessageInspectors collection of the
ClientRuntime object.
For additional information about the client runtime, see:
Extending Clients
http://go.microsoft.com/fwlink/?LinkID=298789&clcid=0x409
Parameter inspectors. You can use parameter inspectors to check the value of parameters before
invoking the operation's method. You can use them to validate constraints on your data types, such
as maximum length of strings or the size of arrays. To implement a parameter inspector, you have to
create a class that implements the IParameterInspector interface. You can add the parameter
inspector component by adding it to the ParameterInspectors collection of the ClientOperation
object (on the client-side) or the DispatchOperation object (on the service-side).
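For example, a parameter inspector that rejects overly long string parameters might look like the following sketch (the class name, length limit, and exception type are illustrative):

```csharp
using System;
using System.ServiceModel.Dispatcher;

public class StringLengthInspector : IParameterInspector
{
    // Called before the operation method is invoked
    public object BeforeCall(string operationName, object[] inputs)
    {
        foreach (object input in inputs)
        {
            string s = input as string;
            if (s != null && s.Length > 100)
                throw new ArgumentException("String parameter is too long");
        }
        return null; // correlation state, passed to AfterCall
    }

    // Called after the operation method returns
    public void AfterCall(string operationName, object[] outputs,
        object returnValue, object correlationState) { }
}
```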
Message formatters. You can use message formatters to customize how messages are serialized and
deserialized: on the service-side, how incoming messages deserialize to parameters and how return
values serialize to response messages; on the client-side, how parameters serialize to messages and how
response messages deserialize to return values. If you want to build a custom message formatter,
implement either the IDispatchMessageFormatter interface or the IClientMessageFormatter interface.
IDispatchMessageFormatter is required for customizing the service-side, and IClientMessageFormatter
is required for customizing the client-side. You can apply this runtime component by setting the
Formatter property of your DispatchOperation object on the service-side, or of the ClientOperation
object on the client-side.
Message inspectors. You can use message inspectors to validate and extract information that is
contained inside the message sent from a client to the service, and inside the message sent from the
service to the client. You can implement the inspectors on either side: client or service. If you want to
build an inspector that you use in the service, you have to implement the
IDispatchMessageInspector interface. If you want to build an inspector for the client-side, you have
to implement the IClientMessageInspector interface. To use the message inspector runtime
component, add it to the MessageInspectors collection of your DispatchRuntime object on the
service-side, or of the ClientRuntime object on the client-side. You can have multiple custom
message inspectors in your service pipeline, each handling a different aspect of the message. For
example, you can have one message inspector to verify the presence of a SOAP header, and another
message inspector that adds missing elements in messages sent by older versions of the clients.
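A sketch of a service-side inspector that verifies the presence of a SOAP header (the header name and namespace are illustrative assumptions):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class HeaderCheckInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // FindHeader returns -1 when the header is not present
        int headerIndex = request.Headers.FindHeader(
            "ClientVersion", "http://blueyonder.example");
        if (headerIndex < 0)
            throw new FaultException("Missing ClientVersion header");
        return null; // correlation state, passed to BeforeSendReply
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```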
Operation selectors. You can create custom operation selectors to change the way the dispatch
runtime finds the dispatch operation that can process the incoming message. You can use operation
selectors when you have a new version of a service that has changes to operation names, and you
want the service to be backward compatible with older clients that still use the old operation names.
If you want to build a selector for the service-side, you have to implement the
IDispatchOperationSelector interface. If you want to build a selector for the client-side, you have to
implement the IClientOperationSelector interface. You can use the client-side operation selector to
map between proxy methods and service operations. You can apply this runtime component by
setting the OperationSelector property of your DispatchRuntime object on the service-side, or of
the ClientRuntime object on the client-side.
Operation invokers. You can change the way the operation invocations translate to method calls by
using the operation invoker runtime component. For example, you can change the order of
parameters sent to the method, or log each method call. The operation invoker runtime component
can only be applied on the service-side. To build an operation invoker, you have to implement the
IOperationInvoker interface, and apply it in your service by setting the Invoker property of your
DispatchOperation object.
Error handlers. You can create custom error handlers to extend the way WCF handles exceptions. For
example, you might want to catch every exception and log it to the database, or replace any
unhandled exception with a general fault message that contains the message "Please call the help
desk at (555) 555-5555 to report this problem". To create a custom error handler, you have to
implement the IErrorHandler interface, and apply it in your service by adding it to the
ErrorHandlers collection of the ChannelDispatcher object. Error handlers can only be applied on
the service-side, and you can have multiple error handlers in the same channel. For example, you can
create one error handler that is specific to Structured Query Language (SQL) Server exceptions, and
another error handler to handle other kinds of exceptions.
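A minimal error handler sketch (the class name and the logging target are illustrative):

```csharp
using System;
using System.Diagnostics;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingErrorHandler : IErrorHandler
{
    // Called after the fault message is returned to the client; log here
    public bool HandleError(Exception error)
    {
        Trace.TraceError("Unhandled exception: {0}", error);
        return true; // true marks the exception as handled
    }

    // Can replace the outgoing fault message; left unchanged in this sketch
    public void ProvideFault(Exception error, MessageVersion version,
        ref Message fault) { }
}
```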
The following code example demonstrates how to create a custom operation invoker.
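The listing below is a sketch of such an invoker, reconstructed from the description that follows. The class and field names (TimingOperationInvoker, _previousInvoker) match the demonstration later in this lesson, and the console message format follows the demonstration steps, but the exact original listing may differ:

```csharp
using System;
using System.Diagnostics;
using System.ServiceModel.Dispatcher;

public class TimingOperationInvoker : IOperationInvoker
{
    private readonly IOperationInvoker _previousInvoker;

    // Receives the invoker currently attached to the DispatchOperation
    public TimingOperationInvoker(IOperationInvoker previousInvoker)
    {
        _previousInvoker = previousInvoker;
    }

    public object[] AllocateInputs()
    {
        return _previousInvoker.AllocateInputs();
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        // Measure how long the underlying invocation takes
        Stopwatch stopwatch = Stopwatch.StartNew();
        object result = _previousInvoker.Invoke(instance, inputs, out outputs);
        stopwatch.Stop();
        Console.WriteLine("Operation took {0} milliseconds to execute",
            stopwatch.ElapsedMilliseconds);
        return result;
    }

    // Delegate the asynchronous invocation pattern to the previous invoker
    public IAsyncResult InvokeBegin(object instance, object[] inputs,
        AsyncCallback callback, object state)
    {
        return _previousInvoker.InvokeBegin(instance, inputs, callback, state);
    }

    public object InvokeEnd(object instance, out object[] outputs,
        IAsyncResult result)
    {
        return _previousInvoker.InvokeEnd(instance, out outputs, result);
    }

    public bool IsSynchronous
    {
        get { return _previousInvoker.IsSynchronous; }
    }
}
```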
As you can see in this example, the operation invoker has a constructor that receives the previous invoker.
The previous invoker is the invoker currently used by the DispatchOperation. If this is the first custom
operation invoker attached to the pipeline, the previous invoker is the default operation invoker of WCF.
The previous invoker contains the code required to invoke the service method, pass it the parameters it
requires, and return its return value.
Note: If you create several operation invokers for an operation, you must connect each of
them by sending each invoker to the constructor of its successor. The last custom operation
invoker receives the default operation invoker to its constructor. For example, you can have an
operation invoker that logs the result of each service method, followed by an operation invoker
that calculates the time of each method execution, followed by the default operation invoker that
performs the actual invocation of the service method.
Throughout the code, the previous invoker is used to perform the actual invocation of the service
method. The custom invoker adds code before and after invoking the service method, to calculate how
much time it took for the service method to execute.
The IOperationInvoker interface also defines members for asynchronous invocation:
IsSynchronous. This property returns a value that specifies whether the operation is to be invoked
synchronously or asynchronously.
InvokeBegin and InvokeEnd. These methods are called if the operation is to be invoked
asynchronously.
When you build a custom behavior, you have to decide to which scope you want to apply the behavior.
For example, you can build a custom behavior and attach it to a specific endpoint so that it only applies to
the operations of that endpoint. Or you can build a custom behavior and attach it to the whole service so
that it applies to all the operations in all the different endpoints exposed by the service.
Service behaviors. By creating a service behavior and applying it to your service, you can change the
runtime components of all the operations in all the endpoints of your service. For example, if you
want to add a custom error handler runtime component that handles exceptions from all the service
operations, you have to add it using a service behavior. Only service behaviors have access to the
channel dispatcher where this runtime component is declared. In addition to customizing the runtime
components, you can also customize the service host itself. To create a custom service behavior, you
have to implement the IServiceBehavior interface. You can apply service behaviors to a service by
creating the custom behavior as a custom attribute and then adding it on the service implementation.
Or you can add the behavior to the service's behavior configuration in the configuration file.
Endpoint behaviors. You can use endpoint behaviors to apply changes to a specific endpoint and its
operations. Building an endpoint behavior instead of a service behavior is useful if you only have to
customize the runtime components of a specific endpoint. For example, you can take a message
inspector runtime component that performs message logging, create an endpoint behavior for it, and
only apply the behavior to endpoints that do not require clients to authenticate. To create a custom
endpoint behavior, you have to implement the IEndpointBehavior interface. You cannot apply
endpoint behaviors by using attributes, because they must be applied directly to an endpoint
declaration. Endpoint behaviors can be added to the endpoint's behavior configuration in the
configuration file.
Contract behaviors. When using a contract behavior, you can apply changes to all the operations of
a specific contract, regardless of the endpoint in which it is declared. To create a custom contract
behavior, you have to implement the IContractBehavior interface. You can apply contract behaviors
to your contract or service implementation only by using custom attributes, because there is no
contract configuration in the configuration file.
Operation behaviors. You can use an operation behavior to change the runtime components that
are used for a specific operation, regardless of the endpoint dispatcher through which it was invoked.
For example, you can create a custom operation invoker that logs the duration of time it takes an
operation to execute, and then apply it to several operations while testing the service. To create a
custom operation behavior, you have to implement the IOperationBehavior interface. Like the
contract behavior, operation behaviors can only be attached to an operation by using custom
attributes.
The following table lists which behavior type is most suitable to the scope that you want to control with
the custom runtime component.

Behavior type Scope
Service behavior All the messages sent to the service, through any of its endpoints
Endpoint behavior All the messages sent to a specific endpoint and its operations
Contract behavior All the messages sent to a specific contract in the service,
regardless of the endpoint that exposes it
Operation behavior All the messages sent to a specific operation, regardless of the
endpoint that exposes it
Each interface mentioned earlier contains the ApplyDispatchBehavior method, which gives you access to
the dispatchers so that you can apply custom runtime components to them. If you want to customize the
runtime components on the client-side, you can create an endpoint behavior, a contract behavior, or an
operation behavior. Each behavior contains the ApplyClientBehavior method, which you can use to gain
access to either the client runtime or the client operation to apply the necessary runtime component to
them.
The following code example demonstrates how to build a custom operation behavior that attaches the
custom operation invoker shown in the previous code example, "Creating a Custom Operation Invoker".
public void ApplyDispatchBehavior(
    OperationDescription operationDescription,
    System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
{
    dispatchOperation.Invoker = new
        TimingOperationInvoker(dispatchOperation.Invoker);
}
You can create operation behaviors for either the client side or the service side. This is why the interface
exposes both the ApplyClientBehavior and ApplyDispatchBehavior methods. In the previous example,
the operation invoker is used only in the service, which is why the ApplyClientBehavior method is not
implemented. By implementing the ApplyDispatchBehavior method, you can customize the runtime
components used by the operation dispatcher. In this example, the method is used to plug in the new
operation invoker by wrapping the default invoker supplied by the WCF infrastructure.
To build your custom behavior so that you can use it as a custom attribute, you have to derive your
custom behavior from the Attribute class.
The following code example demonstrates how to create a custom behavior by using a custom
attribute.
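A sketch of the attribute class, assuming the TimingOperationInvoker class shown earlier; the class name follows the note below, and the [AttributeUsage] target is an assumption:

```csharp
using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

[AttributeUsage(AttributeTargets.Method)]
public class TimingOperationBehaviorAttribute : Attribute, IOperationBehavior
{
    // Wrap the current invoker with the timing invoker shown earlier
    public void ApplyDispatchBehavior(OperationDescription operationDescription,
        DispatchOperation dispatchOperation)
    {
        dispatchOperation.Invoker =
            new TimingOperationInvoker(dispatchOperation.Invoker);
    }

    // The remaining interface members are not needed for this behavior
    public void AddBindingParameters(OperationDescription operationDescription,
        BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(OperationDescription operationDescription,
        ClientOperation clientOperation) { }
    public void Validate(OperationDescription operationDescription) { }
}
```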
You can create any custom behavior (whether service, endpoint, contract, or operation behavior) as a
custom attribute by deriving from the Attribute class. You can also decorate the custom behavior class by
using the [AttributeUsage] attribute to specify where this attribute can be used. The rest of the
implementation of the class remains the same as it was.
Note: In the previous example, the name of the class is changed from
TimingOperationBehavior to TimingOperationBehaviorAttribute, because the
naming convention for custom attribute classes is to add the Attribute suffix to the name of the
class.
After you create the new custom attribute, all that remains is to decorate the service, contract, or
operation with the new attribute.
The following code example demonstrates how to apply the custom behavior to a service operation.
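For example (the service and operation names follow the demonstration later in this lesson; the operation body is illustrative):

```csharp
public class SimpleService : ISimpleService
{
    // The custom attribute attaches the timing behavior to this operation only
    [TimingOperationBehavior]
    public string PerformLengthyTask()
    {
        // Do some work
        return "done";
    }
}
```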
If you do not want to apply the custom behavior by using custom attributes, or if you build a custom
endpoint behavior that cannot be applied by using custom attributes, you can apply the behavior by
using the configuration. To use custom behaviors in configuration files, you have to create a class that
derives from the BehaviorExtensionElement class.
The following code example demonstrates a behavior extension element.
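A sketch of such a class, assuming an endpoint behavior named TimingEndpointBehavior (its implementation is not shown here, as the note below explains):

```csharp
using System;
using System.ServiceModel.Configuration;

public class TimingEndpointBehaviorExtensionElement : BehaviorExtensionElement
{
    // The type of the custom behavior that this configuration element creates
    public override Type BehaviorType
    {
        get { return typeof(TimingEndpointBehavior); }
    }

    // Returns a new instance of the custom behavior
    protected override object CreateBehavior()
    {
        return new TimingEndpointBehavior();
    }
}
```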
The BehaviorExtensionElement declares two abstract members that you must override when you build
your custom configuration extension:
1. BehaviorType: You must override this property to return a Type object that represents the type of
the custom behavior that this configuration element creates.
2. CreateBehavior: You must override this method to return an instance of your custom behavior.
Note: The preceding code example creates a behavior extension element class for the
TimingEndpointBehavior endpoint behavior class. The content of this class is not shown here.
However, the implementation of this class resembles the operation behavior that was
demonstrated before.
To apply this behavior in the configuration, you have to introduce this behavior as an XML element. To do
this, you have to add the newly created type to the extension element of the <system.serviceModel>
element.
The following code example demonstrates how to add the previous configuration extension to the
configuration file.
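A sketch of this registration (the extension name, namespace, and assembly name are placeholders):

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="timingBehavior"
           type="MyNamespace.TimingEndpointBehaviorExtensionElement, MyAssembly" />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>
```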
First you add the behavior extension and set its name and type by setting the name and the type
attributes. Then you can use its name to apply the custom behavior in the configuration file of your
service or in your endpoint configuration, depending on the kind of custom behavior that you create.
The following code example demonstrates how to apply the custom endpoint behavior in configuration.
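A sketch of applying the behavior (the behavior configuration name is a placeholder; the XML element name must match the name under which the behavior extension was registered):

```xml
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="timingEndpointBehavior">
        <!-- element name matches the registered behavior extension name -->
        <timingBehavior />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```

An endpoint then references this behavior configuration by setting its behaviorConfiguration attribute to "timingEndpointBehavior".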
Note: The type attribute must be set to the fully qualified name of the class, including its
assembly name. If the class is from a different assembly, you should use the fully qualified name
of the assembly.
The IExtension<T> interface defines two methods:
Attach. This method is called when the extension is added to the extensions collection of the
extensible object. You can use this method to apply the required functionality changes.
Detach. This method is called when the extension is removed from the extensions collection. Use this
method to undo the changes that you have made to the extensible object.
IContextChannel. This interface is implemented by the service and client channels. You can add
extensions to the channels to attach custom data to them, and later use that data from custom
behaviors and runtime components. For example, you can create an extension for the service-side
that keeps a list of message headers (name, namespace, and value) that are added to every outgoing
message. Because the channel can be accessed from anywhere in the service, any service operation
and runtime component can add headers to the extension object. When a message is ready to be
sent to the client, a message inspector can take all the headers from the extension, and include them
in the returned message.
In addition to adding functionality to the runtime classes by customizing the dispatchers and the host,
you can also use the extensions to hold state information that you use in your runtime components or in
your service implementation, just as the example described for the OperationContext and
IContextChannel extension objects.
For additional information about extensible objects, see:
Extensible Object
http://go.microsoft.com/fwlink/?LinkID=298790&clcid=0x409
You can use extensions to maintain state for different scopes of your service, to configure the behavior of
your service, and to add functionality to various parts of your service. For example, you can create a
service host extension that performs special processing when a host opens or closes.
The following code example demonstrates an extension that provides a singleton instance of a log
manager that can be accessed from anywhere in the service.
Creating an Extension
public class SingletonLoggerExtension : IExtension&lt;ServiceHostBase&gt;
{
    public LogManager Logger { get; private set; }

    public void Attach(ServiceHostBase owner) { Logger = new LogManager(); }
    public void Detach(ServiceHostBase owner) { Logger = null; }
}
After you attach the extension to the host, a logger instance is created and becomes available to anyone
who needs it.
Note: Using extensions for singletons gives you more flexibility than using the standard
singleton design pattern or static classes, because with extensions you can also declare
singleton objects for scopes other than the whole service. For example, you can create a
singleton extension that you apply to the operation context; all the runtime
components and the service implementation code within that context share the same
singleton object, whereas other operation contexts have their own singleton object. This
behavior can be very difficult to achieve when using the standard singleton design pattern or
when using static classes.
To add this extension to the service host, you have to use the service host's Extensions collection.
The following code example demonstrates how to add the new extension to the service host.
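A sketch of this step (the service type is illustrative; adding the extension calls its Attach method, which creates the logger):

```csharp
ServiceHost host = new ServiceHost(typeof(SimpleService));

// Attach is called here, making the logger available to the whole service
host.Extensions.Add(new SingletonLoggerExtension());
host.Open();
```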
After you add the extension to the host, you can use it anywhere you want inside the service.
The following code example demonstrates how to use this extension in a service operation.
Using an Extension
public class SimpleService : ISimpleService
{
    public string PerformLengthyTask()
    {
        string result = null;
        // Do some work, then locate the logger extension in the service host
        SingletonLoggerExtension logger =
            OperationContext.Current.Host.Extensions.Find&lt;SingletonLoggerExtension&gt;();
        // LogMessage is assumed to be defined on the LogManager class
        logger.Logger.LogMessage("PerformLengthyTask completed");
        return result;
    }
}
By using the Find<T> generic method, you can locate a specific extension inside the extension collection.
If you add several instances of the same extension type (for example, if the logger extension is added
several times to create different output files), you can use the FindAll<T> generic method, which returns
a collection of extension objects.
Demonstration Steps
1. Open the D:\Allfiles\Apx01\DemoFiles\CustomOperationInvoker\CustomOperationInvoker.sln
solution file. This solution shows how to create a custom operation invoker that prints the time taken
to execute each service operation.
2. Open the operation invoker code, and observe how the custom operation invoker uses the previous
operation invoker for the methods and properties.
In the Service project, open the TimingOperationInvoker.cs and observe how the class implements
the IOperationInvoker interface.
View the code of the constructor method. The constructor receives the previous operation invoker, in
this case, the default invoker of WCF.
Inspect the AllocateInputs, Invoke, InvokeBegin, and InvokeEnd methods, and the IsSynchronous
property. Each method calls the matching method or property from the underlying
_previousInvoker object.
3. Observe how you retrieve the extension object and log the execution time in the Invoke method by
calling the LogMessage method of the SingletonLoggerExtension extension object.
4. View the code of the SingletonLoggerExtension class, and how it implements a service host
extensible object.
5. Open the Main method and observe how the SingletonLoggerExtension instance is created and
added to the service host's Extensions collection.
6. View the implementation of the custom operation behavior (the TimingOperationBehavior class),
and observe how the new operation invoker replaces the existing one.
7. View the SimpleService class and observe how the custom operation behavior is applied to the
PerformLengthyTask operation method.
8. Run the project without debugging, open the WCF Test Client utility, and then connect to the service
by using the address http://localhost:8080/SimpleService. Verify the logger prints the execution
time in the console window after you invoke the service.
Open the WCF Test Client utility by double-clicking the WcfTestClient shortcut in the D:\AllFiles
folder.
After adding the service in the WCF Test Client, call the PerformLengthyTask method.
After you invoke the method, switch to the console window, and verify that you see the message
"Operation took XXXX milliseconds to execute" (XXXX will be replaced by the time that it took the
operation to execute).
In addition, Blue Yonder Airlines wants to enable its Blue Badge members (from the Blue Yonder Airlines
frequent flyer program) to earn miles for checked-in flights. In this lab, you will update the WCF booking
service to call the frequent flyer WCF service, and update the two databases (reservations and frequent
flyers) using a single distributed transaction.
Objectives
After completing this lab, you will be able to:
Create an error handling runtime component and apply it to a WCF service.
Lab Setup
Estimated Time: 60 minutes
Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click the 20487B-SEA-DEV-A virtual machine.
4. In the Snapshots pane, right-click the StartingImage snapshot and then click Apply.
6. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.
9. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.
In this exercise, you will create an error handler that retrieves the stored parameters, serializes them to
XML, and outputs the exception and parameter values to a log file. The last step of this exercise is to
apply these custom runtime components to your service by using the configuration file of your service
host.
2. Create a Parameter Inspector that Stores the Parameter Values in an Operation Extension
3. Create an Error Handler that Traces Parameter Values for Faulty Operations
4. Create a Custom Service Behavior for the Error Handler and Apply it to the Service
Create the Attach and Detach methods, but leave them empty, as you will not use them.
In the class, create a constructor that receives an array of objects. Store the array in a publicly
available property named Parameters.
Note: You do not have to add any code to the Attach and Detach methods, because you
only use the extension for state management, not to add functionality to the operation context.
Task 2: Create a Parameter Inspector that Stores the Parameter Values in an Operation
Extension
1. In the BlueYonder.BookingService.Implementation project, create and implement a new
parameter inspector class named ParametersInspector, under the Extensions folder.
In the BeforeCall method, store the list of parameters sent to the operation in a new extension object
of type ParametersInfo.
Add the new extension object to the current operation context's Extensions collection.
The method should return null when it completes.
Note: You have to implement the BeforeCall method to save the parameters of the
operation before the operation is invoked. You do not have to implement the AfterCall method,
because it only executes after the operation is complete without exceptions.
Task 3: Create an Error Handler that Traces Parameter Values for Faulty Operations
1. To the BlueYonder.BookingService.Implementation project, add
D:\Allfiles\Apx01\Labfiles\Assets\Extensions\ErrorLoggingUtils.cs under the Extensions folder.
Set the access modifier of the class to public and implement the IErrorHandler interface.
Add a new private field of type TraceSource to the class and name it _traceSource. Initialize the
trace source to use the trace source ErrorHandlerTrace.
Implement the ProvideFault method and leave it empty.
Implement the HandleError method by retrieving the parameters that you stored in the extension
object.
To retrieve the parameters, use the Find method of current operation context's Extensions collection.
If the parameters were found, create a string containing the type and message of the exception, and
the values of the parameters.
ParametersInfo parametersInfo =
OperationContext.Current.Extensions.Find<ParametersInfo>();
if (parametersInfo != null)
{
string message = string.Format
("Exception of type {0} occurred: {1}\n operation parameters are:\n{2}\n",
error.GetType().Name,
error.Message,
parametersInfo.Parameters.Select
(o => ErrorLoggingUtils.GetObjectAsXml(o)).Aggregate((prev, next) => prev +
"\n" + next));
_traceSource.TraceEvent(TraceEventType.Error, 0, message);
}
return true;
Note: The IErrorHandler interface provides two methods, ProvideFault and HandleError.
You can implement the ProvideFault method to provide a fault message to WCF based on the
thrown exception. The HandleError is called after WCF returns the fault message to the client so
that you can log the thrown exception without making the client wait until the logging
procedure is complete.
Task 4: Create a Custom Service Behavior for the Error Handler and Apply it to the
Service
1. In the BlueYonder.BookingService.Implementation project, create and implement a new service
behavior class named ErrorLoggingBehavior, under the Extensions folder.
Implement the IServiceBehavior interface.
Add the AddBindingParameters and Validate methods of the interface and leave them empty.
In each channel dispatcher iteration, iterate each of the endpoints, and in each endpoint, iterate the
endpoint's DispatchRuntime.Operations collection.
In each dispatch operation iteration, add a new ParametersInspector object to the operation's
ParameterInspectors collection.
Set the access modifier of the class to public and inherit it from the BehaviorExtensionElement
class.
Override the BehaviorType property to return the Type object of the ErrorLoggingBehavior
class. Use the typeof operator to get the Type object of a class.
Note: You can use a custom behavior in the configuration file only if you create a class for
its configuration element. The configuration element class has to provide two things: the kind of
the custom behavior class and an instance of it.
In the <behaviorExtensions> element, add an <add> element with the following values.
Attribute Value
name errorLoggingBehavior
type BlueYonder.BookingService.Implementation.Extensions.ErrorLoggingBehaviorExtensionElement, BlueYonder.BookingService.Implementation
Note: The type attribute must be set to the qualified name of the configuration element
class, including the name of its containing assembly. You can set the value of the name attribute
to any name that you think best represents your custom behavior.
Note: Visual Studio Intellisense uses built-in schemas to perform validations. Therefore, it
will not recognize errorLoggingBehavior behavior extension, and will display a warning. Please
disregard this warning.
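Putting the configuration steps together, the relevant section of the host's configuration file may resemble the following sketch (the anonymous service behavior is illustrative; your host may use a named behavior instead):

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="errorLoggingBehavior"
           type="BlueYonder.BookingService.Implementation.Extensions.ErrorLoggingBehaviorExtensionElement, BlueYonder.BookingService.Implementation" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- IntelliSense flags this element; disregard the warning -->
        <errorLoggingBehavior />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```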
In the newly added <system.diagnostics> element, add a <trace> element and set its autoflush
attribute to true.
Note: The autoflush attribute controls whether log messages are immediately written to
the log, or cached in memory and periodically flushed. The value of the attribute is set to true so
that you can view the results immediately without waiting for the log to flush its content to the
file.
In the <system.diagnostics> element, add a <sharedListeners> element, and in it, add an <add>
element with the following values.
Attribute Value
name ServiceModelTraceListener
type System.Diagnostics.XmlWriterTraceListener
initializeData D:\AllFiles\Apx01\LabFiles\WCFTrace.svclog
3. In the <system.diagnostics> element, add two sources, one for System.ServiceModel and another
for ErrorHandlerTrace. Set both sources to use the shared listener you created before.
In the <system.diagnostics> element, add a <sources> element, and in it, add two <source>
elements with the following values.
Attribute Value
name System.ServiceModel
switchValue Error,ActivityTracing
Attribute Value
name ErrorHandlerTrace
switchValue Error,ActivityTracing
In each of the <source> elements, add a <listeners> element with the following configuration.
<listeners>
<add name="ServiceModelTraceListener">
<filter type="" />
</add>
</listeners>
Note: The System.ServiceModel source is used for tracing WCF activities, and the
ErrorHandlerTrace source is used by the LoggingErrorHandler class, in the TraceSource
constructor.
WCF tracing is covered in Lesson 2, "Configuring Service Diagnostics", of Module 10,
"Monitoring and Diagnostics".
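Combined, the diagnostics configuration built in the preceding steps may resemble this sketch:

```xml
<system.diagnostics>
  <trace autoflush="true" />
  <sharedListeners>
    <add name="ServiceModelTraceListener"
         type="System.Diagnostics.XmlWriterTraceListener"
         initializeData="D:\AllFiles\Apx01\LabFiles\WCFTrace.svclog" />
  </sharedListeners>
  <sources>
    <source name="System.ServiceModel" switchValue="Error,ActivityTracing">
      <listeners>
        <add name="ServiceModelTraceListener">
          <filter type="" />
        </add>
      </listeners>
    </source>
    <source name="ErrorHandlerTrace" switchValue="Error,ActivityTracing">
      <listeners>
        <add name="ServiceModelTraceListener">
          <filter type="" />
        </add>
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```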
4. Run the BlueYonder.BookingService.Host project without debugging, and test the service by using
the WCF Test Client utility.
After running the project, open the WCF Test Client utility by double-clicking the WcfTestClient
shortcut from the D:\AllFiles folder.
5. From D:\AllFiles\Apx01\LabFiles, open the WCFTrace.svclog trace log file and verify that you see
the exception with the XML of the TripUpdateDto parameter. Close the Microsoft Service Trace
Viewer utility and the service host console window, and return to Visual Studio 2012.
Results: You can use the WCF Test Client utility to test the service, cause exceptions to be thrown in the
code, and check the log files to verify that the exception message is logged together with the parameters
that are sent to the service operation.
5. Execute the Service Call and the Reservations Database Updates in a Distributed Transaction
6. Update the WCF Client Configuration with the Frequent Flyer Service Endpoint and the Support for
Transaction Flow in the Bindings
Task 1: Add Transaction Flow Attributes to the Frequent Flyer Service Contract
1. In the BlueYonder.FrequentFlyerService.Contracts project, open the IFrequentFlyerService.cs file, and
set the AddFrequentFlyerMiles and RevokeFrequentFlyerMiles methods to allow the flow of
transactions.
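The resulting contract can be sketched as follows; the parameter lists are assumptions, because the actual signatures are defined in the lab project:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IFrequentFlyerService
{
    // Allow (but do not require) a client transaction to flow
    // into each operation
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void AddFrequentFlyerMiles(int travelerId, int miles);

    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void RevokeFrequentFlyerMiles(int travelerId, int miles);
}
```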
In the <system.serviceModel> element, add a new <bindings> element with the following
configuration.
<bindings>
<netTcpBinding>
<binding name="TcpTransactionalBind" transactionFlow="true" />
</netTcpBinding>
</bindings>
Parameter Value
TransactionAutoComplete true
TransactionScopeRequired true
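In the service implementation, these parameters are set with the OperationBehavior attribute, as in this sketch (the parameter list is illustrative):

```csharp
// The operation must run inside a transaction, and the transaction
// completes automatically if no exception is thrown
[OperationBehavior(TransactionScopeRequired = true,
                   TransactionAutoComplete = true)]
public void AddFrequentFlyerMiles(int travelerId, int miles)
{
    // Update the frequent flyer miles in the database
}
```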
Task 4: Add Code to the WCF Booking Service that Calls the Frequent Flyer WCF
Service
1. In the BlueYonder.BookingService.Implementation project, open the BookingService.cs file, and
add a private field named _frequentFlyerChannelFactory to store the channel factory for the
IFrequentFlyerService service contract. Use the FrequentFlyerEP configuration name in the channel
factory constructor.
2. In the UpdateTrip method, check whether the traveler is checking in, and if so, call the Frequent
Flyer service to update their miles.
To check if the traveler is checking in, verify the original and new status are different, and that the
new status is FlightStatus.CheckedIn.
Call the Frequent Flyer service's AddFrequentFlyerMiles method, and pass it the traveler's ID and
the earned miles.
Call the service before saving the local changes to the database.
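These steps can be sketched as follows; the variable and property names (originalStatus, trip, travelerId, earnedMiles) are illustrative, not the lab's exact identifiers:

```csharp
// Field declaration, using the FrequentFlyerEP endpoint configuration
private readonly ChannelFactory<IFrequentFlyerService> _frequentFlyerChannelFactory =
    new ChannelFactory<IFrequentFlyerService>("FrequentFlyerEP");

// Inside UpdateTrip, before saving the local changes to the database:
bool isCheckingIn = originalStatus != trip.Status &&
                    trip.Status == FlightStatus.CheckedIn;
if (isCheckingIn)
{
    IFrequentFlyerService proxy = _frequentFlyerChannelFactory.CreateChannel();
    proxy.AddFrequentFlyerMiles(travelerId, earnedMiles);
}
```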
Task 5: Execute the Service Call and the Reservations Database Updates in a
Distributed Transaction
1. Add a reference to the System.Transactions assembly, and surround the service call and the
database update with a transaction scope. Make sure that you set the Complete flag before leaving
the scope. The resulting code segment should resemble the following code.
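The original code listing is not reproduced here; a minimal sketch of the pattern, with illustrative variable names, looks like this:

```csharp
using System.Transactions;

using (var scope = new TransactionScope())
{
    // The service call enlists in the transaction, which flows
    // to the Frequent Flyer service
    frequentFlyerProxy.AddFrequentFlyerMiles(travelerId, earnedMiles);

    // The database update enlists in the same transaction
    bookingContext.SaveChanges();

    // Mark the transaction as complete before leaving the scope;
    // otherwise the transaction rolls back
    scope.Complete();
}
```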
Task 6: Update the WCF Client Configuration with the Frequent Flyer Service Endpoint
and the Support for Transaction Flow in the Bindings
1. In the BlueYonder.BookingService.Host project, open the App.config file, add a client endpoint for
the Frequent Flyer service, and configure its binding to flow transactions. Name the client endpoint
FrequentFlyerEP to match the name that you used in the Booking service implementation.
Note: You can use the service endpoint configuration from the Frequent Flyer service host
to set the address, binding, and contract settings of the client endpoint. You can also copy the
binding configuration from the Frequent Flyer service host configuration.
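The client endpoint and its binding may resemble the following sketch; the address and contract namespace are illustrative and should be copied from the Frequent Flyer service host configuration:

```xml
<client>
  <endpoint name="FrequentFlyerEP"
            address="net.tcp://localhost:900/FrequentFlyerService/"
            binding="netTcpBinding"
            bindingConfiguration="TcpTransactionalBind"
            contract="BlueYonder.FrequentFlyerService.Contracts.IFrequentFlyerService" />
</client>
<bindings>
  <netTcpBinding>
    <binding name="TcpTransactionalBind" transactionFlow="true" />
  </netTcpBinding>
</bindings>
```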
2. Make sure that the MSDTC service is running, and start both service hosts (Booking and Frequent
Flyer) without debugging.
To open the services list, on the Start screen, click the Administrative Tools tile, and in the
Administrative Tools window, double-click Services.
In the Services window, look for the Distributed Transaction Coordinator service and check its
Status column. If the status of the service is not Running, right-click it, and then click Start.
After you run both service hosts, wait until both console windows show the "service is running"
message.
3. Start the WCF Test Client utility, add the two services, and verify that the distributed transaction
works by calling the UpdateTrip operation.
Open the WCF Test Client utility by double-clicking the WcfTestClient shortcut from the D:\AllFiles
folder.
Parameter Value
FlightDirection Departing
ReservationConfirmationCode Aa123
TripToUpdate (set the following properties)
Property Value
Class First
FlightScheduleID 1
Status CheckedIn
4. Verify the miles were added to the traveler by invoking the GetAccumulatedMiles operation with
traveler ID 1. Verify the returned miles are 5026. Close the WCF Test Client utility and the two console
windows when done.
Results: You can run the WCF Test Client utility, call an operation in the Booking service that starts a
distributed transaction, and verify that the Frequent Flyer service indeed committed its transaction.
Question: Why did you log the error in the HandleError method of the error handler class
and not in the ProvideFault method?
Review Question(s)
Question: When is it useful to use asynchronous operations on the service-side?
Tools
Microsoft Service Configuration Editor, Microsoft Service Trace Viewer
Appendix B
Implementing Security in WCF Services
Contents:
Module Overview
Module Overview
Security is one of the major concerns for many distributed applications. Key security issues that you must
address when you design a web service include authentication, authorization, and secured
communication. Windows Communication Foundation (WCF) provides you with an effective and extensible
infrastructure that can meet these challenges. To achieve flexibility, extensibility, and maintainability, WCF
separates the security infrastructure from the business implementation of the service operations.
Developers do not need to implement authentication and secure communication within the service
implementation because WCF manages this aspect. You can write code that concentrates only on the
business aspect and let the WCF infrastructure handle security.
This module provides an overview of web application security and WCF security capabilities, and then it
explains how to configure and consume WCF services that use the security infrastructure provided by
WCF.
Objectives
After you complete this module, you will be able to:
Lesson 1
Introduction to Web Services Security
Before you learn how to implement security in WCF services, it is important to understand
why securing services is important and what security features are available to secure web services. This
lesson provides you with an overview of application security and the key features of WCF security.
Lesson Objectives
After you complete this lesson, you will be able to:
Human. People have access to information when they interact with an application. People can be
manipulated and can reveal information. Organizations educate their staff, and implement privacy
and security policies to reduce the risk of exposing sensitive information.
Infrastructure. Applications run on infrastructure such as operating systems and networks. You can
protect this infrastructure by regularly installing security updates and by using security networking
components such as firewalls and other dedicated equipment.
Application. Only the application layer knows the business of its organization, such as business rules,
system use-cases, user permissions, and valid workflows. By validating user input and monitoring the
application for atypical behavior, you can minimize the risk of application-level attacks, such as invalid
user input and irregular transaction frequency.
Given the preceding security aspects, you need to keep in mind the following considerations when
designing and implementing secure applications:
Input validation. You must validate all the inputs of your service before you use them. Because clients
can be impersonated, and malicious users might even use a fake client application, it is not enough
to implement validation on the client side; you should repeat those validations on the server side.
Implementing validation is usually simple and inexpensive, but highly effective in preventing many
present and future attacks.
There are two main strategies that you can use to implement validation: the allow list and the block
list.
o Allow List. This list defines a pattern of valid inputs. All inputs that follow the pattern are
accepted, whereas other inputs are rejected.
o Block List. This list defines known bad inputs. Inputs that match the list are rejected.
Authentication. You need to make sure that you know who your client is. Using authentication
techniques for client identification is a common way to secure the boundary of the service, and to
prevent unknown users from accessing your service.
Authorization. Knowing your client is important, but you also must make sure your client can perform
the action that they requested. Often, different users will have access to certain services and
operations, but not to others. Sometimes even the value of a parameter that is sent to an operation is
valid if sent by a certain client, but not valid when sent by another client. For example, a bank loan
operation can receive values up to $1,000 if requested by a clerk, and up to $100,000 if requested by
a manager.
Cryptography. To protect the content of your data, you will need to encrypt it and make sure no
attacker can access or change it. You can protect your data from being changed by signing it
digitally, and encrypting it by using symmetric or asymmetrical cryptography.
Sensitive Data. You will need to identify which data is sensitive, such as contact information, credit
card numbers, salaries, and confidential phone numbers. Sensitive information must be handled with
care. You will need to restrict access to it, encrypt it when persisted, and use a secured channel to
transmit it. Most of the previously mentioned aspects (validation, authentication, and cryptography)
serve the purpose of handling sensitive data.
Session Management. If you want clients to perform several related operations in a session, and you
plan to save state information for that session, you will need to secure both your data and the session
information. A common attack type is client spoofing, where an external attacker assumes the role of
your client and sends requests on its behalf. You will need to ensure that your session cannot be
compromised to prevent such attacks.
Configuration Management. The configuration of your application might contain sensitive data - such
as server names, database passwords, default values for operations - and other data that you would
not want anyone to obtain. You will need to protect your configuration by applying authorization
and encryption.
Data Manipulation. When a client calls your service, and when your service responds, the message is
sent over the network, where any resourceful malicious user can see and manipulate it. To protect
yourself from these types of threats, you will need to find a way to prevent changes to the data, or at
least be able to detect if changes were made to the data while in transit. Encrypting or digitally
signing the data can help prevent this threat.
Exception Management. When your service throws an exception, it might contain information you do
not want to reveal to the outside world, such as server or database table names. Revealing these
details can help potential attackers gather information on how your application works, and on the
resources it uses. You need to make sure that exceptions (faults) returned by your service contain a
minimal amount of detail - only the detail required for the client to understand what has happened -
without revealing the inner workings of the service.
Auditing and Logging. Preventing an attack is important, but knowing that you were attacked is
equally important. You need to audit any failed attempts to use the service, security violations, or
any abnormal situation your service encounters. Auditing can also help you determine the identity of
the attacker.
Additional Reading: For more information about software security, see:
http://go.microsoft.com/fwlink/?LinkID=298791&clcid=0x409.
Authentication. To prove the identity of a user, you can implement an authentication process. When
users authenticate, they use some sort of a credential that identifies them uniquely. A credential is a
set of claims that states who the user is, usually their user name, and some proof of possession that
identifies the user as the rightful owner of the identity such as a password. WCF can use different
types of credentials to identify users, such as Windows identities and client certificates, and decide if
they have proven their identity or not. Authentication is used to prevent unknown users from
accessing the service.
Authorization. By implementing authorization, you can grant or deny access to specific resources.
Authenticated clients and unauthenticated clients can access resources according to a security policy.
WCF authorization supports the .NET role-based security model that uses the IIdentity and
IPrincipal interfaces, and the claim-based security model, which is discussed in depth in Module 11,
"Identity Management and Access Control" in Course 20487. Authorization is used to reduce privilege
attacks, in which unauthorized clients access privileged resources.
Other security aspects such as input validation, in addition to auditing and exception handling, are not
handled automatically by WCF and you must address them yourself. There are no dedicated
infrastructures in WCF for input validation, auditing, and exception handling and therefore, these
concepts are out of scope of this course. For more information on how to implement these concepts, see:
Security Design Guidelines for Web Services
http://go.microsoft.com/fwlink/?LinkID=298793&clcid=0x409
Lesson 2
Transport Security
One of the fundamental requirements for securing a service is the ability to create a secured channel. This
secured channel is used to convey the message from a client to the service and from the service to the
client, without letting network sniffers read and understand the content of the message. With WCF, you
can create secured channels in several ways to match different scenarios. Each method has its advantages
and disadvantages.
In this lesson, you will learn how to create a WCF service that uses transport security, in which the
transport layer itself, and not WCF, is responsible for securing the communication channel.
Lesson Objectives
After you complete this lesson, you will be able to:
Note: HTTPS and SSL were introduced in Module 4, "Extending and Securing ASP.NET Web
API Services", in Course 20487. Because the operating system and the transport layer implement
SSL, the usage of SSL in WCF is similar to that of ASP.NET Web API.
If you use Transmission Control Protocol (TCP) transport, you can also use a secured transport - SSL over
TCP - which uses the Transport Layer Security (TLS) mechanism. TLS is a security protocol that evolved
from SSL.
Performance. Transport security offers good performance, because most of the transport security
handling is implemented at the operating-system level. Today, you can also use SSL hardware
accelerators that perform the SSL handshake at the hardware level, which provides faster encryption
compared to security implemented at the operating-system level.
Exposed data. Because transport security uses point-to-point security, when the message reaches the
server, it is decrypted. In the previous example, Service A calls Service B through the proxy, so in fact
there are two secured channels: between Service A and the proxy, and between the proxy and Service
B. When the encrypted messages reach the proxy, they are decrypted, and then re-encrypted before
they are delivered to Service B. If an attacker gains access to the proxy, the attacker will be able to
retrieve the decrypted data.
Limited types of client credentials. When you use a secure transport, the service identifies itself to
the client to prevent phishing. Phishing occurs when a malicious service poses as another service. In
addition, you can configure the channel to require the client to authenticate. The client authenticates
by using the HTTP-Authorization header, which supports most of the authentication credentials -
such as Windows and X.509 - but does not support custom authentication credentials.
<binding name="secured">
<security mode="Transport">
<transport clientCredentialType="None"/>
</security>
</binding>
To change the security settings of a binding, you need to add the <security> element under the
configuration of the binding. To change the security mode, you set the mode attribute to Transport -
other values will be discussed later in this module. In the case of HTTP transport security, you will also
need to change the scheme of the address from http:// to https://. The scheme change only applies to
HTTP-based bindings. When you use transport security with TCP or Named Pipes, you can leave the
address unchanged.
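For example, an endpoint that uses the secured binding configuration shown above would be addressed over HTTPS, as in this sketch (the address and contract name are illustrative):

```xml
<endpoint address="https://localhost:8000/BookingService"
          binding="basicHttpBinding"
          bindingConfiguration="secured"
          contract="Contracts.IBookingService" />
```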
In addition to setting the transport security, you can also set the credential types that the client needs to
transfer. In the previous example, the client credentials are set to None, which means the client does not
need to transfer any credentials. The following table describes the different client credentials types.
Authentication scheme Description
Basic The client is required to send their user name and password in a Base64-
encoded string. The string is not encrypted, and therefore sending user names
and passwords by using this method is the same as sending them in clear text.
If you use this scheme, you will also need to set the realm attribute to the
name of the domain against which you are authenticating. Basic
authentication is common in Internet environments where clients and services
do not use a shared domain controller.
Digest The client is required to send an encrypted user name and password. This
setting requires a domain account, and the service needs to trust that domain.
When the service receives the user name and password, it sends them to the
domain controller for validation. If you use this scheme, you will also need to
set the realm attribute to the name of the domain against which you are
authenticating. Digest is rarely used today, because it requires the domain
controller to store reversible passwords, which is considered a security risk.
Windows With Windows authentication, the client uses the Kerberos security token that
it obtains from a Ticket-Granting Service (TGS). The token is sent to the
service, which in turn sends it to the TGS for validation. This technique has
been used since Windows Server 2003 to authenticate users, and is commonly
used on Microsoft networks. Using Windows identities is common in modern
intranet environments.
Certificate The client is required to send an X.509 certificate that the service can validate.
A trusted certificate authority (CA) must issue the certificate. Using certificates
to identify clients is common in business-to-business (B2B) environments.
InheritedFromHost When the service is hosted in Internet Information Services (IIS), this option
will instruct the service host to inherit the configuration of the client
credential types set for the web application in IIS. You can have multiple client
credential types for a web application in IIS. For example, you can configure
your web application to support both Basic authentication for Internet users
and Windows authentication for intranet users. Instead of creating two
endpoints for your service, one for each authentication type, you can create a
single endpoint and configure the binding to inherit the client credential
types set in IIS.
Note: The X.509 certificate is a standard for packaging a public key together with metadata
about the token, the issuer, and its owner. SSL certificates for Internet websites are X.509
certificates. X.509 certificates are also used for client authentication and digitally signing
messages.
If your service is self-hosted within a Windows application or a Windows Service, you will need to specify
the certificate that you wish to use. For the HTTP transport with SSL, you can use the netsh utility with
Windows Vista and newer versions, to attach an SSL certificate to a specific port.
The following code example shows how to attach a certificate to port 8000 by using the netsh tool.
Replace line breaks with spaces.
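The command itself is not reproduced in this copy; a sketch of the standard netsh invocation follows. The certhash value matches the thumbprint used later in this topic, and the appid GUID is a placeholder you should replace with your host assembly's GUID:

```shell
netsh http add sslcert ipport=0.0.0.0:8000 certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6 appid={00000000-0000-0000-0000-000000000000}
```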
The certhash value is the hex string representing the SHA hash of the certificate, also referred to as the
certificate's thumbprint, which you can find by looking at the certificate's information. The appid value is
the globally unique identifier (GUID) of your service host assembly, which you can find in the
AssemblyInfo.cs file.
If your self-hosted service uses a secure TCP transport, you cannot use the netsh utility. You will need to
set the certificate of the service in code or in configuration.
The following configuration shows how to set the service's certificate in the configuration file.
<behaviors>
  <serviceBehaviors>
    <behavior>
      <serviceCredentials>
        <serviceCertificate storeName="My"
                            findValue="0000000000003ed9cd0c315bbb6dc1c08da5e6"
                            x509FindType="FindByThumbprint" />
      </serviceCredentials>
    </behavior>
  </serviceBehaviors>
</behaviors>
For more information on netsh, and on configuring ports for SSL for Windows Server 2003 and Windows
XP, see:
Transport Security.
http://go.microsoft.com/fwlink/?LinkID=298797&clcid=0x409
All the bindings that support transport security, such as BasicHttpBinding, NetTcpBinding, and
NetNamedPipesBinding, have a constructor that receives an enumeration for the security mode. If you
want to change the security mode of the binding dynamically, you can use the binding's Security.Mode
property.
After you set the security mode of the binding, you need to set the credential type you will use, by setting
the Security.Transport.ClientCredentialType property to the credential type the service endpoint
requires.
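For example, the following sketch configures a TCP binding in code to use transport security with Windows client credentials:

```csharp
using System.ServiceModel;

// Create a TCP binding that uses transport security
var binding = new NetTcpBinding(SecurityMode.Transport);

// Require the client to authenticate with its Windows credentials
binding.Security.Transport.ClientCredentialType =
    TcpClientCredentialType.Windows;
```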
Instead of manually configuring the proxy in code, you can use the Add Service Reference dialog box of
Visual Studio 2012. The generated configuration will contain the binding configuration with the transport
security settings and the required client credential type. The generated configuration is demonstrated in
the next code example.
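The generated configuration itself is not reproduced in this copy. In code, attaching a certificate to the proxy may resemble the following sketch, where the proxy class name and endpoint name are illustrative:

```csharp
using System.Security.Cryptography.X509Certificates;

var proxy = new BookingServiceClient("bookingServiceEP");

// Attach the client certificate used to authenticate to the service
proxy.ClientCredentials.ClientCertificate.SetCertificate(
    StoreLocation.CurrentUser,
    StoreName.My,
    X509FindType.FindBySubjectName,
    "BlueYonderClientCert");
```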
The last step of the configuration is to set the client credentials, which will be sent to the service along
with the message. In the above example, a certificate is attached to the proxy by calling the
ClientCertificate.SetCertificate method. However, there are other types of credentials you can set, which
are described in the following table.
ClientCredentialType Property to set Description
NTLM, Windows Windows By default, the current user's Windows identity will be used as
the client credentials. If you want to set the credentials to a
different Windows identity manually, set the
Windows.ClientCredential property to an instance of the
NetworkCredential class.
Note: The preceding table contains the client credential types for HTTP. If you use
transport security with TCP, you can only use the Windows or Certificate client credential types.
Instead of configuring the client credentials in code, you can also set them in the configuration file, just
like any other client endpoint setting. However, as you can see in the previous table, all the credential
types other than Certificate require that you set a user name and password. Setting a user name and
password in configuration is considered insecure, and therefore, you can only set the client's certificate
in the configuration file.
The following code example shows how to configure a client endpoint with transport security and
Certificate client credentials in the configuration file.
<client>
  <endpoint address="https://..."
            binding="wsHttpBinding"
            bindingConfiguration="securedBinding"
            behaviorConfiguration="certificateEndpointBehavior"
            contract="Contracts.IBookingService"
            name="bookingServiceEP"></endpoint>
</client>
<bindings>
<wsHttpBinding>
<binding name="securedBinding">
<security mode="Transport">
<transport clientCredentialType="Certificate"/>
</security>
</binding>
</wsHttpBinding>
</bindings>
<behaviors>
<endpointBehaviors>
<behavior name="certificateEndpointBehavior">
<clientCredentials>
<clientCertificate storeLocation="CurrentUser" storeName="My"
findValue="BlueYonderClientCert" x509FindType="FindBySubjectName" />
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>
Similar to how service behaviors configure how services work, endpoint behaviors configure how
endpoints work. In the above example, an endpoint behavior was created to configure the certificate that
the client will use when authenticating itself to the service.
To point to a specific certificate, you will need to specify where the certificate is stored, and how to find it.
The following table describes the attributes that you need to use to find the certificate:
Attribute
Description
storeLocation LocalMachine or CurrentUser. You should set this attribute according to the store
where the certificate was installed.
X509FindType Specify which field of the certificate will be searched for the matching value. The
value specified in the findValue attribute must be an exact match to the value in
the field. The possible fields are:
FindByThumbprint
FindBySubjectName
FindBySubjectDistinguishedName (default value)
FindByIssuerName
FindByIssuerDistinguishedName
FindBySerialNumber
FindByTimeValid
FindByTimeNotYetValid
FindByTemplateName
FindByApplicationPolicy
FindByCertificatePolicy
FindByExtension
FindByKeyUsage
FindBySubjectKeyIdentifier
Secure transport protocols, such as HTTPS, provide a guarantee to the client that the server is legitimate.
The client does not necessarily have to authenticate when establishing a secured channel. However, the
server must supply a certificate to prove its identity. If the client trusts the CA and the certificate is valid, it
can communicate with the server. After a channel is established and secured, the client can send
additional credentials. These are used by the server-side application to authenticate the client.
When the server uses a certificate that was not issued by a well-known CA, the client does not trust the
issuer of the certificate, and the certificate validation fails. Clients can introduce custom certificate
validation logic by using the ServicePointManager class and validate the certificate before establishing
and securing a channel. To attach custom certificate validation logic, add a delegate to the
ServerCertificateValidationCallback static property of the System.Net.ServicePointManager class.
The following code demonstrates how to attach custom certificate validation logic in the client.
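A minimal sketch, which accepts any server certificate (suitable for development environments only):

```csharp
using System.Net;

// Accept any server certificate, bypassing the default validation.
// Do not use this in production.
ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) => true;
```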
The above example overrides the default certificate validation process by accepting any service certificate.
This override makes your client susceptible to phishing attacks, and therefore it is not recommended for
use in production environments. Usually, this override is used only in development environments, where
the services you connect to often use a self-signed certificate that cannot be validated by your clients.
Demonstration Steps
1. From D:\Allfiles\Apx02\DemoFiles\TransportSecurity\setup, run the
CreateAndRegisterCert.cmd file.
4. Run the client and service to verify that they communicate successfully.
5. Change the configuration of the service to use transport security, and change the service address to
use HTTPS.
Add a binding configuration named Secured for basicHttpBinding, and set the security mode to
Transport.
<bindings>
<basicHttpBinding>
<binding name="Secured">
<security mode="Transport"/>
</binding>
</basicHttpBinding>
</bindings>
7. Examine the client's App.config file. Observe the changes made to the client configuration.
8. Run the client application and view the exception that occurs because of the certificate validation
failure.
9. In the Client project, open the client.cs file, and then register a delegate to the
ServicePointManager.ServerCertificateValidationCallback property. Set the delegate to return
True.
Note: Refer to Topic 3, "Using Transport Security in Clients", in this Lesson for a code sample of
setting the ServerCertificateValidationCallback property.
10. Run the client application and verify that the application communicates with the service successfully.
Lesson 3
Message Security
There might be some scenarios when you want to use a secured channel but transport security does not
meet your requirements. For example, if you need to make sure that the message is encrypted so it
cannot be deciphered even if it passes through a proxy, you can use the WCF security mechanism, which
provides end-to-end encryption of messages.
This lesson describes the differences between transport and message security, the scenarios where each
technique is suitable for use, and the procedures for applying message security to your WCF services.
Lesson Objectives
After you complete this lesson, you will be able to:
Unlike transport security in which the entire message is encrypted, message security can encrypt and sign
parts of the message. If only a part of the message is encrypted, intermediaries can open the message,
view the clear text parts, and take some action or make a decision accordingly. Because the signature is
part of the message, the final destination can still verify that the message was not tampered with. You can
use this feature for implementing a content-based router that can route a message according to the
action header value. By default, WCF does not encrypt the action header. However, it does sign it when
using message security. Therefore, the action header is available to all intermediaries, but no one can
tamper with its value.
14-16 Appendix B: Implementing Security in WCF Services
WCF message security uses the Web Services (WS)-Security specification to secure messages. This
specification defines the structure of a secured SOAP message. WCF message security also uses WS-Trust,
which specifies how to establish a secured channel, and how to exchange security tokens and credentials.
For more information about SOAP messages, security tokens, and credentials, see:
Understanding WS-Security.
http://go.microsoft.com/fwlink/?LinkID=298799&clcid=0x409
Message security supports all the client credential types supported by transport security. In addition to the
well-known credential types, it also supports the Security Assertion Markup Language (SAML) token and
custom tokens. The WS-Security specification, on which message security is based, provides an extensible
framework that can transmit any kind of token inside the SOAP message. Similar to transport security,
message security supports a set of easy-to-use built-in authentication providers. But unlike transport
security, it also provides the ability to add custom validation of credentials, where your custom code
performs authentication.
Although there are many benefits of using message security, you should be aware that there are some
disadvantages to this solution:
Because message security is implemented in WCF, which is the software layer, and not in the
operating system or in hardware, it has a higher performance cost than transport security.
The security negotiation that is required to create the secure channel involves sending large messages
through the channel, so there might be a noticeable delay when opening such a channel.
Secured messages are larger than normal messages, which might result in a longer transmission time.
For more information about message security in WCF, see:
In the previous example, the security mode was set to Message, and the client's credential type was set to
Certificate. The authentication types in message security are different from the ones for transport
security. The following table lists the possible authentication schemes:
Authentication scheme Description
Windows Uses the client's Windows credentials to secure the SOAP message.
UserName Requires the client to transfer a user name and password. WCF requires that the user
name and password are transferred in plain text. When using this scheme, WCF
automatically uses transport security.
Certificate Requires the client to send an X.509 certificate that the service can validate.
IssuedToken Requires the client to transfer a custom security token, such as a SAML token.
Note: In the previous example, you used the MessageCredentialType enum, which has
different values than the HttpClientCredentialType enum, which was used in the previous
lesson for transport security.
The same binding configuration can also be applied through the configuration file.
The following configuration demonstrates how to configure a binding for message security with certificate
client credentials.
<wsHttpBinding>
<binding name="secured">
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>
</wsHttpBinding>
Message security does not depend on transport security, and therefore you do not need to change the
address scheme from http:// to https://, as shown in the above example.
Note: The default binding configuration for WSHttpBinding is to use message security
with Windows client credentials. Therefore, in the previous XML binding configuration, omitting
the mode attribute would have the same effect. For other bindings, such as NetTcpBinding, you
will need to configure the mode attribute to Message explicitly.
When you use message security, you need to specify the certificate that the service will use for
authentication. (You cannot use IIS or netsh to register certificates for message security.) To do this, you
need to add the service credentials behavior to your service.
X.509 certificates are a standard for packaging a public key together with metadata about the token, the
issuer, and its owner. An X.509 certificate is considered valid if the issuer of the certificate is trusted and
the digital signature is valid. The digital signature guarantees that a valid user produced the certificate
and the certificate was not tampered with.
If your client and service use the same Active Directory domain controller, you do not need to set the
service certificate. This is because the client can use the service's Windows identity (the identity that is
attached to the process that runs the service host) to identify the service and to secure the channel. To
create a secured channel with the service's Windows identity, make sure both client and service are on the
same network and are using the same domain, and set the client credential type to Windows.
The secured channel negotiation process of message security is very similar to that of SSL: The client uses
the certificate presented by the server to authenticate the server and to establish and secure a channel
with it. The client reads the server certificate's public key, then uses the public key to encrypt a random
number (the symmetric key), and then sends it to the server. The difference between this process and SSL
is that with message security, the negotiation is made by WCF, rather than by the operating system.
Best Practice: You can place the service credential configuration in configuration or in
code. However, because certificates are renewed over time and their information might change,
it is a best practice to set credential information in configuration, and not in code.
In the example shown at the beginning of this topic, the client credential type was set to Certificate.
Unlike with transport security, where the operating system validates the client certificate, in message
security, WCF is responsible for validating the client certificate. By default, a service will validate the
certificate if the server can find the issuer in a trust chain that leads to a root CA. If you know that a
trusted CA did not issue your client's certificate - for example, if you are working in a development
environment where your client's certificates are self-issued - then you can change the way your service
validates certificates.
The following code demonstrates how to change the client certificate validation mode.
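A sketch of such a service behavior configuration follows (reconstructed to match the PeerOrChainTrust mode discussed next; element placement follows the standard WCF configuration schema):

```xml
<serviceBehaviors>
  <behavior>
    <serviceCredentials>
      <clientCertificate>
        <authentication certificateValidationMode="PeerOrChainTrust"/>
      </clientCertificate>
    </serviceCredentials>
  </behavior>
</serviceBehaviors>
```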
In the preceding example, the certificateValidationMode attribute was set to PeerOrChainTrust. The
various types of validation modes are described in the following table:
Value Description
None The certificate is not validated. Every certificate is acceptable. This setting is not
recommended.
PeerTrust The client's certificate must exist in the trusted people store.
ChainTrust The chain from the certificate's issuer must lead to a root CA.
PeerOrChainTrust Either the certificate is located in the trusted people store, or its issuer is part of a
root CA trust chain.
Note: Authenticating client certificates with a custom validation class will be explained in
Lesson 4, Configuring Service Authentication and Authorization.
The service behavior must also include the configuration of authentication and authorization handlers,
which will be explained in Lesson 4, Configuring Service Authentication and Authorization.
Similar to the code demonstrated in the topic Using Transport Security in Clients in Lesson 2, Transport
Security, configuring the client for message security requires setting the security mode to Message,
setting the client credential type according to the type required by the service, and lastly, configuring the
client credentials, which will be used to authenticate the client. In the above example, the binding is
configured to use the UserName credential type. Other credential types are detailed in the mentioned
topic.
Instead of configuring the proxy for message security manually, you can use the Add Service Reference
dialog box in Visual Studio 2012. The generated client configuration will contain the appropriate message
security settings, including the required client credential type. If you use the Add Service Reference
dialog box, and you configured your service behavior with a service certificate (instead of using the
service's Windows identity to secure the channel), then the generated client-side configuration will also
contain the information about the service's certificate. The client will use this information to compare the
certificate in the configuration to the certificate sent by the service upon opening a channel.
The following configuration shows a client endpoint configuration that contains the service certificate
information; the encoded value was shortened for brevity.
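A sketch of such an endpoint configuration (the address, contract name, and certificate value are placeholders; the encoded value is truncated):

```xml
<client>
  <endpoint address="http://localhost:8080/BookingService"
            binding="wsHttpBinding"
            contract="IBookingService">
    <identity>
      <certificate encodedValue="AwAAAAEAAAAUAAAA..." />
    </identity>
  </endpoint>
</client>
```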
The Base64 string in the above example is not the certificate's thumbprint - it is the service certificate,
without its private key, encoded as a Base64 string. At run time, the client decodes the string and loads it
into a certificate object, and then compares this certificate with the certificates sent from the service.
The generated configuration, however, will not include information about which certificate the client
needs to use - it is up to you to provide this information.
Note: Refer to the Topic Using Transport Security in Clients in Lesson 2, Transport
Security, for an example of setting the client certificate in code and in configuration.
To guarantee that the server is legitimate, the client has to validate the server's certificate. Unlike with
transport security, in message security WCF handles the credential validation process, and you can control
it in the endpoint behavior of the client.
The following configuration demonstrates how to configure server certificate validation in the
configuration file.
<behaviors>
<endpointBehaviors>
<behavior>
<clientCredentials>
<serviceCertificate>
<authentication certificateValidationMode="PeerOrChainTrust"/>
</serviceCertificate>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>
In the above example, the certificate validation mode is set to PeerOrChainTrust, which means that the
client will try to validate the service certificate by first checking if it has the same certificate in the Trusted
People certificate store. If the certificate is not in the certificate store, the client will try to verify its chain
trust (the CA that issued the certificate). For the client to locate the service certificate in the Trusted
People store, you need to receive the service certificate out-of-band as a .cer file (by email or other
means), and install it on the client computer, in the Trusted People certificate store.
Note: Other certificate validation modes are: None, PeerTrust, ChainTrust (default), and
Custom. Refer to the previous topic for more details on each of the other validation modes.
If the service does not use a service certificate, and instead uses its Windows identity to authenticate itself
and to create the secured channel, the server's identity needs to be specified in the client configuration
file. If you use the Add Service Reference dialog box, the generated configuration will be created with
the identity of the service. If you manually create the configuration, you will need to specify either the
User Principal Name (UPN) or the Service Principal Name (SPN) that identifies the service, according to
the type of the Windows account the service uses:
UPN. Used when the service uses a non-system user, such as a domain user.
SPN. Used when the service uses a system user, such as NetworkService, or LocalSystem.
For more information about service identities, see:
Establishing and securing a channel is an expensive process that involves exchange of several messages.
You can explore these messages by looking at the WCF message log.
Note: WCF Message logging is explained in depth in Module 10, Monitoring and
Diagnostics, in Course 20487.
The following figure displays the message flow of a simple service secured by message security.
When you send the first request to the service, WCF establishes the secured channel by negotiating with
the service, exchanging the service credentials and client credentials. After the secured context is
established, WCF will start sending the requests through the secured channel. After the channel is
established and secured there is no need to execute the security protocol for each request.
The following configuration demonstrates how to configure your binding to use the
TransportWithMessageCredential security mode.
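Such a binding configuration might look like the following sketch (the binding type and the configuration name are assumptions):

```xml
<bindings>
  <basicHttpBinding>
    <binding name="secured">
      <security mode="TransportWithMessageCredential">
        <message clientCredentialType="UserName"/>
      </security>
    </binding>
  </basicHttpBinding>
</bindings>
```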
In the above example, after setting the security mode attribute to TransportWithMessageCredential,
you configure the client credentials by adding the <message> element and setting the
clientCredentialType attribute to any of the message security client credential types, such as UserName,
Windows, Certificate, or IssuedToken (for SAML and custom tokens).
For more information on configuring various bindings for the TransportWithMessageCredential
security mode, see:
Demonstration Steps
1. From D:\Allfiles\Apx02\DemoFiles\MessageSecurity\setup, run the CreateCert.cmd file.
5. Change the binding configuration of the service endpoint to use Message security instead of None.
6. Add a <serviceCredentials> element with a <serviceCertificate> element, and set the service
certificate to use the DemoCert certificate from the My store in the localMachine store location. The
resulting configuration should resemble the following code.
<serviceCredentials>
<serviceCertificate findValue="DemoCert" storeLocation="LocalMachine"
storeName="My" x509FindType="FindBySubjectName"/>
</serviceCredentials>
7. Use the WCF Service Configuration Editor tool to configure the service for message logging.
Configure the log to include the entire message.
11. Open the message log file, and view the negotiation messages and the encrypted message.
Lesson 4
Configuring Service Authentication and Authorization
After defining the service endpoints with the security mode and the credential type, you should address
authentication and authorization of the client's identity. WCF contains several built-in providers and
extensibility points to customize these aspects of your service for your needs.
This lesson describes the WCF authentication and authorization models, the built-in providers and
extensibility points, and explains how you can perform authorization enforcement at run time.
Lesson Objectives
After you complete this lesson, you will be able to:
Authenticate clients.
Create a custom credential validator.
Authorize clients.
Authenticating Clients
The security infrastructure provided by transport
and message security establishes and helps secure
a channel where the client can transfer credentials
to the server. After receiving the credentials, the
server has to execute some logic to identify the
calling client. This is the authentication logic.
When writing secured services, authenticating
clients is essential. If the calling client is
authenticated successfully, an identity object is
created and attached to the security context.
Note: The authentication process only deals with determining the identity of the client.
Authorization, which is the process of determining what the user is allowed to do, will be
explained in the next topic.
How WCF authenticates the credentials of the client depends on the type of credentials the client
provides and the type of security mode the client is using. When the client and the service use Transport
security, the transport layer is responsible for authenticating the client's credentials. For example, if the
client and service are configured for Transport security with the Basic client credential type (sending
username and password as a Base64 string), then the transport layer will validate the username and
password against the domain's Active Directory. The only exception to this behavior is when the client
credential type is Certificate. If the client sends a certificate, then WCF is responsible for validating the
certificate.
If your client and service use Message security, then WCF is responsible for the validation of the client's
credentials. The validation process depends on the type of the client credential used, as detailed in the
following table:
Client credential type Credential authentication process
Windows Windows identities are tokens that the client receives from its Active Directory and
sends to the service as a means of identification. WCF can authenticate Windows
tokens by verifying them against Active Directory.
UserName Clients can send a set of username and password as their identity. By default, WCF
will validate the username and password against the domain's AD DS. You can
configure WCF to validate the username and password against an ASP.NET
Membership provider, or provide your own username/password validator logic.
Certificate WCF has several ways to authenticate the client's X.509 certificate. For example, the
default behavior of WCF is to validate the certificate by checking its expiration and its
chain of issuers. However, you can customize the validation process and even replace
it with a custom validation process. Refer to the Topic Configuring a Service for
Message Security in the previous lesson for a code example of changing the client
certificate validation mode.
IssuedToken When a client sends a token issued to it by a Security Token Service (STS), WCF can
validate the token according to prior information it received from the STS provider,
such as its name and digital signature key. Issued tokens, claims, and STS
authentication are covered in Module 11, "Identity Management and Access Control"
in Course 20487.
Note: If you want to customize the credentials validation process, but still be able to use
Transport security, consider using the TransportWithMessageCredential security mode, which
was explained in the previous lesson.
ASP.NET has a well-established infrastructure for identity management, which is based on username and
password credentials. WCF has full support for integrating services with the ASP.NET Membership
infrastructure.
The following configuration demonstrates how to change the username/password validation from using
Active Directory to ASP.NET Membership.
<serviceBehaviors>
<behavior>
<serviceCredentials>
<userNameAuthentication userNamePasswordValidationMode="MembershipProvider"/>
</serviceCredentials>
</behavior>
</serviceBehaviors>
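To supply your own username/password validation, you create a class that derives from the System.IdentityModel.Selectors.UserNamePasswordValidator class and override its Validate method. A minimal sketch follows (the class name and the hard-coded credentials are illustrative only; real code should look the credentials up in a user store):

```csharp
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;

public class CustomUserNameValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        // Illustrative check only - validate against your own user store instead
        if (userName != "User1" || password != "P@ssw0rd")
        {
            // Throwing SecurityTokenException signals a validation failure to WCF
            throw new SecurityTokenException("Unknown username or incorrect password.");
        }
    }
}
```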
By implementing the Validate method, you can introduce your own validation logic, such as validating
the credentials against a list of users and passwords stored in your own database. The Validate method
does not have a return value; If the method completes its execution successfully, WCF will assume the
validation was successful. If the custom validation reveals a problem with the credentials, for example, if
the username does not appear in your users list, or the password does not match the user's password, you
should throw a SecurityTokenException. Throwing a security token exception will result in a security
exception on the client side. If you want to provide extra information about the validation failure, such as
a list of possible failure reasons and links where the user can reset their password, you can replace the
SecurityTokenException object with a FaultException or a FaultException<T> object.
Best Practice: As a best practice, avoid providing too much information about the failed
validation. For example, if a hacker tries to guess the username and password, letting them know
that the username is valid but the password is wrong will advance the hacker one step towards
finding the correct credentials.
To complete the process, modify the service configuration. Change the validation mode to Custom and
specify the newly created validator type.
The following configuration demonstrates how to attach a custom username/password validator by using
the service configuration file.
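A sketch of that configuration (the validator type name and assembly name are placeholders):

```xml
<serviceCredentials>
  <userNameAuthentication
      userNamePasswordValidationMode="Custom"
      customUserNamePasswordValidatorType="MyApp.CustomUserNameValidator, MyApp"/>
</serviceCredentials>
```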
Make sure you set the customUserNamePasswordValidatorType attribute to the fully qualified name of
your custom validator class, including the fully qualified name of its containing assembly.
If the service endpoint is configured to use the Certificate client credentials, you can create a custom
X.509 certificate validator, by creating a class that derives from the
System.IdentityModel.Selectors.X509CertificateValidator class and implement the Validate method.
To attach the custom validator to your service, you can use either code or configuration.
The following code demonstrates how to attach a custom certificate validator through code.
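A sketch of attaching such a validator through code (the service type and validator type names are assumptions):

```csharp
using System.ServiceModel;
using System.ServiceModel.Security;

ServiceHost host = new ServiceHost(typeof(CalculatorService));

// Switch certificate validation to the custom validator
host.Credentials.ClientCertificate.Authentication.CertificateValidationMode =
    X509CertificateValidationMode.Custom;
host.Credentials.ClientCertificate.Authentication.CustomCertificateValidator =
    new CustomCertificateValidator();

host.Open();
```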
If you want to attach the custom validator by using the configuration file, add the <clientCertificate>
element to your service behavior. Add an <authentication> element to it, set the
certificateValidationMode attribute to Custom, and set the customCertificateValidatorType attribute
to the fully qualified name of the class and its containing assembly.
For an example of configuring a custom certificate validator in both code and configuration, see:
The PrimaryIdentity property returns an object that implements the IIdentity interface. The final type of
the object depends on the client credential type and the way the WCF service validates these credentials.
For example, if the client provides a username and a password, and WCF validates those credentials
against AD DS, WCF will create a WindowsIdentity object. If the validation process is against the ASP.NET
Membership provider, the created identity will be a GenericIdentity object.
The AuthenticationType property contains a string representing the name of the authentication
mechanism. The Name property contains a string representing the identity name, which could be a
user name or the name of the client's certificate, depending on the type of the client credentials. For
example, for the ASP.NET Membership provider, the AuthenticationType property will be set to
MembershipProviderValidator, and the Name property will be set to the username presented by the
client's credentials.
Authorizing Clients
Even though the server can identify the client, it
does not mean that the client can execute any
service operation. The authorization process
grants access or denies access to a specific
resource or action according to the client's
identity. It is only after authentication and
authorization that WCF forwards the request to
the service operation.
Active Directory and ASP.NET Membership are two types of role-based identities. Roles define a group of
users with a common set of permissions. For example, all the sales people in a system will have the Sales
role. In code, you will permit all those who have the Sales role to invoke purchase-related service
operations, but you will prevent them from invoking salary-related operations. The Managers role, to
which several other identities will belong, will have permissions to invoke both purchase-related and
salary-related operations. To authorize an identity, you will need to retrieve the list of roles it belongs to.
For example, AD DS stores role information, called groups, for each Windows identity. ASP.NET
also supports role storage with the ASP.NET Role provider. The default storage for the role provider is a
SQL database.
The following example shows how you can configure the service behavior to use the built-in ASP.NET
Membership and Role providers for client authentication and authorization.
Configuring the Service for Authorization with the ASP.NET Role Provider
<serviceBehaviors>
<behavior name="SecureCalcBehavior">
<serviceAuthorization principalPermissionMode="UseAspNetRoles" />
<serviceCredentials>
<userNameAuthentication
userNamePasswordValidationMode="MembershipProvider" />
</serviceCredentials>
</behavior>
</serviceBehaviors>
By adding the <serviceAuthorization> element to your service behavior, you can configure which
authorization technique you want to use. The default authorization mechanism used by WCF is the Active
Directory Windows groups, but you can set the principalPermissionMode attribute to either
UseAspNetRoles, to use the ASP.NET Role provider, or to Custom if you want to create your own
authorization manager.
Note: It is possible to mix authentication and authorization storages. For example, WCF can
authenticate the client credentials against Active Directory and retrieve the list of roles for the
Windows identity from the ASP.NET Role provider.
Creating a custom authorization manager is outside the scope of this course. For an example of how to
create a custom authorization manager, see:
http://go.microsoft.com/fwlink/?LinkID=298805&clcid=0x409
Role-based identities are one type of identities that you can use to define permissions in a WCF service.
Another type of identities is claim-based identities. With claim-based identities, the credentials provided
by the client contain, in addition to the identity information, sets of claims you can use to authorize the
client. With claim-based identities, all the information required to authenticate and authorize the client
is included in the credentials the client sends to the service. Claim-based identities are discussed in
Module 11, "Identity Management and Access Control" in Course 20487. However, claim-based
authorization is outside the scope of this course. The remainder of this topic will focus on role-based
identities and role-based authorization.
Role-based security in the .NET Framework is based on the IIdentity and IPrincipal interfaces. The
IIdentity implementation contains information about the client's identity, and the IPrincipal
implementation contains information about the client's roles along with functionality for checking if the
client's identity has a certain role.
For example, in cases where you authenticate users by using Windows credentials, WCF creates the
corresponding WindowsIdentity and WindowsPrincipal objects, and attaches them to the thread's
security context.
Because the identity and principal are available throughout the entire lifetime of your service, you can use
their implementation to authorize the client against specific role requirements. In addition, you can use
the common .NET role-based security model through the PrincipalPermission class imperatively or
declaratively.
To use the PrincipalPermission declaratively, decorate your service operation implementation with the
[PrincipalPermission] attribute, and specify the roles against which you want to authorize the identity.
The following code demonstrates how to configure a service operation with the [PrincipalPermission]
attribute, which requires that the calling user is a member of the Sales role.
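A sketch of such an operation (the operation name, signature, and helper method are illustrative):

```csharp
using System.Security.Permissions;

[PrincipalPermission(SecurityAction.Demand, Role = "Sales")]
public PurchaseResult PurchaseTickets(PurchaseRequest request)
{
    // Runs only if the authenticated identity has the Sales role;
    // otherwise WCF sends an Access Denied fault back to the client
    return ProcessPurchase(request);
}
```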
If the Sales role is not one of the identity's roles, a SecurityException will be thrown, and an Access
Denied fault message will be sent back to the client. The WCF client will turn this fault message into a
SecurityAccessDeniedException managed exception.
You can use the SecurityAction enum to require that the identity belong to a role, and you can also use
it to deny access to specific roles.
The following code demonstrates how to configure a service operation with the [PrincipalPermission]
attribute, denying access to the Intern role.
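A sketch (the operation name and helper method are illustrative; note that SecurityAction.Deny is marked obsolete in later versions of the .NET Framework):

```csharp
using System.Security.Permissions;

[PrincipalPermission(SecurityAction.Deny, Role = "Intern")]
public SalaryReport GetSalaryReport(int employeeId)
{
    // Callers whose identity has the Intern role are denied access
    return LoadSalaryReport(employeeId);
}
```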
You can also use the PrincipalPermission imperatively by creating an instance of the
PrincipalPermission type, and then specifying which roles to check.
The following code demonstrates how to use the PrincipalPermission class in code.
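A sketch of the imperative check the text describes (the operation, price threshold, and type names are assumptions):

```csharp
using System.Security.Permissions;

public void ApproveReservation(Reservation reservation)
{
    // Only members of the Manager role may approve expensive reservations
    if (reservation.Price > 1000)
    {
        PrincipalPermission permission = new PrincipalPermission(null, "Manager");
        permission.Demand(); // Throws SecurityException if the role check fails
    }

    // ... continue approving the reservation
}
```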
The [PrincipalPermission] attribute cannot be used in the preceding example, because the permission
check depends on the price of the reservation. That is why you need to use imperative authorization. The
call to the Demand method verifies the user has the Manager role. If the method fails, a
SecurityException will be thrown.
You can use both the [PrincipalPermission] attribute and the PrincipalPermission class to authorize
roles and individual users, by setting the name parameter.
WCF Authorization
http://go.microsoft.com/fwlink/?LinkID=298806&clcid=0x409
http://go.microsoft.com/fwlink/?LinkID=298807&clcid=0x409
Demonstration Steps
1. From D:\Allfiles\Apx02\DemoFiles\AuthenticatingUsers\setup, run the CreateCert.cmd file.
3. Explore the contents of the Service project and the Client project. Both the service and the client are
configured to use the wsHttpBinding binding, which uses message security with Windows authentication by default.
4. To authenticate user names with the ASP.NET Membership provider, add binding configuration for
the wsHttpBinding, and in the message security settings, set the client credential type to UserName.
The resulting configuration should resemble the following code.
<bindings>
<wsHttpBinding>
<binding>
<security>
<message clientCredentialType="UserName"/>
</security>
</binding>
</wsHttpBinding>
</bindings>
6. To authorize users with ASP.NET Roles, add a <serviceAuthorization> element to the service
behavior configuration, and set the element's principalPermissionMode to UseAspNetRoles.
7. Enable the ASP.NET Role manager by adding the following configuration.
<system.web>
<roleManager enabled="true"/>
</system.web>
8. Add four principal permission checks to the service implementation in the following way:
Add: Principal permission attribute that demands the user has the StandardUsers role
Sub: Principal permission attribute that demands the user has the Managers role
Mul: Principal permission object that demands the user is User1
Div: Principal permission object that demands the user has the Managers role
9. In the client's App.config file, configure the binding for UserName authentication. The resulting
configuration should resemble the binding configuration you created in the service a few steps
before.
10. In the client configuration, add a default endpoint behavior to disable the service certificate check.
The resulting configuration should resemble the following code.
<behaviors>
<endpointBehaviors>
<behavior>
<clientCredentials>
<serviceCertificate>
<authentication certificateValidationMode="None"/>
</serviceCertificate>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>
11. From the Client project, open the client.cs file. Add the code needed to set the credentials of the
proxy to username/password credentials.
Use the ClientCredentials.UserName property of the proxy object to set the client credentials.
12. Run the service and the client. Verify that the Add and Mul operations succeed, and that the Sub and
Div operations throw a SecurityAccessDeniedException.
Objectives
After you complete this lab, you will be able to:
Lab Setup
Estimated Time: 30 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. If you executed a later lab before this one, follow these instructions:
5. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
7. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
8. In the Action pane, click Connect. Wait until the virtual machine starts.
Password: Pa$$w0rd
10. Verify that you received credentials to log in to the Azure portal from your training provider. These credentials and the Azure account will be used throughout the labs of this course.
Developing Windows Azure and Web Services 14-35
<binding>
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>
Note: You can create one default binding configuration without the name attribute for each binding type. The default configuration will apply to any endpoint that uses that binding and does not have its own binding configuration.
Task 3: Configure the service behavior to use the newly created certificate
3. Add a <serviceCredentials> element to the default service behavior.
4. In the <serviceCredentials> element, add the <clientCertificate> element and set the revocation
mode of the client certificate to NoCheck.
Note: You cannot check if the client certificate has been revoked, because it was generated
locally. If a real certification authority had issued the client certificate, it would have been possible
to check whether it was revoked.
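Based on the two steps above, the resulting behavior configuration might resemble the following sketch. Only the elements named in the steps are shown; any service certificate settings are omitted.

```xml
<serviceBehaviors>
  <behavior>
    <serviceCredentials>
      <clientCertificate>
        <authentication revocationMode="NoCheck"/>
      </clientCertificate>
    </serviceCredentials>
  </behavior>
</serviceBehaviors>
```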
Results: You can test your changes at the end of the next exercise.
4. Implement the Id property by returning a new GUID. Initialize the new GUID in the class's constructor and store it in a private field.
5. Create a static dictionary field called AuthorizationForUser in the class. Set the key to a string and
the value to a string array. Initialize it with the following key/value pair:
Key: CN=Client
Get the Identities value from the evaluationContext.Properties collection and cast it to a list of
System.Security.Principal.IIdentity.
Verify the list contains identities, and select the first identity from the list.
Extract the part of the identity's Name property before the semicolon.
Check whether the roles dictionary contains roles for the partial name that you found.
Set the Principal value of the evaluationContext.Properties collection to a new
System.Security.Principal.GenericPrincipal. Initialize the generic principal with the identity and the
roles that you found (or a null value if no roles are found).
If the principal was created, return true. Otherwise, return false.
The resulting code should resemble the following code.
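The resulting code is not reproduced in this excerpt. As a minimal sketch of an authorization policy implementing the steps above, it might look like the following; the class name, the role values in the dictionary, and the name-parsing details are assumptions based on the steps, and the lab's actual code may differ.

```csharp
using System;
using System.Collections.Generic;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;
using System.Security.Principal;

// Class name assumed; the lab's actual policy class may be named differently
public class CustomAuthorizationPolicy : IAuthorizationPolicy
{
    private readonly string id = Guid.NewGuid().ToString();

    // Maps a partial certificate subject name to its roles (role value assumed)
    private static readonly Dictionary<string, string[]> AuthorizationForUser =
        new Dictionary<string, string[]>
        {
            { "CN=Client", new[] { "Managers" } }
        };

    public string Id { get { return id; } }

    public ClaimSet Issuer { get { return ClaimSet.System; } }

    public bool Evaluate(EvaluationContext evaluationContext, ref object state)
    {
        object value;
        if (!evaluationContext.Properties.TryGetValue("Identities", out value))
            return false;

        var identities = value as IList<IIdentity>;
        if (identities == null || identities.Count == 0)
            return false;

        IIdentity identity = identities[0];

        // A certificate identity name looks like "CN=Client; <thumbprint>";
        // take the part before the semicolon
        string partialName = identity.Name.Split(';')[0];

        string[] roles;
        AuthorizationForUser.TryGetValue(partialName, out roles);

        // Attach a principal so WCF can evaluate role demands (roles may be null)
        evaluationContext.Properties["Principal"] =
            new GenericPrincipal(identity, roles);
        return true;
    }
}
```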
Task 2: Configure the service authorization to use the custom authorization policy
1. From the BlueYonder.Server.Booking.Host project, open the App.config file.
2. Add a <serviceAuthorization> element to the default service behavior.
Results: After you complete this exercise, the booking service host is opened successfully and can locate
the service certificate.
Exercise 3: Configure the ASP.NET Web API Booking Service for Secured
Communication
Scenario
To complete the message security configuration, you must also configure the client-side accordingly. In
this exercise, you update the endpoint configuration in the Web API service to use message security and
authenticate with a certificate.
1. Create a client authentication certificate for the ASP.NET Web API booking service
Task 1: Create a client authentication certificate for the ASP.NET Web API booking
service
1. From D:\AllFiles\Mod08\LabFiles\Setup, run the CreateClientCertificate.cmd file.
3. Add a default NetTcpBinding binding configuration. Set it to use Message security with client
credentials of type Certificate.
Task 3: Configure the client-side endpoint behavior with the client's certificate
1. Add a <behaviors> element and in it, add a default endpoint behavior with a <clientCredentials>
element.
The resulting configuration should resemble the following code.
<behaviors>
  <endpointBehaviors>
    <behavior>
      <clientCredentials>
      </clientCredentials>
    </behavior>
  </endpointBehaviors>
</behaviors>
2. Add a <serviceCertificate> element to the <clientCredentials> element, and set the revocation
mode of the service certificate to NoCheck.
3. Add the <clientCertificate> element to the <clientCredentials> element. Set the certificate to use
the client certificate from the LocalMachine\My certificate store. Search for the certificate with the
Client subject name.
4. Add an <identity> element in the client endpoint element. Configure the identity to the server
certificate from the LocalMachine\TrustedPeople certificate store. Search for the certificate with the
Server subject name.
<identity>
<certificateReference storeLocation="LocalMachine" storeName="TrustedPeople"
x509FindType="FindBySubjectName" findValue="Server"/>
</identity>
Note: The <identity> element contains the information about the service's certificate. The
client uses this configuration to verify that it is connected to the correct service.
7. Open the app bar and search for trips to New York. Purchase a trip from Seattle to New York.
8. Go back to the 20487B-SEA-DEV-A virtual machine to debug the WCF service. Use the Quick Watch
to view the value of the ServiceSecurityContext.Current.PrimaryIdentity property, and verify that
it is set to the client certificate. Continue running and verify the client is showing the new reservation.
Close the client and service applications.
Results: After you complete the exercise, you will be able to start the client application and create a
reservation.
Question: In this lab, you used a client certificate to authenticate the client. Why would you
use certificates instead of other authentication types, such as Windows identities or
username/password?
Review Question(s)
Question: What types of credentials does WCF support?
Tools
WCF Services Configuration Editor
Course Evaluation
Your evaluation of this course will help Microsoft understand the quality of your learning experience.
Please work with your training provider to access the course evaluation form.
Microsoft will keep your answers to this survey private and confidential and will use your responses to
improve your future learning experience. Your open and honest feedback is valuable and appreciated.
2. Browse to https://manage.windowsazure.com.
3. If the Sign In page appears, enter your email and password, and then click Sign In.
5. In the upper-left corner of the sql databases page, click SERVERS.
7. In the SQL Database server settings dialog box, in the LOGIN NAME box, type SQLAdmin.
8. In the LOGIN PASSWORD box and CONFIRM PASSWORD box, type Pa$$w0rd.
9. In the REGION list, select the region closest to you, and then click Complete (the checkmark button).
10. Wait for the server to appear in the list of servers and for its status to change to Started.
11. Write down the name of the newly created SQL Database server.
Task 2: Manage the Windows Azure SQL Database Server from SQL Server
Management Studio
1. On the sql database page, click on the name of the newly created server.
2. Click the CONFIGURE tab.
3. In the allowed ip addresses section, add a new firewall rule by filling in the following information:
Note: As a best practice, you should allow only your IP address, or your organization's IP address range, to access the database server. However, in this course, you will use this database server in future labs, and your IP address might change in the meantime; therefore, you are required to allow access from all IP addresses.
5. On the Start screen, click the SQL Server Management Studio tile.
6. In the Connect to Server dialog box, in the Server name box, enter
SQLServerName.database.windows.net (Replace SQLServerName with the server name you wrote
down in the previous task).
11. In Object Explorer, right-click the Databases node, and then click Import Data Tier Application.
12. In the Import Data-tier Application wizard, click Next, and then click Browse.
15. Click Next, click Next again, and then click Finish. Wait until the database import procedure is
finished, and then click Close.
16. Press F5 to refresh the database list, expand the Databases node, and verify you see the BlueYonder
database.
Results: After completing this exercise, you should have created a Windows Azure SQL Database in your
Windows Azure account.
6. Select the Create directory for solution check box, and then click OK.
7. In Solution Explorer, right-click the Class1.cs file, click Delete, and then click OK.
8. Right-click the BlueYonder.Model project, point to Add, and then click New Item.
9. In the Add New Item dialog box, in the navigation pane, expand the Visual C# Items node. Click the Data node, and then select ADO.NET Entity Data Model from the list of templates.
10. In the Name text box, enter EntityModel, and then click Add.
11. In the Entity Data Model Wizard, click Generate from database, and then click Next.
12. In the Choose Your Data Connection step, click New Connection.
13. If the Choose Data Source dialog box appears, click Microsoft SQL Server, and then click Continue.
14. In the Connection Properties dialog box, in the Server name box, enter
SQLServerName.database.windows.net (replace SQLServerName with the server name you have
written down in the previous exercise).
20. From the Select or enter a database name list, select the BlueYonder database, and then click OK.
21. In the Choose Your Data Connection step, select the Yes, Include the sensitive data in the
connection string option, and then click Next.
22. In the Choose Your Database Objects and Settings step, expand Tables, expand dbo, then check
Locations and Travelers, and then click Finish. Wait until Visual Studio 2012 creates the data model.
23. If the Security Warning dialog box appears, click OK (it may appear more than once).
24. To save the file, press Ctrl+S. If the Security Warning dialog box appears, click OK (it may appear
more than once).
25. Close the EntityModel.edmx diagram.
Results: After completing this exercise, you should have created Entity Framework wrappers for the
BlueYonder database.
2. In the Add New Project dialog box, on the navigation pane, expand the Installed node. Expand the
Visual C# node. Click the Web node. Click ASP.NET MVC 4 Web Application from the list of
templates.
4. In the New ASP.NET MVC 4 Project dialog box, click the Web API template, and then click OK.
Task 2: Add a Web API Controller with CRUD Actions, Using the Add Controller
Wizard
1. Right-click BlueYonder.MVC project, and then click Add Reference.
2. In the Reference Manager dialog box, click Solution in the navigation pane, check the
BlueYonder.Model check box, and then click OK.
4. Locate the <connectionStrings> section, select the <add> element, including its attributes, and
press Ctrl+C to copy the element to the clipboard.
5. In Solution Explorer, in the BlueYonder.MVC project, double-click Web.config.
6. Locate the <connectionStrings> section, place the cursor after the <connectionStrings> tag, and
press Ctrl+V to paste the connection string.
10. In Server Explorer, right-click Data Connections, and then click Refresh. BlueYonderEntities will
show up under the Data Connections node.
11. In Solution Explorer, in the BlueYonder.MVC project, right-click Controllers, point to Add, and then click Controller.
12. In the Add Controller dialog box, in the Controller name box, enter LocationsController.
13. Select API controller with read/write actions, using Entity Framework from the Template drop
down list.
14. Select Location (BlueYonder.Model) from the Model class combo box.
15. Select BlueYonderEntities (BlueYonder.Model) from the Data context class combo box.
Note: You now have a Web API controller for the Location model.
17. In Solution Explorer, right-click the BlueYonder.MVC project, and then click Set as StartUp Project.
19. In the Internet Explorer window, in the address bar, append api/locations to the address, and then
press Enter.
20. At the bottom of the Internet Explorer window, a prompt appears. Click Open.
21. If you are prompted to select a program to open the file, in the Windows can't open this type of file (.json) dialog box, click Try an app on this PC, and then select Notepad from the list of available programs. When Notepad opens, you should see a list of Location entities, encoded in the JSON format.
Results: After completing this exercise, you will have a website that exposes the Web API for CRUD
operations on the BlueYonder database.
Note: The name you enter is combined with the .azurewebsites.net suffix, providing a
unique host name that is used as your website URL.
7. In the REGION drop down list, select the region closest to your location.
8. Click CREATE WEB SITE. The website is added to the Web Sites table with the status Creating. Wait until the status changes to Running.
9. On the web sites page, click the name of the new website.
10. On the DASHBOARD page, click Download the publish profile on the right side under quick
glance. An Internet Explorer dialog box appears at the bottom. Click the arrow within the Save
button. Select the Save as option and specify the location D:\AllFiles\Mod01\LabFiles.
Task 2: Deploy the Web Application to the Windows Azure Web Site
1. Return to Visual Studio 2012, and in Solution Explorer, right-click the BlueYonder.MVC project, and
then click Publish.
2. In the Publish Web dialog box, click Import, browse to D:\AllFiles\Mod01\LabFiles, then select the profile settings file that you downloaded, and then click Open. Click OK in the Import Publish Profile dialog box.
3. Click Publish. Visual Studio 2012 builds and publishes the application according to the settings that
are provided in the profile file.
4. After the deployment finishes, Visual Studio 2012 opens Internet Explorer and browses to the website.
Note: At this point, you can simply click Next at every step of the wizard, and then click
Publish to start the publishing process. Later in the course you will learn how the deployment
process works and how to configure it.
5. In the Internet Explorer window, in the address bar, append api/locations to the address, and then
press Enter.
6. At the bottom of the Internet Explorer window, a prompt appears. Click Open.
7. If you are prompted to select a program to open the file, select Notepad from the list of available
programs.
8. When Notepad opens, you should see a list of Location entities, encoded in the JSON format.
2. Browse to https://manage.windowsazure.com.
3. If the Sign In page appears, enter your email and password, and then click Sign In.
5. In the upper-left corner of the sql databases page, click SERVERS.
6. On the sql database page, click the row of the newly created server, in the STATUS column. The row should be highlighted.
8. In the Delete Server Confirmation dialog box, enter the name of the server, as suggested in the description. Click the checkmark (V) button to confirm the operation.
Note: Windows Azure free subscriptions have a resource limitation and a restriction on the
total working hours. To avoid exceeding those limitations, you have to delete the Windows Azure
SQL Databases.
Results: After completing this exercise, you should ensure that all your products will be hosted on the
Windows Azure cloud by using Windows Azure SQL Database and Windows Azure Web Site.
5. Explore the FlightScheduleId and the Flight properties of the FlightSchedule class, and note how
the DatabaseGenerated and ForeignKey attributes are used in this class.
6. In Solution Explorer, in the BlueYonder.DataAccess project, expand the Repositories folder, and
then double-click FlightRepository.cs.
Note: The FlightRepository class implements the Repository pattern. The Repository
pattern is designed to decouple the data access strategy from the business logic layer that
handles the data. The repository exposes the data access functionality and implements it
internally by using a specific data access strategy, which in this case is Entity Framework. By using
repositories, you can easily create a mock, replacing the repository, and improve the testability of
the business logic.
For more information about the Repository pattern and its related patterns, see
http://go.microsoft.com/fwlink/?LinkID=298756&clcid=0x409.
In Lab 4, "Extending Travel Companion's ASP.NET Web API Services", Module 4, "Extending and
Securing ASP.NET Web API Services", you will see how to increase testability by using mocked
repositories.
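As a sketch of the pattern the note describes, a repository contract and an in-memory test double might look like the following; the interface and type names are illustrative and are not the lab's actual repository types.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative contract: the business logic depends only on this interface,
// not on the Entity Framework-backed implementation behind it
public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    void Add(T entity);
    void Delete(T entity);
    void Save();
}

// An in-memory implementation, usable as a test double in place of the
// Entity Framework repository when unit-testing business logic
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();

    public IQueryable<T> GetAll() { return _items.AsQueryable(); }
    public void Add(T entity) { _items.Add(entity); }
    public void Delete(T entity) { _items.Remove(entity); }
    public void Save() { /* no-op: nothing to persist in memory */ }
}
```

Swapping the in-memory double for the real repository requires no change to the business logic, which is exactly the decoupling the note describes.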
using System.ComponentModel.DataAnnotations.Schema;
3. Note the name of the TripId field that will be used as a key.
Note: You do not need to decorate the TripId property with the [Key] attribute, because the property follows the Code First convention for primary key names, which is the class name suffixed with Id.
4. Enable lazy loading for the FlightInfo property by replacing the property with the following code.
[ForeignKey("FlightScheduleID")]
public virtual FlightSchedule FlightInfo { get; set; }
Note: Entity Framework will detect the virtual property in the Trip class and will create a
new derived proxy class that implements lazy loading for the FlightInfo property.
When you load trip entities from the database, the entity object will be of the derived trip proxy
type, and not of the Trip type.
using System.ComponentModel.DataAnnotations.Schema;
8. Enable lazy loading for the DepartureFlight property by replacing the property with the following
code.
[ForeignKey("DepartFlightScheduleID")]
public virtual Trip DepartureFlight { get; set; }
9. Enable lazy loading for the ReturnFlight property by replacing the property with the following code.
[ForeignKey("ReturnFlightScheduleID")]
public virtual Trip ReturnFlight { get; set; }
Note: Setting the ReturnFlightScheduleID foreign key property to a nullable int indicates
that this relation is not mandatory (0-N relation, meaning a reservation does not require a return
flight). The DepartFlightScheduleID foreign key property is not nullable and therefore indicates
the relation is mandatory (1-N relation, meaning every reservation must have a departing flight).
4. Locate the Edit method, and replace its content with the following code.
Note: You can refer to Lesson 4, "Manipulating Data", Topic 4, "Updating Entities", for an explanation of when to use the SetValues method instead of manually setting the state of the entity to Modified.
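The replacement code for the Edit method is not reproduced in this excerpt. A sketch of an Edit method that uses SetValues might look like the following; the context field name, the Flights set, and the FlightId key property are assumptions, and the lab's actual code may differ.

```csharp
// Sketch only - assumes the repository holds a DbContext field named "context"
public void Edit(Flight flight)
{
    // Load the currently persisted entity, then copy the scalar values from
    // the detached, updated entity onto it. With SetValues, only properties
    // whose values actually changed are marked as modified.
    Flight existing = context.Flights.Find(flight.FlightId);
    if (existing != null)
    {
        context.Entry(existing).CurrentValues.SetValues(flight);
    }
}
```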
5. Locate the Dispose method, and replace its content with the following code.
if (context != null)
{
context.Dispose();
context = null;
}
Add
Delete
7. Review the implementation of the Delete method to understand how cascade delete was
implemented, so that when a Reservation is deleted, its DepartureFlight and ReturnFlight objects
are deleted as well.
Results: After you complete this exercise, the Entity Framework Code First model is ready for testing.
2. Explore the query test methods in the FlightQueries class. The TestInitialize static method is
responsible for initializing the database and the test data, and all the other methods are intended to
test various queries with lazy load and eager load.
Reservation reservation;
using (var repository = new ReservationRepository())
{
var query = from r in repository.GetAll()
where r.ConfirmationCode == "1234"
select r;
query = query.Include(r => r.DepartureFlight).Include(r => r.ReturnFlight);
reservation = query.FirstOrDefault();
}
Assert.IsNotNull(reservation);
Assert.IsNotNull(reservation.DepartureFlight);
Assert.IsNotNull(reservation.ReturnFlight);
Assert.IsNotNull(reservation.DepartureFlight);
Assert.IsNotNull(reservation.ReturnFlight);
Note: By examining the value of the navigation properties, you are invoking the lazy load
mechanism.
context.Configuration.LazyLoadingEnabled = false;
Note: Notice that the Assert method now checks for a null value, because lazy loading was turned off and the navigation properties are not loaded.
2. In the FlightActions class, add the following code to the UpdateFlight method, below the comment
//TODO: Lab 02 Exercise 2, Task 4.1 : Implement the UpdateFlight Method.
FlightRepository repository;
using (repository = new FlightRepository())
{
repository.Edit(flight);
repository.Save();
}
using (repository = new FlightRepository())
{
Flight updatedFlight = repository.FindBy(f => f.FlightNumber ==
"BY002_updated").FirstOrDefault();
Assert.IsNotNull(updatedFlight);
}
3. Locate the code for loading and updating the flight and location objects. Each entity is updated and saved in a separate transaction, but because both transactions are located in the same transaction scope, neither transaction is committed yet.
4. In the UpdateUsingTwoRepositories method, locate the query below the comment //TODO: Lab
02, Exercise 2 Task 5.2 : Review the query for the updated flight that is inside the transaction scope.
Note: When querying from inside a transaction scope, you will get the updated values of
entities, while other users, not participating in the transaction, will see the old values, until the
transaction commits.
Note: Without marking the transaction scope as complete, both transactions will roll back when the transaction scope closes.
6. Locate the query below the comment //TODO: Lab 02, Exercise 2 Task 5.4 : Review the query for the
updated flight that is outside the transaction scope.
Note: After the transaction is rolled back, attempts to locate the updated entity will fail.
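The pattern reviewed in this task can be sketched as follows; the repository types and entity variables stand in for the lab's actual code.

```csharp
using System.Transactions;

public static class FlightUpdater
{
    public static void UpdateFlightAndLocation(Flight flight, Location location)
    {
        // One TransactionScope spans two repositories; each Save() runs in
        // its own database transaction, but neither commits until the scope
        // is completed and closed.
        using (var scope = new TransactionScope())
        {
            using (var flightRepository = new FlightRepository())
            {
                flightRepository.Edit(flight);
                flightRepository.Save();
            }

            using (var locationRepository = new LocationRepository())
            {
                locationRepository.Edit(location);
                locationRepository.Save();
            }

            // Without this call, both transactions roll back when the
            // scope closes
            scope.Complete();
        }
    }
}
```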
Task 6: Run the tests, and explore the database created by Entity framework
1. In Solution Explorer, under the BlueYonder.IntegrationTests project, double-click
TravelCompanionDatabaseInitializer.cs.
2. Locate the Seed method, and add the following code at the end of the method.
context.Reservations.Add(reservation1);
context.Reservations.Add(reservation2);
context.SaveChanges();
4. On the Test menu, point to Windows, and then click Test Explorer.
5. In Test Explorer, click Run All, and wait for all the tests to complete.
6. Explore the results, and verify that all 16 methods have passed the test.
7. On the Start screen, click the SQL Server Management Studio tile.
8. In the Connect to Server dialog box, type the following information, and then click Connect:
10. Explore the tables that were created by Entity Framework, and notice the creation of the
Reservations and Trips tables.
Results: The Entity Framework data model works as designed and is verified by tests.
5. In the Add New Item window, type TravelersController in the Name box, and then click Add.
6. Add the following using directives at the beginning of the class.
using System.Web.Http;
using BlueYonder.DataAccess.Interfaces;
using BlueYonder.DataAccess.Repositories;
using System.Net.Http;
using BlueYonder.Entities;
using System.Net;
public TravelersController()
{
Travelers = new TravelerRepository();
}
10. Create a public method called Get by adding the following code.
11. Add code to retrieve a traveler from the Travelers property by adding the following code to the Get
method.
12. Add the following code to the end of the Get method.
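The code for steps 10-12 is not reproduced in this excerpt. A sketch of the combined Get action might look like the following; the TravelerId key property and the not-found behavior are assumptions, and the lab's actual code may differ.

```csharp
// Sketch of a Web API Get action on TravelersController
public HttpResponseMessage Get(int id)
{
    // Retrieve the traveler from the repository exposed by the
    // Travelers property
    Traveler traveler =
        Travelers.FindBy(t => t.TravelerId == id).FirstOrDefault();

    // Return 404 if no traveler matches; otherwise return 200 with the entity
    if (traveler == null)
        return Request.CreateResponse(HttpStatusCode.NotFound);

    return Request.CreateResponse(HttpStatusCode.OK, traveler);
}
```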
13. In the Get method, right-click the first line of code, point to Breakpoint, and then click Insert
Breakpoint.
14. Create a public method called Post by adding the following code.
15. Add new traveler to the database by adding the following code to the Post method.
16. Add the following code to the end of the Post method, to create an HttpResponseMessage.
// creating the response, the newly saved entity and 201 Created status code
var response = Request.CreateResponse(HttpStatusCode.Created, traveler);
17. Add the following code to the end of the Post method to set the Location header with the URI of
the new traveler.
18. In the Post method, right-click the first line of code, point to Breakpoint, and then click Insert
Breakpoint.
19. Create a public method called Put by adding the following code.
20. Add the following code to the beginning of the Put method to validate that the traveler exists.
21. Update the existing traveler by adding the following code to the end of the method.
Travelers.Edit(traveler);
Travelers.Save();
return Request.CreateResponse(HttpStatusCode.OK);
Note: The HTTP PUT method can also be used to create resources. Checking whether the resource exists is done here for simplicity.
22. In the Put method, right-click the first line of code, point to Breakpoint, and then click Insert
Breakpoint.
23. Create a public method called Delete by adding the following code.
24. Retrieve the traveler from the repository by adding the following code to the Delete method.
25. Validate that the traveler exists by adding the following code to the end of the Delete method.
26. Delete the traveler from the repository by adding the following code to the end of the Delete
method.
Travelers.Delete(traveler);
Travelers.Save();
return Request.CreateResponse(HttpStatusCode.OK);
Results: After you complete this exercise, you will be able to run the project from Visual Studio 2012 and
access the travelers service.
4. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. Click Close in the Developers License dialog
box.
Note: If you do not have a valid email address, click Sign up and register for the service. Write down these credentials and use them whenever an email account is required.
5. In Solution Explorer, under the BlueYonder.Companion.Client project, expand the Helpers folder,
and then double-click DataManager.cs.
6. Locate the GetTravelerAsync method, and under the comment // TODO: Lab 03 Exercise 2: Task 1.3:
Implement the GetTravelelrAsync method, remove the return null line and add the following code.
7. In the GetTravelerAsync method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.
8. Locate the comment // TODO: Lab 03 Exercise 2: Task 1.6: Review the UpdateTravelerAsync method,
and review the code of the CreateTravelerAsync method. The method sets the ContentType header
to request a JSON response. The method then uses the PostAsync method to send a POST request to
the server.
9. In the CreateTravelerAsync method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.
10. Locate the comment // TODO: Lab 03 Exercise 2: Task 1.8: Review the UpdateTravelerAsync method,
and review the code of the UpdateTravelerAsync method. The method uses the client.PutAsync
method to send a PUT request to the server.
11. In the UpdateTravelerAsync method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.
5. Visual Studio 2012 with the BlueYonder.Companion.Client solution opens. The code execution
breaks inside the GetTravelerAsync method, and the line in breakpoint is highlighted in yellow.
8. Visual Studio 2012 with the BlueYonder.Companion solution opens. The code execution breaks
inside the Get method, and the line in breakpoint is highlighted in yellow.
9. Position the mouse cursor over the id parameter to view its value.
15. Visual Studio 2012 with the BlueYonder.Companion solution opens. The code execution breaks
inside the Post method, and the line in breakpoint is highlighted in yellow.
16. Position the mouse cursor over the traveler parameter to view its contents. Expand the traveler
object to view the object's properties.
19. If you are prompted to allow the app to run in the background, click Allow.
20. Display the app bar by right-clicking or by swiping from the bottom of the screen.
21. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
22. Wait for the app to show a list of flights from Seattle to New York.
31. Visual Studio 2012 with the BlueYonder.Companion.Client solution opens. The code execution
breaks inside the UpdateTravelerAsync method, and the line in breakpoint is highlighted in yellow.
34. Visual Studio 2012 with the BlueYonder.Companion solution opens. The code execution breaks
inside the Put method, and the line in breakpoint is highlighted in yellow.
35. Position the mouse cursor over the traveler parameter to view its contents. Expand the traveler
object to view the object's properties.
38. In the client app, click Close to close the confirmation message, and then close the client app.
39. Go back to the virtual machine 20487B-SEA-DEV-A.
Results: After you complete this exercise, you will be able to run the BlueYonder Companion client
application and create a traveler when purchasing a trip. You will also be able to retrieve an existing
traveler and update its details.
Note: The same pattern was already applied in the begin solution for the rest of the
controller classes (TravelersController, FlightsController, ReservationsController and
TripsController). Open those classes to review the constructor definition.
2. Locate the GetService method, and add the following code after the comment // TODO: Lab 4:
Exercise 1: Task 2.1: Add a resolver for the LocationsController class.
if (serviceType == typeof(LocationsController))
return new LocationsController(new LocationRepository());
2. Add the following code in the beginning of Register method, to map the dependency resolver to
BlueYonderResolver:
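The mapping code is not shown in this excerpt. Assuming the standard Web API configuration signature, it might resemble the following sketch.

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Route controller creation through the custom dependency resolver
        config.DependencyResolver = new BlueYonderResolver();

        // ... existing route and formatter registrations follow
    }
}
```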
6. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Set as
StartUp Project.
9. Switch back to Visual Studio and make sure the code breaks on the breakpoint.
10. Move the mouse cursor over the constructor's parameter and verify it is not null.
12. In Solution Explorer, expand the Tests folder, expand the BlueYonder.Companion.Controllers.Tests
project, and then double-click LocationControllerTest.cs.
13. Locate the Initialize method and examine its code. The test initialization process uses the
StubLocationRepository type which was auto-generated with the Fakes framework. This stub
repository mimics the real location repository. You use the fake repository to test the code, instead of
using the real repository, which requires using a database for the test. When running unit tests, you
should use fake objects to replace external components, in order to reduce the complexity of creating
and executing the test.
14. On the Test menu, point to Run, and then click All Tests.
15. Ensure that the test passes, and then close the Test Explorer window.
Results: You will be able to inject data repositories to the controllers instead of creating them explicitly
inside the controllers. This will decouple the controllers from the implementation of the repositories.
2. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.
3. Press Ctrl+E and type WebApi.OData.
4. In the center pane, click the Microsoft ASP.NET Web API OData package, and then click Install.
8. Decorate the Get method overload, which has three parameters, with the following attribute.
[Queryable]
9. Remove the three parameters from the Get method and replace the IEnumerable return type with
the IQueryable type. The resulting method declaration should resemble the following code.
10. Replace the implementation of the method with the following code.
return Locations.GetAll();
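Combining steps 8-10, the resulting action might resemble the following sketch.

```csharp
// [Queryable] lets Web API translate OData query options ($filter, $top,
// and so on) into LINQ operators applied to the returned IQueryable
[Queryable]
public IQueryable<Location> Get()
{
    return Locations.GetAll();
}
```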
Task 2: Handle the search event in the client application and query the flight schedule
service by using OData filters
1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.
4. Browse to D:\AllFiles\Mod04\LabFiles\begin\BlueYonder.Companion.Client.
6. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a User
Account Control dialog box, click Yes. Type your email address and a password in the Windows
Security dialog box and then click Sign in. Click Close in the Developers License dialog box.
Note: If you do not have a valid email address, click Sign up now and register for the service. Write
down these credentials and use them whenever an email account is required.
8. Locate the declaration of the GetLocationsWithQueryUri property, and change it to access the
locations service by using an OData query: replace the returned value with the following code.
GetLocationsUri + "?$filter=substringof(tolower('{0}'),tolower(City))";
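For example, with the search term New substituted into the {0} placeholder, the client would issue a request similar to the following sketch (the host name is illustrative):

```
GET http://localhost/BlueYonder.Companion.Host/locations?$filter=substringof(tolower('New'),tolower(City))
```

The substringof and tolower functions make the filter a case-insensitive "contains" match on the City property.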
L4-4 Developing Windows Azure and Web Services
Results: Your web application exposes the OData protocol, which supports GET requests for the locations data.
2. In Solution Explorer, expand the BlueYonder.Entities project, and then double-click Traveler.cs.
3. Add the following using directives to the beginning of the file.
using System.ComponentModel.DataAnnotations;
4. Decorate the FirstName, LastName, and HomeAddress properties with the following attribute.
[Required]
[Phone]
[EmailAddress]
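The attribute list above spans several decoration steps. One plausible arrangement on the Traveler entity is sketched below; MobilePhone and Email are hypothetical property names added for illustration, since only FirstName, LastName, and HomeAddress are named in the lab step.

```csharp
// Hedged sketch: one plausible arrangement of the data-annotation
// attributes on the Traveler entity. MobilePhone and Email are
// hypothetical property names, not taken from the lab.
using System.ComponentModel.DataAnnotations;

public class Traveler
{
    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }

    [Required]
    public string HomeAddress { get; set; }

    [Phone]
    public string MobilePhone { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}
```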
2. In the Add New Item dialog box, type ModelValidationAttribute in the Name box. Click Add.
using System.Web.Http.Filters;
using System.Net;
using System.Net.Http;
using System.Web.Http;
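A model-validation action filter built with these namespaces typically looks like the following sketch. This is an assumed implementation, not necessarily the lab's exact code; note that HttpActionContext also requires the System.Web.Http.Controllers namespace.

```csharp
// Hedged sketch of a Web API action filter that rejects requests whose
// model failed data-annotation validation. The lab's actual
// implementation may differ.
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ModelValidationAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (!actionContext.ModelState.IsValid)
        {
            // Short-circuit the pipeline with a 400 response that
            // describes the validation errors
            actionContext.Response = actionContext.Request.CreateErrorResponse(
                HttpStatusCode.BadRequest, actionContext.ModelState);
        }
    }
}
```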
Task 3: Apply the custom attribute to the PUT and POST actions in the booking
service
1. In Solution Explorer, in the BlueYonder.Companion.Controllers project, double-click
TravelersController.cs.
using BlueYonder.Companion.Controllers.ActionFilters;
3. Decorate the Put and Post methods with the following attribute.
[ModelValidation]
Results: Your web application will verify that the minimum necessary information is sent by the client
before trying to handle it.
2. Browse to D:\AllFiles\Mod04\LabFiles\Setup.
3. Double-click the Setup.cmd file. Wait for the script to complete successfully and press any key to
close the window.
4. On the Start screen, click the Internet Information Services (IIS) Manager tile.
6. If an Internet Information Services (IIS) Manager dialog box opens asking about the Microsoft Web
Platform, click No.
7. In the Features View, double-click the Server Certificates icon under the IIS group.
8. In the Server Certificates list, verify you see a certificate issued to SEA-DEV12-A. This certificate was
created by the script you ran in the previous task.
9. In the Connections pane, expand SEA-DEV12-A (SEA-DEV12-A\Administrator). Expand Sites, and then
click Default Web Site.
12. In the Add Site Binding dialog box, select https in the Type combo box.
13. Select SEA-DEV12-A in the SSL Certificate combo box. Click OK. An HTTPS binding is added to the
Site Binding list.
Note: When you add an HTTPS binding to the Web site bindings, all web applications in
the Web site will support HTTPS.
2. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Properties.
3. On the navigation pane, click the Web tab.
4. On the Web tab, scroll to the Servers group. Clear the Use IIS Express check box. If you get a
confirmation dialog box for SQL Express, click Yes.
9. In the Connections pane, right-click Default Web Site, and click Refresh.
This action opens Internet Explorer and browses to the web application.
12. In Internet Explorer 10, locate the address bar and append locations to the end of the URL. Press
Enter.
13. A prompt appears at the bottom of Internet Explorer 10. Click Open.
14. If you are prompted to select a program to open the file, click Try an App on this PC and select
Notepad from the list of available programs.
15. Explore the contents of the file. It contains the list of locations from the database in the JSON format.
16. In Internet Explorer 10, change the URL in the address bar to https://SEA-DEV12-
A/BlueYonder.Companion.Host/locations and press Enter.
18. If you are prompted to select a program to open the file, select Notepad from the list of available
programs.
19. Explore the contents of the file. It contains the list of locations from the database in the JSON format.
https://SEA-DEV12-A/BlueYonder.Companion.Host/
7. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
8. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
Note: The search functionality now uses the OData based locations service.
9. Wait for the app to show a list of flights from Seattle to New York.
18. Verify you receive an error message originating from the service saying The Email field is not a valid
e-mail address. Click Close.
19. In the Email Address box, replace ABC with your email address.
20. Click Purchase.
21. Click Close to close the confirmation message, and then close the client app.
Results: The communication with your web application will be secured using a certificate.
using System.Runtime.Serialization;
using BlueYonder.Entities;
[DataMember]
public int FlightScheduleID { get; set; }
[DataMember]
public FlightStatus Status { get; set; }
[DataMember]
public SeatClass Class { get; set; }
13. In the ReservationDto.cs file that opened, add the following using directives.
using System.Runtime.Serialization;
using BlueYonder.Entities;
[DataMember]
public int TravelerId { get; set; }
[DataMember]
public DateTime ReservationDate { get; set; }
[DataMember]
public TripDto DepartureFlight { get; set; }
[DataMember]
public TripDto ReturnFlight { get; set; }
2. In the Add New Item dialog box, select Interface from the items list, enter IBookingService in the
Name box (replace the existing name), and then click Add.
3. In the IBookingService.cs file that opened, add the following using directives.
using System.ServiceModel;
using BlueYonder.BookingService.Contracts.Faults;
[ServiceContract(Namespace = "http://blueyonder.server.interfaces/")]
[OperationContract]
[FaultContract(typeof(ReservationCreationFault))]
string CreateReservation(ReservationDto request);
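Assembled from the fragments above, the complete service contract should resemble the following sketch:

```csharp
// The contract fragments above, assembled into one interface
using System.ServiceModel;
using BlueYonder.BookingService.Contracts.Faults;

[ServiceContract(Namespace = "http://blueyonder.server.interfaces/")]
public interface IBookingService
{
    // The fault contract lets the service report reservation failures
    // as typed SOAP faults instead of generic exceptions
    [OperationContract]
    [FaultContract(typeof(ReservationCreationFault))]
    string CreateReservation(ReservationDto request);
}
```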
2. To implement the IBookingService interface, change the declaration of the class as follows.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
4. Implement the interface by adding the following method code to the class.
Note: At this point, the class will not compile because no value is returned from the
method. Ignore this for now, as you will soon write the missing code.
if (request.DepartureFlight == null)
{
throw new FaultException<ReservationCreationFault>(
new ReservationCreationFault
{
Description = "Reservation must include a departure flight",
ReservationDate = request.ReservationDate
}, "Invalid flight info");
}
6. To the CreateReservation method, add the following code below the if statement block.
if (request.ReturnFlight != null)
{
reservation.ReturnFlight = new Trip
{
Class = request.ReturnFlight.Class,
Status = request.ReturnFlight.Status,
FlightScheduleID = request.ReturnFlight.FlightScheduleID
};
}
8. To the CreateReservation method, add the following code below the last if statement block.
10. In the CreateReservation method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.
Results: You will be able to test your results only at the end of the second exercise.
Note: The begin solution already contains all the project references that are needed for the
project. This includes the BlueYonder.BookingService.Contracts,
BlueYonder.BookingService.Implementation, BlueYonder.DataAccess, and
BlueYonder.Entities projects, as well as the Entity Framework 5.0 package assembly.
3. In the Reference Manager dialog box, expand the Assemblies node in the pane on the left side, and
then click Framework.
4. Scroll down the assemblies list, point to the System.ServiceModel assembly, and then select the
check box next to the assembly name.
5. Click OK.
9. Before the </configuration> tag (the last tag in the file), add the following configuration.
<system.serviceModel>
<services>
<service name="BlueYonder.BookingService.Implementation.BookingService">
</service>
</services>
</system.serviceModel>
10. Between the <service> and </service> tags of the App.config file, add the following configuration.
<endpoint name="BookingTcp"
address="net.tcp://localhost:900/BlueYonder/Booking/"
binding="netTcpBinding"
contract="BlueYonder.BookingService.Contracts.IBookingService" />
11. In the App.config file, before the </configuration> tag (the last tag in the file), add the following
configuration.
<connectionStrings>
<add name="BlueYonderServer" connectionString="Data
Source=.\SQLEXPRESS;Database=BlueYonder.Server.Lab5;Integrated Security=SSPI"
providerName="System.Data.SqlClient" />
</connectionStrings>
2. To the Program class, after the Main method, enter the following methods.
using System.ServiceModel;
using BlueYonder.DataAccess;
5. In the Main method, after calling the InitializeDatabase method, enter the following code.
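The host code itself is not reproduced in this excerpt. A minimal self-hosting sketch, assuming the BookingService implementation and the endpoint defined in the App.config shown earlier, would resemble:

```csharp
// Hedged sketch of self-hosting the WCF service from the console Main;
// the endpoint address and binding come from App.config, so no endpoint
// is added in code.
var host = new ServiceHost(typeof(BookingService));
host.Open();
Console.WriteLine("The Booking service is running. Press any key to stop it.");
Console.ReadKey();
host.Close();
```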
7. In Solution Explorer, right-click the BlueYonder.BookingService.Host project, and then click Set as
StartUp Project.
8. To start debugging the project, press F5.
9. Wait until the initialization and running messages appear in the service host console window.
Note: Keep the console window open, as you will need to use it later in the lab.
Results: You will be able to start the console application and open the service host.
Exercise 3: Consuming the WCF Service from the ASP.NET Web API
Booking Service
Task 1: Add a reference to the service contract project in the ASP.NET Web API
projects
1. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.
4. On the File menu, point to Add, and then click Existing Project.
6. In Solution Explorer, right-click the BlueYonder.Companion.Controllers project, and then click Add
Reference.
7. In the Reference Manager dialog box, in the pane on the left side, click Solution. In the pane on the
right side, point to BlueYonder.BookingService.Contracts, select the check box next to the project
name, and then click OK.
8. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Add
Reference.
9. In the Reference Manager dialog box, in the pane on the left side, click Solution. In the pane on the
right side, point to BlueYonder.BookingService.Contracts, select the check box next to the project
name, and then click OK.
2. To the bottom of the file, before the </configuration> tag, add the following configuration.
<system.serviceModel>
<client>
<endpoint address="net.tcp://localhost:900/BlueYonder/Booking"
binding="netTcpBinding"
contract="BlueYonder.BookingService.Contracts.IBookingService" name="BookingTcp"/>
</client>
</system.serviceModel>
using BlueYonder.BookingService.Contracts;
using BlueYonder.BookingService.Contracts.Faults;
3. In the ReservationsController class, locate the comment // TODO: Module 5: Exercise 3: Task 3.1.
Create an instance of the channel factory and then add the following code after it.
4. Locate the CreateReservationOnBackendSystem method, and then uncomment the code below
the comment // TODO: Module 5: Exercise 3: Task 3.2 Uncomment the Dto creation objects.
5. To create the channel, above the try block, add the following statement.
return null;
8. Locate the comment // TODO: Module 5: Exercise 3: Task 3.4: Call the service and return the result,
and replace the catch block with the following code.
9. Locate the comment // TODO: Module 5: Exercise 3: Task 3.5: abort the communication in case of
Exception, and after the comment and before calling the throw statement, add the following code.
(proxy as ICommunicationObject).Abort();
10. In the ReservationsController class, look for the Post method, and in it locate the following
comment.
11. After the comment, and before saving the entity, add the following code.
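Put together, the client-side pattern these steps describe follows standard ChannelFactory usage. The sketch below is an assumed composition of the steps, not the lab's exact code; the BookingTcp endpoint name comes from the Web.config client section, and the Abort call is the one added in step 9.

```csharp
// Hedged sketch: create a channel from a ChannelFactory, call the WCF
// service, and Abort the channel on failure. Method shape is assumed.
private string CreateReservationOnBackendSystem(ReservationDto request)
{
    var factory = new ChannelFactory<IBookingService>("BookingTcp");
    var proxy = factory.CreateChannel();
    try
    {
        string confirmationCode = proxy.CreateReservation(request);
        (proxy as ICommunicationObject).Close();
        return confirmationCode;
    }
    catch (FaultException<ReservationCreationFault>)
    {
        // A faulted channel cannot be closed gracefully, so abort it
        (proxy as ICommunicationObject).Abort();
        throw;
    }
}
```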
2. In the Post method, right-click the line of code that starts with string confirmationCode, point to
Breakpoint, and then click Insert Breakpoint.
3. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Set as
StartUp Project.
8. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. In the Windows Security dialog box, enter your email
address and a password, and then click Sign in. In the Developers License dialog box, click Close.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
9. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
10. To start the client app without debugging, press Ctrl+F5.
11. If you are prompted to allow the app to run in the background, click Allow.
12. Display the app bar by right-clicking or by swiping from the bottom of the screen.
13. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
14. Wait for the app to show a list of flights from Seattle to New York.
15. Click Purchase this trip.
23. Go back to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012 instance with open
BlueYonder.Companion solution. The code execution breaks, and the line in breakpoint is
highlighted in yellow.
24. To step over the line, press F10. Switch to the Visual Studio 2012 where the BlueYonder.Server
solution is open. The code execution breaks, and the line in breakpoint is highlighted in yellow.
25. To run the service code and return to the previous Visual Studio 2012 window, press F5.
26. Move the mouse cursor over the confirmationCode variable and verify that it is now set to a random
confirmation code received from the WCF service.
27. Press F5 to resume execution, and then go back to the client app in the 20487B-SEA-DEV-C virtual
machine.
28. Click Close to close the confirmation message, and then close the client app.
29. To stop debugging the service, go back to the 20487B-SEA-DEV-A virtual machine, and close the
service host console window.
30. To stop debugging the web application, return to Visual Studio 2012 where the
BlueYonder.Companion solution is open and press Shift+F5.
Results: After you complete this exercise, you will be able to run the Blue Yonder Companion client
application and purchase a trip.
2. Browse to D:\AllFiles\Mod06\LabFiles\Setup.
3. Double-click the setup.cmd file. Wait for the script to complete successfully and press any key to
close the window.
8. In the Add New Project dialog box, on the navigation pane, expand the Installed node. Expand the
Visual C# node. Click the Web node. Select ASP.NET Empty Web Application from the list of
templates.
9. In the Name box, type BlueYonder.Server.Booking.WebHost.
13. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node, and
then click the NuGet official package source node.
17. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project, and then click
Add Reference.
18. In the Reference Manager dialog box, expand the Assemblies node in the pane on the left side.
Click Framework.
19. Scroll down the assemblies list, point to the System.ServiceModel assembly, and select the check
box next to the assembly name.
21. In the pane on the right side, point to each of the following projects, and select the check box next to
the project name:
BlueYonder.BookingService.Contracts
BlueYonder.BookingService.Implementation
BlueYonder.DataAccess
BlueYonder.Entities
26. In the Add New Item dialog box, in the pane on the left side, expand the Installed node, expand the
Visual C# node, click the Web node, and then click Global Application Class in the list of items. To
finish, click Add.
27. In the Global.asax.cs file that opened, add the following using directives to the beginning of the file.
using System.Data.Entity;
using BlueYonder.DataAccess;
using BlueYonder.BookingService.Host;
28. Locate the Application_Start method, and add the following code to it.
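The code for Application_Start is not shown in this excerpt. A typical sketch, assuming a hypothetical initializer type name, registers the Entity Framework database initializer at startup:

```csharp
// Hedged sketch; the initializer type name is an illustrative
// assumption, not necessarily the one used in the lab.
protected void Application_Start(object sender, EventArgs e)
{
    // Register an Entity Framework initializer so the database is
    // created and seeded when the web-hosted service starts
    Database.SetInitializer(new BookingServiceDatabaseInitializer());
}
```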
4. Select all the contents of the Web.config file by pressing Ctrl+A, and then press Delete.
<serviceHostingEnvironment>
<serviceActivations>
<add service="BlueYonder.BookingService.Implementation.BookingService"
relativeAddress="Booking.svc"/>
</serviceActivations>
</serviceHostingEnvironment>
7. Within the <configuration> section, add the following configuration at the end of the section.
<system.web>
<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5" />
</system.web>
Note: Make sure you are adding this code as the last child element of the
<configuration> element. Adding this code anywhere else within the Web.config file will make
the application fail.
8. In the <system.serviceModel> section group, locate the <behaviors> section, and in it, locate the
<serviceMetadata> element.
9. Remove the httpGetUrl attribute and value from the <serviceMetadata> element.
10. In the <system.serviceModel> section group, locate the <services> section, and in it, locate the
<endpoint> element.
11. Remove the address attribute and value from the <endpoint> element.
Note: IIS uses the address of the web application to create the service metadata address
and the service endpoint address.
13. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project, and then click
Properties.
3. If an Internet Information Services (IIS) Manager dialog box opens asking about the Microsoft
Web Platform, click No.
4. From the Connections pane, expand Sites, and then click Default Web Site.
Note: The site bindings configure which protocols are supported by the IIS Web Site and
which port, host name, and IP address are used with each protocol.
8. Click BlueYonder.Server.Booking.WebHost.
10. In the Advanced Settings dialog box, expand the Behavior node. In the Enabled Protocols box,
type http, net.tcp (replace the current value).
Note: In addition to adding net.tcp to the site bindings list, you also need to enable net.tcp
for each Web application you host in IIS. By enabling net.tcp, WCF will automatically create an
endpoint with NetTcpBinding.
12. On the Start screen, click Computer to open File Explorer. Browse to D:\AllFiles, and double-click the
WcfTestClient shortcut.
13. In the WCF Test Client application, on the File menu, click Add Service, and in the Add Service
dialog box, type http://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc, and then
click OK. Wait until you see the service and endpoints tree in the pane to the left.
14. Close the WCF Test Client application.
Results: You will be able to run the WCF Test Client application and verify that the services are running
properly in IIS.
Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.
5. Click ADD at the bottom of the page. In the CREATE SERVER dialog box, enter the following
information:
LOGIN NAME: BlueYonderAdmin
LOGIN PASSWORD: Pa$$w0rd
7. Write down the name of the newly created SQL Database Server. Later in this task, you will embed
this name within the connection string.
8. On the sql databases page, click the name of the newly created server.
Note: As a best practice, you should allow only your IP address, or your organization's IP
address range, to access the database server. However, in this course, you will use this database
server for future labs, and your IP address might change in the meantime; therefore, you are
required to allow access from all IP addresses.
12. Click NEW on the lower left of the portal. Click COMPUTE. Click CLOUD SERVICE. Click QUICK
CREATE. The URL and REGION/AFFINITY GROUP boxes are displayed on the right side.
13. In the URL box, enter the following cloud service name: BlueYonderCompanionYourInitials
(YourInitials contains your initials).
14. In the REGION OR AFFINITY GROUP box, select the region closest to your location.
15. Click CREATE CLOUD SERVICE at the lower right corner of the portal. Wait until the cloud service is
created.
17. Click the cloud service that you created in the previous step (the one named
BlueYonderCompanionYourInitials, where YourInitials contains your initials), and then click CERTIFICATES
at the top of the page.
18. Click the UPLOAD link at the bottom of the page.
19. In the UPLOAD CERTIFICATE dialog box, click the BROWSE FOR FILE link.
20. In the Choose File to Upload dialog box, in the File name box, type
D:\AllFiles\certs\CloudApp.pfx, and then click Open.
22. Click the V icon at the bottom-right of the window, and wait for the upload to finish.
Note: In this lab, the ASP.NET Web API services are accessible through HTTP and HTTPS. To
use HTTPS, you need to upload a certificate to the Windows Azure cloud service.
23. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.
24. On the File menu, point to Open, and then click Project/Solution.
26. In Solution Explorer, open the Web.config file under BlueYonder.Companion.Host project.
27. Find the <connectionStrings> section. Locate the connection string entry, whose name attribute is
set to TravelCompanion.
28. Locate the two occurrences of the {ServerName} placeholder in the connectionString attribute,
and replace each of them with the new SQL Database server name.
29. At the end of the file, locate the <system.serviceModel> section group, and in it, locate the
<client> section.
30. In the <client> section, locate the <endpoint> element, and change its address attribute value to
net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc.
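After this change, the <client> section should resemble the following sketch (the binding and contract are unchanged from the configuration shown earlier in the course):

```xml
<client>
  <endpoint name="BookingTcp"
            address="net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc"
            binding="netTcpBinding"
            contract="BlueYonder.BookingService.Contracts.IBookingService" />
</client>
```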
3. Verify that a new BlueYonder.Companion.Host.Azure project was added to the solution and that it
has BlueYonder.Companion.Host as its Web Role.
Note: You can achieve the same result by adding a new Windows Azure Cloud Service
project to the solution, and then manually adding a Web Role project from an existing project.
6. On the Certificates tab, click Add Certificate to add a new certificate row with the following
information:
Name: BlueYonderCompanionSSL
Store Location: LocalMachine
Store Name: My
Click the Thumbprint box, and then click the ellipsis. Select the BlueYonderSSLCloud certificate, and
then click OK.
7. From the Service Configuration drop down list at the top of the tab, select Local.
8. In the BlueYonderCompanionSSL certificate line, click the Thumbprint box, and then click the
ellipsis. Select the BlueYonderSSLDev certificate, and then click OK.
9. From the Service Configuration drop down list at the top of the tab, select All Configurations.
Note: SSL certificates contain the name of the server so that clients can validate the
authenticity of the server. Therefore, there are different certificates for the local deployment, and
for the cloud deployment.
11. On the Endpoints tab, click Add Endpoint to add a new endpoint row with the following
information:
Name: Endpoint2
Type: Input
Protocol: https
14. To start the Windows Azure compute emulator without debugging, press Ctrl+F5.
15. When the two web browsers open, verify they use the addresses http://127.0.0.1:81 and
https://127.0.0.1:444.
Note: The endpoint configuration of the role uses ports 80 and 443 for the HTTP and
HTTPS endpoints. However, the local IIS Web server already uses those ports, so the emulator
needs to use different ports.
16. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.
17. On the Start screen, click the Visual Studio 2012 tile.
18. On the File menu, point to Open, and then click Project/Solution.
20. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. In the Developers License dialog box, click
Close.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
21. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
22. The client app is already configured to use the Windows Azure compute emulator. To start the client
app without debugging, press Ctrl+F5.
Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For purposes of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.
23. If you are prompted to allow the app to run in the background, click Allow.
24. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
25. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
26. Wait for the app to show a list of flights from Seattle to New York.
3. Double-click the comments to view the code that was marked, and verify that no calls are made to
the following methods:
UpdateReservationOnBackendSystem
CreateReservationOnBackendSystem
Note: Prior to the deployment of the cloud project to Azure, all the on-premises WCF calls
were disabled.
These include calls from the Reservation Controller class and the Trips Controller class.
After you deploy the ASP.NET Web API project to Windows Azure, it cannot call the on-premises
WCF service, so for now, the WCF Service calls are disabled. In Module 7, "Windows Azure Service
Bus" in Course 20487, you will learn how a cloud application can connect to an on-premises
service.
5. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 9.
6. In the Publish Windows Azure Application dialog box, click the Sign in to download credentials
hyperlink.
Note: The browser automatically logs you in to the portal. When you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.
7. The publish settings file is generated, and a Do you want to open or save... Internet Explorer dialog
box appears at the bottom. Click the arrow within the Save button. Select the Save as option and
specify the following location:
D:\AllFiles\Mod06\LabFiles. Click Save. If a Confirm Save As dialog box appears, click Yes.
8. In Visual Studio 2012, return to Publish Windows Azure Application dialog box. Click Import. Type
D:\AllFiles\Mod06\LabFiles and select the file that you downloaded in the previous step, and then
click Open.
9. Make sure that your subscription is selected under Choose your subscription section, and then click
Next.
10. If the Create Windows Azure Services dialog box appears, click Cancel.
11. On the Common Settings tab, click the Cloud Service box. Select
BlueYonderCompanionYourInitials (YourInitials contains your initials).
12. On the Advanced Settings tab, click the Storage Account box. Select Create New. In the Create
Windows Azure Services dialog box that opens, enter the following information:
Note: The abbreviation bycl stands for Blue Yonder Companion Labs. An abbreviation is
used because storage account names are limited to 24 characters. The abbreviation is in
lowercase because storage account names must be lowercase. Windows Azure Storage is covered in
depth in Module 9, "Windows Azure Storage" in Course 20487.
Note: If you get a message saying the service creation failed because you reached your
storage account limit, delete one of your existing storage accounts, and then retry the step. If you
do not know how to delete a storage account, consult the instructor.
15. Clear the Append current date and time check box.
16. Click Publish to start the publishing process. This might take several minutes to complete.
6. If you are prompted to allow the app to run in the background, click Allow.
7. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
8. Click Search, and in the Search box on the right side, type New. If you are prompted to allow the
app to share your location, click Allow.
9. Wait for the app to show a list of flights from Seattle to New York.
Results: You will verify the application works locally in the Windows Azure compute emulator, and then
deploy it to Windows Azure and verify it works there too.
Note: The browser automatically logs you in to the portal. When you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.
4. On the lower left of the portal, click NEW. Click COMPUTE. Select WEB SITE. Click QUICK CREATE.
The URL and REGION boxes are displayed on the right side.
5. In the URL box, enter the following Web Site name: BlueYonderCompanionYourInitials (YourInitials
contains your initials).
7. At the lower right corner of the portal, click CREATE WEB SITE, and wait for the Web Site creation to
complete.
8. From the pane to the left, click WEB SITES, and then click the name of your newly created Web Site.
11. A Do you want to open or save Internet Explorer dialog box appears at the bottom. Click the
arrow within the Save button, select the Save as option, and specify the following location:
D:\AllFiles\Mod06\LabFiles. Click Save.
Note: The publishing profile file includes the information required to publish a Web
application to the Web Site. This is an alternative publish method to downloading the
subscription file, as shown in Lesson 2, "Hosting Services in Windows Azure", Demo 1, "Hosting in
Windows Azure" in Course 20487. The difference is that by importing the subscription file, you
can publish to any of the Web Sites managed by your Windows Azure subscription, whereas
importing the publish profile file of a Web Site will only allow you to publish to that specific Web
Site.
Task 2: Upload the Flights Management web application to the new Web Site by
using the Windows Azure Management Portal
1. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.
4. In Solution Explorer, expand the BlueYonder.FlightsManager project, and then double-click the
Web.config file.
6. In Solution Explorer, right-click the BlueYonder.FlightsManager project, and then click Publish.
7. In the Publish Web dialog box, click Import.
8. In the Import Publish Settings dialog box, in the File name box, type D:\AllFiles\Mod06\LabFiles,
and then press Enter.
9. Select the publish settings file that you downloaded in the previous task, and then click Open.
10. Click Publish. The deployment process starts. When the process is complete, Internet Explorer
automatically opens with the URL of the deployed site.
11. In the browser, select Paris, France from the drop down list on the left, select Rome, Italy from the
drop down list on the right, and then click Filter. Verify you see flight schedules.
Results: After you publish the flights manager web application, you will open the web application in a
browser and verify that it is working properly and can communicate with the web role you deployed in
the previous exercise.
2. Browse to D:\AllFiles\Mod07\LabFiles\Setup.
3. Double-click the Setup.cmd file. When prompted for information, provide it according to the
instructions.
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.
4. Write down the name of the cloud service that is shown in the script. You will use it later on during
the lab.
5. Wait for the script to finish, and then press any key to close the script window.
Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.
10. In the CREATE A NAMESPACE dialog box, enter the following information:
11. To create the namespace, click the V icon at the bottom of the window, and wait until the namespace
is active.
12. Click the STATUS column of the namespace you created to highlight it.
14. In the ACCESS CONNECTION INFORMATION dialog box, locate the DEFAULT KEY field, and click the
Copy icon to the right of it. If you are prompted to allow access to your clipboard, click Allow access.
15. Click the OK button on the right side of the window to close the dialog box.
5. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node, and
then click the NuGet official package source node.
7. In the center pane, click the Windows Azure Service Bus package, and then click Install. If a License
Acceptance dialog box appears, click I Accept.
10. In the <system.serviceModel> section group, locate the <services> section, and inside it, locate the
<endpoint> element named BookingTcp.
11. In the <endpoint> element, change the value of the binding attribute from netTcpBinding to
netTcpRelayBinding.
12. Add the address attribute to the <endpoint> element with the value
sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking (Replace YourInitials
with your initials).
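Taken together, steps 10 through 12 change the BookingTcp endpoint so that it listens on the Service Bus relay instead of a local TCP port. As an illustrative sketch only (other attributes, such as the contract, are unchanged and omitted here, and YourInitials remains a placeholder), the element might then resemble:

```xml
<endpoint name="BookingTcp"
          binding="netTcpRelayBinding"
          address="sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking" />
```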
<endpointBehaviors>
<behavior name="sbTokenProvider">
<transportClientEndpointBehavior>
<tokenProvider>
<sharedSecret issuerName="owner" issuerSecret="{IssuerSecret}" />
</tokenProvider>
</transportClientEndpointBehavior>
</behavior>
</endpointBehaviors>
15. Substitute the {IssuerSecret} placeholder by pasting the Service Bus namespace access key that you
copied in the previous task.
Note: Visual Studio IntelliSense uses built-in schemas to perform validations. Therefore, it will not
recognize the transportClientEndpointBehavior behavior extension, and will display a warning.
Disregard this warning.
16. In the <system.serviceModel> section group, locate the <services> section, and inside it, locate the
<endpoint> element.
17. Add a behaviorConfiguration attribute to the element, and then set its value to sbTokenProvider.
18. Locate the <system.webServer> section group, and then add the following configuration to it.
<applicationInitialization>
<add initializationPage="/booking.svc"/>
</applicationInitialization>
Note: Application initialization automatically sends requests to specified addresses after the
Web application loads. Sending the request to the service will make the service host load and
initiate the Service Bus connection.
20. On the Start screen, click the Internet Information Services (IIS) Manager tile.
22. If an Internet Information Services (IIS) Manager dialog box pops up asking about the Microsoft
Web Platform, click No.
24. In the Features View, right-click DefaultAppPool, and then click Advanced Settings.
25. In the Advanced Settings dialog box, set the Start Mode option to AlwaysRunning.
Note: Setting the start mode to AlwaysRunning will load the application pool
automatically after IIS loads. To use application initialization the application pool must be
running.
27. From the Connections pane, expand the Sites node, and then expand the Default Web Site node.
28. Right-click the BlueYonder.Server.Booking.WebHost node, point to Manage Application, and then
click Advanced Settings.
29. In the Advanced Settings dialog box, set the Preload Enabled option to True. This setting will start
the service after IIS starts.
30. To save the changes, click OK.
Note: When preload is enabled, IIS will simulate requests after the application pool starts.
The list of requests is specified in the application initialization configuration that you already
created.
31. Return to Visual Studio 2012, and in Solution Explorer, right-click the
BlueYonder.Server.Booking.WebHost project, and then click Build.
32. Return to IIS Manager, and in the Connections pane, click the Application Pools node to display the
application pools in the Features View.
Task 3: Configure the ASP.NET Web API back-end service to use the new relay
endpoint
1. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.
4. In Solution Explorer, right-click the BlueYonder.Companion.Host project node, and then click Manage
NuGet Packages.
5. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.
11. Change the value of the binding attribute from netTcpBinding to netTcpRelayBinding.
12. Substitute the value of the address attribute with the following value:
sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking (Replace YourInitials
with your initials).
13. In the <system.serviceModel> section group, add the following configuration.
<behaviors>
<endpointBehaviors>
<behavior>
<transportClientEndpointBehavior>
<tokenProvider>
<sharedSecret issuerName="owner" issuerSecret="{IssuerSecret}" />
</tokenProvider>
</transportClientEndpointBehavior>
</behavior>
</endpointBehaviors>
</behaviors>
14. Substitute the {IssuerSecret} placeholder by pasting the Service Bus namespace access key that you
copied in the first task.
3. In the left navigation pane, click SERVICE BUS, and then click the name column of the
BlueYonderServerLab07YourInitials Service Bus namespace (replace YourInitials with your initials).
4. Click RELAYS.
6. Go back to the Visual Studio 2012 instance with the open BlueYonder.Companion solution.
9. Double-click the comment // TODO: Lab 07 Exercise 1: Task 4.3: Bring back the call to the backend
WCF service.
10. Uncomment the call to the CreateReservationOnBackendSystem method. Make sure the return
value of the method is stored in the confirmationCode variable.
11. To save the file, press Ctrl+S.
12. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click Publish.
13. In the Publish Windows Azure Application dialog box, click Import.
14. Type D:\AllFiles\Mod07\LabFiles in the File name box, and then click Open. Select your publish
settings file (the file should have the .publishsettings extension), and then click Open.
16. On the Common Settings tab, click the Cloud Service box, and then select the cloud service that
matches the name you wrote down at the beginning of the lab, after running the setup script.
17. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. This might take several minutes to complete.
18. Switch to the Visual Studio 2012 instance with the open BlueYonder.Server solution.
19. In Solution Explorer, expand the BlueYonder.BookingService.Implementation project, and then
double-click BookingService.cs.
20. In the CreateReservation method, right-click the line of code that starts with
if(request.DepartureFlight, point to Breakpoint, and then click Insert Breakpoint.
21. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project, and then click
Set as StartUp Project.
24. On the Start screen, click the Visual Studio 2012 tile.
25. On the File menu, point to Open, and then click Project/Solution.
27. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. Click Close in the Developers License dialog
box.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
28. In Solution Explorer, expand the BlueYonder.Companion.Shared project, and then double-click
Addresses.cs.
29. In the Addresses class, locate the BaseUri property, and then replace the {CloudService} string with
the Windows Azure Cloud Service name you wrote down at the beginning of this lab.
32. If you are prompted to allow the app to run in the background, click Allow.
33. Display the app bar by right clicking or by swiping from the bottom of the screen.
34. Click Search, and then in the Search box on the right side enter New. If you are prompted to allow
the app to share your location, click Allow.
35. Wait for the app to show a list of flights from Seattle to New York.
36. Click Purchase this trip.
44. Go back to the 20487B-SEA-DEV-A virtual machine, to the Visual Studio 2012 instance with the open
BlueYonder.Server solution. The code execution breaks, and the line with the breakpoint is highlighted
in yellow.
45. To resume execution, press F5, and then go back to the 20487B-SEA-DEV-C virtual machine, to the
client app.
46. To close the confirmation message, click Close, and then close the client app.
48. Return to Visual Studio 2012 where the BlueYonder.Server solution is open and press Shift+F5 to
stop debugging the WCF application.
Results: After you complete this exercise, you can run the client app and book a flight, and have the
ASP.NET Web API services running in the Windows Azure Web Role communicate with the on-premises
WCF services by using Windows Azure Service Bus Relays.
Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.
3. In the navigation pane, click SERVICE BUS, click in the STATUS column of the service bus row you
created in the previous exercise, and then at the bottom of the page, click CONNECTION
INFORMATION.
4. In the ACCESS CONNECTION INFORMATION dialog box, locate the CONNECTION STRING field and
click the Copy icon to the right of it.
If you are prompted to allow access to your clipboard, click Allow access.
5. Click OK on the right side of the window to close the dialog box.
6. Return to Visual Studio 2012 where the BlueYonder.Companion solution is open.
Name: Microsoft.ServiceBus.ConnectionString
Type: String
Value: Press Ctrl+V to paste the Service Bus connection string you copied from the portal.
10. Press Ctrl+S to save the changes.
11. In Solution Explorer, expand the BlueYonder.Companion.Controllers project node, and then
double-click ServiceBusQueueHelper.cs to open it.
12. Replace the return null statement in the ConnectToQueue method with the following code.
string connectionString =
CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
13. Add the following code to the end of the method to create the queue if it does not exist.
if (!namespaceManager.QueueExists(QueueName))
{
    namespaceManager.CreateQueue(QueueName);
}
14. Add the following code to the end of the method to return a QueueClient object.
using Microsoft.ServiceBus.Messaging;
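Steps 12 through 14 build up the ConnectToQueue helper piece by piece. As a rough sketch only (not the lab's verbatim code), the completed method might resemble the following; it assumes the Microsoft.ServiceBus NuGet package and the QueueName constant defined in the helper class:

```csharp
public static QueueClient ConnectToQueue()
{
    // Read the Service Bus connection string from the role configuration.
    string connectionString =
        CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

    // Create the queue if it does not already exist.
    var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
    if (!namespaceManager.QueueExists(QueueName))
    {
        namespaceManager.CreateQueue(QueueName);
    }

    // Return a client that can send and receive messages on the queue.
    return QueueClient.CreateFromConnectionString(connectionString, QueueName);
}
```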
19. Create a static constructor in the class by adding the following code to it.
static FlightsController()
{
Client = ServiceBusQueueHelper.ConnectToQueue();
}
20. Locate the Put method. Place the following code after the comment // TODO: Lab07, Exercise 2, Task
1.6 : Send a flight update message to the queue
updatedSchedule.FlightId = id;
var msg = new BrokeredMessage(updatedSchedule);
msg.ContentType = "UpdatedSchedule";
Client.Send(msg);
23. Explore the content of the Register method and the static constructor of the
NotificationsController. The same pattern of creating a QueueClient object in the static constructor
and then sending the update messages by using the BrokeredMessage is applied to this controller.
Note: The Register method subscribes clients to flight update notifications. When a flight
update message is sent to the queue, every subscribed client waiting for that flight will be
notified by using the Windows Push Notification Services (WNS).
Task 2: Create a Windows Azure Worker role that receives messages from a Service
Bus Queue
1. In Solution Explorer, under the BlueYonder.Companion.Host.Azure project, right-click Roles, then
point to Add, and then click New Worker Role Project.
2. In the Add New .NET Framework 4.5 Role Project dialog box, click Worker Role with Service Bus
Queue.
4. In Solution Explorer, under the BlueYonder.Companion.Host.Azure project, under the Roles folder,
double-click the BlueYonder.Companion.Host web role.
5. Click the Settings tab. In the Microsoft.ServiceBus.ConnectionString setting row, click the Value
cell, and then press Ctrl+C.
6. In Solution Explorer, under the BlueYonder.Companion.Host.Azure project, under the Roles folder,
double-click the BlueYonder.Companion.WNS.WorkerRole worker role.
7. Click the Settings tab. In the Microsoft.ServiceBus.ConnectionString setting row, click the Value
cell, and then press Ctrl+V.
6. Place the text cursor between the <configuration> and <system.diagnostics> tags, and then press
Ctrl+V to paste the connection string in the configuration.
7. Locate the <appSettings> element and add the following configuration to it.
Note: You can find the above configuration in the WnsConfiguration.xml file, under the
lab's Assets folder.
The ClientSecret and PackageSID settings were retrieved by the Windows 8 client team during
the upload process of the client app to the Windows Store.
10. In the Reference Manager dialog box, in the pane on the left side, click Solution.
11. In the pane on the right side, point to each of the following projects, and select the check box next to
the project name:
BlueYonder.Companion.WNS
BlueYonder.Companion.Entities
BlueYonder.DataAccess.Interfaces
BlueYonder.DataAccess
BlueYonder.Entities
14. Type D:\AllFiles\Mod07\LabFiles\Assets in the File name box, and press Enter.
15. Select MessageHandler.cs from the file list and then click Add.
Note: The MessageHandler class contains the code to subscribe clients to WNS and send
notifications to clients when their flights are rescheduled.
17. Add the following code to the beginning of the OnStart method.
WNSManager.Authenticate();
18. In the WorkerRole class, locate the QueueName constant, and change its string value from
ProcessingQueue to FlightUpdatesQueue.
19. Add the following using directives to the beginning of the file.
using BlueYonder.Companion.Entities;
using BlueYonder.Entities;
20. In the WorkerRole class, locate the Run method. Locate the // Process the message comment and
add the following code after the Trace.Writeline method.
switch (receivedMessage.ContentType)
{
case "Subscription":
MessageHandler.CreateSubscription(receivedMessage.GetBody<RegisterNotificationsRequest>());
break;
case "UpdatedSchedule":
MessageHandler.Publish(receivedMessage.GetBody<FlightSchedule>());
break;
}
22. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click Publish.
23. In the Publish Windows Azure Application dialog box, click Publish.
24. Click Publish to start the publishing process. When a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.
Task 4: Test the Service Bus Queue with flight update messages
1. Place the two virtual machine windows so that you can work in the 20487B-SEA-DEV-A virtual machine
while seeing the right side of 20487B-SEA-DEV-C.
5. Leave the client app open and go back to the 20487B-SEA-DEV-A virtual machine.
6. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.
8. Type D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.FlightsManager.sln
in the File name box, and then click Open.
9. In Solution Explorer, expand the BlueYonder.FlightsManager project, and then double-click the
Web.config file.
10. In the <appSettings> section, locate the webapi:BlueYonderCompanionService key. In the value
attribute, replace the {CloudService} string with the Windows Azure Cloud Service name you wrote
down at the beginning of this lab.
11. In Solution Explorer, right-click the BlueYonder.FlightsManager project, and then click Set as
StartUp Project.
12. Press Ctrl+F5 to start the web application. A browser will open.
13. In the browser, select Seattle, Washington United States from the drop-down list on the left, select
New York, New York United States from the drop-down list on the right, and then click the filter
icon.
14. Locate the row for the departure date of your purchased trip.
15. Click in the New Time cell. Select the hour 9:00 AM, and then click the save icon.
16. Wait for a couple of seconds, and observe the toast notification in the 20487B-SEA-DEV-C virtual
machine (it should appear in the top-right corner of the window).
17. Close the client app in virtual machine 20487B-SEA-DEV-C.
Results: After you complete this exercise, you will be able to run the Flight Manager Web application,
update the flight departure time of a flight you booked in advance in your client app, and receive
Windows push notifications directly to your computer.
2. Browse to D:\AllFiles\Mod08\LabFiles\Setup.
3. Double-click the setup.cmd file. When prompted for information, provide it according to the
instructions.
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.
4. Wait for the deployment to complete successfully, write down the names of the Windows Azure
Service Bus namespace and Windows Azure Cloud Service, and press any key to close the window.
5. On the Start screen, click the Visual Studio 2012 tile.
13. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.
14. In the navigation pane, click CLOUD SERVICES, and in the Cloud Services list, click the name of the
Cloud Service you wrote down at the beginning of this lab.
15. On the Cloud Service DASHBOARD tab, click PRODUCTION, and then click Update or Upload at
the bottom of the page (only one of the buttons should be visible).
16. In the dialog box that opened, enter Lab08 in the DEPLOYMENT NAME box.
18. Under CONFIGURATION, click FROM LOCAL, select ServiceConfiguration.Cloud.cscfg from the
file list, and then click Open.
19. Select the Deploy even if one or more roles contain a single instance check box or the Update
even if one or more roles contain a single instance check box (only one of the check boxes will be
visible), and then click OK (lower-right icon in the dialog).
20. Click the INSTANCES tab, wait for the new instance to show in the list, and then wait until its status
changes to Running.
23. Locate the GetWeather method and replace its code with the following code.
config.Routes.MapHttpRoute(
name: "LocationWeatherApi",
routeTemplate: "locations/{locationId}/weather",
defaults: new
{
controller = "locations",
action = "GetWeather"
},
constraints: new
{
httpMethod = new HttpMethodConstraint(HttpMethod.Get)
}
);
Task 2: Deploy the updated project to staging by using the Windows Azure
Management Portal
1. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click
Package.
2. In the Package Windows Azure Application dialog box, select Cloud in the Service configuration
drop-down list, select Debug in the Build configuration drop-down list, and then click Package.
3. Wait for the packaging process to complete, and then close the File Explorer window that opened.
6. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.
7. In the navigation pane, click CLOUD SERVICES, and in the Cloud Services list, click the name of the
Cloud Service you wrote down at the beginning of this lab.
8. On the Cloud Service DASHBOARD tab, click STAGING, and then click UPLOAD A NEW STAGING
DEPLOYMENT.
9. In the Upload a package dialog box, enter Lab08 in the DEPLOYMENT NAME box.
12. Select the Deploy even if one or more roles contain a single instance check box, and then click
OK (lower-right icon in the dialog).
13. Click the INSTANCES tab, wait for the new instance to show in the list, and then wait until its status
changes to Running.
Note: You are performing the exact same procedure as you did in Task 1 of this exercise,
with one difference: you are deploying to the Staging configuration and not to the Production
configuration.
Note: It may take several minutes until the instance starts to run.
Task 3: Test the client app with the production and staging deployments
1. In the 20487B-SEA-DEV-C virtual machine, on the Start screen, click the Visual Studio 2012 tile.
2. On the File menu, point to Open, and then click Project/Solution.
3. Type D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
in the File name box, and then click Open.
4. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your e-mail address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.
Note: If you do not have a valid e-mail address, click Sign up and register for the service.
Write down these credentials and use them whenever an e-mail account is required.
6. Locate the BaseUri property, and replace the {CloudService} string with the Windows Azure Cloud
Service name you wrote down at the beginning of this lab.
7. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
9. If you are prompted to allow the app to run in the background, click Allow.
10. Display the app bar by right-clicking or by swiping from the bottom of the screen.
11. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
12. Wait for the app to show a list of flights from Seattle to New York.
13. Click Purchase this trip.
22. Verify that the weather forecast does not show the temperature, only the degrees Fahrenheit sign.
26. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.
27. In the navigation pane, click CLOUD SERVICES, and in the Cloud Services list, click the name of the
Cloud Service you wrote down at the beginning of this lab.
28. On the Cloud Service DASHBOARD tab, click STAGING, then in the quick glance pane on the right
side, right-click the link below SITE URL, and then click Copy shortcut.
29. Return to Visual Studio 2012. In Solution Explorer, under the BlueYonder.Companion.Shared
project, double-click Addresses.cs.
30. Switch the comments between the two BaseUri get implementations by placing the production URL
in comments and removing the comment from the staging URL. The resulting code should resemble
the following:
Note: {CloudService} is replaced with the name of the cloud service that was shown at the
beginning of the lab.
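The code block referenced by step 30 is not reproduced in this excerpt. A rough sketch of the comment-switching pattern the step describes, with the placeholders left exactly as the lab uses them (the URL format shown here is illustrative):

```csharp
public static string BaseUri
{
    get
    {
        // Production deployment address (commented out while testing staging):
        //return @"http://{CloudService}.cloudapp.net/";

        // Staging deployment address:
        return @"{StagingAddress}";
    }
}
```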
31. In the BaseUri property, select the value {StagingAddress}, and then press Ctrl+V to paste the
copied staging deployment address over it.
34. After the app starts, verify the weather forecast shows a temperature for the current trip.
Note: The staging and the production deployments share the database, which is why the
current trip, which you created with the production deployment, is shown when connecting to
the staging deployment.
Task 4: Perform a VIP Swap by using the Windows Azure Management Portal and
retest the client app
1. Return to Internet Explorer, and then click SWAP on the bottom task bar.
2. In the dialog box, click YES, and wait until SWAP is enabled.
3. Leave the browser open, and return to Visual Studio 2012.
5. Switch the comments between the two BaseUri get implementations by placing the staging URL in
comments and removing the comment from the production URL.
8. After the app starts, verify that the weather forecast shows a temperature for the current trip.
10. Return to Internet Explorer, move the pointer over DELETE on the bottom task bar, and then click
Delete staging deployment for Cloud Service. Click YES, and wait until the message You have
nothing deployed to the staging environment appears.
Note: After the production deployment is running and has been tested, it is recommended
that you delete the staging deployment to reduce compute hour charges.
Results: After you complete this exercise, the client app will retrieve weather forecast information from
the production deployment in Windows Azure.
3. If an Internet Information Services (IIS) Manager dialog box pops up asking about the Microsoft
Web Platform, click No.
4. From the Connection pane, expand Sites, and then click Default Web Site.
7. In the Manage Components dialog box, select the first line in the grid, click Remove, and then click
Yes when you are asked whether to delete the selected entry.
Path: DefaultAppPool
11. Click OK to close the Manage Components dialog box.
12. Click Next, then click Next again, type C:\backup.zip in the Package path box, and then click Next.
13. Wait for the export to be created, and then click Finish.
14. Close the Internet Information Services (IIS) Manager window.
15. On the Start screen, click the Computer tile to open File Explorer, and browse to C:\.
16. Select the backup.zip file, and then press Ctrl+C to copy the file.
17. In File Explorer, browse to \\10.10.0.11\c$, and press Ctrl+V to paste the file.
4. From the Connection pane, expand Sites, and then click Default Web Site.
6. In the Import Application Package dialog box, type C:\backup.zip in the Package path box, and
then click Next.
9. Click Next, wait for the package to be installed, and then click Finish.
10. Close IIS Manager, and on the Start screen, click the Internet Explorer tile.
12. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.
13. In the navigation pane, click SERVICE BUS, and then click the Service Bus namespace you wrote
down at the beginning of this lab on the right pane. Click the RELAYS tab and verify that you see the
booking relay with two listeners.
Results: As soon as both servers are online, they will listen to the same Service Bus relay, and will be load
balanced. You will verify that both servers are listening by checking the Service Bus relay listeners
information supplied by Service Bus in the Windows Azure Management Portal.
2. On the Start screen, click the Computer tile to open File Explorer.
3. Browse to D:\AllFiles\Mod09\LabFiles\Setup.
4. Double-click the Setup.cmd file. When prompted for information, provide it according to the
instructions.
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.
5. Write down the name of the cloud service that is shown in the script. You will use it later on during
the lab.
6. Wait for the script to finish, and then press any key to close the script window.
Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.
11. In the URL text box, enter the following storage account name: blueyonderlab09yourinitials
(replace yourinitials with your name's initials, in lowercase).
12. In the REGION box, select the region closest to your location.
13. Click CREATE STORAGE ACCOUNT at the lower right corner of the portal. Wait until the storage
account is created.
Note: If you get a message saying the storage account creation failed because you reached your
storage account limit, delete one of your existing storage accounts and retry the step. If you do not know
how to delete a storage account, consult the instructor.
15. In the STORAGE pane, click the account name that you just created.
16. Click MANAGE ACCESS KEYS at the bottom of the page.
17. In the Manage Access Keys dialog box, click the copy icon to the right of the PRIMARY ACCESS KEY
box.
18. If you are prompted to allow copying to the clipboard, click Allow access.
19. Close the dialog.
5. Click the Settings tab, then click Add Setting and enter the following information:
Name: BlueYonderStore
6. In the Create Storage Connection String dialog box, enter the following information and click OK:
Account key: press Ctrl+V to paste the primary access key you copied in the previous task.
3. Replace the content of the GetContainer method with the following code:
4. Replace the content of the GetBlob method with the following code:
6. Locate the UploadStreamAsync method and explore its code. The method uses the previous
methods to retrieve a reference to the new blob, and then uploads the stream to it.
2. Explore the UploadFile method of the FilesController class. Explore how the asynchronous
UploadStreamAsync method is called, and how the result is returned.
3. Explore the Public and Private methods of the FilesController class. Each method uploads a file to
either a public blob container or a private blob container.
Note: The client app calls these service actions to upload files as either public or private.
Public files can be viewed by any user, whereas private files can only be viewed by the user who
uploaded them.
Results: You can test your changes at the end of the lab.
using Microsoft.WindowsAzure.Storage.Table.DataServices;
3. Derive the FileEntity class from the TableServiceEntity abstract class by replacing the FileEntity
class declaration with the following code:
using Microsoft.WindowsAzure.Storage.Table;
7. In the AsyncStorageManager class, replace the content of the GetTableContext method with the
following code:
Note: You should make sure the table exists before you return a context for it, otherwise
the code will fail when running queries on the table. If you already created the table, you can skip
calling the GetTableReference and CreateIfNotExists methods.
8. In the SaveMetadataAsync method, add the following code after the // TODO: Lab 9 Exercise 2: Task
1.3: use a TableServiceContext to add the object comment:
tableContext.AddObject(MetadataTable, fileData);
11. In the CreateFileEntity method, before the return statement, add the following code:
entity.RowKey = HttpUtility.UrlEncode(fileData.Uri.ToString());
entity.PartitionKey = locationId.ToString();
Note: The RowKey property is set to the file's URL, because it has a unique value. The URL
is encoded because the forward slash (/) character is not valid in row keys. The PartitionKey
property is set to the locationId value, because the partition key groups all the files from a
single location in the same partition. By using the location's ID as the partition key, you can query
the table and get all the files uploaded for a specific location.
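The payoff of this partition key design can be sketched with a query filtered by PartitionKey, which is served from a single partition. FileEntity and the table name mirror the lab's names, but the method itself is a hypothetical illustration.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure.Storage.Table.DataServices;

public static class FileQueries
{
    // Returns every file uploaded for one location; because the location ID
    // is the partition key, the filter targets exactly one partition.
    public static List<FileEntity> GetFilesForLocation(
        TableServiceContext tableContext, int locationId)
    {
        return (from file in tableContext.CreateQuery<FileEntity>("FilesMetadata")
                where file.PartitionKey == locationId.ToString()
                select file).ToList();
    }
}
```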
12. Explore the code in the Metadata method. The method creates the FileEntity object and saves it to
the table.
Note: The client app calls this service action after it uploads the new file to Blob storage. By
storing the list of files in Table storage, the client app can use queries to find specific images,
either by trip or location.
2. In the AsyncStorageManager class, replace the content of the GetLocationMetadata method with
the following code:
Note: Recall that the location ID was used as the entity's partition key.
LocationId = int.Parse(file.PartitionKey),
11. Open the FilesController class and explore the code in the TripMetadata method.
Note: The method retrieves the list of files in the trip's public blob container, and then uses
the GetFilesMetadata method of the AsyncStorageManager class to get the FileEntity object
for each of the files. The client app calls this service action to get a list of all files related to a
specific trip. Currently the code retrieves only the public files. In the next exercise you will add the
code to retrieve both public and private files.
Results: You can test your changes at the end of the lab.
3. In the CreateSharedAccessSignature method, add the following code to the end of the method:
4. Complete the CreateSharedAccessSignature method by adding the following code to the end of
the method:
return container.GetSharedAccessSignature(policy);
Note: The shared access signature is a URL query string that you append to blob URLs.
Without the query string, you cannot access private blobs.
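The pattern can be sketched as follows: build a read-only access policy, ask the container for a signature, and append the resulting query string to the blob URL. The one-hour expiry is an assumption, not necessarily the lab's actual value.

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasHelper
{
    public static string CreateSharedAccessSignature(CloudBlobContainer container)
    {
        // Grant read-only access for a limited time window (assumed 1 hour)
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        };

        // Returns a query string such as "?sv=...&sig=..." to append to blob URLs
        return container.GetSharedAccessSignature(policy);
    }
}
```

A client can then open a private blob with `privateBlobUri + signature`, whereas the bare URI returns an error.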
8. Add the following code after the // TODO: Lab 9, Exercise 3, Task 1.4: get a list of files in the trip's
private folder comment:
9. In the allKeys variable assignment, replace the publicUris variable with the allUris variable. The
resulting code should resemble the following:
11. Locate the ToFileDto method and explore its code. If the requested file is private, you create a shared
access key for the blob's container, and then set the Uri property of the file to a URL containing the
shared access key.
12. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click
Publish.
13. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 17.
14. In the Publish Windows Azure Application dialog box, click Import.
15. Type D:\AllFiles\Mod09\LabFiles in the File name text box, and then click Open. Select your
publish settings file and click Open.
17. On the Common Settings tab, click the Cloud Service box, and select the cloud service that matches
the name you wrote down at the beginning of the lab while running the setup script.
18. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
9. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
10. Press Ctrl+F5 to start the client app without debugging.
11. If you are prompted to allow the app to run in the background, click Allow.
12. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
13. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
14. Wait for the app to show a list of flights from Seattle to New York.
16. In the First Name text box, type your first name.
17. In the Last Name text box, type your last name.
20. In the Home Address text box, type 423 Main St..
21. In the Email Address text box, type your email address.
24. In the Blue Yonder Companion page, click the current trip from Seattle to New York.
25. In the Current Trip page, display the app bar by right-clicking or by swiping from the bottom of the
screen. Click Media.
26. In the Media page, display the app bar by right-clicking or by swiping from the bottom of the screen.
Click Add Files from Disk.
28. In the Media page, display the app bar by right-clicking or by swiping from the bottom of the screen.
Click Upload Item to Public Storage.
32. In the Media page, display the app bar by right-clicking or by swiping from the bottom of the screen.
Click Upload Item to Private Storage.
35. In the Media page, wait for a couple of seconds until the images are downloaded from storage.
Verify you see both the private and the public photos.
36. Click the back button to return to the Current Trip page, and then click the back button again to
return to the Blue Yonder Companion page. Under New York at a Glance, verify you see the photo
of the Statue of Liberty you uploaded to the public container.
3. In Server Explorer, right-click Windows Azure Storage, and then click Add New Storage Account.
4. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 8.
5. In the Add New Storage Account dialog box, click the Download Publish Settings hyperlink.
Note: The browser automatically logs you in to the portal. If you are redirected to the Windows
Live ID Sign in page, type your email address and password, and then click Sign in.
6. The publish settings file is generated, and a Do you want to open or save... Internet Explorer dialog
appears at the bottom. Click the arrow within the Save button. Select the Save as option and specify
the following location: D:\AllFiles\Mod09\LabFiles. Click Save. If a Confirm Save As dialog box
appears, click Yes.
7. Return to the Add New Storage Account dialog box in Visual Studio 2012. Click Import. Type
D:\AllFiles\Mod09\LabFiles and select the file that you downloaded in the previous step. Make sure
that your subscription is selected in the Subscription dropdown list.
8. In the Account name dropdown list, select the account named blueyonderlab09yourinitials
(where yourinitials is your initials, in lowercase). Click OK.
10. Under Blobs, double-click the container that ends with public. The blob container holds one file.
11. In the container's file table, right-click the first line, and then click Copy URL.
14. Return to Visual Studio 2012, and in Server Explorer, double-click the container that ends with
private. The blob container holds one file.
15. In the container's file table, right-click the first line, and then click Copy URL.
18. The private photos cannot be accessed by a direct URL; therefore, an HTTP 404 (The webpage cannot
be found) page is shown.
Note: The client app is able to show the private photo because it uses a URL that contains a
shared access permission key.
19. In Server Explorer, expand the Tables node, and then double-click the FilesMetadata node.
20. View the content of the FilesMetadata table. The table contains metadata for both public and
private photos.
Results: After you complete the exercise, you will be able to use the client app to upload photos to the
private and public blob containers. You will also be able to view the content of the Blob and Table storage
by using Visual Studio 2012.
2. Browse to D:\AllFiles\Mod10\LabFiles\Setup.
3. Double-click the setup.cmd file. When prompted for information, provide it according to the
instructions.
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor; otherwise, you can
ignore the warning.
4. Wait for the script to complete successfully, write down the name of the Windows Azure Cloud
Service, and press any key to close the window.
5. On the Start screen, click the Visual Studio 2012 tile.
10. When prompted that the Microsoft.ServiceBus assembly could not be found, click Yes. Browse to
D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server\BlueYonder.Server.Booking.WebHost\bin,
select Microsoft.ServiceBus.dll, and then click Open. Click Yes in the dialog box.
3. In the Message Logging pane, set the LogEntireMessage and LogMessagesAtServiceLevel settings to
True. Set the LogMessagesAtTransportLevel setting to False.
4. Press Ctrl+S to save the changes.
5. Click File on the menu bar, and select Exit to close the window.
Results: You can test your changes at the end of the lab.
5. Implement the Trace method by adding the following code to the method.
using System.Web.Http.Tracing;
9. Add the following code to the beginning of the Register method, before setting the dependency
resolver.
using System.Web.Http.Tracing;
13. Add the following code to the Post method, after the call to the Save method.
3. In the Diagnostics configuration dialog box, on the Application logs tab, change the Log level from
Error to Verbose.
4. On the Log directories tab, in the Transfer period combo box, select 1.
6. In the Directories grid, in the IIS logs row, type 1024 in the Directory quota (MB) column.
7. Click OK, and then press Ctrl+S to save the changes to the role configuration.
2. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 6.
3. In the Publish Windows Azure Application dialog box, click the Sign in to download credentials
hyperlink.
Note: The browser automatically logs you in to the portal. If you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.
4. The publish settings file is generated, and a Do you want to open or save... Internet Explorer dialog
appears at the bottom. Click the arrow within the Save button. Select the Save as option and specify
the following location:
D:\AllFiles\Mod10\LabFiles. Click Save. If a Confirm Save As dialog box appears, click Yes.
5. Return to the Publish Windows Azure Application dialog box in Visual Studio 2012. Click Import. Type
D:\AllFiles\Mod10\LabFiles and select the file that you downloaded in the previous step. Make sure
that your subscription is selected under the Choose your subscription section.
6. Click Next.
7. On the Common Settings tab, click the Cloud Service box, and select the cloud service that matches
the name you wrote down at the beginning of the lab while running the setup script.
8. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.
5. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
7. Locate the BaseUri property, and replace the {CloudService} string with the Windows Azure Cloud
Service name you wrote down at the beginning of this lab.
8. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
10. If you are prompted to allow the app to run in the background, click Allow.
11. Click Close to close the confirmation message, and then close the client app.
12. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
13. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
14. Wait for the app to show a list of flights from Seattle to New York.
16. In the First Name text box, type your first name.
17. In the Last Name text box, type your last name.
18. In the Passport text box, type Aa1234567.
21. In the Email Address text box, type your email address.
4. Click Cancel.
5. On the View menu, click Server Explorer.
6. In Server Explorer, right-click Windows Azure Storage and click Add New Storage Account.
7. In the Add New Storage Account dialog box, select the storage account you have written down in
the previous step from the Account name drop down list.
8. Click OK.
9. In Server Explorer, under Windows Azure Storage, expand the new storage account you added, and
then expand Tables.
10. Click Tables, and double-click WADLogsTable.
11. Scroll right and explore the table by looking at the Message column, which presents the values of the
logged events.
12. Look for the message that starts with ReservationsController, and then double-click the row to open
its details.
Note: In addition to the trace message your code writes to the log, ASP.NET Web API
writes several other infrastructure trace messages.
13. In the Edit Entity dialog box, view the message and then click OK to close the dialog box.
14. In Server Explorer, under Windows Azure Storage, under the new storage account you added,
expand Blobs.
15. Double-click wad-iis-logfiles, and in the container's file list, double-click the first line to open it in
Notepad. If you are prompted to select how to open this type of file, click More options and then
choose to open the file with Notepad.
16. After the IIS log opens in Notepad, verify you see the requests for the Travelers, Locations, Flights,
and Reservations controllers. Close Notepad.
Note: It might take more than a minute from the time a request is sent until it is
logged by IIS. If you do not yet see any logs, or the requests are missing from the log,
wait another minute, refresh the blob container, and then download the log again.
17. On the Start screen, click the Computer tile to open File Explorer.
22. Review the CreateReservationResponse message on the Message tab in the bottom-right pane. Scroll
to the end of the message to view the <s:Body> element.
Results: After you complete the exercise, you will be able to use the client app to purchase a trip, and
then view the created log files, for both the Windows Azure deployment and the on-premises WCF
service.
Note: The browser should log you in automatically to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.
3. If the Windows Azure Tour dialog box appears, click Close (the X button).
4. Click NEW, then click APP SERVICES, then click ACCESS CONTROL, and then click QUICK CREATE.
5. Enter the following values:
Namespace: BlueYonderCompanionYourInitials (YourInitials will contain your initials).
Name: BlueYonderCloud
Realm: urn:blueyonder.cloud
Return URL: https://CloudServiceName.cloudapp.net/federationcallback (CloudServiceName is
the name of the cloud service you wrote down in the beginning of the lab while running the setup
script)
Token format: SWT
5. Verify that the Create new rule group check box is selected.
6. Under Token Signing Settings, click the Generate button to generate a new token signing key.
2. On the Rule Groups page, under the Rule Groups section, click Default Rule Group for
BlueYonderCloud.
3. On the Edit Rule Group page, click the Generate link above the Rules section.
4. On the Generate Rules page, click Generate.
Results: After you complete this exercise, you will have created a new ACS namespace and configured
an RP for the ASP.NET Web API services. You will test the RP configuration at the end of the lab.
4. On the Tools menu, point to Library Package Manager, and then click Package Manager Console.
Note: The last known version of the ThinkTecture.IdentityModel NuGet package that
supports the SWT token is 2.2.1. Therefore, you need to use the Package Manager Console to
install this NuGet package, rather than using the Manage NuGet Packages dialog box.
Name: ACS.IssuerName
Type: String
Name: ACS.Realm
Type: String
Value: urn:blueyonder.cloud
5. Return to the Internet Explorer window, to the ACS portal, and in the pane on the left, click the
Certificates and Keys link under the Service settings section.
6. On the Certificates and Keys page, click BlueYonderCloud, and then click Show Key.
7. Select the generated key, and press Ctrl+C to copy it to the clipboard.
8. Return to Visual Studio 2012 (do not close the browser), to the Settings tab, and click Add Settings.
Enter a new setting with the following information:
Name: ACS.SigningKey
Type: String
10. In Solution Explorer, right-click the BlueYonder.Companion.Host project, point to Add, and then
click New Folder. Name the folder Authentication, and press Enter.
11. In Solution Explorer, right-click the Authentication folder, point to Add, and then click Class.
12. In the Add New Item dialog box, type AuthenticationConfig in the Name text box, and then click
Add.
13. Add the following using directive to the beginning of the file.
using Thinktecture.IdentityModel.Tokens.Http;
15. Enter the following code after the // Get the SWT configuration comment.
string issuerName =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.IssuerName").Trim();
string realm =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.Realm").Trim();
string signingKey =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.SigningKey").Trim();
16. Add the following code after the // Add an SWT authentication support comment.
config.AddSimpleWebToken(
issuer: issuerName,
audience: realm,
signingKey: signingKey,
options: AuthenticationOptions.ForAuthorizationHeader("OAuth"));
Note: Realm is the unique identifier of your RP. Audience refers to the realm of the RP that
redirected the client to the STS. In most cases, the realm and audience are the same, because you
are redirected back to the application you came from. There are scenarios where the RP that got
the token is not the same RP that requested the token.
The AuthenticationOptions.ForAuthorizationHeader method sets the value of the HTTP
Authorization header that is added to unauthorized responses. The OAuth value specifies that
OAuth authentication should be used.
17. Add the following code after the // Set defaults comment.
config.DefaultAuthenticationScheme = "OAuth";
config.EnableSessionToken = true;
string signingKey =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.SigningKey");
// Add an SWT authentication support
config.AddSimpleWebToken(
issuer: issuerName,
audience: realm,
signingKey: signingKey,
options: AuthenticationOptions.ForAuthorizationHeader("OAuth"));
// Set defaults
config.DefaultAuthenticationScheme = "OAuth";
config.EnableSessionToken = true;
return config;
}
4. In the packages list, click the Simple Web Token Support for Windows Identity Foundation package,
and then click Install.
7. In the Add New Item dialog box, type FederationCallbackController in the Name text box, and
then click Add.
8. Add the following using directives to the beginning of the file.
using System.Web.Http;
using System.Net.Http;
using System.Net;
using System.Web;
using Microsoft.IdentityModel;
10. Add the following method to the class, to handle active federation with POST requests.
Note: The POST method extracts the token from the request's body, and returns a message
carrying the token in the Location HTTP header. The client application can then extract the token
from the response and use it to authenticate against the service in future requests.
The special redirect to FederationCallback/end indicates to the client that the authentication
process has completed successfully. This message flow is part of the passive federation process.
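The active-federation POST handler this note describes can be sketched as follows. This is a hypothetical illustration, not the lab's actual code: the form field name used to carry the token and the redirect URL format are assumptions.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web;
using System.Web.Http;

public class FederationCallbackController : ApiController
{
    public async Task<HttpResponseMessage> Post()
    {
        // Extract the token ACS posted in the request body
        // ("wresult" is an assumed field name)
        string body = await Request.Content.ReadAsStringAsync();
        string token = HttpUtility.ParseQueryString(body)["wresult"];

        // Hand the token back to the client in the Location HTTP header;
        // the FederationCallback/end URL signals that authentication completed
        var response = Request.CreateResponse(HttpStatusCode.Redirect);
        response.Headers.Location = new Uri(
            "/FederationCallback/end?acsToken=" + HttpUtility.UrlEncode(token),
            UriKind.Relative);
        return response;
    }
}
```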
12. Return to the browser window, to the ACS portal, and in the pane on the left, click the Certificates and
Keys link under the Service settings section.
13. In the Certificates and Keys page, click BlueYonderCloud, and then click Show Key.
14. Select the generated key, and press Ctrl+C to copy it to the clipboard.
15. Return to Visual Studio 2012 (do not close the browser), and in Solution Explorer, in the
BlueYonder.Companion.Host project, double-click Web.config.
16. Locate the <appSettings> section, and in it locate the <add> element whose key attribute is set to
SwtSigningKey.
17. Select the value [your 256-bit symmetric key configured in the STS/ACS], and then press Ctrl+V to
replace it with the token signing key you copied from the portal.
18. Locate the <microsoft.identityModel> section in the end of the file, and in it, locate
<audienceUris> element.
Note: WIF 4.5 uses the <system.identityModel> section. However, the WIF.SWT NuGet package you
installed still uses WIF 4, which uses the <microsoft.identityModel> section.
20. Locate the <trustedIssuers> element, and replace the string [youracsnamespace] with
blueyondercompanionyourinitials (yourinitials will contain your initials). Make sure the string you
type is in lowercase letters.
21. Add the following configuration between the <service> and <audienceUris> tags.
<federatedAuthentication>
<wsFederation passiveRedirectEnabled="false" issuer="urn:unused" realm="urn:unused"
requireHttps="false" />
</federatedAuthentication>
23. In Solution Explorer, in the BlueYonder.Companion.Host project, expand the References folder.
24. Click Microsoft.IdentityModel, and in the Properties window, change Copy Local to True.
Note: WIF 4 is not installed by default in Windows Azure VMs. Therefore, you need to make
sure the assembly is included in the deployed package.
using Thinktecture.IdentityModel.Tokens.Http;
3. Add the following code to the Register static method, before the first call to the MapHttpRoute
method.
config.Routes.MapHttpRoute(
name: "callback",
routeTemplate: "FederationCallback",
defaults: new { Controller = "FederationCallback" });
Note: The order of routes is important; you must add the federation callback route before
adding the default route ({controller}/{id}) which handles all the other calls to the controllers. If
you add the default route first, it will be used even when you use a URL that ends with
FederationCallback.
using BlueYonder.Companion.Host.Authentication;
5. Create a new authentication configuration by adding the following code to the beginning of the
Register method.
AuthenticationConfiguration authenticationConfig =
AuthenticationConfig.CreateConfiguration();
6. Locate the call to the MapHttpRoute method, which uses the name TravelerReservationsApi.
Replace the method call with the following code.
config.Routes.MapHttpRoute(
name: "TravelerReservationsApi",
routeTemplate: "travelers/{travelerId}/reservations",
defaults: new
{
controller = "reservations",
id = RouteParameter.Optional
},
constraints: null,
handler: new AuthenticationHandler(authenticationConfig,
GlobalConfiguration.Configuration));
7. Locate the call to the MapHttpRoute method, which uses the name ReservationApi. Replace the
method call with the following code.
config.Routes.MapHttpRoute(
name: "ReservationsApi",
routeTemplate: "Reservations/{id}",
defaults: new
{
controller = "Reservations",
action = "GetReservation"
},
constraints: new
{
httpMethod = new HttpMethodConstraint(HttpMethod.Get)
},
handler: new AuthenticationHandler(authenticationConfig,
GlobalConfiguration.Configuration));
8. Locate the call to the MapHttpRoute method, which uses the name DefaultApi. Replace the method
call with the following code.
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "{controller}/{id}",
defaults: new { id = RouteParameter.Optional },
constraints: null,
handler: new AuthenticationHandler(authenticationConfig,
GlobalConfiguration.Configuration));
Note: The authentication handler is not used for the first two routes you just added,
because the requests to the FederationCallback controller are sent before the client is
authenticated. The authentication handler is not used for the location's weather route because
the GetWeather action is public and does not require any authentication.
2. Decorate the class with the [Authorize] attribute. The resulting code should resemble the following.
[Authorize]
public class ReservationsController : ApiController
{
...
}
4. Repeat the same process for the TravelersController.cs and the TripsController.cs files.
Results: After completing this exercise, you will have configured your ASP.NET Web API services to use
claims-based identities, authenticate users, and authorize users. You will test this configuration at the end
of the lab.
2. If your subscription is already listed in the subscriptions drop-down list, skip to the next task;
otherwise, continue.
3. In the Publish Windows Azure Application dialog box, click the Sign in to download credentials
hyperlink.
Note: The browser automatically logs you in to the portal. If you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.
4. The publish settings file is generated, and a Do you want to open or save... Internet Explorer dialog
appears at the bottom. Click the arrow within the Save button. Select the Save as option and specify the
following location:
D:\AllFiles\Mod11\LabFiles. Click Save. If a Confirm Save As dialog box appears, click Yes.
5. Return to the Publish Windows Azure Application dialog box in Visual Studio 2012. Click Import. Type
D:\AllFiles\Mod11\LabFiles and select the .publishsettings file that you downloaded in the previous
step. Make sure that your subscription is selected under the Choose your subscription section.
6. Click Next.
7. On the Common Settings tab, click the Cloud Service box, and select the cloud service that matches
the name you wrote down at the beginning of the lab after running the setup script.
8. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.
3. Type D:\AllFiles\Mod11\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
in the File name text box, and then click Open.
4. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
6. Locate the BaseUri property, and replace the {CloudService} string with the Windows Azure Cloud
Service name you wrote down at the beginning of this lab.
8. In Solution Explorer, expand the BlueYonder.Companion.Client project, expand Helpers, and then
double-click DataManager.cs.
9. In the DataManager class, locate the private constant named AcsNamespace, and set its value to
BlueYonderCompanionYourInitials (YourInitials will contain your initials).
Task 3: Examine the Client Code That Manages the Authentication Process
1. Locate the GetLiveIdUri method, and examine its code. The method sends a request to the ACS to
request the list of identity providers, and returns the address of the first identity provider.
Note: The ACS namespace you created automatically has the Windows Live ID identity
provider.
2. Locate the AuthenticateAsync method, and examine its code. The method uses the
WebAuthenticationBroker class to authenticate the client with the Windows Live ID identity
provider, and then sends the token to the ASP.NET Web API FederationCallback controller for
authentication.
3. Locate the GetSessionToken method and examine its code. The method uses the SWT returned from
the federation callback to perform a session handshake with the ASP.NET Web API service. The service
returns a session token, which is then stored in the static Token field.
4. Locate the CreateHttpClient method and examine its code. The method creates an HTTP request
and adds the HTTP Authorization header with the stored token.
5. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
6. Press Ctrl+F5 to start the client without debugging.
7. Wait for the Connecting to a service page to show, enter your email and password, and click Sign in.
8. Wait until the connection is complete and you see the main window of the app.
9. Right-click the screen to open the app bar, and click Log out.
Results: After completing this exercise, you will be able to run the client app, and log in using your
Windows Live ID credentials.
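The token-attachment pattern examined in Task 3, step 4 can be sketched as follows. The "Session" scheme name is an assumption; the scheme the lab's client actually sends depends on the server's authentication configuration.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

public static class ClientFactory
{
    // Session token obtained from the handshake with the Web API service
    public static string Token { get; set; }

    // Every request created through this factory carries the stored token
    // in the HTTP Authorization header
    public static HttpClient CreateHttpClient()
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Session", Token);
        return client;
    }
}
```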
2. Browse to D:\AllFiles\Mod12\LabFiles\Setup.
3. Double-click the setup.cmd file. When prompted for information, provide it according to the
instructions.
Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor; otherwise, you can
ignore the warning.
4. Wait for the script to finish, and then press any key to close the script window.
5. On the Start screen, click the Visual Studio 2012 tile.
7. Browse to D:\Allfiles\Mod12\LabFiles\begin\BlueYonder.Server.
8. Select the file BlueYonder.Companion.sln and then click Open.
9. In Solution Explorer, expand the BlueYonder.Companion.Host.Azure project, right-click Roles, then
point to Add, and then click New Worker Role Project.
10. In the Add New .NET Framework 4.5 Role Project dialog box, click Cache Worker Role.
11. In the Name text box, type BlueYonder.Companion.CacheWorkerRole, and then click Add.
Task 2: Add the Windows Azure Caching NuGet Package to the ASP.NET Web API
Project
1. In Solution Explorer, right-click the BlueYonder.Companion.Controllers project, and click Manage
NuGet Packages.
2. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.
4. In the center pane, click the Windows Azure Caching package, and then click Install.
6. Wait for installation completion. Click Close to close the dialog box.
7. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and click Manage NuGet
Packages.
8. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.
11. Wait for installation completion. Click Close to close the dialog box.
using Microsoft.ApplicationServer.Caching;
3. Locate the comment // TODO: Place cache initialization here, and place the following code after it:
4. Locate the comment // TODO: Place cache check here, and place the following code after it:
5. Locate the comment // TODO: Insert into cache here, and place the following code after it:
cache.Put(cacheKey, routesWithSchedules);
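The three TODO comments in steps 3 through 5 outline a cache-aside pattern, which can be sketched as follows assuming the Windows Azure Caching API. The cache key shape, the Route type, and the database query delegate are placeholders, not the lab's actual code.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

// Placeholder entity; the lab uses its own route and schedule types
public class Route { }

public class RoutesCache
{
    // Cache initialization: DataCacheFactory locates the cache worker role
    // through the role's caching configuration
    private readonly DataCache _cache = new DataCacheFactory().GetDefaultCache();

    public List<Route> GetRoutesWithSchedules(
        string source, string destination,
        Func<string, string, List<Route>> queryDatabase)
    {
        string cacheKey = "routes-" + source + "-" + destination;

        // Cache check: Get returns null when the key is absent
        var routesWithSchedules = _cache.Get(cacheKey) as List<Route>;

        if (routesWithSchedules == null)
        {
            // Cache miss: load from the database, then insert into the cache
            // so the next request is served from memory
            routesWithSchedules = queryDatabase(source, destination);
            _cache.Put(cacheKey, routesWithSchedules);
        }
        return routesWithSchedules;
    }
}
```

The null check on the first run and the populated value on the second run are exactly what steps 15 and 20 below ask you to observe in the debugger.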
2. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click Set
as StartUp Project.
7. Type D:\AllFiles\Mod12\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
in the File name text box, and then click Open.
8. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
9. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
10. The client app is already configured to use the Windows Azure compute emulator. Press Ctrl+F5 to
start the client app without debugging.
Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For the purpose of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.
11. If you are prompted to allow the app to run in the background, click Allow.
12. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
13. Click Search, and in the Search box on the right side enter N.
14. Go back to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012. The code execution
breaks, and the line with the breakpoint is highlighted in yellow.
15. Press F10 several times, until you reach the second if statement. Hover over the
routesWithSchedules object and verify that the value of the variable is null, meaning the item was
not found in the cache.
16. Press F5 to continue, and go back to the 20487B-SEA-DEV-C virtual machine, to the client app.
17. Close the client app, return to Visual Studio 2012, and press Ctrl+F5 to start the client app without
debugging.
18. Display the app bar by right-clicking or by swiping from the bottom of the screen. Click Search, and
in the Search box on the right side enter N.
19. Go back to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012. The code execution
breaks, and the line with the breakpoint is highlighted in yellow.
20. Press F10 several times, until you reach the second if statement. Hover over the
routesWithSchedules object and verify that the code now retrieves the data from the cache.
21. Press F5 to continue running the code, return to Visual Studio 2012, and on the Debug menu click
Stop Debugging.
22. Go back to the 20487B-SEA-DEV-C virtual machine, and close the client app.
Results: You will have added a caching worker role to the Cloud project, and implemented other
Windows Azure caching features.
3. Browse to D:\Allfiles\Apx01\LabFiles\begin\BlueYonder.Server.
using System.ServiceModel;
10. To create a local field in the class that stores the operation parameters, add the following code.
12. Implement the IExtension<OperationContext> interface by adding the following code to the class.
Note: You do not have to add any code to the Attach and Detach methods, because you
only use the extension for state management, and not for adding functionality to the operation
context.
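Combining the steps above, a state-only extension might look like the following sketch. The ParametersInfo class name and the empty Attach and Detach methods come from the lab; the property and constructor shapes are assumptions.

```csharp
// Sketch of a state-only operation context extension (member names
// are illustrative, not the lab's exact code).
using System.ServiceModel;

public class ParametersInfo : IExtension<OperationContext>
{
    // The captured operation parameters.
    public object[] Parameters { get; private set; }

    public ParametersInfo(object[] parameters)
    {
        Parameters = parameters;
    }

    // Empty on purpose: the extension only carries state, so no hooks
    // are needed when it attaches to or detaches from the context.
    public void Attach(OperationContext owner) { }
    public void Detach(OperationContext owner) { }
}
```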
Task 2: Create a Parameter Inspector that Stores the Parameter Values in an Operation
Extension
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the
Extensions folder, point to Add, and then click Class.
2. In the Add New Item dialog box, type ParametersInspector in the Name box, and then click Add.
3. To the beginning of the file, add the following using directives.
using System.ServiceModel;
using System.ServiceModel.Dispatcher;
5. Implement the IParameterInspector interface by adding the following code to the class.
6. To the BeforeCall method, add the following code to store the input parameters in an extension
object.
OperationContext.Current.Extensions.Add(new ParametersInfo(inputs));
return null;
Note: You have to implement the BeforeCall method to save the parameters of the
operation before the operation is invoked. You do not have to implement the AfterCall method,
because it executes only after the operation completes without exceptions.
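Putting the BeforeCall snippet into context, the complete inspector might resemble the following sketch. It assumes the ParametersInfo extension class from Task 1; this is not the verbatim lab code.

```csharp
// Sketch of the parameter inspector; relies on the ParametersInfo
// extension class created in Task 1.
using System.ServiceModel;
using System.ServiceModel.Dispatcher;

public class ParametersInspector : IParameterInspector
{
    // Runs before the operation is invoked: capture the inputs in an
    // extension so the error handler can read them later.
    public object BeforeCall(string operationName, object[] inputs)
    {
        OperationContext.Current.Extensions.Add(new ParametersInfo(inputs));
        return null; // no correlation state is needed
    }

    // Runs only after the operation completes without throwing, so
    // there is nothing to do here for error logging.
    public void AfterCall(string operationName, object[] outputs,
        object returnValue, object correlationState) { }
}
```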
Task 3: Create an Error Handler that Traces Parameter Values for Faulty Operations
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the
Extensions folder, point to Add, and then click Existing Item.
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Diagnostics;
Note: Use the System.Diagnostics.TraceSource class to write trace messages to the WCF
tracing log file.
10. To implement the IErrorHandler interface, add the following code to the class.
11. To obtain the extension object from the operation's context, in the HandleError method, add the
following code before the return statement.
ParametersInfo parametersInfo =
OperationContext.Current.Extensions.Find<ParametersInfo>();
if (parametersInfo != null)
{
}
Note: The IErrorHandler interface provides two methods, ProvideFault and HandleError.
You can implement the ProvideFault method to provide a fault message to WCF based on the
thrown exception. The HandleError method is called after WCF returns the fault message to the
client, so you can log the thrown exception without making the client wait for the logging
procedure to complete.
12. To log the parameters, add the following code in the if block.
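Assembled from the snippets in steps 10-12, the error handler might look like the following sketch. The trace message format and the TraceSource usage are assumptions; only the extension lookup and the ErrorHandlerTrace source name (referenced later in the tracing configuration) come from the lab.

```csharp
// Sketch of the error handler; the logging format is an assumption.
using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingErrorHandler : IErrorHandler
{
    // Source name matches the "ErrorHandlerTrace" entry used in the
    // <system.diagnostics> configuration later in this exercise.
    private static readonly TraceSource Trace = new TraceSource("ErrorHandlerTrace");

    // Called after the fault was already returned to the client, so
    // logging here does not delay the caller.
    public bool HandleError(Exception error)
    {
        ParametersInfo parametersInfo =
            OperationContext.Current.Extensions.Find<ParametersInfo>();
        if (parametersInfo != null)
        {
            Trace.TraceEvent(TraceEventType.Error, 0,
                "Operation failed: {0}. Parameters: {1}",
                error.Message, string.Join(", ", parametersInfo.Parameters));
        }
        return false; // let WCF handle the exception normally
    }

    // Left empty: WCF builds the fault message from the thrown exception.
    public void ProvideFault(Exception error, MessageVersion version,
        ref Message fault) { }
}
```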
Task 4: Create a Custom Service Behavior for the Error Handler and Apply it to the
Service
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the
Extensions folder, point to Add, and then click Class.
2. In the Add New Item dialog box, type ErrorLoggingBehavior in the Name box, and then click Add.
3. To the beginning of the file, add the following using directives.
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.Collections.ObjectModel;
using System.ServiceModel.Dispatcher;
5. To implement the IServiceBehavior interface, add the following code to the class.
6. To attach the new error handler to each channel, add the following code to the
ApplyDispatchBehavior method.
7. To apply the custom parameter inspector to each operation, add the following code at the end of
the foreach block you added (after the error handler).
10. In the Reference Manager dialog box, expand the Assemblies node in the navigation pane, and
then click Framework.
11. Scroll down the assemblies list, point to the System.Configuration assembly, select the check box
next to the assembly name, and then click OK.
14. To the beginning of the file, add the following using directive.
using System.ServiceModel.Configuration;
16. Implement the BehaviorExtensionElement abstract class by adding the following code to the class.
17. To the get property accessor of the BehaviorType property, add the following code.
return typeof(ErrorLoggingBehavior);
Note: You can use a custom behavior in the configuration file only if you create a class for
its configuration element. The configuration element class has to provide two things: the type of
the custom behavior class and an instance of it.
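Based on step 17 and the note above, the configuration element class might resemble the following sketch. The CreateBehavior body is an assumption consistent with the ErrorLoggingBehavior class created in Task 4.

```csharp
// Sketch of the behavior extension element; provides the two things the
// note describes: the behavior's type and an instance of it.
using System;
using System.ServiceModel.Configuration;

public class ErrorLoggingBehaviorExtensionElement : BehaviorExtensionElement
{
    // Tells the configuration system which behavior type this element creates.
    public override Type BehaviorType
    {
        get { return typeof(ErrorLoggingBehavior); }
    }

    // Returns the behavior instance that WCF attaches to the service.
    protected override object CreateBehavior()
    {
        return new ErrorLoggingBehavior();
    }
}
```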
21. Locate the <system.serviceModel> tag, and add the following section after it, before the
<services> section.
<extensions>
<behaviorExtensions>
<add name="errorLoggingBehavior"
type="BlueYonder.BookingService.Implementation.Extensions.ErrorLoggingBehaviorExtensi
onElement, BlueYonder.BookingService.Implementation"/>
</behaviorExtensions>
</extensions>
22. Locate the <serviceBehaviors> element under the <behaviors> section, and add the following
element between the <behavior> and <serviceMetadata> tags.
<errorLoggingBehavior/>
23. The resulting <behaviors> section should resemble the following configuration.
<behaviors>
<serviceBehaviors>
<behavior>
<errorLoggingBehavior/>
<serviceMetadata httpGetEnabled="true"
httpGetUrl="http://localhost/BlueYonder/Booking"/>
</behavior>
</serviceBehaviors>
</behaviors>
Note: Visual Studio IntelliSense uses built-in schemas to perform validations. Therefore, it
will not recognize the errorLoggingBehavior behavior extension and will display a warning.
Disregard this warning.
24. To save the file, press Ctrl+S, and leave the file open.
<system.diagnostics>
<sources>
</sources>
<trace autoflush="true" />
</system.diagnostics>
Note: The autoflush attribute controls whether log messages are immediately written to
the log, or cached in memory and periodically flushed. The value of the attribute is set to true so
that you can view the results immediately without waiting for the log to flush its content to the
file.
2. In the <system.diagnostics> element, add the following configuration to configure where the logs
are written.
<sharedListeners>
<add name="ServiceModelTraceListener"
type="System.Diagnostics.XmlWriterTraceListener"
initializeData="D:\AllFiles\Apx01\LabFiles\WCFTrace.svclog" />
</sharedListeners>
3. In the <sources> element, add the following configuration to log both WCF trace messages and
your custom trace messages.
<add name="ServiceModelTraceListener">
<filter type="" />
</add>
</listeners>
</source>
Note: In the above configuration you have two trace sources; each one configures a
different event source. The System.ServiceModel source is used for tracing WCF activities, and
the ErrorHandlerTrace source is used by the LoggingErrorHandler class, in the TraceSource
constructor. WCF tracing is covered in Module 10, Monitoring and Diagnostics, Lesson 2,
Configuring Service Diagnostics, in Course 20487.
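The <sources> snippet in step 3 appears truncated. Based on the note above, the complete section likely resembles the following sketch; the switchValue levels are assumptions, not the lab's exact values.

```xml
<!-- Sketch of the two trace sources described in the note above.
     Both route their output to the shared ServiceModelTraceListener. -->
<sources>
  <source name="System.ServiceModel" switchValue="Warning">
    <listeners>
      <add name="ServiceModelTraceListener">
        <filter type="" />
      </add>
    </listeners>
  </source>
  <source name="ErrorHandlerTrace" switchValue="All">
    <listeners>
      <add name="ServiceModelTraceListener">
        <filter type="" />
      </add>
    </listeners>
  </source>
</sources>
```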
5. In Solution Explorer, right-click the BlueYonder.BookingService.Host project, and then click Set as
StartUp Project.
6. To start the service host without debugging, press Ctrl+F5.
7. On the Start screen, click the Computer tile to open File Explorer. Browse to D:\AllFiles and double-
click the WcfTestClient shortcut.
8. In the WCF Test Client utility, on the File menu, click Add Service.
9. In the Add Service dialog box, type http://localhost/BlueYonder/Booking, and then click OK. Wait
until you see the service and endpoints tree in the pane to the left.
10. In the pane to the left, double-click the UpdateTrip() node, and then click Invoke in the UpdateTrip
tab. If a Security Warning dialog box appears, click OK.
11. Wait until an error dialog box appears that states "The confirmation code of the reservation is invalid".
To close the dialog box, click Close, and then close the WCF Test Client utility.
12. Return to File Explorer and browse to D:\AllFiles\Apx01\LabFiles. Double-click WCFTrace.svclog.
13. In the Microsoft Service Trace Viewer utility, in the Activity tab to the left, select the line marked in
red that says "Process action 'http://blueyonder.server.interfaces/IBookingService/UpdateTrip'".
14. In the pane to the right side, select the line marked in red that begins with "Exception of type
FaultException occurred: The confirmation code of the reservation is invalid".
15. In the Formatted tab on the lower-right side, locate the Application Data text area and verify that
you see the XML representation of the TripUpdateDto object.
Results: You can use the WCF Test Client utility to test the service, cause exceptions to be thrown in the
code, and check the log files to verify that the exception message is logged together with the parameters
that are sent to the service operation.
[TransactionFlow(TransactionFlowOption.Allowed)]
2. Locate the <system.serviceModel> tag and add the following <bindings> section between the
<system.serviceModel> and <services> tags.
<bindings>
<netTcpBinding>
<binding name="TcpTransactionalBind" transactionFlow="true" />
</netTcpBinding>
</bindings>
<endpoint name="FrequentFlyerTcp"
address="net.tcp://localhost:5010/BlueYonder/FrequentFlyer" binding="netTcpBinding"
bindingConfiguration="TcpTransactionalBind"
contract="BlueYonder.FrequentFlyerService.Contracts.IFrequentFlyerService" />
Note: Make sure that you replace the existing endpoint and do not add a second endpoint,
because the two endpoints have the same contract and the same address, and will therefore
collide.
Task 4: Add Code to the WCF Booking Service that Calls the Frequent Flyer WCF
Service
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, double-click
BookingService.cs.
2. In the BookingService class, locate the comment // TODO: 1 - Create a channel factory for the
Frequent Flyer service, and add the following code after it.
3. Locate the UpdateTrip method, and scroll to the end of the method until you see the comment //
TODO: 2 - Call the Frequent Flyer service to add the miles if the traveler has checked-in, and add the
following code between the two comments.
Task 5: Execute the Service Call and the Reservations Database Updates in a
Distributed Transaction
1. In Solution Explorer, right-click the BlueYonder.BookingService.Implementation project, and then
click Add Reference.
2. In the Reference Manager dialog box, expand the Assemblies node in the navigation pane, and
then click Framework.
3. Scroll down the assemblies list, point to the System.Transactions assembly, select the check box next
to the assembly name, and then click OK.
4. Return to the BookingService.cs file, and add the following using directive in the beginning of the
file.
using System.Transactions;
5. Locate the UpdateTrip method, and surround the code that begins with the if statement you
added earlier and ends with the Save method call, with the following using statement.
6. Add the following code at the end of the using block after the Save method.
scope.Complete();
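Steps 5-6 apply the standard TransactionScope pattern. The following self-contained sketch shows the shape of that pattern; the Action parameters stand in for the service call and the Save call and are not part of the lab code.

```csharp
// Sketch of the surround-with-scope pattern used in steps 5-6.
using System;
using System.Transactions;

public static class TransactionScopeSketch
{
    public static void RunInTransaction(Action callService, Action saveChanges)
    {
        using (var scope = new TransactionScope())
        {
            // With transactionFlow="true" on the binding, the WCF call
            // and the database update enlist in one distributed transaction.
            callService();
            saveChanges();

            // Without Complete(), disposing the scope rolls everything back.
            scope.Complete();
        }
    }
}
```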
Task 6: Update the WCF Client Configuration with the Frequent Flyer Service Endpoint
and the Support for Transaction Flow in the Bindings
1. In Solution Explorer, in the BlueYonder.BookingService.Host project, double-click App.config.
2. Locate the <system.serviceModel> element and add the following <bindings> section at the end
of the element, before the </system.serviceModel> tag.
<bindings>
<netTcpBinding>
<binding name="TcpTransactionalBind" transactionFlow="true" />
</netTcpBinding>
</bindings>
<client>
<endpoint address="net.tcp://localhost:5010/BlueYonder/FrequentFlyer"
binding="netTcpBinding" bindingConfiguration="TcpTransactionalBind"
contract="BlueYonder.FrequentFlyerService.Contracts.IFrequentFlyerService"
name="FrequentFlyerEP"></endpoint>
</client>
6. In the Services window, look for the Distributed Transaction Coordinator service and check its
Status column. If the status of the service is not Running, right-click it, and then click Start.
10. Wait until the message "Frequent Flyer Service Is Running... Press [ENTER] to close." appears.
11. On the Start screen, click the Computer tile to open File Explorer. Browse to D:\AllFiles and double-
click the WcfTestClient shortcut.
12. In the WCF Test Client utility, on the File menu, click Add Service.
13. In the Add Service dialog box, type http://localhost/BlueYonder/Booking, and then click OK. Wait
until you see the service and endpoints tree in the pane to the left.
15. In the Add Service dialog box, type http://localhost/BlueYonder/FrequentFlyer, and then click
OK. Wait until you see the service and endpoints tree in the pane to the left.
16. Double-click the UpdateTrip() node in the pane to the left and enter the following values in the
Request area of the UpdateTrip tab:
FlightDirection: Departing
ReservationConfirmationCode: Aa123
17. In the TripToUpdate property, click the null value, open the drop-down list, and select
BlueYonder.BookingService.Contracts.TripDto.
18. Expand the TripToUpdate node and enter the following values:
Class: First
FlightScheduleID: 1
Status: CheckedIn
19. In the UpdateTrip tab, click Invoke. If a Security Warning dialog box appears, click OK.
20. Wait until the service invocation is finished, and verify no errors are shown.
21. In the pane to the left, double-click the GetAccumulatedMiles() node and set the travelerId
parameter to 1 in the Request area of the GetAccumulatedMiles tab.
22. Click Invoke, and if a Security Warning dialog box appears, click OK. Wait until the service
invocation is finished, and verify the return value in the Response area is 5026.
23. Close the WCF Test Client utility and the two console windows.
Results: You can run the WCF Test Client utility, call an operation in the Booking service that starts a
distributed transaction, and verify that the Frequent Flyer service indeed committed its transaction.
2. Browse to D:\AllFiles\Apx02\LabFiles\Setup.
3. Double-click CreateServerCertificate.cmd. When the script completes, press any key to close the
command window.
<binding>
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>
Note: You can create one default binding configuration, without the name attribute, for
each binding type. The default configuration applies to any endpoint that uses that binding and
does not have its own binding configuration.
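For comparison, the following sketch (assembled from configurations used elsewhere in these labs) shows a default configuration next to a named one:

```xml
<netTcpBinding>
  <!-- Default configuration: applies to every netTcpBinding endpoint
       that does not specify a bindingConfiguration attribute. -->
  <binding>
    <security mode="Message">
      <message clientCredentialType="Certificate"/>
    </security>
  </binding>
  <!-- Named configuration: applies only to endpoints that reference it
       with bindingConfiguration="TcpTransactionalBind". -->
  <binding name="TcpTransactionalBind" transactionFlow="true"/>
</netTcpBinding>
```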
Task 3: Configure the service behavior to use the newly created certificate
1. Under the <system.serviceModel> section group, locate the <behaviors> section.
<serviceCredentials>
</serviceCredentials>
4. Add the <serviceCertificate> element to the <serviceCredentials> element with the following
configuration.
5. Add the <clientCertificate> element to the <serviceCredentials> element with the following
configuration.
<clientCertificate>
<authentication revocationMode="NoCheck"/>
</clientCertificate>
Note: You cannot check if the client certificate has been revoked, because it was generated
locally. If a real certification authority had issued the client certificate, it would have been possible
to check whether it was revoked.
Results: You can test your changes at the end of the next exercise.
2. In the Reference Manager dialog box, expand the Assemblies node in the navigation pane, and
then click Framework.
3. Select the System.IdentityModel assembly from the list, and then click OK.
5. In the Add New Item dialog box, type CertificateAuthorizationPolicy in the Name box, and then
click Add.
public CertificateAuthorizationPolicy()
{
_id = Guid.NewGuid();
}
public string Id
{
get
{
return _id.ToString();
}
}
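The class fragment above omits the members that IAuthorizationPolicy requires. A minimal sketch of the complete class follows; the Evaluate and Issuer bodies are placeholders, not the lab's implementation, which maps the caller's certificate to roles.

```csharp
// Sketch of the full authorization policy class; Evaluate is a
// placeholder, not the lab's certificate-to-role mapping logic.
using System;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;

public class CertificateAuthorizationPolicy : IAuthorizationPolicy
{
    private readonly Guid _id = Guid.NewGuid();

    public string Id
    {
        get { return _id.ToString(); }
    }

    // The issuer of the claims this policy adds.
    public ClaimSet Issuer
    {
        get { return ClaimSet.System; }
    }

    // Called during authorization; add claims or a principal to
    // evaluationContext.Properties here.
    public bool Evaluate(EvaluationContext evaluationContext, ref object state)
    {
        return true; // true = the policy has finished evaluating
    }
}
```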
Task 2: Configure the service authorization to use the custom authorization policy
1. In Solution Explorer, expand the BlueYonder.Server.Booking.Host project, and double-click
App.config.
2. Under the <system.serviceModel> section group, locate the <behaviors> section.
4. Add the <serviceAuthorization> element to the behavior with the following configuration.
[System.Security.Permissions.PrincipalPermission(
System.Security.Permissions.SecurityAction.Demand, Role="ReservationsManager")]
4. In the CreateReservation method, right-click the first line of code (the if statement), point to
Breakpoint, and then click Insert Breakpoint.
5. In Solution Explorer, right-click the BlueYonder.Server.Booking.Host project, and then click Set as
StartUp Project.
7. Verify that the console window opens without throwing any exceptions.
Results: After you complete this exercise, the booking service host is opened successfully and can locate
the service certificate.
Exercise 3: Configure the ASP.NET Web API Booking Service for Secured
Communication
Task 1: Create a client authentication certificate for the ASP.NET Web API booking
service
1. On the Start screen, click Computer to open the File Explorer window.
2. Browse to D:\AllFiles\Apx02\LabFiles\Setup.
3. Double-click CreateClientCertificate.cmd. When the script completes, press any key to close the
command window.
5. Under the <configuration> root element, locate the <system.serviceModel> section group.
6. Add a <bindings> section within the <system.serviceModel> section group, with the following
configuration.
<bindings>
<netTcpBinding>
<binding>
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>
</netTcpBinding>
</bindings>
Task 3: Configure the client-side endpoint behavior with the client's certificate
1. Add a <behaviors> section within the <system.serviceModel> section group, with the following
configuration.
<behaviors>
<endpointBehaviors>
<behavior>
<clientCredentials>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>
2. Add the <serviceCertificate> element to the <clientCredentials> element, with the following
configuration.
<serviceCertificate>
<authentication revocationMode="NoCheck" />
</serviceCertificate>
3. Add the <clientCertificate> element to the <clientCredentials> element, with the following
configuration.
4. Locate the <client> section, and add the following <identity> configuration to the <endpoint>
element.
<identity>
<certificateReference storeLocation="LocalMachine" storeName="TrustedPeople"
x509FindType="FindBySubjectName" findValue="Server"/>
</identity>
Note: The <identity> element contains the information about the service's certificate. The
client uses this configuration to verify that it is connected to the correct service.
10. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. Click Close in the Developers License dialog
box.
Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.
11. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
12. To start the client app without debugging, press Ctrl+F5.
13. If you are prompted to allow the app to run in the background, click Allow.
14. Display the app bar by right-clicking or by swiping from the bottom of the screen.
15. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
16. Wait for the app to show a list of flights from Seattle to New York.
17. Click Purchase this trip.
25. Go back to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012. The code execution
breaks, and the line with the breakpoint is highlighted in yellow.
26. On the Debug menu, click Quick Watch.
27. In the QuickWatch dialog box, in the Expression combo box, enter
ServiceSecurityContext.Current.PrimaryIdentity, and then click Reevaluate.
29. To close the QuickWatch dialog box, click Close and then press F5 to continue.
30. Go back to the 20487B-SEA-DEV-C virtual machine, to the client app.
31. To close the confirmation message, click Close, and then close the client app.
32. Go back to the 20487B-SEA-DEV-A virtual machine, and close the service host console window to
stop debugging the service.
Results: After you complete the exercise, you will be able to start the client application and create a
reservation.
Notes