Contents

2. How to change Subnet and Virtual Network for Azure Virtual Machines (ASM & ARM)
3. Create a VNet with a Site-to-Site connection using the Azure classic portal
Trace.TraceInformation Method
Add a Site-to-Site connection to a VNet with an existing VPN gateway connection
VM Sizes
kind of shared storage like SQL), but most are not, and so normally we want to keep
every user attached to his designated server. If the user moves to another server, a
new session is started, and whatever session data the application was using is gone
(for example, the contents of a shopping cart). Here's a brief description of this
process:
1. The client sends a request to the site
2. ARR runs on the front-end Azure server and receives the request
3. ARR decides which of the available servers should serve the request
4. ARR forwards the request to the selected server, and crafts and attaches an ARRAffinity cookie to the response
5. The response comes back to the client, holding the ARRAffinity cookie
6. When the client receives the response, it stores the cookie for later use (browsers are designed to do this for cookies they receive from servers)
7. When the client submits a subsequent request, it includes the cookie in it
8. When ARR receives the request, it sees the cookie and decodes it
9. The decoded cookie holds the name of the instance that was used earlier, so ARR forwards the request to the same instance, rather than choosing one from the pool
10. The same thing (steps 7-9) repeats upon every subsequent request for the same site, until the user closes the browser, at which point the cookie is cleared
However, there are situations where keeping affinity is not desired. For example,
some users don't close their browser, and remain connected for extended periods of
time. When this happens, the affinity cookie remains in the browser, and this keeps
the user attached to his server for a period that could last hours, days or even more
(in theory, indefinitely!). Keeping your computer on and browser open is not
unusual, and many people (especially on their work-place computers) do it all the
time. In the real world, this leads to the distribution of users per instance falling out
of balance (that's a little like how the line behind some registers in the supermarket
can get hogged by a single customer, leading to others waiting in line longer than
they normally should). Depending on your applications and what they do, you may
care more or less about users being tied to their servers. If this is of little or
no importance and you'd rather disable this affinity and opt for better load
balancing, we have introduced the ability for you to control it. Because the affinity is
controlled by an affinity cookie, all you have to do to disable affinity is make sure
that Azure doesn't give the cookies out. If it doesn't, subsequent requests by the
user will be treated as new, and instead of trying to route them to their server,
ARR will use its normal load-balancing behavior to route the request to the best
server. You can control this in two places:
1. In your application
2. In a site configuration
To control this behavior in an application, you need to write code to send out a
special HTTP header, which tells the Application Request Router to remove the
affinity cookie. This header is Arr-Disable-Session-Affinity, and if you set it to
true, ARR will strip out the cookie. For example, you could add a line similar to this
to your application's code:
headers.Add("Arr-Disable-Session-Affinity", "True");
* This example is for C#, but this could just as easily be done in any other language
or platform. Setting this in the application's code is suitable for situations where
you DO want affinity to be kept for the most part, and only reset on specific
application pages. If, however, you prefer to have it completely disabled, you could
have ARR always remove the cookie by having IIS itself inject that header directly.
This is done with a customHeaders configuration section in web.config. Simply
add the following into your web.config, and upload it to the root of the site:
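A minimal sketch of such a customHeaders section (assuming the standard IIS system.webServer schema; merge it into your existing web.config rather than replacing the file):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Tells ARR not to set (and to strip) the affinity cookie -->
        <add name="Arr-Disable-Session-Affinity" value="true" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```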
Keep in mind, though, that the configuration in web.config is sensitive, and a badly
formatted file can stop the site from working properly. If you haven't had a chance
to work with web.config files before, read this getting-started guide.
Troubleshooting
If you intend on implementing this, you might wonder how to confirm it's working
and troubleshoot it. The ARR Affinity cookie is normally included with the first
response from any Azure Web Sites web site, and subsequently included with any
request sent from the client and response received from the server. To see it in
action, you can use any of a number of HTTP troubleshooting and diagnostic tools.
Here is a list of some of the more popular options:
1. Fiddler
2. HTTPWatch
3. Network Monitor
4. Wireshark
5. Firebug
You can find info about several other tools here. The first one on the list, Fiddler, is
one of the most popular, because it can interact with any browser, and is available
for free. Once Fiddler is installed, it will record any URL you browse to, and you can
then click on the Inspectors tab for either the request or response to see the details.
For example, in the HTTP Headers view you can see the affinity cookie sent by the
server using the Set-Cookie header.
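If you script your checks rather than eyeball them in a tool, the Set-Cookie header can also be parsed programmatically; a small Python sketch (the cookie value below is made up for illustration):

```python
from http.cookies import SimpleCookie

def has_arr_affinity(set_cookie_header: str) -> bool:
    """Return True if a Set-Cookie header value carries an ARRAffinity cookie."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    return "ARRAffinity" in cookie

# Made-up example of the header Azure sends on the first response:
header = "ARRAffinity=0123456789abcdef; Path=/; Domain=contoso.azurewebsites.net"
print(has_arr_affinity(header))               # True
print(has_arr_affinity("theme=dark; Path=/")) # False
```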
If you add the Arr-Disable-Session-Affinity header to disable the affinity cookie, ARR will
not set the cookie, but it will also remove the Arr-Disable-Session-Affinity header
itself, so if your process is working correctly, you will see neither. If you see both the
cookie AND the header, this means that something is wrong with the way you set
the header; possibly there is an error in the text of the header name or its value. If you see
the cookie and not the header, this probably means your changes to web.config are
invalid, or your header-injection code is not working, and you could try to confirm it
by adding another, unrelated header. Generally speaking, it's easier to set the
headers with web.config than with code, so in case of doubt, you should start by
simplifying it to reduce the surface area of your investigation. In closing, we should
mention that disabling the affinity is not something that should be taken lightly. For
static content, it would rarely be an issue, but if you're running applications, and
they are not designed to deal with users jumping from one server to another, it
might not end well. For scenarios where the affinity has led to imbalance, this new
ability will come as great news.
$NIC.IpConfigurations[0].Subnet.Id = $Subnet2.Id
Set-AzureRmNetworkInterface -NetworkInterface $NIC
Once you have done this operation, you need to commit the change to the Azure
ARM network provider using the cmdlet Set-AzureRmNetworkInterface. Please
note that the execution of this cmdlet takes about two minutes, at least in my
scenario with an A3 VM type. Why isn't the operation immediate? Because once you
commit the change, Azure will automatically restart the VM. You can execute this
procedure while the VM is running, but at a certain point you will have a service
downtime.
To verify that the change completed, re-acquire the NIC object reference and access
the private IP address or subnet properties, as in the example below:
$NIC = Get-AzureRmNetworkInterface -Name $NICname -ResourceGroupName $RGname
$NIC.IpConfigurations[0].PrivateIpAddress # 10.1.1.4 -> 10.1.2.4
There is one last aspect I want to mention here: what happens if you have more than
one VM in an Availability Set and you want to move one or more VMs to a different
subnet? No problem: I tested this scenario, and you can have one (or more) VMs in
one subnet S1 and one (or more) VMs in a different subnet S2, provided that S1 and
S2 are in the same VNET.
75ca862d34c1/resourceGroups/igorrg7/providers/Microsoft.Network/networkInterfac
es/nic4-igor-vm1/ipConfigurations/ipconfig1 is not in the same Virtual Network as
the subnets of other VMs in the availability set.
StatusCode: 400
ReasonPhrase: Bad Request
OperationID :
At line:1 char:1
+ Update-AzureRmVM -VM $VirtualMachine -ResourceGroupName $rgname
+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~
+ CategoryInfo
: CloseError: (:) [Update-AzureRmVM],
ComputeCloudException
+ FullyQualifiedErrorId :
Microsoft.Azure.Commands.Compute.UpdateAzureVMCommand
Another very important property you cannot change once the VM is created is the
Availability Set, so you should think about it carefully. Finally, please note that
trying to move a VM to a different VNET will give you exactly the same error as
above, even if there is only one VM in the Availability Set.
That's all I wanted to share with you on this topic. If you want, you can also follow
me on Twitter (@igorpag). Best regards.
You can import and export network configuration settings contained in your network
configuration file by using PowerShell or the Management Portal. The instructions
below will help you export and import using the Management Portal.
To export your network settings
When you export, all of the settings for the virtual networks in your subscription will
be written to an .xml file.
1. Log into the Management Portal.
2. In the Management Portal, on the bottom of the networks page, click
Export.
3. On the Export network configuration window, verify that you have
selected the subscription for which you want to export your network settings.
Then, click the checkmark on the lower right.
4. When you are prompted, save the NetworkConfig.xml file to the location of
your choice.
To import your network settings
1. In the Management Portal, in the navigation pane on the bottom left, click
New.
2. Click Network Services -> Virtual Network -> Import Configuration.
3. On the Import the network configuration file page, browse to your
network configuration file, and then click the next arrow.
4. On the Building your network page, you'll see information on the screen
showing which sections of your network configuration will be changed or
created. If the changes look correct to you, click the checkmark to proceed to
update or create your virtual network.
4
Azure provides cloud storage that is highly available and scalable. The underlying
storage system for Azure is provided through a set of services, including the Blob,
Table, Queue, and File services. The Azure Table service is designed for storing
structured data. The Azure Storage service supports an unlimited number of tables,
and each table can scale to massive levels, providing terabytes of physical storage.
To take best advantage of tables, you will need to partition your data optimally. This
article explores strategies that allow you to efficiently partition data for Azure Table
storage.
RowKey: The RowKey property stores string values that uniquely identify
entities within each partition. The PartitionKey and the RowKey together form the
primary key for the entity.
The clustered index sorts by the PartitionKey in ascending order and then by
RowKey in ascending order. The sort order is observed in all query responses.
Lexical comparisons are used during the sorting operation. Therefore, a string value
of "111" will appear before a string value of "2". In some cases, you may want the
order to be numeric. To sort in a numeric and ascending order, you will need to use
fixed-length, zero-padded strings. In the previous example, using "002" will allow it
to appear before "111".
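The lexical-versus-padded behavior is easy to see in a few lines of Python (any language that sorts strings lexically behaves the same way):

```python
# Lexical comparison: "111" sorts before "2" because '1' < '2'
keys = ["111", "2"]
print(sorted(keys))                  # ['111', '2']

# Fixed-length, zero-padded strings restore numeric order
padded = [k.zfill(3) for k in keys]  # "2" becomes "002"
print(sorted(padded))                # ['002', '111']
```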
4.3 Table Partitions
Partitions represent a collection of entities with the same PartitionKey values.
Partitions are always served from one partition server, and each partition server can
serve one or more partitions. A partition server has a rate limit on the number of
entities it can serve from one partition over time. Specifically, a partition has a
scalability target of 500 entities per second. This throughput may be higher during
minimal load on the storage node, but it will be throttled down when the node
becomes hot or very active. To better illustrate the concept of partitioning, the
following table shows a small subset of data for footrace event registrations. It
presents a conceptual view of partitioning where the PartitionKey contains three
different values composed of the event's name and distance. In this example, there
are two partition servers: Server A contains registrations for the half-marathon and
10-km distances, while Server B contains only the full-marathon distances. The
RowKey values are shown to provide context but are not meaningful for this
example.
| PartitionKey | RowKey |
|---|---|
| … | "0001" |
| … | "0002" |
| … | "0003" |
| … | "0004" |
| … | "0005" |
| … | "0006" |
Azure may group the first three entities into a range partition. If you apply a range
query to this table that uses the PartitionKey as the criteria and requests entities
from "0001" to "0003", the query may perform efficiently because they will be
served from a single partition server. There is no guarantee when and how a range
partition will be created.
The existence of range partitions for your table can affect the performance of your
insert operations if you are inserting entities with increasing or decreasing
PartitionKey values. Inserting entities with increasing PartitionKey values is called an
Append Only pattern, and inserting with decreasing values is called a Prepend Only
pattern. You should consider not using such patterns, because the overall throughput
of your insert requests will be limited by a single partition server. This is because, if
range partitions exist, the first and last (range) partitions will contain the least
and greatest PartitionKey values, respectively. Therefore, the insert of a new entity
with a sequentially lower or higher PartitionKey value will target one of the end
partitions. The following figure shows a possible set of range partitions based on the
previous example. If a set of "0007", "0008" and "0009" entities were inserted, they
would be assigned to the last (orange) partition.
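One common way to break the Append Only pattern, if your application otherwise generates sequential keys, is to prefix each key with a stable hash bucket so that consecutive inserts spread across partitions. A hypothetical sketch in Python (the bucket count and key layout are arbitrary illustrative choices, not from this article):

```python
import hashlib

def spread_key(sequential_id: int, buckets: int = 16) -> str:
    """Prefix a sequential id with a stable hash bucket so consecutive
    inserts no longer target a single end (range) partition."""
    digest = hashlib.md5(str(sequential_id).encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % buckets
    # Zero-pad both parts so lexical order stays predictable within a bucket
    return f"{bucket:02d}__{sequential_id:06d}"

# Consecutive ids 7, 8, 9 now map to keys with differing prefixes
print([spread_key(i) for i in (7, 8, 9)])
```

The trade-off is that range queries over contiguous ids now span multiple partitions, so this only makes sense when insert throughput matters more than range scans.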
| PartitionKey Granularity | Partition Size | Advantages | Disadvantages |
|---|---|---|---|
| Single value | Small number of entities | Batch transactions are possible with any entity. | Throughput is limited to the performance of a single server. |
| Single value | Large number of entities | Batch transactions are possible with any entity. See http://msdn.microsoft.com/library/dd894038.aspx for more information on the limits of entity group transactions. | Throughput is limited to the performance of a single server. |
| Multiple values | There are multiple partitions. Partition sizes depend on entity distribution. | Dynamic partitioning is possible. Single-request queries are possible (no continuation tokens). | A highly uneven distribution of entities across partitions may limit the performance of the larger and more active partitions. |
| Unique values | There are many small partitions. | … | Queries that involve ranges may require visits to more than one server. Batch transactions are not possible. Append-only or prepend-only patterns can affect insert throughput. |
This table shows how scaling is affected by the PartitionKey values. It is a best
practice to favor smaller partitions, because they offer better load balancing. Larger
partitions may be appropriate in some scenarios, and are not necessarily
disadvantageous.
| Query Type | PartitionKey Match | RowKey Match | Performance Rating |
|---|---|---|---|
| Point | Exact | Exact | Best |
| Row range scan | Exact | Partial | Better |
| Partition range scan | Partial | Partial | Good |
| Full table scan | Partial, none | Partial, none | Worst |
4.4.2.1.1 Note
The table defines performance ratings relative to each other. The number and size
of the partitions may ultimately dictate how the query performs. For example, a
partition range scan for a table with many large partitions may perform poorly
compared to a full table scan for a table with a few small partitions.
The query types listed in this table show a progression from the best types of
queries to the worst, based on their performance ratings. Point queries are the
best type of query to use because they fully use the table's clustered index. The
following point query uses the data from the footrace registration table:
http://<account>.table.core.windows.net/registrations(PartitionKey='2011 New York City Marathon__Full',RowKey='1234__John__M__55')
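Such an entity address must be percent-encoded before it goes on the wire. A Python sketch of composing the point-query URI (the account and table names are placeholders):

```python
from urllib.parse import quote

def point_query_uri(account: str, table: str,
                    partition_key: str, row_key: str) -> str:
    """Compose the URI that addresses a single entity by its full primary key."""
    keys = f"(PartitionKey='{partition_key}',RowKey='{row_key}')"
    # Keep the OData punctuation literal; encode spaces and other characters
    return f"http://{account}.table.core.windows.net/{table}" + quote(keys, safe="(),='")

uri = point_query_uri("myaccount", "registrations",
                      "2011 New York City Marathon__Full",
                      "1234__John__M__55")
print(uri)  # spaces in the PartitionKey are encoded as %20
```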
If the application uses multiple queries, not all of them can be point queries. In
terms of performance, range queries follow point queries. There are two types of
range queries: the row range scan and the partition range scan. The row range scan
specifies a single partition. Because the operation occurs on a single partition
server, row range scans are generally more efficient than partition range scans.
However, one key factor in the performance of row range scans is how selective a
query is. Query selectivity dictates how many rows must be iterated to find the
matching rows. More selective queries are more efficient during row range scans.
To assess the priorities of your queries, you need to consider the frequency and
response time requirements for each query. Queries that are frequently executed
may be prioritized higher. However, an important but rarely used query may have
low latency requirements that could rank it higher on the priority list.
that the partitions are distributed across many partition servers. If a query crosses a
server boundary, continuation tokens must be returned. Continuation tokens specify
the next PartitionKey or RowKey values that will retrieve the next set of data for the
query. In other words, continuation tokens represent at least one more request to
the service, which can degrade the overall performance of the query. Query
selectivity is another factor that can affect the performance of the query. Query
selectivity is a measure of how many rows must be iterated for each partition. The
more selective a query is, the more efficient it is at returning the desired rows. The
overall performance of range queries may depend on the number of partition
servers that must be touched or how selective the query is. You should also avoid
using the append-only or prepend-only patterns when inserting data into your table.
Using such patterns, even though they create many small partitions, can limit the
throughput of your insert operations. The append-only and prepend-only patterns
are discussed in the "Range Partitions" section.
4.5.3 Considering Queries
Knowing the queries that you will be using allows you to determine which
properties are important to consider for the PartitionKey. The properties that are
used in the queries are candidates for the PartitionKey. The following table provides
a general guideline for determining the PartitionKey.
| If the entity… | Action |
|---|---|
| … | … |
If there is more than one equally dominant query, you can insert the information
multiple times with the different RowKey values that you need. The secondary (or
tertiary, etc.) rows will be managed by your application. This pattern allows you to
satisfy the performance requirements of your queries. The following example uses
the data from the footrace registration example. It has two dominant queries:

1. Query by bib number
2. Query by age

To serve both dominant queries, insert two rows as an entity group transaction.
The following table shows the PartitionKey and RowKey properties for this
scenario. The RowKey values provide a prefix for the bib and age to enable the
application to distinguish between the two values.
| PartitionKey | RowKey |
|---|---|
| … | "BIB:01234__John__M__55" |
| … | "AGE:055__1234__John__M" |
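Generating the two rows from a single registration record can be sketched as follows (the helper name and the exact padding widths are inferred from the example values above, not prescribed by the article):

```python
def registration_rows(partition_key: str, bib: int, name: str,
                      gender: str, age: int):
    """Return the (PartitionKey, RowKey) pairs for the bib-first and
    age-first rows; both share one PartitionKey, so they can be written
    in a single entity group transaction."""
    bib_row = f"BIB:{bib:05d}__{name}__{gender}__{age}"
    age_row = f"AGE:{age:03d}__{bib}__{name}__{gender}"
    return [(partition_key, bib_row), (partition_key, age_row)]

rows = registration_rows("2011 New York City Marathon__Full",
                         1234, "John", "M", 55)
print(rows[0][1])  # BIB:01234__John__M__55
print(rows[1][1])  # AGE:055__1234__John__M
```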
2. Load the test table with data so that it contains entities with the PartitionKey
that you will be targeting.
3. Use the application to simulate peak load to the table, and target a single
partition by using the PartitionKey from step 2. This step is different for every
application, but the simulation should include all the necessary queries and
4.7 Load Balancing
Load balancing at the partition layer occurs when a partition gets too hot, which
means the partition, specifically the partition server, is operating beyond its target
scalability. For Azure storage, each partition has a scalability target of 500 entities
per second. Load balancing also occurs at the Distributed File System (DFS) layer.
The load balancing at the DFS layer deals with I/O load and is outside the scope of
this article. Load balancing at the partition layer does not occur immediately after
the scalability target is exceeded. Instead, the system waits a few minutes before
beginning the load-balancing process. This ensures that a partition has truly become
hot. It is not necessary to prime partitions with generated load that triggers load
balancing, because the system will perform the task automatically. If a table was
primed with a certain load, the system may balance the partitions based on actual
load, which results in a very different distribution of the partitions. Instead of
priming partitions, you should consider writing code that handles the Timeout and
Server Busy errors. Such errors are returned when the system is performing load
balancing. By handling those errors using a retry strategy, your application can
better handle peak load. Retry strategies are discussed in more detail in the
following section. When load balancing occurs, the partition will be offline for a few
seconds. During the offline period, the system
Fixed Backoff: The operation is retried N times with a constant backoff value.
The No Retry strategy is a simple (and evasive) way to handle operation failures.
However, it is not very useful. Not imposing any retry attempts poses obvious
risks, with data not being stored correctly after failed operations. A better
strategy, therefore, is the Fixed Backoff strategy, which retries operations with
the same backoff duration. However, this strategy is not optimized for handling
highly scalable tables, because if many threads or processes are waiting for the
same duration, collisions can occur. The recommended retry strategy is one that
uses an exponential backoff, where each retry attempt is longer than the last. It
is similar to the collision avoidance (CA) algorithm used in computer networks,
such as Ethernet. The exponential backoff uses a random factor to provide
additional variance in the resulting interval. The backoff value is then constrained
to minimum and maximum limits. The following formula can be used for
calculating the next backoff value using an exponential algorithm:

y = Rand(0.8z, 1.2z) * (2^x - 1)
y = Min(zmin + y, zmax)

Where:
z = default backoff in milliseconds
zmin = default minimum backoff in milliseconds
zmax = default maximum backoff in milliseconds
x = the number of retries
y = the backoff value in milliseconds

The 0.8 and 1.2 multipliers used in the Rand (random) function produce a
random variance of the default backoff within 20% of the original value. The
20% range is acceptable for most retry strategies and prevents further
collisions. The formula can be implemented using the following code:
int retries = 1;
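As an illustration, the same formula can be sketched in Python (the default backoff values here are assumptions for the example, not values prescribed by the article):

```python
import random

def next_backoff_ms(retries: int,
                    z: float = 500.0,        # default backoff (ms), assumed
                    z_min: float = 100.0,    # minimum backoff (ms), assumed
                    z_max: float = 30_000.0  # maximum backoff (ms), assumed
                    ) -> float:
    """y = Rand(0.8z, 1.2z) * (2**x - 1), then z_min is added
    and the result is capped at z_max."""
    y = random.uniform(0.8 * z, 1.2 * z) * (2 ** retries - 1)
    return min(z_min + y, z_max)

# The interval grows with each retry until it reaches the cap
print([round(next_backoff_ms(x)) for x in range(1, 6)])
```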
5.1.1 Deployment models and methods for Site-to-Site connections
It's important to understand that Azure currently works with two deployment
models: Resource Manager and classic. Before you begin your configuration,
verify that you are using the instructions for the deployment model that you
want to work in. The two models are not completely compatible with each
other.
For example, if you are working with a virtual network that was created using
the classic deployment model and want to add a connection to the VNet,
you would use the deployment methods that correspond to the classic
deployment model, not Resource Manager. If you are working with a virtual
network that was created using the Resource Manager deployment model,
you would use the deployment methods that correspond with Resource
Manager, not classic.
For information about the deployment models, see Understanding Resource
Manager deployment and classic deployment.
The following table shows the currently available deployment models and
methods for Site-to-Site configurations. When an article with configuration
steps is available, we link directly to it from this table.

| Deployment Model/Method | Azure Portal | Classic Portal | PowerShell |
|---|---|---|---|
| Resource Manager | Article | Not Supported | Article |
| Classic | Supported** | Article* | Article+ |

(*) denotes that the classic portal can only support creating one S2S VPN connection.
(**) denotes that an end-to-end scenario is not yet available for the Azure portal.
(+) denotes that this article is written for multi-site connections.
5.1.1.1 Additional configurations
- A compatible VPN device and someone who is able to configure it. See About
VPN Devices. If you aren't familiar with configuring your VPN device, or are
unfamiliar with the IP address ranges located in your on-premises network
configuration, you need to coordinate with someone who can provide those
details for you.
- An externally facing public IP address for your VPN device. This IP address
cannot be located behind a NAT.
2. In the lower left corner of the screen, click New. In the navigation pane, click
Network Services, and then click Virtual Network. Click Custom Create to
begin the configuration wizard.
3. To create your VNet, enter your configuration settings on the following pages:
- Name: Name your virtual network. For example, EastUSVNet. You'll use this
virtual network name when you deploy your VMs and PaaS instances, so you may
not want to make the name too complicated.
- DNS Servers: Enter the DNS server name and IP address, or select a
previously registered DNS server from the shortcut menu. This setting does not
create a DNS server. It allows you to specify the DNS servers that you want to use
for name resolution for this virtual network.
- Configure Site-To-Site VPN: Select the checkbox for Configure a site-to-site VPN.
- Name: The name you want to call your local (on-premises) network site.
- VPN Device IP Address: The public-facing IPv4 address of your on-premises
VPN device that you use to connect to Azure. The VPN device cannot be located
behind a NAT.
- Address Space: Include Starting IP and CIDR (Address Count). You specify
the address range(s) that you want to be sent through the virtual network gateway.
- Add address space: If you have multiple address ranges that you want to
be sent through the virtual network gateway, specify each additional address
range. You can add or remove ranges later on the Local Network page.
- Address Space: Include Starting IP and Address Count. Verify that the
address spaces you specify don't overlap any of the address spaces that you
have on your on-premises network.
- Add subnet: Include Starting IP and Address Count. Additional subnets are
not required, but you may want to create a separate subnet for VMs that will have
static DIPs. Or you might want to have your VMs in a subnet that is separate from
your other role instances.
- Add gateway subnet: Click to add the gateway subnet. The gateway subnet
is used only for the virtual network gateway and is required for this configuration.

Click the checkmark on the bottom of the page and your virtual network will
begin to be created. When it completes, you will see Created listed under
Status on the Networks page in the Azure Classic Portal. After the VNet has
been created, you can then configure your virtual network gateway.
5.7.1.1.1 Important
manage groups in Azure Active Directory, you can read more in Azure Active
Directory preview cmdlets for group management.
6.1.1.1.1 Note
To use Azure Active Directory, you need an Azure account. If you don't have
an account, you can sign up for a free Azure account.
Within Azure AD, one of the major features is the ability to manage access to
resources. These resources can be part of the directory, as in the case of
permissions to manage objects through roles in the directory, or resources
that are external to the directory, such as SaaS applications, Azure services,
SharePoint sites, or on-premises resources. There are four ways a user can
be assigned access rights to a resource:
1. Direct assignment
Users can be assigned directly to a resource by the owner of that resource.
2. Group membership
A group can be assigned to a resource by the resource owner, and by doing
so, the resource owner grants the members of that group access to the resource.
Membership of the group can then be managed by the owner of the group.
Effectively, the resource owner delegates the permission to assign users to their
resource to the owner of the group.
3. Rule-based
The resource owner can use a rule to express which users should be assigned
access to a resource. The outcome of the rule depends on the attributes used
in that rule and their values for specific users, and by doing so, the resource
owner effectively delegates the right to manage access to their resource to
the authoritative source for the attributes that are used in the rule. The
resource owner still manages the rule itself and determines which attributes
and values provide access to their resource.
4. External authority
The access to a resource is derived from an external source; for example, a
group that is synchronized from an authoritative source such as an on-premises
directory or a SaaS app such as Workday. The resource owner assigns the group
to provide access to the resource, and the external source manages the
members of the group.
The owner of a group can also make that group available for self-service
requests. In doing so, an end user can search for and find the group and
make a request to join, effectively seeking permission to access the
resources that are managed through the group. The owner of the group can
set up the group so that join requests are approved automatically or require
approval by the owner of the group. When a user makes a request to join a
group, the join request is forwarded to the owners of the group. If one of the
owners approves the request, the requesting user is notified and joined to
the group. If one of the owners denies the request, the requesting user is
notified but not joined to the group.
either some or all messages sent to the queue. The latter refers to the
publish/subscribe capability natively provided by Service Bus.
- Your messaging solution must be able to support the "At-Most-Once" delivery
guarantee without the need for you to build additional infrastructure
components.
- You would like to be able to publish and consume batches of messages.
- You require full integration with the Windows Communication Foundation (WCF)
communication stack in the .NET Framework.
Comparing Azure Queues and Service Bus queues
The tables in the following sections provide a logical grouping of queue features and
let you compare, at a glance, the capabilities available in both Azure Queues and
Service Bus queues.
Foundational capabilities
This section compares some of the fundamental queuing capabilities provided by
Azure Queues and Service Bus queues.
| Comparison Criteria | Azure Queues | Service Bus Queues |
|---|---|---|
| Ordering guarantee | No | Yes - First-In-First-Out (FIFO) |
| Delivery guarantee | At-Least-Once | At-Least-Once; At-Most-Once |
| Atomic operation support | No | Yes |
| Receive behavior | Non-blocking (completes immediately if no new message is found) | Blocking with/without timeout |
| Push-style API | No | Yes (OnMessage and OnMessage sessions .NET API) |
| Exclusive access mode | Lease-based | Lock-based |
| Lease/Lock duration | 30 seconds (default) | 60 seconds (default) |
| Lease/Lock precision | Message level | Queue level |
| Batched receive | Yes | Yes |
| Batched send | No | Yes (through the use of transactions or client-side batching) |
Additional information
Messages in Azure Queues are typically first-in, first-out, but they can sometimes be
delivered out of order; for example, when a message's visibility timeout expires
(such as when a client application crashes during processing). When the
visibility timeout expires, the message becomes visible again on the queue for
another worker to dequeue it. At that point, the newly visible message might be
placed in the queue (to be dequeued again) after a message that was originally
enqueued after it.
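The visibility-timeout behavior described above can be sketched with the legacy Azure Storage .NET client (Microsoft.WindowsAzure.Storage); the queue name "orders" and the development-storage connection string below are placeholders, not from the article:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class VisibilityTimeoutSketch
{
    static void Main()
    {
        // Placeholder connection string; point this at a real storage account.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExists();

        // Dequeue a message and hide it from other consumers for 30 seconds.
        CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromSeconds(30));
        if (msg != null)
        {
            // If this worker crashes before DeleteMessage is called, the message
            // becomes visible again after 30 seconds and may then be delivered
            // behind newer messages - which is why ordering is only best-effort.
            Console.WriteLine(msg.AsString);
            queue.DeleteMessage(msg);
        }
    }
}
```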
If you are already using Azure Storage Blobs or Tables and you start using queues,
you are guaranteed 99.9% availability. If you use Blobs or Tables with Service Bus
queues, you will have lower availability.
The guaranteed FIFO pattern in Service Bus queues requires the use of messaging
sessions. In the event that the application crashes while processing a message
received in the Peek & Lock mode, the next time a queue receiver accepts a
messaging session, it will start with the failed message after its time-to-live (TTL)
period expires.
Azure Queues are designed to support standard queuing scenarios, such as
decoupling application components to increase scalability and tolerance for failures,
load leveling, and building process workflows.
Service Bus queues support the At-Least-Once delivery guarantee. In addition, the
At-Most-Once semantic can be supported by using session state to store the
application state and by using transactions to atomically receive messages and
update the session state.
Azure Queues provide a uniform and consistent programming model across queues,
tables, and BLOBs both for developers and for operations teams.
Service Bus queues provide support for local transactions in the context of a single
queue.
The Receive and Delete mode supported by Service Bus provides the ability to
reduce the messaging operation count (and associated cost) in exchange for
lowered delivery assurance.
Azure Queues provide leases with the ability to extend the leases for messages.
This allows the workers to maintain short leases on messages. Thus, if a worker
crashes, the message can be quickly processed again by another worker. In
addition, a worker can extend the lease on a message if it needs to process it longer
than the current lease time.
Azure Queues offer a visibility timeout that you can set upon the enqueueing or
dequeuing of a message. In addition, you can update a message with different lease
values at run-time, and update different values across messages in the same
queue. Service Bus lock timeouts are defined in the queue metadata; however, you
can renew the lock by calling the RenewLock method.
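As a rough illustration of the two lease/lock models just described, the following sketch uses the legacy Storage and Service Bus clients; queue names and connection strings are placeholders and error handling is omitted:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class LeaseRenewalSketch
{
    static void Main()
    {
        // Azure Storage queue: take a short 30-second lease, then extend it
        // in place when processing turns out to need more time.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromSeconds(30));
        if (msg != null)
        {
            queue.UpdateMessage(msg, TimeSpan.FromMinutes(2), MessageUpdateFields.Visibility);
            queue.DeleteMessage(msg);
        }

        // Service Bus queue: the lock duration comes from the queue metadata,
        // but a held lock can be renewed per message with RenewLock.
        string conn = "<your Service Bus connection string>"; // placeholder
        QueueClient client = QueueClient.CreateFromConnectionString(conn, "orders");
        BrokeredMessage locked = client.Receive(TimeSpan.FromSeconds(10));
        if (locked != null)
        {
            locked.RenewLock();   // extends the lock by the queue's LockDuration
            locked.Complete();
        }
    }
}
```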
The maximum timeout for a blocking receive operation in Service Bus queues is 24
days. However, REST-based timeouts have a maximum value of 55 seconds.
Client-side batching provided by Service Bus enables a queue client to batch
multiple messages into a single send operation. Batching is only available for
asynchronous send operations.
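A minimal sketch of explicit and implicit batched sends with the legacy Microsoft.ServiceBus.Messaging client; the connection string and queue name "orders" are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

class BatchedSendSketch
{
    static void Main()
    {
        string conn = "<your Service Bus connection string>"; // placeholder
        var factory = MessagingFactory.CreateFromConnectionString(conn);
        QueueClient client = factory.CreateQueueClient("orders");

        // Explicit batching: hand the client several messages in one send call.
        var batch = new List<BrokeredMessage>
        {
            new BrokeredMessage("order-1"),
            new BrokeredMessage("order-2"),
        };
        client.SendBatch(batch);

        // Implicit client-side batching applies only to asynchronous sends:
        // messages issued within the flush interval are grouped on the wire.
        client.SendAsync(new BrokeredMessage("order-3")).Wait();
    }
}
```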
Features such as the 200 TB ceiling of Azure Queues (more when you virtualize
accounts) and unlimited queues make it an ideal platform for SaaS providers.
Azure Queues provide a flexible and performant delegated access control
mechanism.
Advanced capabilities
This section compares advanced capabilities provided by Azure Queues and Service
Bus queues.
Comparison Criteria | Azure Queues | Service Bus Queues
Scheduled delivery | Yes | Yes
Automatic dead lettering | No | Yes
Increasing queue time-to-live value | Yes (via in-place update of visibility timeout) | Yes
Poison message support | Yes | Yes
In-place update | Yes | Yes
Server-side transaction log | Yes | No
Storage metrics | Yes - Minute Metrics provides real-time metrics for availability, TPS, API call counts, error counts, and more, aggregated per minute and reported within a few minutes of what just happened in production (for more information, see About Storage Analytics Metrics) | Yes
State management | No | Yes (Microsoft.ServiceBus.Messaging.EntityStatus.Active, Disabled, SendDisabled, ReceiveDisabled)
Message auto-forwarding | No | Yes
Purge queue function | Yes | No
Message groups | No | Yes (through the use of messaging sessions)
Application state per message group | No | Yes
Duplicate detection | No | Yes (configurable on the sender side)
WCF integration | No | Yes
WF integration | Custom (requires building a custom WF activity) | Native
Browsing message groups | No | Yes
Fetching message sessions by ID | No | Yes
Additional information
Both queuing technologies enable a message to be scheduled for delivery at a later
time.
Queue auto-forwarding enables thousands of queues to auto-forward their
messages to a single queue, from which the receiving application consumes the
messages. You can use this mechanism to achieve security and flow control, and to
isolate storage between each message publisher.
Azure Queues provide support for updating message content. You can use this
functionality for persisting state information and incremental progress updates into
the message so that it can be processed from the last known checkpoint, instead of
starting from scratch. With Service Bus queues, you can enable the same scenario
through the use of message sessions. Sessions enable you to save and retrieve the
application processing state (by using SetState and GetState).
Dead lettering, which is only supported by Service Bus queues, can be useful for
isolating messages that cannot be processed successfully by the receiving
application or when messages cannot reach their destination due to an expired
time-to-live (TTL) property. The TTL value specifies how long a message remains in
the queue. With Service Bus, the message will be moved to a special queue called
$DeadLetterQueue when the TTL period expires.
To find "poison" messages in Azure Queues, the application examines the
DequeueCount property when dequeuing a message. If DequeueCount is above a
given threshold, the application moves the message to an application-defined
"dead letter" queue.
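That DequeueCount check might look like the following sketch (legacy Storage client; the threshold, queue names, and connection string are application-defined placeholders):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class PoisonMessageSketch
{
    const int MaxDequeueCount = 5;   // application-defined threshold

    static void Main()
    {
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("orders");
        CloudQueue poisonQueue = client.GetQueueReference("orders-poison");
        queue.CreateIfNotExists();
        poisonQueue.CreateIfNotExists();

        CloudQueueMessage msg = queue.GetMessage();
        if (msg != null && msg.DequeueCount > MaxDequeueCount)
        {
            // Sidestep a message that keeps failing by copying it to an
            // application-defined "dead letter" queue and deleting the original.
            poisonQueue.AddMessage(new CloudQueueMessage(msg.AsString));
            queue.DeleteMessage(msg);
        }
    }
}
```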
Azure Queues enable you to obtain a detailed log of all of the transactions executed
against the queue, as well as aggregated metrics. Both of these options are useful
for debugging and understanding how your application uses Azure Queues. They are
also useful for performance-tuning your application and reducing the costs of using
queues.
The concept of "message sessions" supported by Service Bus enables messages
that belong to a certain logical group to be associated with a given receiver, which
in turn creates a session-like affinity between messages and their respective
receivers. You can enable this advanced functionality in Service Bus by setting the
SessionID property on a message. Receivers can then listen on a specific session ID
and receive messages that share the specified session identifier.
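A minimal session sketch with the legacy Microsoft.ServiceBus.Messaging client, assuming a queue created with RequiresSession = true; the session ID, queue name, and connection string are hypothetical:

```csharp
using System;
using System.IO;
using Microsoft.ServiceBus.Messaging;

class SessionSketch
{
    static void Main()
    {
        string conn = "<your Service Bus connection string>"; // placeholder
        // The queue must have been created with RequiresSession = true.
        QueueClient client = QueueClient.CreateFromConnectionString(conn, "orders");

        // Sender: group related messages under one session identifier.
        client.Send(new BrokeredMessage("step-1") { SessionId = "customer-42" });
        client.Send(new BrokeredMessage("step-2") { SessionId = "customer-42" });

        // Receiver: accept that session; only its messages are delivered here.
        MessageSession session = client.AcceptMessageSession("customer-42");
        BrokeredMessage msg = session.Receive(TimeSpan.FromSeconds(10));
        msg.Complete();

        // Optionally persist per-session processing state with the session.
        session.SetState(new MemoryStream(BitConverter.GetBytes(1 /* checkpoint */)));
    }
}
```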
Comparison Criteria | Azure Queues | Service Bus Queues
Maximum queue size | 200 TB | 1 GB to 80 GB
Maximum message size | 64 KB | 256 KB or 1 MB
Maximum message TTL | 7 days | TimeSpan.MaxValue
Maximum number of queues | Unlimited | 10,000
Maximum number of concurrent clients | Unlimited | Unlimited (but 100 concurrent TCP connections per queue; no limit over REST)
Additional information
Service Bus enforces queue size limits. The maximum queue size is specified upon
creation of the queue and can have a value between 1 and 80 GB. If the queue size
value set on creation of the queue is reached, additional incoming messages will be
rejected and an exception will be received by the calling code. For more information
about quotas in Service Bus, see Service Bus Quotas.
You can create Service Bus queues in 1, 2, 3, 4, or 5 GB sizes (the default is 1 GB).
With partitioning enabled (which is the default), Service Bus creates 16 partitions for
each GB you specify. As such, if you create a queue that is 5 GB in size, with 16
partitions the maximum queue size becomes (5 * 16) = 80 GB. You can see the
maximum size of your partitioned queue or topic by looking at its entry on the Azure
portal.
With Azure Queues, if the content of the message is not XML-safe, then it must be
Base64 encoded. If you Base64-encode the message, the user payload can be up to
48 KB, instead of 64 KB.
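The 48 KB figure follows from Base64's 4/3 expansion: every 3 payload bytes become 4 encoded characters, so a 48 KB payload encodes to exactly 64 KB. A small self-contained check:

```csharp
using System;

class Base64OverheadSketch
{
    static void Main()
    {
        // Every 3 payload bytes become 4 Base64 characters (a 4/3 expansion),
        // so a 48 KB payload encodes to exactly 64 KB.
        byte[] payload = new byte[48 * 1024];
        string encoded = Convert.ToBase64String(payload);
        Console.WriteLine(encoded.Length / 1024); // 64
    }
}
```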
With Service Bus queues, each message stored in a queue is comprised of two
parts: a header and a body. The total size of the message cannot exceed the
maximum message size supported by the service tier.
When clients communicate with Service Bus queues over the TCP protocol, the
maximum number of concurrent connections to a single Service Bus queue is
limited to 100. This number is shared between senders and receivers. If this quota is
reached, subsequent requests for additional connections will be rejected and an
exception will be received by the calling code. This limit is not imposed on clients
connecting to the queues using REST-based API.
If you require more than 10,000 queues in a single Service Bus namespace, you can
contact the Azure support team and request an increase. To scale beyond 10,000
queues with Service Bus, you can also create additional namespaces using the
Azure portal.
Management and operations
This section compares the management features provided by Azure Queues and
Service Bus queues.
Comparison Criteria | Azure Queues | Service Bus Queues
Management protocol | REST over HTTP/HTTPS | REST over HTTPS
Runtime protocol | REST over HTTP/HTTPS | REST over HTTPS; AMQP 1.0 (TCP with TLS)
.NET Managed API | Yes | Yes (.NET managed brokered messaging API)
Native C++ | Yes | No
Java API | Yes | Yes
PHP API | Yes | Yes
Node.js API | Yes | Yes
Arbitrary metadata support | Yes | No
Queue naming rules | Up to 63 characters long | Up to 260 characters long
Get queue length function | Yes (approximate value) | Yes (exact, point-in-time value)
Peek function | Yes | Yes
Additional information
Azure Queues provide support for arbitrary attributes that can be applied to the
queue description, in the form of name/value pairs.
Both queue technologies offer the ability to peek a message without having to lock
it, which can be useful when implementing a queue explorer/browser tool.
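Peeking can be sketched with both legacy clients as follows; queue names and the connection string are placeholders:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class PeekSketch
{
    static void Main()
    {
        // Azure Storage queue: PeekMessage returns the next message without
        // dequeuing it, so it stays visible to other consumers.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        CloudQueue storageQueue = account.CreateCloudQueueClient().GetQueueReference("orders");
        CloudQueueMessage peeked = storageQueue.PeekMessage();
        Console.WriteLine(peeked?.AsString);

        // Service Bus queue: Peek browses messages in order without locking them.
        string conn = "<your Service Bus connection string>"; // placeholder
        QueueClient sbClient = QueueClient.CreateFromConnectionString(conn, "orders");
        BrokeredMessage browsed = sbClient.Peek();
        Console.WriteLine(browsed?.GetBody<string>());
    }
}
```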
The Service Bus .NET brokered messaging APIs leverage full-duplex TCP connections
for improved performance when compared to REST over HTTP, and they support the
AMQP 1.0 standard protocol.
Names of Azure queues can be 3-63 characters long and can contain lowercase
letters, numbers, and hyphens. For more information, see Naming Queues and
Metadata. Service Bus queue names can be up to 260 characters long and have less
restrictive naming rules: they can contain letters, numbers, periods, hyphens, and
underscores.
Authentication and authorization
This section discusses the authentication and authorization features supported by
Azure Queues and Service Bus queues.
Comparison Criteria | Azure Queues | Service Bus Queues
Authentication | Symmetric key | Symmetric key
Security model | Delegated access via SAS tokens | SAS
Identity provider federation | No | Yes
Additional information
Every request to either of the queuing technologies must be authenticated. Public
queues with anonymous access are not supported. Using SAS, you can address this
scenario by publishing a write-only SAS, read-only SAS, or even a full-access SAS.
The authentication scheme provided by Azure Queues involves the use of a
symmetric key, which is a hash-based Message Authentication Code (HMAC),
computed with the SHA-256 algorithm and encoded as a Base64 string. For more
information about the respective protocol, see Authentication for the Azure Storage
Services. Service Bus queues support a similar model using symmetric keys. For
more information, see Shared Access Signature Authentication with Service Bus.
Cost
This section compares Azure Queues and Service Bus queues from a cost
perspective.
Comparison Criteria | Azure Queues | Service Bus Queues
Queue transaction cost | $0.0036 | (see Service Bus pricing)
Billable operations | All | Send/Receive only (no charge for other operations)
Idle transactions | Billable | Billable
Storage cost (per GB/month) | $0.07 | $0.00
Outbound data transfer costs | $0.12 - $0.19 (depending on geography) | $0.12 - $0.19 (depending on geography)
Additional information
Data transfers are charged based on the total amount of data leaving the Azure
datacenters via the internet in a given billing period.
Data transfers between Azure services located within the same region are not
subject to charge.
As of this writing, inbound data transfers are free of charge.
Given the support for long polling, using Service Bus queues can be cost effective in
situations where low-latency delivery is required.
Note
All costs are subject to change. This table reflects current pricing and does not
include any promotional offers that may currently be available. For up-to-date
information about Azure pricing, see the Azure pricing page. For more information
about Service Bus pricing, see Service Bus pricing.
Conclusion
By gaining a deeper understanding of the two technologies, you will be able to
make a more informed decision on which queue technology to use, and when. The
decision on when to use Azure Queues or Service Bus queues clearly depends on a
number of factors. These factors may depend heavily on the individual needs of
your application and its architecture. If your application already uses the core
capabilities of Microsoft Azure, you may prefer to choose Azure Queues, especially if
you require basic communication and messaging between services or need queues
that can be larger than 80 GB in size.
Because Service Bus queues provide a number of advanced features, such as
sessions, transactions, duplicate detection, automatic dead-lettering, and durable
publish/subscribe capabilities, they can be a better choice if your application
requires those features.
Within the Azure Platform, there is a set of services named .NET Services. This set of services was
originally known as BizTalk.NET, and it includes the Workflow Services, the Access Control
Services, and the one we will talk about, the Service Bus.
The Service Bus implements the familiar Enterprise Service Bus pattern. In a nutshell, the service
bus allows for service location unawareness between the service and its consumer, along with a set of
other, rather important, capabilities. The Service Bus allows you to build composite applications
based on services whose location you really do not need to know. They could be on servers inside
your company, or on a server on the other side of the world; the location is irrelevant. There are,
nevertheless, important things you need to know about the service you are calling, namely, security.
The Access Control Service integrates seamlessly with the Service Bus to provide authentication and
authorization. The Access Control Service will be addressed in some other entry; for now we are
concentrating on the Service Bus.
The following diagrams depict different scenarios where it makes sense to use the Service Bus.
Depending on the Service Bus location, it can take a slightly different designation. If the Service Bus
is installed and working on-premises, it is commonly known as an ESB (Enterprise Service Bus); if it
is in the cloud, it takes the designation ISB (Internet Service Bus). It is still not clear what
Microsoft's intentions are regarding an on-premises offering of the Azure Platform. The following
diagram shows another possible scenario for using the Service Bus.
As I mentioned before, there are several other benefits associated with the use of the Service Bus that
can be leveraged by the configuration shown in this diagram. For instance, the Service Bus also
provides protocol mediation allowing use of non-standard bindings inside the enterprise (e.g.,
NetTcpBinding), and more standard protocols once a request is forwarded to the cloud (e.g.,
BasicHttpBinding).
Going back to our example, we are going to set up the publisher/subscriber scenario depicted in the
following diagram.
// Read the solution credentials to connect to the Service Bus. This type
// of credential is going to be deprecated; it just exists for
// convenience. In a real scenario one should use CardSpace, certificates,
// Live Services IDs, etc.
Console.ReadLine();
Notice that I chose the Tcp protocol as the connectivity mode. In the client side, I will specify the
Http protocol. This is to show that protocol mediation can be accomplished with the use of the
Service Bus.
9) Add an app.config file to the project
10) Add the following configuration to the app.config file
<system.serviceModel>
  <services>
    <service name="ESBServiceConsole.EchoService">
      <endpoint contract="ESBServiceConsole.IEchoContract"
                binding="netEventRelayBinding" />
    </service>
  </services>
</system.serviceModel>
11) Compile and run the service. Enter the solution credentials, and you should get the following:
// Read the solution credentials to connect to the Service Bus. This type
// of credential is going to be deprecated; it just exists for
// convenience. In a real scenario one should use CardSpace, certificates,
// Live Services IDs, etc.
Console.Write("Your Solution Name: ");
string solutionName = Console.ReadLine();
Console.Write("Your Solution Password: ");
string solutionPassword = Console.ReadLine();
userNamePasswordServiceBusCredential.CredentialType =
    TransportClientCredentialType.UserNamePassword;
userNamePasswordServiceBusCredential.Credentials.UserName.UserName =
    solutionName;
userNamePasswordServiceBusCredential.Credentials.UserName.Password =
    solutionPassword;
channel.Close();
channelFactory.Close();
<system.serviceModel>
  <client>
    <endpoint name="RelayEndpoint"
              contract="ESBClientConsole.IEchoContract"
              binding="netEventRelayBinding"/>
  </client>
</system.serviceModel>
8) Compile the client, run three instances of the service, enter the credentials, then run the client and
type some text; the result should be as follows.
There you have it: a publish/subscribe example using the Service Bus.
8.2 Tips
Always Deploy VNet First - You should always build your VNet before you
deploy your VM instance. If you do not, Azure will create a default VNet,
which may contain an address range that overlaps with your on-premises network.
Moving VMs between VNets - A VM can be moved from one subnet to
another within a VNet. However, to move a VM from one VNet to another VNet,
you must delete the VM and recreate it using the previous VHD.
8.4 Components
Below are the key components within Microsoft Azure Virtual Networks.
8.4.1 Subnets
A subnet is a range of IP addresses in the VNet. You can divide a VNet into
multiple subnets for organization and security. Additionally, you can apply
VNet routing tables and Network Security Groups (NSGs) to a subnet [2].
8.4.2 IP Addresses
There are 2 types of IP addresses that can be assigned to an Azure resource: Public or Private.
Private
Used for connectivity within a VNet, and also when using a VPN
gateway or ExpressRoute.
8.4.4 Load Balancing
Azure provides three different load-balancing solutions: Azure Load Balancer,
Application Gateway, and Traffic Manager.
8.5 VPN
There will be times when you need to encrypt your data when sending it
over the internet, or when you need to send traffic between 2 VNets. This is
where Azure VPNs come into play.
The amount of traffic and/or tunnels that your gateway can support is controlled
via a set of 3 SKUs; these SKUs are updated via PowerShell cmdlets.
Route Based - Traffic is routed via a tunnel interface. This interface then
encrypts or decrypts the packets that pass through the tunnel. Route-based VPNs are
[4]
1.
On the shortcut menu for the role that interests you, choose Properties, and
then choose the Configuration tab in the role's Properties window.
2.
Choose the ellipsis (...) button to specify the storage account where you
want the diagnostics data to be stored.
3.
If you choose the Your subscription option, you can choose the Azure
subscription you want to use and the account name. You can choose the
Manage Accounts button to manage your Azure subscriptions.
4.
The default tab, General, offers you the following diagnostics data collection
options: Errors only, All information, and Custom plan. The default option,
Errors only, takes the least amount of storage because it doesn't transfer
warnings or tracing messages. The All information option transfers the most
information and is, therefore, the most expensive option in terms of storage.
5.
For this example, select the Custom plan option so you can customize the
data collected.
6.
The Disk Quota in MB box specifies how much space you want to allocate in
your storage account for diagnostics data. You can change the default value if
you want.
7.
On each tab of diagnostics data you want to collect, select its Enable
Transfer of check box. For example, if you want to collect application logs, select
the Enable transfer of Application Logs check box on the Application Logs
tab. Also, specify any other information required by each diagnostics data type.
See the section Configure diagnostics data sources later in this topic for
configuration information on each tab.
8.
After you've enabled collection of all the diagnostics data you want, choose
the OK button.
9.
Run your Azure cloud service project in Visual Studio as usual. As you use
your application, the log information that you enabled is saved to the Azure
storage account you specified.
1.
In Server Explorer, choose the Azure node and then connect to your Azure
subscription, if you're not already connected.
2.
Expand the Virtual Machines node. You can create a new virtual machine,
or select one that's already there.
3.
On the shortcut menu for the virtual machine that interests you, choose
Configure. This shows the virtual machine configuration dialog box.
4.
If it's not already installed, add the Microsoft Monitoring Agent Diagnostics
extension. This extension lets you gather diagnostics data for the Azure
virtual machine. In the Installed Extensions list, choose the Select an available
extension drop-down menu and then choose Microsoft Monitoring Agent
Diagnostics.
8.8.1.1.1 Note
Other diagnostics extensions are available for your virtual machines. For more
information, see Azure VM Extensions and Features.
5.
Choose the Add button to add the extension and view its Diagnostics
configuration dialog box.
6.
The default tab, General, offers you the following diagnostics data collection
options: Errors only, All information, and Custom plan. The default
option, Errors only, takes the least amount of storage because it doesn't
transfer warnings or tracing messages. The All information option transfers
the most information and is, therefore, the most expensive option in terms of
storage.
7.
For this example, select the Custom plan option so you can customize the
data collected.
8.
The Disk Quota in MB box specifies how much space you want to allocate in
your storage account for diagnostics data. You can change the default value if
you want.
9.
On each tab of diagnostics data you want to collect, select its Enable
Transfer of check box.
For example, if you want to collect application logs, select the Enable
transfer of Application Logs check box on the Application Logs tab. Also,
specify any other information required by each diagnostics data type. See the
section Configure diagnostics data sources later in this topic for
configuration information on each tab.
10.
After you've enabled collection of all the diagnostics data you want, choose
the OK button.
11.
You'll see a message in the Microsoft Azure Activity Log window that the
virtual machine has been updated.
See Enable diagnostics logging for web apps in Azure App Service for more
information about application logs.
8.9.2 Windows event logs
If you want to capture Windows event logs, select the Enable transfer of
Windows Event Logs check box. You can increase or decrease the number
of minutes between transfers of the event logs to your storage account by
changing the Transfer Period (min) value. Select the check boxes for the
types of events that you want to track.
If you're using Azure SDK 2.6 or later and want to specify a custom data
source, enter it in the text box and then choose the Add button next to it.
The data source is added to the diagnostics.wadcfgx file.
If you're using Azure SDK 2.5 and want to specify a custom data source, you
can add it to the WindowsEventLog section of the diagnostics.wadcfgx file,
such as in the following example.
<WindowsEventLog scheduledTransferPeriod="PT1M">
<DataSource name="Application!*" />
<DataSource name="CustomDataSource!*" />
</WindowsEventLog>
8.9.3 Performance counters
To track a performance counter that isn't listed, enter it by using the
suggested syntax and then choose the Add button. The operating system on
the virtual machine determines which performance counters you can track.
For more information about syntax, see Specifying a Counter Path.
8.9.4 Infrastructure logs
If you want to capture infrastructure logs, which contain information about
the Azure diagnostic infrastructure, the RemoteAccess module, and the
RemoteForwarder module, select the Enable transfer of Infrastructure
Logs check box. You can increase or decrease the number of minutes between
transfers of the logs to your storage account by changing the Transfer
Period (min) value.
See Collect Logging Data by Using Azure Diagnostics for more information.
8.9.5 Log directories
If you want to capture log directories, which contain data collected from log
directories for Internet Information Services (IIS) requests, failed requests, or
folders that you choose, select the Enable transfer of Log Directories
check box. You can increase or decrease the number of minutes between
transfers of the logs to your storage account by changing the Transfer
Period (min) value.
You can select the boxes of the logs you want to collect, such as IIS Logs
and Failed Request Logs. Default storage container names are provided,
but you can change the names if you want.
Also, you can capture logs from any folder. Just specify the path in the Log
from Absolute Directory section and then choose the Add Directory
button. The logs will be captured to the specified containers.
8.9.6 ETW logs
If you use Event Tracing for Windows (ETW) and want to capture ETW logs,
select the Enable transfer of ETW Logs check box. You can increase or
decrease the number of minutes between transfers of the logs to your
storage account by changing the Transfer Period (min) value.
The events are captured from event sources and event manifests that you
specify. To specify an event source, enter a name in the Event Sources
section and then choose the Add Event Source button. Similarly, you can
specify an event manifest in the Event Manifests section and then choose
the Add Event Manifest button.
The ETW framework is supported in ASP.NET through classes in the
System.Diagnostics namespace
(https://msdn.microsoft.com/library/system.diagnostics(v=vs.110)).
The Microsoft.WindowsAzure.Diagnostics namespace, which
inherits from and extends standard System.Diagnostics classes,
enables the use of System.Diagnostics as a logging framework in the Azure
environment. For more information, see Take Control of Logging and Tracing
in Microsoft Azure and Enabling Diagnostics in Azure Cloud Services and
Virtual Machines.
8.9.7 Crash dumps
If you want to capture information about when a role instance crashes, select
the Enable transfer of Crash Dumps check box. (Because ASP.NET
handles most exceptions, this is generally useful only for worker roles.) You
can increase or decrease the percentage of storage space devoted to the
crash dumps by changing the Directory Quota (%) value. You can change
the storage container where the crash dumps are stored, and you can select
whether you want to capture a Full or Mini dump.
The processes currently being tracked are listed. Select the check boxes for
the processes that you want to capture. To add another process to the list,
enter the process name and then choose the Add Process button.
See Take Control of Logging and Tracing in Microsoft Azure, Microsoft
Azure Diagnostics Part 4: Custom Logging Components, and Azure
Diagnostics 1.3 Changes for more information.
8.10.1 View the diagnostics data
1.
You can view the diagnostics data in either a report that Visual Studio
generates or tables in your storage account. To view the data in a report, open
Cloud Explorer or Server Explorer, open the shortcut menu of the node for
the role that interests you, and then choose View Diagnostic Data.
If the most recent data doesn't appear, you might have to wait for the transfer
period to elapse.
Choose the Refresh link to immediately update the data, or choose an
interval in the Auto-Refresh dropdown list box to have the data updated
automatically. To export the error data, choose the Export to CSV button to
create a comma-separated value file you can open in a spreadsheet.
2.
In Cloud Explorer or Server Explorer, open the storage account that's
associated with the deployment.
3.
Open the diagnostics tables in the table viewer, and then review the data
that you collected. For IIS logs and custom logs, you can open a blob
container. By reviewing the following table, you can find the table or blob
container that contains the data that interests you. In addition to the data for
that log file, the table entries contain EventTickCount, DeploymentId, Role,
and RoleInstance to help you identify which virtual machine and role generated
the data and when.
Diagnostic data | Description | Location
Application logs | | WADLogsTable
Event logs | | WADWindowsEventLogsTable
Performance counters | | WADPerformanceCountersTable
Infrastructure logs | | WADDiagnosticInfrastructureLogsTable
IIS logs | If your service gets a significant amount of traffic, these logs can be quite lengthy, so you should collect and store this data only when you need it. | wad-iis-logfiles blob container
Crash dumps | This information provides binary images of your cloud service's process (typically a worker role). | wad-crash-dumps blob container
Custom log files | | The storage location you configured
4.
If data of any type is truncated, you can try increasing the buffer for that data
type or shortening the interval between transfers of data from the virtual
machine to your storage account.
5.
(Optional) Purge data from the storage account occasionally to reduce overall
storage costs.
6.
When you do a full deployment, the diagnostics.cscfg file (.wadcfgx for Azure
SDK 2.5) is updated in Azure, and your cloud service picks up any changes to
your diagnostics configuration. If, instead, you update an existing deployment,
the .cscfg file isn't updated in Azure. You can still change diagnostics settings,
though, by following the steps in the next section. For more information about
performing a full deployment and updating an existing deployment, see Publish
Azure Application Wizard.
8.10.2 View the diagnostics data for a virtual machine
1.
On the shortcut menu for the virtual machine, choose View Diagnostics
Data.
If the most recent data doesn't appear, you might have to wait for the transfer
period to elapse.
Choose the Refresh link to immediately update the data, or choose an
interval in the Auto-Refresh dropdown list box to have the data updated
automatically. To export the error data, choose the Export to CSV button to
create a comma-separated value file you can open in a spreadsheet.
You can configure diagnostics on either the instance node or the role. If you
configure the role node, any changes apply to all instances. If you configure
the instance node, any changes apply to that instance only.
8.11.1 Update diagnostics settings
1.
In Server Explorer, expand the Cloud Services node, and then expand
nodes to locate the role or instance that you want to investigate, or both.
2.
On the shortcut menu for an instance node or a role node, choose Update
Diagnostics Settings, and then choose the diagnostic settings that you
want to collect.
For information about the configuration settings, see Configure diagnostics
data sources in this topic. For information about how to view the diagnostics
data, see View the diagnostics data in this topic.
8.13 Q & A
What is the buffer size, and how large should it be?
On each virtual machine instance, quotas limit how much diagnostic data
can be stored on the local file system. In addition, you specify a buffer size
for each type of diagnostic data that's available. This buffer size acts like an
individual quota for that type of data. By checking the bottom of the dialog
box, you can determine the overall quota and the amount of memory that
remains. If you specify larger buffers or more types of data, you'll approach
the overall quota. You can change the overall quota by modifying the
diagnostics.wadcfg/.wadcfgx configuration file. The diagnostics data is stored
PreciseTimeStamp is the ETW timestamp of the event. That is, the time the
event is logged from the client.
Timestamp is the timestamp at which the entity was created in the Azure
table.
collect more data than you need, and don't forget to disable data collection
when you no longer need it. You can always enable it again, even at runtime,
as shown in the previous section.
How do I collect failed-request logs from IIS?
By default, IIS doesn't collect failed-request logs. You can configure IIS to
collect them by editing the web.config file for your web role.
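As a sketch (the path, providers and status codes below are examples, not requirements — adjust them to what you actually want traced), a failed-request tracing section in web.config might look like this:

```xml
<system.webServer>
  <tracing>
    <traceFailedRequests>
      <add path="*">
        <traceAreas>
          <add provider="WWW Server"
               areas="Authentication,Security,Filter,StaticFile,Compression,Cache,RequestNotifications,Module"
               verbosity="Verbose" />
        </traceAreas>
        <!-- Treat any 4xx or 5xx response as a failed request. -->
        <failureDefinitions statusCodes="400-599" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>
```

The resulting trace files can then be picked up as a custom log file data source, as described earlier in this section.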
I'm not getting trace information from RoleEntryPoint methods like
OnStart. What's wrong?
The methods of RoleEntryPoint are called in the context of WAIISHost.exe,
not IIS. Therefore, the configuration information in web.config that normally
enables tracing doesn't apply. To resolve this issue, add a .config file to your
web role project, and name the file to match the output assembly that
contains the RoleEntryPoint code. In the default web role project, the name
of the .config file would be WAIISHost.exe.config. Then add the following lines
to this file:
<system.diagnostics>
  <trace>
    <listeners>
      <add name="AzureDiagnostics"
        type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>
...
// Limit concurrent connections to the storage endpoints.
ServicePointManager.DefaultConnectionLimit = 12;

CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) =>
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

// Configure the diagnostic monitor and start it with the diagnostics connection string.
DiagnosticMonitorConfiguration dmc =
    DiagnosticMonitor.GetDefaultInitialConfiguration();
dmc.Logs.BufferQuotaInMB = 4;
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

// Resolve the Autoscaling Application Block and start it.
autoscaler =
    EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
autoscaler.Start();

return base.OnStart();
}
}
}
Note:
If you decide to host the block in the same worker role as your application, you
should get the Autoscaler instance and call the Start method in the Run method
of the WorkerRole class instead of in the OnStart method.
To understand and troubleshoot the block's behavior, you must use the log messages that the block
writes. To ensure that the block can write log messages, you must configure logging for the worker
role. By default, the block uses the logging infrastructure from the System.Diagnostics namespace.
The block can also use the Enterprise Library Logging Application Block or a custom logger.
Note:
When you call the Start method of the Autoscaler class, the block attempts to
read and parse the rules in your rules store. If any error occurs during the reading
and validation of the rules, the block will log the exception with a "Rules store
exception" message and continue. You should correct the error condition identified
in the log message and save a new version of the rules to your rules store. The
block will automatically attempt to load your new set of rules.
By default, the block checks for changes in the rules store every 30 seconds. To
change this setting, see the topic "Entering Configuration Information."
For more information about how to configure the System.Diagnostics namespace logger or the
Enterprise Library Logging Application Block logger, see the topic "Autoscaling Application Block
Logging."
For more information about how to select the logging infrastructure that the Autoscaling Application
Block should use, see the topic "Entering Configuration Information."
When the block communicates with the target application, it uses a service certificate to secure the
Azure Service Management API calls that it makes. The administrator must upload the appropriate
service certificate to Azure. For more information, see the topic "Deploying the Autoscaling
Application Block."
For more details of the integration of Enterprise Library and Unity, see
"Creating and Referencing Enterprise Library Objects."
If you have multiple instances of your worker role, then the Autoscaler class
can use a lease on an Azure blob to ensure that only a single instance of the
Autoscaler can execute the autoscaling rules at any one time. See the topic
"Entering Configuration Information" for more details.
Note:
The default setting is that the lease is not enabled. If you are planning to run
multiple instances of the worker role that hosts the Autoscaling Application Block,
you must enable the lease.
It is important to call the Stop method in the Autoscaler class when the
worker stops. This ensures that the block releases its lease on the blob before
the role instance stops.
10.2 Overview
Azure Queue storage provides cloud messaging between application
components. In designing applications for scale, application components are
often decoupled, so that they can scale independently. Queue storage
delivers asynchronous messaging for communication between application
components, whether they are running in the cloud, on the desktop, on an
on-premises server, or on a mobile device. Queue storage also supports
managing asynchronous tasks and building process workflows.
10.2.1
About this tutorial
This tutorial shows how to write .NET code for some common scenarios using
Azure Queue storage. Scenarios covered include creating and deleting
queues and adding, reading, and deleting queue messages.
Estimated time to complete: 45 minutes
Prerequisites:
10.2.1.1.1
Note
We recommend that you use the latest version of the Azure Storage Client
Library for .NET to complete this tutorial. The latest version of the library is
7.x, available for download on NuGet. The source for the client library is
available on GitHub.
If you are using the storage emulator, note that version 7.x of the client
library requires at least version 4.3 of the storage emulator.
URL format: Queues are addressable using the following URL format:
http://<storage account>.queue.core.windows.net/<queue>
The following URL addresses a queue in the diagram:
http://myaccount.queue.core.windows.net/images-to-download
All of the code examples in this tutorial can be added to the Main() method
in program.cs in your console application.
Note that you can use the Azure Storage Client Library from any type of .NET
application, including an Azure cloud service, an Azure web app, a desktop
application, or a mobile application. In this guide, we use a console
application for simplicity.
10.6.2
Use NuGet to install the required packages
There are two packages that you'll need to install to your project to complete
this tutorial:
Microsoft Azure Storage Client Library for .NET: This package provides
programmatic access to data resources in your storage account.
Microsoft Azure Configuration Manager library for .NET: This package provides
a class for parsing a connection string from a configuration file, regardless of
where your application is running.
You can use NuGet to obtain both packages. Follow these steps:
1.
Right-click your project in Solution Explorer, and choose Manage NuGet
Packages.
2.
Search online for "WindowsAzure.Storage" and click Install to install the Azure
Storage Client Library.
3.
Search online for "ConfigurationManager" and click Install to install the Azure
Configuration Manager.
10.6.2.1.1
Note
The Storage Client Library package is also included in the Azure SDK for
.NET. However, we recommend that you also install the Storage Client
Library from NuGet to ensure that you always have the latest version of the
client library.
The ODataLib dependencies in the Storage Client Library for .NET are
resolved through the ODataLib (version 5.0.2 and greater) packages
available through NuGet, and not through WCF Data Services. The ODataLib
libraries can be downloaded directly or referenced by your code project
through NuGet. The specific ODataLib packages used by the Storage Client
Library are OData, Edm, and Spatial. While these libraries are used by the
Azure Table storage classes, they are required dependencies when programming
against the Storage Client Library.
You can run your code against an Azure Storage account in the cloud, or
against the Azure storage emulator. The storage emulator is a local
environment that emulates an Azure Storage account in the cloud. The
emulator is a free option for testing and debugging your code while your
application is under development. The emulator uses a well-known account
and key. For more details, see Use the Azure Storage Emulator for Development
and Testing.
If you are targeting a storage account in the cloud, copy the primary access
key for your storage account from the Azure Portal. For more information, see
View and copy storage access keys.
10.6.3.1.1
Note
You can target the storage emulator to avoid incurring any costs associated
with Azure Storage. However, if you do choose to target an Azure storage
account in the cloud, costs for performing this tutorial will be negligible.
10.6.4
Configure your storage connection string
The Azure Storage Client Library for .NET supports using a storage
connection string to configure endpoints and credentials for accessing
storage services. The best way to maintain your storage connection string is
in a configuration file.
For more information about connection strings, see Configure a Connection
String to Azure Storage.
10.6.4.1.1
Note
Your storage account key is similar to the root password for your storage
account. Always be careful to protect your storage account key. Avoid
distributing it to other users, hard-coding it, or saving it in a plain-text file
that is accessible to others. Regenerate your key using the Azure Portal if you
believe it may have been compromised.
To configure your connection string, open the app.config file from Solution
Explorer in Visual Studio. Add the contents of the <appSettings> element
shown below. Replace account-name with the name of your storage account,
and account-key with your account access key:
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
</startup>
<appSettings>
<add key="StorageConnectionString"
value="DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key" />
</appSettings>
</configuration>
For example, your configuration setting will look similar to this:
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=storagesample;AccountKey=nYV0gln6fT7mvY+rxu2iWAEyzPKITGkhM88J8HUoyofvK7C6fHcZc2kRZp6cKgYRUM74lHI84L50Iau1+9hPjB==" />
To target the storage emulator, you can use a shortcut that maps to the
well-known account name and key. In that case, your connection string setting
will be:
<add key="StorageConnectionString" value="UseDevelopmentStorage=true;" />
10.6.5
Add namespace declarations
Add the following using statements to the top of the program.cs file:
using Microsoft.Azure; // Namespace for CloudConfigurationManager
using Microsoft.WindowsAzure.Storage; // Namespace for CloudStorageAccount
using Microsoft.WindowsAzure.Storage.Queue; // Namespace for Queue storage types
10.6.6
Parse the connection string
The Microsoft Azure Configuration Manager Library for .NET provides a class
for parsing a connection string from a configuration file. The
CloudConfigurationManager class parses configuration settings regardless of
whether the client application is running on the desktop, on a mobile device,
in an Azure virtual machine, or in an Azure cloud service.
To reference the CloudConfigurationManager package, add the following
using directive:
using Microsoft.Azure;
Using the Azure Configuration Manager is optional. You can also use an API
like the .NET Framework's ConfigurationManager class.
10.6.7
Create the Queue service client
The CloudQueueClient class enables you to retrieve queues stored in
Queue storage. Here's one way to create the service client:
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
Now you are ready to write code that reads data from and writes data to
Queue storage.
// Display message.
Console.WriteLine(peekedMessage.AsString);
10.10 Change the contents of a queued message
You can change the contents of a message in-place in the queue. If the
message represents a work task, you could use this feature to update the
status of the work task. The following code updates the queue message with
new contents, and sets the visibility timeout to extend another 60 seconds.
This saves the state of work associated with the message, and gives the
client another minute to continue working on the message. You could use
this technique to track multi-step workflows on queue messages, without
having to start over from the beginning if a processing step fails due to
hardware or software failure. Typically, you would keep a retry count as well,
and if the message is retried more than n times, you would delete it. This
protects against a message that triggers an application error each time it is
processed.
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client and get a reference to the queue.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Get the message from the queue and update the message contents.
CloudQueueMessage message = queue.GetMessage();
message.SetMessageContent("Updated contents.");
queue.UpdateMessage(message,
    TimeSpan.FromSeconds(60.0),  // Make it invisible for another 60 seconds.
    MessageUpdateFields.Content | MessageUpdateFields.Visibility);
10.11 De-queue the next message
Your code de-queues a message from a queue in two steps. When you call
GetMessage, you get the next message in a queue. A message returned
from GetMessage becomes invisible to any other code reading messages
from this queue. By default, this message stays invisible for 30 seconds. To
finish removing the message from the queue, you must also call
DeleteMessage. This two-step process of removing a message assures that
if your code fails to process a message due to hardware or software failure,
another instance of your code can get the same message and try again. Your
code calls DeleteMessage right after the message has been processed.
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");
// Get the next message.
CloudQueueMessage retrievedMessage = queue.GetMessage();
// Process the message in less than 30 seconds, and then delete the message.
queue.DeleteMessage(retrievedMessage);
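The samples in this section are C#, but the two-step visibility mechanics are easy to model in a few lines of any language. Here is a small, purely illustrative in-memory sketch (the ToyQueue class is made up for this example and is not the Azure API):

```python
import time

# Toy in-memory model of Azure Queue visibility (illustration only).
class ToyQueue:
    def __init__(self):
        self._messages = []  # each entry: [content, invisible_until]

    def add_message(self, content):
        self._messages.append([content, 0.0])

    def get_message(self, visibility_timeout=30.0, now=None):
        """Step 1: return the next visible message and hide it."""
        now = time.time() if now is None else now
        for msg in self._messages:
            if msg[1] <= now:
                msg[1] = now + visibility_timeout  # invisible to other readers
                return msg
        return None

    def delete_message(self, msg):
        """Step 2: remove the message for good."""
        self._messages.remove(msg)

q = ToyQueue()
q.add_message("work item")
m = q.get_message(now=0.0)              # step 1: message becomes invisible
assert q.get_message(now=10.0) is None  # other readers can't see it yet
assert q.get_message(now=31.0) is m     # timeout elapsed: visible again
q.delete_message(m)                     # step 2: gone for good
assert q.get_message(now=100.0) is None
```

If the worker crashes between the two steps, the message simply reappears after the visibility timeout, which is exactly the retry behaviour described above.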
10.12 Use the Async-Await pattern with common Queue storage APIs
This example shows how to use the Async-Await pattern with common Queue
storage APIs. The sample calls the asynchronous version of each of the given
methods, as indicated by the Async suffix of each method. When an async
method is used, the async-await pattern suspends local execution until the
call completes. This behavior allows the current thread to do other work,
which helps avoid performance bottlenecks and improves the overall
responsiveness of your application. For more details on using the Async-Await
pattern in .NET, see Async and Await (C# and Visual Basic).
// Create the queue if it doesn't already exist.
if (await queue.CreateIfNotExistsAsync())
{
    Console.WriteLine("Queue '{0}' Created", queue.Name);
}
10.13 Additional options for de-queuing messages
There are two ways you can customize message retrieval from a queue. First,
you can get a batch of messages (up to 32). Second, you can set a longer or
shorter invisibility timeout, allowing your code more or less time to fully
process each message. The following code example uses the GetMessages
method to get a batch of messages in one call, and then processes each message
in a foreach loop, with a longer invisibility timeout for each message.
10.14
10.15
Delete a queue
To delete a queue and all the messages contained in it, call the Delete
method on the queue object.
// Retrieve storage account and get a reference to the queue.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");
queue.Delete(); // Delete the queue and all of its messages.
11.1.1
Fault Domains
When you put VMs into an availability set, Azure guarantees to spread them across
Fault Domains and Update Domains. A Fault Domain (FD) is essentially a rack of servers.
It consumes subsystems like network, power, cooling and so on. So two VMs in the same
availability set means Azure will provision them into two different racks, so that if, say,
the network or the power failed, only one rack would be affected.
I discovered there are always only 2 fault domains: FD0 and FD1. It makes it seem like
your VMs only get spread across 2 racks, but that's not the case. They can be spread
across more racks if you've got lots of VMs. But as far as your availability set is
concerned, FD0 and FD1 are a way of saying "this bit of infrastructure (FD0) is different
to this bit (FD1)". As you boot VMs into an availability set, they get allocated like this:
FD0, FD1, FD0, FD1, FD0, FD1 and so on. The pattern never changes. You've probably
seen this diagram hundreds of times:
VM    Fault Domain
IIS1  FD0
IIS2  FD1
IIS3  FD0
IIS4  FD1
They are allocated to FDs in the order in which they boot. So if I'd booted these systems
in reverse order, then they'd all be in different FDs.
11.1.2
Update Domains
Sometimes you need to update your app, or Microsoft needs to update the host on
which your VM(s) are running. Note that with IaaS VMs, Microsoft does not automatically
update your VMs. You have complete control (and responsibility) over that. But say a
serious security vulnerability is identified and a patch created. It's in Microsoft's interest
to get that applied to the host underneath your VM as soon as possible. So how is that
done without taking your service offline? Update Domains. It's similar to the FD
method, only this time, instead of an accidental failure, there is a purposeful move to
take down one (or more) of your servers. So to make sure your service doesn't go offline
because of an update, Azure will walk through your update domains one after the other.
Whereas FDs are assigned in the pattern 0, 1, 0, 1, 0, 1, 0, 1, UDs are assigned 0, 1, 2,
3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4.
Both FDs and UDs are assigned in the order that Azure discovers them as they are
provisioned. So if you provision machines in the order Srv0, Srv1, Srv2, Srv3, Srv4, Srv5,
Srv6, Srv7, Srv8, Srv9, Srv10, Srv11, you'll end up with a table that looks like this:
VM     Fault Domain   Update Domain
Srv0   FD0            UD0
Srv1   FD1            UD1
Srv2   FD0            UD2
Srv3   FD1            UD3
Srv4   FD0            UD4
Srv5   FD1            UD0
Srv6   FD0            UD1
Srv7   FD1            UD2
Srv8   FD0            UD3
Srv9   FD1            UD4
Srv10  FD0            UD0
Srv11  FD1            UD1
You can see that UDs loop around a count of 5 (0, 1, 2, 3, 4).
You can see that in the following screen shot of a collection of 9 VMs in a single
availability set.
Figure 3: Fault and Update Domains in a Cloud Service comprised of Azure VMs
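The allocation rule above is just round-robin arithmetic. As a language-agnostic sketch (the counts of 2 fault domains and 5 update domains are the defaults described in this section; this is not an Azure API):

```python
# Round-robin placement: the nth VM to boot lands in FD (n % 2) and UD (n % 5).
def placement(boot_order, fault_domains=2, update_domains=5):
    return boot_order % fault_domains, boot_order % update_domains

for i in range(12):
    fd, ud = placement(i)
    print(f"Srv{i}: FD{fd}, UD{ud}")
```

Note that Srv0 and Srv10 land in the same (FD0, UD0) slot, which matches the table above.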
11.2
With Azure VMs, FDs and UDs are assigned to the VMs in an availability set in the order
in which they are provisioned. With Cloud Services it's almost the same, but roles are
used instead of availability sets. For example, you might have a web role with 8
instances. The role would be assigned FDs and UDs as the instances are provisioned and
discovered by Azure. The order of the instance numbers is not necessarily the order in
which they are successfully provisioned. It's just a fact of life that some machines that
start the provisioning process slow down in the middle, and machines that started later
catch up and overtake them.
Instance Number  Fault Domain  Update Domain
WebRole_IN0      FD0           UD0
WebRole_IN1      FD1           UD1
WebRole_IN2      FD0           UD2
WebRole_IN3      FD1           UD3
WebRole_IN4      FD0           UD4
WebRole_IN5      FD1           UD0
WebRole_IN6      FD0           UD1
WebRole_IN7      FD1           UD2
You can also see the same pattern in this shot of a Cloud Service. But notice how, by
the time we get to page three (there are 100 servers in this cloud service), it starts to
break down. That's because UDs and FDs are assigned in the order that instances are
provisioned. Some of them provision more quickly than others, and that causes the
pattern to break down. But there are still the correct number of FDs and UDs.
In Cloud Services, you can also set the number of update domains in the service
model's .csdef file. By default it's set to 5, but you can increase that to a maximum of
20.
11.2.1
Practice Questions:
Q: If you add a new VM to an availability set, how many extra fault domains and update
domains will you get if there are already 4 instances in the availability set?
A: One extra update domain (UD4) and no extra fault domains:
VMs                UD   FD
Srv0               UD0  FD0
Srv1               UD1  FD1
Srv2               UD2  FD0
Srv3               UD3  FD1
Add new VM = Srv4  UD4  FD0
Q: In a Cloud Service that has a Web Role with 12 instances, how many FDs and UDs
will you get by default?
A: 2 FDs (FD0 and FD1). 5 UDs (UD0, UD1, UD2, UD3, UD4).
Q: You have set the maximum number of UDs in the .csdef for a Cloud Service to 20.
You use Azure Virtual Machines to provision 18 VMs. How many Update Domains will you
have as a result?
A: This is a bit of a trick question: the .csdef is only used for Cloud Services, not for
VMs. So regardless of what you set, or even how you try to do it, Azure VM UDs come in
groups of 5. With 18 VMs, that means you'll have 5 UDs, UD0 to UD4, like so:
VM    Update Domain
VM0   UD0
VM1   UD1
VM2   UD2
VM3   UD3
VM4   UD4
VM5   UD0
VM6   UD1
VM7   UD2
VM8   UD3
VM9   UD4
VM10  UD0
VM11  UD1
VM12  UD2
VM13  UD3
VM14  UD4
VM15  UD0
VM16  UD1
VM17  UD2
Q: If you have 13 VMs in an availability set, how many VMs will be in UD0?
A: Use the table above. You can see UD0 lines up with VM0, VM5 and VM10. So there
will be 3 VMs in UD0.
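The same modulo arithmetic confirms the answer (a quick sketch, nothing Azure-specific):

```python
# UDs are assigned round-robin, so VM i lands in UD (i % 5).
vms_in_ud0 = [i for i in range(13) if i % 5 == 0]
print(vms_in_ud0)       # VM0, VM5 and VM10
print(len(vms_in_ud0))  # 3 VMs in UD0
```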
12.1
Address Spaces
Address spaces and subnets are usually declared in CIDR notation, with a number like
10.0.0.0/8. That's a 32-bit base IP address plus a mask. This example means: mask off
the top 8 bits, and the range of addresses left in the last 24 bits is the address
space.
12.2
Subnets
A subnet must exist within an address space. That means the range of addresses in a
subnet must fit inside the address space. So hopefully you can see how subnetting an
address space gives you a way to logically divide up the address space.
Look back up at figure 1. The lowest-order 24 bits represent all the possible addresses
you could have in the address space. By further dividing those 24 bits, you could
segment the network. The higher order bits would determine which subnet you are in,
and the lower order bits would define which host on that particular network you are on.
IP Address       Subnet   Host (within subnet)
10.0.0.20        0        0.20
10.1.0.20        1        0.20
10.50.2.20       50       2.20
10.200.0.200     200      0.200
10.250.255.147   250      255.147
11.0.0.10        invalid (it's outside the address space)
8.0.0.12         invalid (it's outside the address space)
This is all very easy because I've divided the address space and subnet boundaries up
on very neat byte boundaries, so the numbers fit exactly in the dotted address notation.
But you can also create address spaces and subnets that don't divide on these
boundaries. You could, for example, have a subnet defined by the top, say, 13 bits of a
32-bit address. The reason it makes things more difficult is that part of the subnet
identifier would come from the second number and part of it would come from the third
number of an IP address. And there's nothing to be done other than visualise the
address space in your mind, or draw it out.
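For the byte-boundary case in the table above, the split is simple arithmetic. This little sketch is specific to a 10.0.0.0/8 space with the second octet as the subnet identifier; it is not general-purpose CIDR code:

```python
# Split an address within 10.0.0.0/8: second octet = subnet,
# last two octets = host within that subnet.
def split(ip):
    o = [int(x) for x in ip.split(".")]
    if o[0] != 10:
        return "invalid (outside the address space)"
    return o[1], f"{o[2]}.{o[3]}"

print(split("10.50.2.20"))  # subnet 50, host 2.20
print(split("11.0.0.10"))   # invalid (outside the address space)
```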
12.3
The security in Azure works in such a way that IP addresses that are not known to the
infrastructure (in other words, it didn't issue them in the first place) are not routed
anywhere. The result is that if you want things to talk to each other, you've got to let
Azure's infrastructure give you a dynamically assigned IP address. You might be getting
all uppity at this point, thinking you want to deploy something like an AD Domain
Controller and best practice says to avoid DHCP. Well, with VNets you can relax. You will
be issued an address over the DHCP protocol, but the lease duration is set to either 168
years or until you delete the VNet, whichever comes first. I'm pretty sure you'll delete
the VNet first, and in fact I haven't thought about what will happen in 168 years. It'll be
somebody else's problem by then.
This means whenever you shut a machine down, when it reboots it'll get the same IP
address back again. So now you can relax about your AD Domain Controller
deployments in Azure. The fact that the machine gets the same address every time it
boots makes it just as good as a statically assigned IP address.
Each VNet reserves a few addresses. You might have noticed that whenever you fire up
the first machine in a VNet, it gets a .4 address. That's because .0 is reserved for
broadcast requests while .1 is reserved for the default gateway (router). .2 and .3 are
reserved for a special sort of gateway that you might later configure to communicate
with your on-premises network. Note that .255 is also a broadcast address.
In the following figure, I configured a VNet thus:
Address space: 10.0.0.0/8
Subnet (SubNet-1): 10.0.0.0/16
You can see this is the first host in the subnet because it is assigned a .4 address. You
can see it's a 16-bit subnet, because the subnet mask (255.255.0.0) masks off the top
16 bits. You can also see the default gateway (router) is at 10.0.0.1, as predicted.
It doesn't matter how large or small the subnet is; any range of available addresses is
automatically reduced by 5 (because of the 5 reserved addresses mentioned above). So,
for example, a 16-bit subnet (which gives a total of 65536 host addresses) is reduced to
65531. An 8-bit subnet has a maximum of 251 addresses (256 - 5). You can see this
illustrated in the following figure.
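The arithmetic is worth pinning down (the 5 reserved addresses are the ones just described):

```python
# Azure reserves 5 addresses in every subnet (.0, .1, .2, .3 and the
# broadcast address), so the usable host count for a given prefix is:
def usable_hosts(prefix_length):
    return 2 ** (32 - prefix_length) - 5

assert usable_hosts(16) == 65531  # 65536 - 5, the 16-bit subnet above
assert usable_hosts(24) == 251    # 256 - 5, an 8-bit host range
```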
12.4
Practice Questions
Q: How many hosts can occupy a single subnet in the following VNet?
Address space: 10.0.0.0/16
Subnet: 10.0.0.0/24
A: The /24 masks off the top 24 bits (the first 3 octets), filling in from the most
significant bit. That leaves one byte (8 bits) of host space, or 256 addresses. But 5
addresses have to be subtracted, which leaves 251 hosts.
Q: How many hosts can occupy a single subnet in the following VNet?
Address space: 10.0.0.0/8
Subnet: 10.0.0.0/24
A: This is just a question to test that you understand the difference between the
address space and the subnet. The subnet definition in this question is the same as in
the first question, so the number of hosts doesn't change: 251.
Q: You connect to a VM on this network over RDP and assign it the fixed IP address
10.0.0.100. Will you be able to connect to a VM at IP address 10.0.0.101?
A: No. The Azure VNet infrastructure prevents traffic to and from hosts to which it
didn't assign an IP address.
I hope this will help you if you're taking the Azure Infrastructure Exam (533).
I can't remember exactly, but Igal Figlin from Microsoft did some background research
into this and found that around 40% of deployments are not in availability sets (it might
be higher; you can watch the video here). Have a read of the email below and you'll
start to realise how much risk you are exposing yourself to if you don't use multiple VMs
in availability sets.
When you put VMs into availability sets, they are also distributed across up to 5 update
domains. When Microsoft updates Azure, they'll walk from one update domain to the
next. You can see what they are saying in this email: they'll leave 30 minutes between
updating each update domain. Let's say you have 2 machines in an availability set.
They'll be spread across 2 fault domains and 2 update domains. That means if an
infrastructure fault occurs (say power or a network segment), only one of your VMs
will be affected. It also means if Microsoft has to do an update, it will take one of your
machines out of the configuration at a time.
If you want to be super-cautious, you could protect against the scenario that while
Microsoft is walking the update domains in your availability set, you also get an
infrastructure failure that could take out a further machine. The table below shows
how.
                Update Domain 0   Update Domain 1   Update Domain 2
Fault Domain 0  Instance 0                          Instance 2
Fault Domain 1                    Instance 1
Imagine the update process had done the update on the instance in Update Domain 0,
had then walked on to Update Domain 1, and was in the middle of updating that
instance. Instance 1 is now offline. At the same time, a power failure occurs to the rack
on Fault Domain 0. That would cause Instance 0 and Instance 2 to also be taken offline.
You'd now have an availability set with no running machines. You can counter this by
adding a VM to the availability set. Because there can only ever be one Update Domain
in an availability set undergoing an update, you are protected. Let's say you are in the
middle of updating one of the services yourself. Your update will be stalled, the Microsoft
update will complete, and then your update will continue. In other words, updates are
applied to update domains serially, one at a time. And if you are in the middle of updating
one Update Domain, Microsoft won't start simultaneously updating a different Update
Domain. So the following table will remove all risk from simultaneous Update Domain
and Fault Domain operations.
                Update Domain 0   Update Domain 1   Update Domain 2   Update Domain 3
Fault Domain 0  Instance 0                          Instance 2
Fault Domain 1                    Instance 1                          Instance 3
The failure of any Fault Domain will take out 2 instances, and a simultaneous update can
take out only one Update Domain. This means a maximum of 3 instances can be offline
because of simultaneous Update Domain/Fault Domain operations. That would leave you
with one running instance.
You'd have to be very unlucky to have an infrastructure failure occur while an update is
going on. The availability SLA takes the above scenarios into consideration: you only
have to have 2 instances in your availability sets to enjoy the uptime guarantee. If you
are unlucky enough to suffer a double problem and the availability drops below the
guarantee, then Microsoft compensates you.
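You can brute-force the worst case described above to convince yourself (a sketch: 4 instances placed by the usual round-robin rule, with one fault domain failing while one update domain is being updated):

```python
# Place 4 instances: instance i lands in FD (i % 2) and UD (i % 5).
instances = {f"Instance {i}": (i % 2, i % 5) for i in range(4)}

def running_after(failed_fd, updating_ud):
    """Instances still up when one FD has failed and one UD is updating."""
    return [name for name, (fd, ud) in instances.items()
            if fd != failed_fd and ud != updating_ud]

# Check every combination of a single FD failure and a single UD update.
worst = min(len(running_after(fd, ud)) for fd in range(2) for ud in range(5))
print("minimum instances still running:", worst)  # always at least 1
```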
I made a post about Update Domains and Fault Domains a couple of weeks ago.
Interesting stuff if you're going to take the Azure Infrastructure exam.
Region            PDT                             UTC
North Central US  08:00 Monday, June 1, 2015      15:00 Monday, June 1, 2015
North Europe      08:00 Tuesday, June 2, 2015     15:00 Tuesday, June 2, 2015
East US           08:00 Wednesday, June 3, 2015   15:00 Wednesday, June 3, 2015
domain for the availability set may be rebooted at the same time, and
there will be at least a 30-minute interval between processing each
update domain. VMs that are in different availability sets may be taken
down at the same time. For more information, please visit the
availability sets documentation webpage.
If you're not already, we recommend using availability sets in your
architecture to ensure higher availability of your service. You can read
our multiple instances service level agreement (SLA) commitment for
Virtual Machines.
To learn more about our planned maintenance, please visit the Planned
maintenance for Azure virtual machines documentation webpage. If you
have questions, please visit the Azure Virtual Machines forums.
To ensure higher availability, the maintenance is scheduled in region
pairs. To help determine whether the reboot you observed on your VM is
due to a planned maintenance event, please visit the Viewing VM
Reboot Logs blog post.
Microsoft Azure Cloud Services (PaaS)
All Cloud Services running web and/or worker roles referenced below
will experience downtime during this maintenance. Cloud Services with
two or more role instances in different upgrade domains will have
external connectivity at least 99.95 percent of the time. Please note
that the SLA guaranteeing service availability only applies to services
that are deployed with more than one instance per role. Azure updates
one upgrade domain at a time. For more information about distribution
of roles across upgrade domains and the update process, please visit
the Update an Azure Service webpage. If you have questions, please
visit the Azure Cloud Services forums.
Please note that email addresses provided for any of the following
account roles also received this communication: account and service
administrators, and co-administrators.
Thank you,
Your Azure Team
First, I'd recommend you read my post on how SSL actually works. You might then
find you don't know enough about the relationships between keys, certificates and
signatures, in which case I'd recommend you watch my crypto primer video.
There is a video of the whole article here
14.1
Yes, that's right. Let's imagine you go to the portal and create an entirely naked website
in the Free tier called plankytronixx.azurewebsites.net: it will already be enabled for SSL.
You can just type https://plankytronixx.azurewebsites.net into the browser address bar
and it will work just fine.
You can see the certificate that is being used to protect it. More interesting is the cert
path up to the Microsoft Internal Corporate Root.
Of course the disadvantage of this SSL implementation is that you have to use the
.azurewebsites.net address. Even if you map to a new custom domain, like say
plankytronixx.co.uk, the .azurewebsites.net configuration is still there, it still exists, you
can still connect to it. But users might get suspicious if you ask them for a password
over SSL and they now appear to be on an entirely different site. Most of them won't
know how Azure works, so their suspicions would be reasonable.
You can actually see in the screenshot above that a wildcard cert exists for every single
Azure Web Site/Web App in the world: *.azurewebsites.net. Plus this certificate is
included in the price (and that means the Free tier as well!). So free SSL is available,
but, but, but…
14.2
This is the bit where it gets a bit more complicated, and this is the bit everybody talks
about: when you want to SSL-enable a site to give a URL like
https://plankytronixx.co.uk. First things first: you can't get a custom domain name in
the Free tier. So you can now start to see where the confusion comes in. People say SSL
is not available for Free sites because of that. But as you've just seen, you do get SSL;
it just comes with a collection of limitations. You have to move up to Basic or Standard
to get SSL on custom domain names.
To set up a custom domain name you have to set up a mapping between the IP address
of the web server and the DNS name. You do this at a Domain Registrar. I use Go Daddy.
The record you add is called an A (address) record. So plankytronixx.co.uk might get
mapped to, say, 104.45.81.79. It's dead easy to configure this at a domain registrar:
you just update the zone file. They'll give you some kind of tool to do it, usually
web-based. This is what it looks like on Go Daddy:
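In plain zone-file terms, the record being added looks like the sketch below (the name and IP address are the article's examples; your registrar's web tool may present the same thing very differently):

```text
; Map the apex of plankytronixx.co.uk to the web server's IP address.
plankytronixx.co.uk.    3600    IN    A    104.45.81.79
```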
Getting Azure to recognise it involves a hurdle to jump. If you just update the A record
and nothing else, Azure will give you this error:
It's because Azure wants to be assured that you own this domain. It wants you to
prove that you own it. It does this using the azure websites verify process, more
normally known as awverify.
14.2.1.1
Let's have a quick review of what happens when you type http://plankytronixx.co.uk in
to a web browser.
1. The browser does a DNS query against the name plankytronixx.co.uk.
2. The DNS server returns the IP address; let's say it's 104.45.81.79.
3. The web browser formats an HTTP GET and sends it to IP address 104.45.81.79.
4. In the header of the GET request is the host name plankytronixx.co.uk.
5. Azure sits at IP address 104.45.81.79; it inspects the host name in the header
and looks to see if it has a site that matches the name. If it does…
6. It returns the page.
7. If it doesn't, you get the error in the screenshot above.
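The request from steps 3 and 4 can be written out by hand. A minimal sketch (the host name is the article's example):

```shell
# Steps 3-4 in miniature: the GET goes to the IP address, while the site's
# DNS name travels in the Host header -- which is what Azure inspects in
# step 5 to decide which site should receive the request.
REQUEST='GET / HTTP/1.1
Host: plankytronixx.co.uk
Connection: close'
echo "$REQUEST"
```

Many sites sharing one IP address is only possible because the server can read that Host header.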
When you are setting this up for the first time, Azure gives you a piece of DNS
information which you add to your DNS registrar's zone file for your domain
(plankytronixx.co.uk in this case). When you set up the Azure end of the configuration,
Azure sends a query to your domain registrar and expects to see the DNS information it
gave you. By this mechanism you are proving that you have control of the records in the
domain you are trying to configure: that you own the domain.
If this works, any requests received in steps 5 and 6 above return the correct page. If
you can't prove you own the domain, Azure won't configure the domain for you and it
returns the blue-page 404 you can see in the screenshot above.
14.2.2
To prove you own the domain, Azure gives you some instructions on the Domains
configuration page
I'll go through it assuming the custom domain name I want is plankytronixx.co.uk and
the default Azure Website/WebApps name is plankytronixx.azurewebsites.net.
1. Add a CNAME record called awverify and point it to
awverify.plankytronixx.azurewebsites.net. I'm showing this in the Go Daddy
screen below:
2. You might have to wait a few minutes (or even a few hours in some cases) for the
DNS records to propagate (don't forget to save the changes).
3. If you get it right, when you type your domain name into the Azure configuration
page you'll get a little green tick in the text-box.
4. If something has gone wrong (or the domain update hasn't yet propagated), you
get a little red exclamation mark.
5. You can now save the configuration. You'll see both the custom domain name and
the raw Azure Websites/WebApps name in the portal screen.
Essentially, what Azure did during this process was send a DNS query for
awverify.plankytronixx.co.uk, and it expected to see back exactly what it told you to
configure in the first place: awverify.plankytronixx.azurewebsites.net. If it gets that
back, it reasons that you must have the power to make that record change at the DNS
registrar and you must therefore own that domain name.
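As a zone-file sketch (again using the article's example names; registrar tooling varies), the verification record is:

```text
; Azure queries awverify.plankytronixx.co.uk and expects this exact answer.
awverify.plankytronixx.co.uk.  3600  IN  CNAME  awverify.plankytronixx.azurewebsites.net.
```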
14.3
Now you have a custom domain name, you can set up SSL for it (in this case,
plankytronixx.co.uk). But there are two types of SSL certificate: traditional
certificates, known as IP-based SSL certs, and SNI (Server Name Indication) certs.
Let's go back to 1994, when SSL was first introduced by Netscape. There weren't really
all that many web servers running. And the assumption made at the time was that each
web site ran on a single server with a single IP address. But these days, IP addresses are
very scarce resources. It's not unusual to have many thousands, or even tens of
thousands, of web sites all running at the same IP address. One of the things that helped
this was the introduction of the host name in the HTTP header, as mentioned in Anatomy
of an HTTP GET request above.
You can still use IP-based certificates for SSL with Azure. But there's a 1:1 mapping
between the certificate and the site it's attached to: an IP-based certificate will
run on exactly one IP address. You'd think it must be possible to use the host-header
trick I mentioned above. But the trouble is that the SSL session is set up before the
HTTP request is executed. It's therefore not possible for the server to see which site to
route the traffic to. When there's only one site to send the traffic to, there's no decision
to make. But it means the site needs to have its own dedicated public IP address. The
disadvantage is that it costs more as a result.
SNI certificates were introduced to get round the problem of the 1:1 mapping between
sites and IP addresses. When an SNI certificate is used, the browser sends the host
name as part of the SSL setup. But you've probably already spotted the problem? Older
browsers that don't support SNI won't work. So you have to decide between
high coverage and higher costs (IP-based certs) or lower coverage and lower costs
(SNI-based certs). Your ball…
The certificate needs to be in a format that can transport not only the public key, but
also the private key. This usually means there is some form of protection on the file that
contains the certificate, like a password. But there's another problem. Normally, the
server on which the certificate sits will generate both the public and the private key. You
will then send the public key, plus some information about your site (its DNS name for
example) and yourself as the site admin. All this gets wrapped up into a Certificate
Signing Request (CSR). Azure Websites don't give you access to the underlying server
(in the way that, say, Cloud Services do). You keep the private key, well, private (you
don't even reveal the private key to the certificate provider). You add the private key
into the certificate file. The file has to be in .pfx format.
Probably the easiest way to do this is to fire up IIS Manager on a machine and create a
Certificate Request on it. Open the root, then on the main pane double-click Server
Certificates. You'll see an option in the panel on the right hand side to Create
Certificate Request. Go through the wizard and save the text file in a convenient
location. This process generates a public and a private key. The public key is in the CSR
text file. The private key is kept on the machine and is, well, private.
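If you'd rather not use IIS Manager, an equivalent key pair and CSR can be produced with OpenSSL. A sketch, using the article's example domain (the country and organisation values are made up):

```shell
cd "$(mktemp -d)"   # work in a scratch directory

# Generate a new 2048-bit RSA private key and a certificate signing request.
# The private key (plankytronixx.key) never leaves this machine; only the
# CSR (plankytronixx.csr) goes to the certificate issuer.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout plankytronixx.key \
  -out plankytronixx.csr \
  -subj "/C=GB/O=Plankytronixx/CN=plankytronixx.co.uk"

# Check the request is well-formed and carries the right subject.
openssl req -in plankytronixx.csr -noout -verify -subject
```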
Now you need to present this Certificate Request to a cert issuer such as VeriSign,
InstantSSL, Thawte and so on. Or your own CA… Quite a few of the providers will give
you a 30-day or 90-day trial certificate. Typically, if you apply for a certificate for, say,
plankytronixx.co.uk, you'll need an email address at plankytronixx.co.uk. Each issuer
has its own process: some ask for lots of information, others ask for a little. What you
will end up with is a signed certificate. They will email the response to you.
Remember that when you created the request, the private key was kept private? Well,
it's still hanging around. When you complete the certificate request in IIS, it will marry
up with the file the CA sent you in email. You go back and click Complete Certificate
Request.
Youll be asked for the file the CA sent you. Put all the details in and click OK.
You now get the option to export the certificate, and that's a good thing, because
you can export it in .pfx format, which is exactly what Azure
WebSites/WebApps requires.
The easiest way to do this is to find your certificate in the Server Certificates section
of IIS. Identify your certificate, right-click it and select Export. You'll end up with a .pfx
file.
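The export step can also be done with OpenSSL if you hold the certificate and the private key as separate files. A sketch, with illustrative file names and password; here a self-signed certificate stands in for the CA's response:

```shell
cd "$(mktemp -d)"   # work in a scratch directory

# Stand-in for the CA: a self-signed certificate. With a real CA you would
# use the signed .crt file from their email instead of generating this.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout plankytronixx.key -out plankytronixx.crt \
  -subj "/CN=plankytronixx.co.uk"

# Marry the certificate and the private key into a password-protected .pfx,
# the format Azure WebSites/WebApps expects you to upload.
openssl pkcs12 -export \
  -in plankytronixx.crt -inkey plankytronixx.key \
  -out plankytronixx.pfx -password pass:Myp4ssword
```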
You go to the Azure Portal and click upload a certificate on the Configure page (this is
as long as you are in the Basic or higher tier).
That's not the end of the process. You now have a certificate in Azure; to get your
Azure Website/WebApp to use it, you need to configure the SSL Bindings section of
the page. You'll specify which DNS name you want to protect with SSL, you'll select the
certificate you just uploaded and, depending on the type of cert you bought, you'll
select either SNI or IP based certificate.
Once you save those settings you're done. Try it out by connecting a browser to your
site. If you used an EV certificate, the address bar turns green. A padlock icon appears
in the address bar of most browsers. If you click it in IE you can actually view the
certificate, plus its certification path all the way up to the root authority.
FTP. You can use FTP utilities to move your files to Azure, from FileZilla to
full-featured IDEs like NetBeans. This is strictly a file upload process. No
additional services are provided by App Service, such as version control, file
structure management, etc.
Kudu. Kudu is the deployment engine in App Service. Push your code to Kudu
directly from any repository. Kudu also provides added services whenever code
is pushed to it, including version control, package restore, MSBuild, and web
hooks for continuous deployment and other automation tasks. The Kudu
deployment engine supports 3 different types of deployment sources:
o Content sync from a cloud folder (OneDrive or Dropbox)
o Repository-based continuous deployment (GitHub, Bitbucket, and Visual
Studio Team Services)
o Repository-based deployment with manual sync from local Git
Web Deploy. Deploy from tools such as Visual Studio using the same tooling
that automates deployment to IIS servers. This tool supports diff-only
deployment, database creation, transforms of connection strings, etc. Web
Deploy differs from Kudu in that application binaries are built before they are
deployed to Azure. Similar to FTP, no additional services are provided by App
Service.
These deployment methods are not mutuallyexclusive, and you can mix them as
you choose. For example, if you perform Web Deploy from Visual Studio with
Azure SDK, even though you don't get automation from Kudu, you do get
package restore and MSBuild automation in Visual Studio.
15.1.1.1.1
Note
Having to know how to deploy files to the correct directories in App Service.
Potentially long deployment times, because many FTP tools don't provide diff-only copying and simply copy all the files.
One-click deployment.
15.3.1
How to deploy by syncing with a cloud folder
In the Azure Portal, you can designate a folder for content sync in your
OneDrive or Dropbox cloud storage, work with your app code and content in
that folder, and sync to App Service with the click of a button.
15.4.1
How to deploy continuously from a cloud-based source control
service
In the Azure Portal, you can configure continuous deployment from GitHub,
Bitbucket, and Visual Studio Team Services.
15.5.1
How to deploy from local Git
In the Azure Portal, you can configure local Git deployment.
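From a shell, the local Git flow amounts to committing and pushing to the app's Git endpoint. As a sketch, a local bare repository stands in for the Kudu endpoint (a real app's clone URL is shown in the portal, typically of the form https://<app>.scm.azurewebsites.net/<app>.git):

```shell
# Stand-in for the Kudu Git endpoint: a local bare repository. For a real
# app, point the "azure" remote at the clone URL shown in the portal.
AZURE_REMOTE="$(mktemp -d)/kudu.git"
git init -q --bare "$AZURE_REMOTE"

# A minimal site, committed and pushed exactly as you would push to Azure.
cd "$(mktemp -d)"
git init -q
echo '<h1>Hello from local Git</h1>' > index.html
git add index.html
git -c user.name=deployer -c user.email=deploy@example.com \
    commit -qm "Initial deployment"
git remote add azure "$AZURE_REMOTE"
git push -q azure HEAD:master
```

On a real app, the push triggers Kudu's deployment pipeline (package restore, build, deploy) on the server side.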
Additional pros of deploying using Visual Studio with Azure SDK are:
Azure SDK makes Azure resources first-class citizens in Visual Studio. Create,
delete, edit, start, and stop apps, query the backend SQL database, live-debug
the Azure app, and much more.
Diff-only deployment.
15.6.1
Get started with Azure and ASP.NET. How to create and deploy a simple
ASP.NET MVC web project by using Visual Studio and Web Deploy.
How to Deploy Azure WebJobs using Visual Studio. How to configure Console
Application projects so that they deploy as WebJobs.
Deploy a Secure ASP.NET MVC 5 app with Membership, OAuth, and SQL
Database to Web Apps. How to create and deploy an ASP.NET MVC web project
with a SQL database, by using Visual Studio, Web Deploy, and Entity Framework
Code First Migrations.
ASP.NET Web Deployment using Visual Studio. A 12-part tutorial series that
covers a more complete range of deployment tasks than the others in this list.
Some Azure deployment features have been added since the tutorial was written,
but notes added later explain what's missing.
Studio, using the Git plug-in to commit the code to Git and connecting Azure to
the Git repository. Starting in Visual Studio 2013, Git support is built-in and
doesn't require installation of a plug-in.
15.6.2
How to deploy using the Azure Toolkits for Eclipse and IntelliJ
IDEA
Microsoft makes it possible to deploy Web Apps to Azure directly from Eclipse
and IntelliJ via the Azure Toolkit for Eclipse and Azure Toolkit for IntelliJ. The
following tutorials illustrate the steps involved in deploying a simple
"Hello World" Web App to Azure using either IDE:
Create a Hello World Web App for Azure in Eclipse. This tutorial shows you
how to use the Azure Toolkit for Eclipse to create and deploy a Hello World Web
App for Azure.
Create a Hello World Web App for Azure in IntelliJ. This tutorial shows you how
to use the Azure Toolkit for IntelliJ to create and deploy a Hello World Web App for
Azure.
Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build.
Hard-copy book that includes chapters on how to use MSBuild for deployment.
15.7.2
Automate deployment with Windows PowerShell
You can perform MSBuild or FTP deployment functions from Windows
PowerShell. If you do that, you can also use a collection of Windows
PowerShell cmdlets that make the Azure REST management API easy to call.
For more information, see the following resources:
15.7.3
Automate deployment with .NET management API
You can write C# code to perform MSBuild or FTP functions for deployment. If
you do that, you can access the Azure management REST API to perform site
management functions.
For more information, see the following resource:
15.7.4
Deploy from Azure Command-Line Interface (Azure CLI)
You can use the command line in Windows, Mac or Linux machines to deploy
by using FTP. If you do that, you can also access the Azure REST
management API using the Azure CLI.
For more information, see the following resource:
Azure Command line tools. Portal page in Azure.com for command line tool
information.
15.7.5
Deploy from Web Deploy command line
Web Deploy is Microsoft software for deployment to IIS that not only provides
intelligent file sync features but also can perform or coordinate many other
deployment-related tasks that can't be automated when you use FTP. For
example, Web Deploy can deploy a new database or database updates along
with your web app. Web Deploy can also minimize the time required to
update an existing site since it can intelligently copy only changed files.
Microsoft Visual Studio and Team Foundation Server have support for Web
Deploy built-in, but you can also use Web Deploy directly from the command
line to automate deployment. Web Deploy commands are very powerful but
the learning curve can be steep.
For more information, see the following resource:
Simple Web Apps: Deployment. Blog by David Ebbo about a tool he wrote to
make it easier to use Web Deploy.
Using Web Deploy. Official documentation on the Microsoft IIS.NET site. Also
dated but a good place to start.
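Web Deploy's "intelligently copy only changed files" behaviour described above can be sketched in a few lines of shell: compare each source file with what is already deployed and copy only the ones that differ (the directories here are scratch stand-ins for the build output and the server):

```shell
# Two scratch directories stand in for local build output and the server.
SRC="$(mktemp -d)"; DST="$(mktemp -d)"
echo 'v1 home' > "$SRC/index.html"
echo 'v1 css'  > "$SRC/site.css"
cp "$SRC"/* "$DST"                    # first deployment copies everything

echo 'v2 home' > "$SRC/index.html"    # later, only one file changes

# Diff-only sync: copy a file only when its content differs on the target.
copied=0
for f in "$SRC"/*; do
  name=$(basename "$f")
  if ! cmp -s "$f" "$DST/$name"; then
    cp "$f" "$DST/$name"
    copied=$((copied + 1))
  fi
done
echo "copied $copied changed file(s)"
```

For large sites this is why an incremental Web Deploy finishes in seconds where a naive full FTP upload takes minutes.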
This sample application works with Azure queues and Azure blobs. The tutorial
shows how to deploy the application to Azure App Service and Azure SQL Database.
Prerequisites
The tutorial assumes that you know how to work with ASP.NET MVC 5 projects in
Visual Studio.
The tutorial was written for Visual Studio 2013. If you don't have Visual Studio
already, it will be installed for you automatically when you install the Azure SDK
for .NET.
The tutorial can be used with Visual Studio 2015, but before you run the application
locally you have to change the Data Source part of the SQL Server LocalDB
connection string from (localdb)\v11.0 to (localdb)\MSSQLLocalDB.
When a user uploads an image, the web app stores the image in an Azure blob, and
it stores the ad information in the database with a URL that points to the blob. At
the same time, it writes a message to an Azure queue. In a backend process
running as an Azure WebJob, the WebJobs SDK polls the queue for new messages.
When a new message appears, the WebJob creates a thumbnail for that image and
updates the thumbnail URL database field for that ad. Here's a diagram that shows
how the parts of the application interact:
Set up the development environment
To start, set up your development environment by installing the Azure SDK for Visual
Studio 2015 or the Azure SDK for Visual Studio 2013.
If you don't have Visual Studio installed, use the link for Visual Studio 2015, and
Visual Studio will be installed along with the SDK.
Note
Depending on how many of the SDK dependencies you already have on your
machine, installing the SDK could take a long time, from several minutes to a half
hour or more.
The tutorial instructions apply to Azure SDK for .NET 2.7.1 or later.
Create an Azure Storage account
An Azure storage account provides resources for storing queue and blob data in the
cloud. It's also used by the WebJobs SDK to store logging data for the dashboard.
In a real-world application, you typically create separate accounts for application
data versus logging data, and separate accounts for test data versus production
data. For this tutorial you'll use just one account.
Open the Server Explorer window in Visual Studio.
Right-click the Azure node, and then click Connect to Microsoft Azure.
In the Create Storage Account dialog, enter a name for the storage account.
The name must be unique (no other Azure storage account can have the
same name). If the name you enter is already in use, you'll get a chance to change
it.
The URL to access your storage account will be {name}.core.windows.net.
Set the Region or Affinity Group drop-down list to the region closest to you.
This setting specifies which Azure datacenter will host your storage account. For this
tutorial, your choice won't make a noticeable difference. However, for a production
web app, you want your web server and your storage account to be in the same
region to minimize latency and data egress charges. The web app (which you'll
create later) should therefore go in the same region as the storage account.
Download the application
Download and unzip the completed solution.
Start Visual Studio.
From the File menu choose Open > Project/Solution, navigate to where you
downloaded the solution, and then open the solution file.
Press CTRL+SHIFT+B to build the solution.
By default, Visual Studio automatically restores the NuGet package content, which
was not included in the .zip file. If the packages don't restore, install them manually
by going to the Manage NuGet Packages for Solution dialog and clicking the Restore
button at the top right.
In the Properties window, click Storage Account Keys, and then click the ellipsis.
Replace the storage connection string in the Web.config file with the connection
string you just copied. Make sure you select everything inside the quotation marks
but not including the quotation marks before pasting.
Open the App.config file in the ContosoAdsWebJob project.
This file has two storage connection strings, one for application data and one for
logging. You can use separate storage accounts for application data and logging,
and you can use multiple storage accounts for data. For this tutorial you'll use a
single storage account. The connection strings have placeholders for the storage
account keys.
<configuration>
  <connectionStrings>
    <add name="AzureWebJobsDashboard"
         connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]" />
    <add name="AzureWebJobsStorage"
         connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]" />
    <add name="ContosoAdsContext"
         connectionString="Data Source=(localdb)\v11.0; Initial Catalog=ContosoAds; Integrated Security=True; MultipleActiveResultSets=True;" />
  </connectionStrings>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
</configuration>
The app goes to the Index page, but it doesn't show a thumbnail for the new ad
because that processing hasn't happened yet.
Meanwhile, after a short wait a logging message in the console application window
shows that a queue message was received and has been processed.
After you see the logging messages in the console application window, refresh the
Index page to see the thumbnail.
You've been running the application on your local computer, and it's using a SQL
Server database located on your computer, but it's working with queues and blobs
in the cloud. In the following section you'll run the application in the cloud, using a
cloud database as well as cloud blobs and queues.
In the Create web app on Microsoft Azure dialog box, enter a unique name in the
Web app name box.
The complete URL will consist of what you enter here plus .azurewebsites.net (as
shown next to the Web app name text box). For example, if the web app name is
ContosoAds, the URL will be ContosoAds.azurewebsites.net.
In the App Service plan drop-down list choose Create new App Service plan. Enter a
name for the App Service plan, such as ContosoAdsPlan.
In the Resource group drop-down list choose Create new resource group.
Enter a name for the resource group, such as ContosoAdsGroup.
In the Region drop-down list, choose the same region you chose for your storage
account.
This setting specifies which Azure datacenter your web app will run in. Keeping the
web app and storage account in the same datacenter minimizes latency and data
egress charges.
In the Database server drop-down list choose Create new server.
Enter a name for the database server, such as contosoadsserver + a number or
your name to make the server name unique.
The server name must be unique. It can contain lower-case letters, numeric digits,
and hyphens. It cannot contain a trailing hyphen.
Alternatively, if your subscription already has a server, you can select that server
from the drop-down list.
Enter an administrator Database username and Database password.
If you selected New SQL Database server, you aren't entering an existing name and
password here; you're entering a new name and password that you're defining now,
to use later when you access the database. If you selected a server that you created
previously, you'll be prompted for the password to the administrative user account
you already created.
Click Create.
Visual Studio creates the solution, the web project, the web app in Azure, and the
Azure SQL Database instance.
In the Connection step of the Publish Web wizard, click Next.
In the Settings step, clear the Use this connection string at runtime check box, and
then click Next.
You don't need to use the publish dialog to set the SQL connection string because
you'll set that value in the Azure environment later.
You can ignore the warnings on this page.
Normally the storage account you use when running in Azure would be different
from the one you use when running locally, but for this tutorial you're using the
same one in both environments. So the AzureWebJobsStorage connection string
does not need to be transformed. Even if you did want to use a different storage
account in the cloud, you wouldn't need to transform the connection string because
the app uses an Azure environment setting when it runs in Azure. You'll see this
later in the tutorial.
For this tutorial you aren't going to be making changes to the data model used for
the ContosoAdsContext database, so there is no need to use Entity Framework Code
First Migrations for deployment. Code First automatically creates a new database
the first time the app tries to access SQL data.
For this tutorial, the default values of the options under File Publish Options are fine.
In the Preview step, click Start Preview.
You can ignore the warning about no databases being published. Entity Framework
Code First creates the database; it doesn't need to be published.
The preview window shows that binaries and configuration files from the WebJob
project will be copied to the app_data\jobs\continuous folder of the web app.
Click Publish.
Visual Studio deploys the application and opens the home page URL in the browser.
You won't be able to use the web app until you set connection strings in the Azure
environment in the next section. You'll see either an error page or the home page
depending on web app and database creation options you chose earlier.
Configure the web app to use your Azure SQL database and storage account.
It's a security best practice to avoid putting sensitive information such as
connection strings in files that are stored in source code repositories. Azure provides
a way to do that: you can set connection string and other setting values in the
Azure environment, and ASP.NET configuration APIs automatically pick up these
values when the app runs in Azure. You can set these values in Azure by using
Server Explorer, the Azure Portal, Windows PowerShell, or the cross-platform
command-line interface. For more information, see How Application Strings and
Connection Strings Work.
In this section you use Server Explorer to set connection string values in Azure.
In Server Explorer, right-click your web app under Azure > App Service > {your
resource group}, and then click View Settings.
The Azure Web App window opens on the Configuration tab.
Change the name of the DefaultConnection connection string to
ContosoAdsContext.
Azure automatically created this connection string when you created the web app
with an associated database, so it already has the right connection string value.
You're changing just the name to what your code is looking for.
Add two new connection strings, named AzureWebJobsStorage and
AzureWebJobsDashboard. Set type to Custom, and set the connection string value to
the same value that you used earlier for the Web.config and App.config files. (Make
sure you include the entire connection string, not just the access key, and don't
include the quotation marks.)
These connection strings are used by the WebJobs SDK, one for application data and
one for logging. As you saw earlier, the one for application data is also used by the
web front end code.
Click Save.
In Server Explorer, right-click the web app, and then click Stop.
After the web app stops, right-click the web app again, and then click Start.
The WebJob automatically starts when you publish, but it stops when you make a
configuration change. To restart it you can either restart the web app or restart the
WebJob in the Azure Portal. It's generally recommended to restart the web app after
a configuration change.
Refresh the browser window that has the web app URL in its address bar.
A new browser tab opens to the WebJobs SDK dashboard. The dashboard shows that
the WebJob is running and shows a list of functions in your code that the WebJobs
SDK triggered.
Click one of the functions to see details about its execution.
The Replay Function button on this page causes the WebJobs SDK framework to call
the function again, and it gives you a chance to change the data passed to the
function first.
Note
When you're finished testing, delete the web app and the SQL Database instance.
The web app is free, but the SQL Database instance and storage account accrue
charges (minimal due to small size). Also, if you leave the web app running, anyone
who finds your URL can create and view ads. In the classic portal, go to the
Dashboard tab for your web app, and then click the Delete button at the bottom of
the page. You can then select a check box to delete the SQL Database instance at
the same time. If you just want to temporarily prevent others from accessing the
web app, click Stop instead. In that case, charges will continue to accrue for the SQL
Database and Storage account. You can follow a similar procedure to delete the SQL
database and storage account when you no longer need them.
Create the application from scratch
In this section you'll do the following tasks:
Create a Visual Studio solution with a web project.
Add a class library project for the data access layer that is shared between front end
and backend.
Add a Console Application project for the backend, with WebJobs deployment
enabled.
Add NuGet packages.
Set project references.
Copy application code and configuration files from the downloaded application that
you worked with in the previous section of the tutorial.
Review the parts of the code that work with Azure blobs and queues and the
WebJobs SDK.
Create a Visual Studio solution with a web project and class library project
In Visual Studio, choose New > Project from the File menu.
In the New Project dialog, choose Visual C# > Web > ASP.NET Web Application.
Name the project ContosoAdsWeb, name the solution ContosoAdsWebJobsSDK
(change the solution name if you're putting it in the same folder as the downloaded
solution), and then click OK.
In the New ASP.NET Project dialog, choose the MVC template, and clear the Host in
the cloud check box under Microsoft Azure.
Selecting Host in the cloud enables Visual Studio to automatically create a new
Azure web app and SQL Database. Since you already created these earlier, you
don't need to do so now while creating the project. If you want to create a new one,
select the check box. You can then configure the new web app and SQL database
the same way you did earlier when you deployed the application.
Click Change Authentication.
In the Change Authentication dialog, choose No Authentication, and then click OK.
In the Add Azure WebJob dialog, enter ContosoAdsWebJob as both the Project name
and the WebJob name. Leave WebJob run mode set to Run Continuously.
Click OK.
Visual Studio creates a Console application that is configured to deploy as a WebJob
whenever you deploy the web project. To do that, it performed the following tasks
after creating the project:
Added a webjob-publish-settings.json file in the WebJob project Properties folder.
Added a webjobs-list.json file in the web project Properties folder.
Installed the Microsoft.Web.WebJobs.Publish NuGet package in the WebJob project.
For more information about these changes, see How to deploy WebJobs by using
Visual Studio.
Add NuGet packages
The new-project template for a WebJob project automatically installs the WebJobs
SDK NuGet package Microsoft.Azure.WebJobs and its dependencies.+
One of the WebJobs SDK dependencies that is installed automatically in the WebJob
project is the Azure Storage Client Library (SCL). However, you need to add it to the
web project to work with blobs and queues.+
Open the Manage NuGet Packages dialog for the solution.
In the left pane, select Installed packages.
Find the Azure Storage package, and then click Manage.
In the Select Projects box, select the ContosoAdsWeb check box, and then click OK.
All three projects use the Entity Framework to work with data in SQL Database.
In the left pane, select Online.
Find the EntityFramework NuGet package, and install it in all three projects.
Set project references
Both web and WebJob projects work with the SQL database, so both need a
reference to the ContosoAdsCommon project.+
In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project.
(Right-click the ContosoAdsWeb project, and then click Add > Reference. In the
Reference Manager dialog box, select Solution > Projects > ContosoAdsCommon,
and then click OK.)
In the ContosoAdsWebJob project, set a reference to the ContosoAdsCommon project.
The WebJob project needs references for working with images and for accessing
connection strings.
In the ContosoAdsWebJob project, set a reference to System.Drawing and
System.Configuration.
Add code and configuration files
This tutorial does not show how to create MVC controllers and views using
scaffolding, how to write Entity Framework code that works with SQL Server
databases, or the basics of asynchronous programming in ASP.NET 4.5. So all that
remains to do is copy code and configuration files from the downloaded solution into
the new solution. After you do that, the following sections show and explain key
parts of the code.+
To add files to a project or a folder, right-click the project or folder and click Add >
Existing Item. Select the files you want and click Add. If asked whether you want to
replace existing files, click Yes.+
In the ContosoAdsCommon project, delete the Class1.cs file and add in its place the
following files from the downloaded project.
Ad.cs
ContosoAdsContext.cs
BlobInformation.cs
In the ContosoAdsWeb project, add the following files from the downloaded project.
Web.config
Global.asax.cs
In the Controllers folder: AdController.cs
In the Views\Shared folder: _Layout.cshtml file
In the Views\Home folder: Index.cshtml
In the Views\Ad folder (create the folder first): five .cshtml files
In the ContosoAdsWebJob project, add the following files from the downloaded
project.
App.config (change the file type filter to All Files)
Program.cs
Functions.cs
You can now build, run, and deploy the application as instructed earlier in the
tutorial. Before you do that, however, stop the WebJob that is still running in the first
web app you deployed to. Otherwise that WebJob will process queue messages
created locally or by the app running in a new web app, since all are using the same
storage account.+
Review the application code
The following sections explain the code related to working with the WebJobs SDK
and Azure Storage blobs and queues.+
Note
For the code specific to the WebJobs SDK, go to the Program.cs and Functions.cs
sections.+
ContosoAdsCommon - Ad.cs
The Ad.cs file defines an enum for ad categories and a POCO entity class for ad
information.+
Copy
public class Ad
{
public int AdId { get; set; }
[StringLength(100)]
public string Title { get; set; }
[StringLength(1000)]
[DataType(DataType.MultilineText)]
public string Description { get; set; }
[StringLength(1000)]
[DisplayName("Full-size Image")]
public string ImageURL { get; set; }
[StringLength(1000)]
[DisplayName("Thumbnail")]
public string ThumbnailURL { get; set; }
[DataType(DataType.Date)]
[DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}",
ApplyFormatInEditMode = true)]
public DateTime PostedDate { get; set; }
ContosoAdsCommon - ContosoAdsContext.cs
The ContosoAdsContext class defines two constructors. The first gets the connection
string from the Web.config file or the Azure runtime environment. The second
constructor enables you to pass in the actual
connection string. That is needed by the WebJob project since it doesn't have a
Web.config file. You saw earlier where this connection string was stored, and you'll
see later how the code retrieves the connection string when it instantiates the
DbContext class.+
ContosoAdsCommon - BlobInformation.cs
The BlobInformation class is used to store information about an image blob in a
queue message.+
ContosoAdsWeb - Global.asax.cs
Code that is called from the Application_Start method creates an images blob
container and an images queue if they don't already exist. This ensures that
whenever you start using a new storage account, the required blob container and
queue are created automatically.+
The code gets access to the storage account by using the storage connection string
from the Web.config file or Azure runtime environment.+
Copy
imagesQueue.CreateIfNotExists();
ContosoAdsWeb - _Layout.cshtml
The _Layout.cshtml file sets the app name in the header and footer, and creates an
"Ads" menu entry.+
ContosoAdsWeb - Views\Home\Index.cshtml
The Views\Home\Index.cshtml file displays category links on the home page. The
links pass the integer value of the Category enum in a querystring variable to the
Ads Index page.+
ContosoAdsWeb - Controllers\AdController.cs
Copy
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Create(
[Bind(Include = "Title,Price,Description,Category,Phone")] Ad ad,
HttpPostedFileBase imageFile)
If the user selected a file to upload, the code uploads the file, saves it in a blob, and
updates the Ad database record with a URL that points to the blob.+
ContosoAdsWebJob - Functions.cs
Copy
[Blob("images/{BlobNameWithoutExtension}_thumbnail.jpg")] CloudBlockBlob
outputBlob)
{
using (Stream output = outputBlob.OpenWrite())
{
ConvertImageToThumbnailJPG(input, output);
outputBlob.Properties.ContentType = "image/jpeg";
}
deleted. If the method fails before completing, the queue message is not deleted;
after a 10-minute lease expires, the message is released to be picked up again and
processed. This sequence won't be repeated indefinitely if a message always causes
an exception. After 5 unsuccessful attempts to process a message, the message is
moved to a queue named {queuename}-poison. The maximum number of attempts
is configurable.
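The retry and poison-queue flow described above can be sketched as follows. This is an illustrative in-memory model, not the WebJobs SDK's actual implementation; the queue structure and function names are invented for the example.

```python
# Model of the retry/poison-queue pattern: a failing message is retried until
# it has been dequeued MAX_DEQUEUE_COUNT times, then moved to a
# "<queuename>-poison" queue instead of being retried forever.
MAX_DEQUEUE_COUNT = 5  # the SDK default described above; configurable in the real SDK

def handle_message(queues, queue_name, message, process):
    """Process one message, moving it to the poison queue after repeated failures."""
    message["dequeue_count"] = message.get("dequeue_count", 0) + 1
    try:
        process(message)
        queues[queue_name].remove(message)  # success: delete the message
    except Exception:
        if message["dequeue_count"] >= MAX_DEQUEUE_COUNT:
            # Give up: remove from the main queue and park in the poison queue.
            queues[queue_name].remove(message)
            queues.setdefault(queue_name + "-poison", []).append(message)
        # Otherwise leave the message; it becomes visible again when its lease expires.
```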
The two Blob attributes provide objects that are bound to blobs: one to the existing
image blob and one to a new thumbnail blob that the method creates.
Copy
use in a Windows or ASP.NET service. For more information about image processing
options, see Dynamic Image Generation and Deep Inside Image Resizing.
Next steps
In this tutorial you've seen a simple multi-tier application that uses the WebJobs SDK
for backend processing. This section offers some suggestions for learning more
about ASP.NET multi-tier applications and WebJobs.+
Missing features
The application has been kept simple for a getting-started tutorial. In a real-world
application you would implement dependency injection and the repository and unit
of work patterns, use an interface for logging, use EF Code First Migrations to
manage data model changes, and use EF Connection Resiliency to manage
transient network errors.+
Scaling WebJobs
WebJobs run in the context of a web app and are not scalable separately. For
example, if you have one Standard web app instance, you have only one instance of
your background process running, and it is using some of the server resources (CPU,
memory, etc.) that otherwise would be available to serve web content.+
If traffic varies by time of day or day of week, and if the backend processing you
need to do can wait, you could schedule your WebJobs to run at low-traffic times. If
the load is still too high for that solution, you can run the backend as a WebJob in a
separate web app dedicated for that purpose. You can then scale your backend web
app independently from your frontend web app.+
For more information, see Scaling WebJobs.+
Avoiding web app timeout shut-downs
To make sure your WebJobs are always running, and running on all instances of your
web app, you have to enable the AlwaysOn feature.+
Using the WebJobs SDK outside of WebJobs
A program that uses the WebJobs SDK doesn't have to run in Azure in a WebJob. It
can run locally, and it can also run in other environments such as a Cloud Service
worker role or a Windows service. However, you can only access the WebJobs SDK
dashboard through an Azure web app. To use the dashboard you have to connect
the web app to the storage account you're using by setting the
AzureWebJobsDashboard connection string on the Configure tab of the classic
portal. Then you can get to the Dashboard by using the following URL:+
https://{webappname}.scm.azurewebsites.net/azurejobs/#/functions+
For more information, see Getting a dashboard for local development with the
WebJobs SDK, but note that it shows an old connection string name.
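As a small sketch, the dashboard URL pattern above can be assembled from the web app name; the app name used in the test is illustrative.

```python
def webjobs_dashboard_url(webapp_name):
    # URL pattern quoted in the text; {webappname} is your Azure web app's name.
    return "https://" + webapp_name + ".scm.azurewebsites.net/azurejobs/#/functions"
```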
17 Copy Blob
The Copy Blob operation copies a blob to a destination within the storage
account. In version 2012-02-12 and later, the source for a Copy Blob
operation can be a committed blob in any Azure storage account. +
Beginning with version 2015-02-21, the source for a Copy Blob operation can
be an Azure file in any Azure storage account. +
Note
Only storage accounts created on or after June 7th, 2012 allow the Copy Blob
operation to copy from another storage account. +
17.2 Request
The Copy Blob request may be constructed as follows. HTTPS is
recommended. Replace myaccount with the name of your storage account,
mycontainer with the name of your container, and myblob with the name of
your destination blob. +
Beginning with version 2013-08-15, you may specify a shared access
signature for the destination blob if it is in the same account as the source
blob. Beginning with version 2015-04-05, you may also specify a shared
access signature for the destination blob if it is in a different storage account.
Method: PUT
Request URI: https://myaccount.blob.core.windows.net/mycontainer/myblob
HTTP Version: HTTP/1.1
17.2.1 Emulated Storage Service URI
Method: PUT
Request URI: http://127.0.0.1:10000/devstoreaccount1/mycontainer/myblob
HTTP Version: HTTP/1.1
For more information, see Using the Azure Storage Emulator for
Development and Testing. +
17.2.2 URI Parameters
The following additional parameters may be specified on the request URI.
timeout - Optional. The timeout parameter is expressed in seconds. For more information, see Setting Timeouts for Blob Service Operations.
17.2.3 Request Headers
The following table describes required and optional request headers.
Authorization - Required. Specifies the authentication scheme, account name, and signature.
Date or x-ms-date - Required. Specifies the Coordinated Universal Time (UTC) for the request.
x-ms-version - Required for all authenticated requests. Specifies the version of the operation to use for this request.
x-ms-meta-name:value - Optional. Specifies a user-defined name-value pair associated with the destination blob.
x-ms-source-if-modified-since - Optional. A DateTime value. Copy the source blob only if it has been modified since the specified date/time.
x-ms-source-if-unmodified-since - Optional. A DateTime value. Copy the source blob only if it has not been modified since the specified date/time.
x-ms-source-if-match - Optional. An ETag value. Copy the source blob only if its ETag matches the specified value.
x-ms-source-if-none-match - Optional. An ETag value. Copy the source blob only if its ETag does not match the specified value.
If-Modified-Since - Optional. A DateTime value. Copy the blob only if the destination blob has been modified since the specified date/time.
If-Unmodified-Since - Optional. A DateTime value. Copy the blob only if the destination blob has not been modified since the specified date/time.
If-Match - Optional. An ETag value. Copy the blob only if the specified ETag value matches the ETag value for the destination blob.
If-None-Match - Optional. An ETag value, or the wildcard character (*).
Specify an ETag value for this conditional header to copy the blob only if the specified ETag value does not match the ETag value for the destination blob.
Specify the wildcard character (*) to perform the operation only if the destination blob does not exist.
If the specified condition isn't met, the Blob service returns status code 412 (Precondition Failed).
x-ms-copy-source:name - Required. Specifies the URL of the source blob or file, in one of the following formats:
- https://myaccount.blob.core.windows.net/mycontainer/myblob
- https://myaccount.blob.core.windows.net/mycontainer/myblob?snapshot=<DateTime>
When the source object is a file in the Azure File service, the source URL uses the following format; note that the URL must include a valid SAS token for the file:
- https://myaccount.file.core.windows.net/myshare/mydirectorypath/myfile?sastoken
In versions before 2012-02-12, blobs can only be copied within the same account, and a source name can use these formats:
- Blob in named container: /accountName/containerName/blobName
- Snapshot in named container: /accountName/containerName/blobName?snapshot=<DateTime>
- Blob in root container: /accountName/blobName
- Snapshot in root container: /accountName/blobName?snapshot=<DateTime>
x-ms-lease-id:<ID> - Required if the destination blob has an active lease. The lease ID specified for this header must match the active lease ID of the destination blob.
x-ms-source-lease-id:<ID> - Optional, versions before 2012-02-12 (unsupported in 2012-02-12 and newer). Specify this header to perform the Copy Blob operation only if the lease ID given matches the active lease ID of the source blob.
x-ms-client-request-id - Optional. Provides a client-generated, opaque value that is recorded in the analytics logs when storage analytics logging is enabled.
17.2.4 Request Body
None.
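Putting the request pieces together, here is a hedged sketch of assembling (not sending) a Copy Blob request. The account, container, and source URL values are placeholders, and a real request would also carry an Authorization header or SAS token.

```python
def build_copy_blob_request(account, container, blob, copy_source, version="2015-02-21"):
    """Assemble the method, URI, and headers for a Copy Blob call (no network I/O)."""
    return {
        "method": "PUT",
        "url": "https://{0}.blob.core.windows.net/{1}/{2}".format(account, container, blob),
        "headers": {
            "x-ms-version": version,          # REST API version for the request
            "x-ms-copy-source": copy_source,  # URL of the committed source blob or file
        },
    }
```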
17.3 Response
The response includes an HTTP status code and a set of response headers. +
17.3.1 Status Code
In version 2012-02-12 and newer, a successful operation returns status code
202 (Accepted). +
In versions before 2012-02-12, a successful operation returns status code
201 (Created). +
For information about status codes, see Status and Error Codes. +
17.3.2 Response Headers
The response for this operation includes the following headers. The response
may also include additional standard HTTP headers. All standard headers
conform to the HTTP/1.1 protocol specification. +
ETag - The ETag of the destination blob.
Last-Modified - The date/time that the destination blob was last modified.
x-ms-request-id - Uniquely identifies the request that was made.
x-ms-version - Indicates the version of the Blob service used to execute the request.
Date - A UTC date/time value generated by the service, indicating the time at which the response was initiated.
x-ms-copy-id: <id> - String identifier for this copy operation. Use with Get Blob Properties to check the status of the copy operation, or pass to Abort Copy Blob to abort a pending copy.
x-ms-copy-status: <success | pending> - State of the copy operation; success means the copy completed, pending means it is still in progress.
Response Status:
HTTP/1.1 202 Accepted
Response Headers:
Last-Modified: <date>
ETag: "0x8CEB669D794AFE2"
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: cc6b209a-b593-4be1-a38a-dde7c106f402
x-ms-version: 2015-02-21
x-ms-copy-id: 1f812371-a41d-49e6-b123-f4b542e851c5
x-ms-copy-status: pending
Date: <date>
17.6 Authorization
This operation can be called by the account owner. For requests made
against version 2013-08-15 and later, a shared access signature that has
permission to write to the destination blob or its container is supported for
copy operations within the same account. Note that the shared access
signature specified on the request applies only to the destination blob. +
                                  Shared Key /      Shared Access   Public object not
                                  Shared Key Lite   Signature       requiring authentication
Destination blob                  Yes               Yes             No
Source blob in same account       Yes               Yes             Yes
Source blob in another account    No                Yes             Yes
Source file in the same account
or another account                No                Yes             N/A
17.7 Remarks
In version 2012-02-12 and newer, the Copy Blob operation can complete
asynchronously. This operation returns a copy ID you can use to check or
abort the copy operation. The Blob service copies blobs on a best-effort
basis. +
The source blob for a copy operation may be a block blob, an append blob, a
page blob, or a snapshot. If the destination blob already exists, it must be
of the same blob type as the source blob. Any existing destination blob will
be overwritten. The destination blob cannot be modified while a copy
operation is in progress. +
In version 2015-02-21 and newer, the source for the copy operation may also
be a file in the Azure File service. If the source is a file, the destination must
be a block blob. +
Multiple pending Copy Blob operations within an account might be processed
sequentially. A destination blob can only have one outstanding copy blob
operation. In other words, a blob cannot be the destination for multiple
pending Copy Blob operations. An attempt to Copy Blob to a destination blob
that already has a copy pending fails with status code 409 (Conflict). +
Only storage accounts created on or after June 7th, 2012 allow the Copy Blob
operation to copy from another storage account. An attempt to copy from
another storage account to an account created before June 7th, 2012 fails
with status code 400 (Bad Request). +
The Copy Blob operation always copies the entire source blob or file; copying
a range of bytes or set of blocks is not supported. +
A Copy Blob operation can take any of the following forms: +
You can copy a source blob to a destination blob with a different name.
The destination blob can be an existing blob of the same blob type (block,
append, or page), or can be a new blob created by the copy operation.
You can copy a source blob to a destination blob with the same name,
effectively replacing the destination blob. Such a copy operation removes any
uncommitted blocks and overwrites the blob's metadata.
You can copy a source file in the Azure File service to a destination blob.
The destination blob can be an existing block blob, or can be a new block blob
created by the copy operation. Copying from files to page blobs or append
blobs is not supported.
You can copy a snapshot over its base blob. By promoting a snapshot to
the position of the base blob, you can restore an earlier version of a blob.
You can copy a snapshot to a destination blob with a different name. The
resulting destination blob is a writeable blob and not a snapshot.
When copying from a page blob, the Blob service creates a destination page
blob of the source blob's length, initially containing all zeros. Then the source
page ranges are enumerated, and non-empty ranges are copied.
For a block blob or an append blob, the Blob service creates a committed blob
of zero length before returning from this operation.
When copying from a block blob, all committed blocks and their block IDs are
copied. Uncommitted blocks are not copied. At the end of the copy operation,
the destination blob will have the same committed block count as the source.
When copying from an append blob, all committed blocks are copied. At the
end of the copy operation, the destination blob will have the same committed
block count as the source.
For all blob types, you can call Get Blob or Get Blob Properties on the
destination blob to check the status of the copy operation. The final blob will
be committed when the copy completes.
When the source of a copy operation provides ETags, if there are any changes
to the source while the copy is in progress, the copy will fail. An attempt to
change the destination blob while a copy is in progress will fail with 409
Conflict. If the destination blob has an infinite lease, the lease ID must be
passed to Copy Blob. Finite-duration leases are not allowed.
The ETag for a block blob changes when the Copy Blob operation is initiated
and when the copy finishes. The ETag for a page blob changes when the Copy
Blob operation is initiated, and continues to change while the copy is in
progress. When a blob is copied, the following system properties are copied to
the destination blob with the same values:
Content-Type
Content-Encoding
Content-Language
Content-Length
Cache-Control
Content-MD5
Content-Disposition
The source blob's committed block list is also copied to the destination blob, if
the blob is a block blob. Any uncommitted blocks are not copied.
The destination blob is always the same size as the source blob, so the value
of the Content-Length header for the destination blob matches that for the
source blob.
When the source blob and destination blob are the same, Copy Blob removes
any uncommitted blocks. If metadata is specified in this case, the existing
metadata is overwritten with the new metadata.
Copying a Leased Blob
The Copy Blob operation only reads from the source blob so the lease state of
the source blob does not matter. However, the Copy Blob operation saves the
ETag of the source blob when the copy is initiated. If the ETag value changes
before the copy completes, the copy fails. You can prevent changes to the
source blob by leasing it during the copy operation.
If the destination blob has an active infinite lease, you must specify its lease
ID in the call to the Copy Blob operation. If the lease you specify is an active
finite-duration lease, this call fails with a status code 412 (Precondition Failed).
While the copy is pending, any lease operation on the destination blob will fail
with status code 409 (Conflict). An infinite lease on the destination blob is
locked in this way during the copy operation whether you are copying to a
destination blob with a different name from the source, copying to a
destination blob with the same name as the source, or promoting a snapshot
over its base blob. If the client specifies a lease ID on a blob that does not yet
exist, the Blob service will return status code 412 (Precondition Failed) for
requests made against version 2013-08-15 and later; for prior versions the
Blob service will return status code 201 (Created).
Copying Snapshots
When a source blob is copied, any snapshots of the source blob are not copied
to the destination. When a destination blob is overwritten with a copy, any
snapshots associated with the destination blob stay intact under its name.
You can perform a copy operation to promote a snapshot blob over its base
blob. In this way you can restore an earlier version of a blob. The snapshot
remains, but its destination is overwritten with a copy that can be both read
and written.
Working with a Pending Copy (version 2012-02-12 and newer)
The Copy Blob operation completes the copy asynchronously. Use the
following table to determine the next step based on the status code returned
by Copy Blob:
Status Code - Meaning
202 (Accepted), x-ms-copy-status: success - Copy completed successfully.
202 (Accepted), x-ms-copy-status: pending - Copy has not yet completed; poll Get Blob Properties to check the outcome (success, failed, or aborted).
Failure status code - Copy failed.
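The pending-copy workflow can be sketched as a polling loop. Here `get_copy_status` is a placeholder for a Get Blob Properties call that reads x-ms-copy-status; it is not a real SDK API.

```python
import time

def wait_for_copy(get_copy_status, interval_seconds=1.0, max_polls=60):
    """Poll until the copy status is no longer 'pending', then return it."""
    for _ in range(max_polls):
        status = get_copy_status()  # "pending", "success", "failed", or "aborted"
        if status != "pending":
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("copy still pending after polling limit")
```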
During and after a Copy Blob operation, the properties of the destination
blob contain the copy ID of the Copy Blob operation and the URL of the source
blob. When the copy completes, the Blob service writes the time and
outcome value (success, failed, or aborted) to the destination blob's properties.
When you copy a source blob to a destination blob with a different name
within the same account, you use additional storage resources for the new
blob, so the copy operation results in a charge against the storage account's
capacity usage for those additional resources. However, if the source and
destination blob names are the same within the same account (for example,
when you promote a snapshot to its base blob), no additional charge is
incurred other than the extra copy metadata stored in version 2012-02-12
and newer.
When you promote a snapshot to replace its base blob, the snapshot and
base blob become identical. They share blocks or pages, so the copy
operation does not result in an additional charge against the storage
account's capacity usage. However, if you copy a snapshot to a destination
blob with a different name, an additional charge is incurred for the storage
resources used by the new blob that results. Two blobs with different names
cannot share blocks or pages even if they are identical. For more information
about snapshot cost scenarios, see Understanding How Snapshots Accrue
Charges.
DNS Level: Load balancing for traffic to different cloud services located in
different data centers, to different Azure websites located in different data
centers, or to external endpoints. This is done with Azure Traffic Manager and the
Round Robin load balancing method.
18.1 Traffic Manager load balancing for cloud services and websites
Traffic Manager allows you to control the distribution of user traffic to
endpoints, which can include cloud services, websites, external sites, and
other Traffic Manager profiles. Traffic Manager works by applying an
intelligent policy engine to Domain Name System (DNS) queries for the
domain names of your Internet resources. Your cloud services or websites
can be running in different datacenters across the world.+
You must use either REST or Windows PowerShell to configure external
endpoints or Traffic Manager profiles as endpoints.+
Traffic Manager uses three load-balancing methods to distribute traffic:
Failover: Use this method when you want to use a primary endpoint for all
traffic, but provide backups in case the primary becomes unavailable.
Performance: Use this method when you have endpoints in different geographic
locations and you want requesting clients to use the "closest" endpoint in terms
of the lowest network latency.
Round Robin: Use this method when you want to distribute load across a set
of cloud services in the same datacenter or across cloud services or websites in
different datacenters.
For more information, see About Traffic Manager Load Balancing Methods.+
The following diagram shows an example of the Round Robin load balancing
method for distributing traffic between different cloud services.+
The basic process is the following:+
1.
2.
3.
Traffic Manager chooses the next cloud service in the Round Robin list and
sends back the DNS name. The Internet client's DNS server resolves the name to
an IP address and sends it to the Internet client.
4.
The Internet client connects with the cloud service chosen by Traffic Manager.
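The Round Robin selection step above can be modeled in a few lines. Traffic Manager itself works at the DNS level; this sketch only illustrates cycling through endpoint DNS names, and the endpoint names are invented.

```python
from itertools import cycle

class RoundRobinProfile:
    """Cycle through endpoint DNS names, one per incoming DNS query."""
    def __init__(self, endpoints):
        self._cycle = cycle(endpoints)

    def resolve(self):
        # Return the next endpoint's DNS name for the client's DNS server to resolve.
        return next(self._cycle)
```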
For more information, see Azure Load Balancer. For the steps to create a
load-balanced set, see Configure a load-balanced set.+
Azure can also load balance within a cloud service or virtual network. This is
known as internal load balancing and can be used in the following ways:+
19.1 Concepts
By default, minimal monitoring is provided for a new cloud service using
performance counters gathered from the host operating system for the role
instances (virtual machines). The minimal metrics are limited to CPU
Percentage, Data In, Data Out, Disk Read Throughput, and Disk Write
Throughput. By configuring verbose monitoring, you can receive additional
metrics based on performance data within the virtual machines (role
instances). The verbose metrics enable closer analysis of issues that occur
during application operations.+
By default, performance counter data from role instances is sampled and
transferred from the role instance at 3-minute intervals. When you enable
verbose monitoring, the raw performance counter data is aggregated for
each role instance and across role instances for each role at intervals of 5
minutes, 1 hour, and 12 hours. The aggregated data is purged after 10 days.
After you enable verbose monitoring, the aggregated monitoring data is
stored in tables in your storage account. To enable verbose monitoring for a
role, you must configure a diagnostics connection string that links to the
storage account. You can use different storage accounts for different roles.+
Enabling verbose monitoring increases your storage costs related to data
storage, data transfer, and storage transactions. Minimal monitoring does
not require a storage account. The data for the metrics that are exposed at
the minimal monitoring level are not stored in your storage account, even if
you set the monitoring level to verbose.+
19.2.1
Create a classic storage account to store the monitoring data. You can use
different storage accounts for different roles. For more information, see How to
create a storage account.
Enable Azure Diagnostics for your cloud service roles. See Configuring Diagnostics for Cloud Services.
Note
Projects targeting Azure SDK 2.5 did not automatically include the
diagnostics connection string in the project template. For these projects, you
need to manually add the diagnostics connection string to the Role
configuration.+
To manually add a diagnostics connection string to the Role configuration:
1.
2.
Double-click on the Role to open the Role designer and select the Settings
tab
3.
4.
If this setting is not present, click on the Add Setting button to add it to the
configuration and change the type for the new setting to ConnectionString
5.
Set the value for the connection string by clicking the ... button. This
will open a dialog allowing you to select a storage account.
19.2.2
1.
In the Azure classic portal, open the Configure page for the cloud service
deployment.
2.
Change the monitoring level to Verbose.
3.
Click Save.
After you turn on verbose monitoring, you should start seeing the monitoring
data in the Azure classic portal within the hour.+
The raw performance counter data and aggregated monitoring data are
stored in the storage account in tables qualified by the deployment ID for the
roles. +
triggered. For more information, see How to: Receive Alert Notifications and
Manage Alert Rules in Azure.+
In the Azure classic portal, open the Monitor page for the cloud service.
By default, the metrics table displays a subset of the available metrics. The
illustration shows the default verbose metrics for a cloud service, which is
limited to the Memory\Available MBytes performance counter, with data
aggregated at the role level. Use Add Metrics to select additional aggregate
and role-level metrics to monitor in the Azure classic portal.
2.
b.
options.
Select the check box for each monitoring option you want to
display.
You can display up to 50 metrics in the metrics table.
Tip
3.
4.
To delete a metric from the metrics table, click the metric to select it, and
then click Delete Metric. (You only see Delete Metric when you have a metric
selected.)
19.4.2
To add custom metrics to the metrics table
The Verbose monitoring level provides a list of default metrics that you can
monitor in the portal. In addition to these, you can monitor any custom
metrics or performance counters defined by your application through the
portal.+
The following steps assume that you have turned on Verbose monitoring
level and have configured your application to collect and transfer custom
performance counters. +
To display the custom performance counters in the portal you need to update
the configuration in wad-control-container:+
1.
2.
3.
Download the configuration file for your role instance and update it to
include any custom performance counters. For example, to monitor Disk Write
Bytes/sec for the C drive, add the following under the
PerformanceCounters\Subscriptions node:
Copy
<PerformanceCounterConfiguration>
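The snippet above is truncated. A hedged sketch of a complete element, following the Azure Diagnostics performance-counter schema; the counter path and sample rate shown are illustrative:

```xml
<PerformanceCounterConfiguration>
  <CounterSpecifier>\LogicalDisk(C:)\Disk Write Bytes/sec</CounterSpecifier>
  <SampleRateInSeconds>180</SampleRateInSeconds>
</PerformanceCounterConfiguration>
```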
4.
Save the changes and upload the configuration file back to the same location
overwriting the existing file in the blob.
5.
Toggle to Verbose mode in the Azure classic portal configuration. If you were
already in Verbose mode, you will have to toggle to Minimal and back to Verbose.
6.
The custom performance counter will now be available in the Add Metrics
dialog box.
2.
To switch between displaying relative values (final value only for each
metric) and absolute values (Y axis displayed), select Relative or Absolute at
the top of the chart.
3.
To change the time range the metrics chart displays, select 1 hour, 24
hours, or 7 days at the top of the chart.
On the dashboard metrics chart, the method for plotting metrics is different. A
standard set of metrics is available, and metrics are added or removed by
selecting the metric header.
19.5.1
1.
2.
To plot a new metric, select the check box for the metric in the chart
headers. On a narrow display, click the down arrow by n metrics to plot a
metric the chart header area can't display.
To delete a metric that is plotted on the chart, clear the check box by
its header.
3.
4.
19.6 How to: Access verbose monitoring data outside the Azure classic
portal
Verbose monitoring data is stored in tables in the storage accounts that you
specify for each role. For each cloud service deployment, six tables are
created for the role. Two tables are created for each aggregation interval
(5 minutes, 1 hour, and 12 hours). One of these tables stores role-level
aggregations; the other table stores aggregations for role instances.
The table names have the following format:
Copy
WAD<deploymentID>PT<aggregationInterval>[R|RI]Table
where:
role-level aggregations = R
role-instance aggregations = RI
For example, the following tables would store verbose monitoring data
aggregated at 1-hour intervals:
Copy
WAD<deploymentID>PT1HRTable
WAD<deploymentID>PT1HRITable
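As a sketch, the naming rule above can be expressed as a small helper; the deployment ID and interval code used in the test are illustrative.

```python
def wad_table_name(deployment_id, interval, per_instance=False):
    """Build a verbose-monitoring table name: WAD<deploymentID><interval>[R|RI]Table."""
    suffix = "RI" if per_instance else "R"  # R = role-level, RI = role-instance
    return "WAD" + deployment_id + interval + suffix + "Table"
```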
You can configure your Visual Studio Team Services team projects to
automatically build and deploy to Azure web apps or cloud services. (For
information on how to set up a continuous build and deploy system using an
on-premises Team Foundation Server, see Continuous Delivery for Cloud
Services in Azure.)+
This tutorial assumes you have Visual Studio 2013 and the Azure SDK
installed. If you don't already have Visual Studio 2013, download it by
choosing the Get started for free link at www.visualstudio.com. Install the
Azure SDK from here.+
Note
You need a Visual Studio Team Services account to complete this tutorial:
you can open a Visual Studio Team Services account for free.
To set up a cloud service to automatically build and deploy to Azure by using Visual Studio Team Services, follow these steps.
1.
In Visual Studio, open the solution you want to deploy, or create a new one. You can deploy a web app or a cloud service (Azure Application) by following the steps in this walkthrough. If you want to create a new solution, create a new Azure Cloud Service project, or a new ASP.NET MVC project. Make sure that the project targets .NET Framework 4 or 4.5. If you are creating a cloud service project, add an ASP.NET MVC web role and a worker role, and choose Internet Application for the web role when prompted. If you want to create a web app, choose the ASP.NET Web Application project template, and then choose MVC. See Create an ASP.NET web app in Azure App Service.
20.3.1.1.1 Note
Visual Studio Team Services only supports CI deployments of Visual Studio Web Application projects at this time. Web Site projects are out of scope.
2.
Open the context menu for the solution, and choose Add Solution to Source Control.
3.
Accept or change the defaults and choose the OK button. Once the process completes, source control icons appear in Solution Explorer.
4.
Open the shortcut menu for the solution, and choose Check In.
5.
Note the options to include or exclude specific changes when you check in. If desired changes are excluded, choose the Include All link.
1.
Now that you have a VS Team Services team project with some source code in it, you are ready to connect your team project to Azure. In the Azure classic portal, select your cloud service or web app, or create a new one by choosing the + icon at the bottom left and choosing Cloud Service or Web App and then Quick Create. Choose the Set up publishing with Visual Studio Team Services link.
2.
In the wizard, type the name of your Visual Studio Team Services account in the textbox and click the Authorize Now link. You might be asked to sign in.
After your project is linked, you will see some instructions for checking in
changes to your Visual Studio Team Services team project. On your next
check-in, Visual Studio Team Services will build and deploy your project to
Azure. Try this now by clicking the Check In from Visual Studio link, and
then the Launch Visual Studio link (or the equivalent Visual Studio button
at the bottom of the portal screen).
3.
In Solution Explorer, open up a file and change it. For example, change the file _Layout.cshtml under the Views\Shared folder in an MVC web role.
4.
Edit the logo for the site and choose Ctrl+S to save the file.
7.
Choose the Home button to return to the Team Explorer home page.
8.
Team Explorer shows that a build has been triggered for your check-in.
10.
While the build is in-progress, take a look at the build definition that was created when you linked TFS to Azure by using the wizard. Open the shortcut menu for the build definition and choose Edit Build Definition.
In the Trigger tab, you will see that the build definition is set to build on
every check-in by default.
In the Process tab, you can see the deployment environment is set to the
name of your cloud service or web app. If you are working with web apps, the
properties you see will be different from those shown here.
11.
Specify values for the properties if you want different values than the defaults. The properties for Azure publishing are in the Deployment section, which includes the following properties: Allow Untrusted Certificates, Allow Upgrade, Do Not Delete, Path to Deployment Settings, Sharepoint Deployment Environment, and Azure Deployment Environment.
If you are using multiple service configurations (.cscfg files), you can
13.
Summary, including any test results from associated unit test projects.
14.
In the Azure classic portal, you can view the associated deployment on
15.
Browse to your site's URL. For a web app, just click the Browse button on the command bar. For a cloud service, choose the URL in the Quick Glance section of the Dashboard page that shows the Staging environment for a cloud service. Deployments from continuous integration for cloud services are published to the Staging environment by default. You can change this by setting the Alternate Cloud Service Environment property to Production. This screenshot shows where the site URL is on the cloud service's dashboard page.
For cloud services, if you make other changes to your project, you trigger more builds, and you will accumulate multiple deployments. The latest one is marked as Active.
3.
Add some unit tests. To get started, try a dummy test that will always pass.
```
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace UnitTestProject1
{
[TestClass]
public class UnitTest1
{
[TestMethod]
[ExpectedException(typeof(NotImplementedException))]
public void TestMethod1()
{
throw new NotImplementedException();
}
}
}
```
4.
Edit the build definition, choose the Process tab, and expand the Test node.
5.
Set Fail build on test failure to True. This means that the deployment won't occur unless the tests pass.
9.
Try creating a test that will fail. Add a new test by copying the first one, rename it, and comment out the line of code that states NotImplementedException is an expected exception.

```
[TestMethod]
//[ExpectedException(typeof(NotImplementedException))]
public void TestMethod2()
{
    throw new NotImplementedException();
}
```
Note
The procedures in this task apply to Azure Cloud Services; for App Services, see this article.
This task uses a production deployment. Information on using a staging deployment is provided at the end of this topic.
Read this article first if you have not yet created a cloud service.
21.1.1.1.2 Note
You need an SSL certificate from a certificate authority (CA), a trusted third party who issues certificates for this purpose. If you do not already have one, you need to obtain one from a company that sells SSL certificates.
The certificate must meet the following requirements for SSL certificates in Azure:
The certificate's subject name must match the domain used to access the cloud service. You cannot obtain an SSL certificate from a certificate authority (CA) for the cloudapp.net domain. You must acquire a custom domain name to use when accessing your service. When you request a certificate from a CA, the certificate's subject name must match the custom domain name used to access your application. For example, if your custom domain name is contoso.com, you would request a certificate from your CA for *.contoso.com or www.contoso.com.
For test purposes, you can create and use a self-signed certificate. A self-signed certificate is not authenticated through a CA and can use the cloudapp.net domain as the website URL. For example, the following task uses a self-signed certificate in which the common name (CN) used in the certificate is sslexample.cloudapp.net.
Next, you must include information about the certificate in your service definition and service configuration files.

```xml
<WebRole name="CertificateTesting" vmsize="Small">
  ...
  <Certificates>
    <Certificate name="SampleCertificate"
                 storeLocation="LocalMachine"
                 storeName="CA"
                 permissionLevel="limitedOrElevated" />
  </Certificates>
  ...
</WebRole>
```

The Certificates section defines the name of our certificate, its location, and the name of the store where it is located.
Permissions (the permissionLevel attribute) can be set to one of the following values: limitedOrElevated (all role processes can access the private key) or elevated (only elevated processes can access the private key).
2.
In your service definition file, add an InputEndpoint element to enable HTTPS:

```xml
<WebRole name="CertificateTesting" vmsize="Small">
  ...
  <Endpoints>
    <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SampleCertificate" />
  </Endpoints>
  ...
</WebRole>
```
3.
In your service definition file, add a Binding element within the Sites section. This section adds an HTTPS binding to map the endpoint to your site:

```xml
<WebRole name="CertificateTesting" vmsize="Small">
  ...
  <Sites>
    <Site name="Web">
      <Bindings>
        <Binding name="HttpsIn" endpointName="HttpsIn" />
      </Bindings>
    </Site>
  </Sites>
  ...
</WebRole>
```
All the required changes to the service definition file have been completed,
but you still need to add the certificate information to the service
configuration file.
4.
In your service configuration file, add the certificate information to your role's Certificates section:

```xml
<Role name="Deployment">
  ...
  <Certificates>
    <Certificate name="SampleCertificate"
                 thumbprint="9427befa18ec6865a9ebdc79d4c38de50e6316ff"
                 thumbprintAlgorithm="sha1" />
    <Certificate name="CAForSampleCertificate"
                 thumbprint="79d4c38de50e6316ff9427befa18ec6865a9ebdc"
                 thumbprintAlgorithm="sha1" />
  </Certificates>
  ...
</Role>
```
(The preceding example uses sha1 for the thumbprint algorithm. Specify the appropriate value for your certificate's thumbprint algorithm.)
Now that the service definition and service configuration files have been updated, package your deployment for uploading to Azure. If you are using cspack, don't use the /generateConfigurationFile flag, as that overwrites the certificate information you inserted.
1.
In the Azure classic portal, select your deployment, then click the link under Site URL.
2.
In your web browser, modify the link to use https instead of http, and then visit the page.
21.5.1.1.1 Important
Your storage account key is similar to the root password for your storage account. Always be careful to protect your account key. Avoid distributing it
The interval over which the SAS is valid, including the start time and the
expiry time.
The permissions granted by the SAS. For example, a SAS on a blob might
grant a user read and write permissions to that blob, but not delete permissions.
The protocol over which Azure Storage will accept the SAS. You can use this
optional parameter to restrict access to clients using HTTPS.
A common scenario where a SAS is useful is a service where users read and write their own data to your storage account. In a scenario where a storage account stores user data, there are two typical design patterns:
1. Clients upload and download data via a front-end proxy service, which performs authentication. This front-end proxy service has the advantage of allowing validation of business rules, but for large amounts of data or high-volume transactions, creating a service that can scale to match demand may be expensive or difficult.
2. A lightweight service authenticates the client as needed and then generates a SAS. Once the client receives the SAS, they can access storage account resources directly with the permissions defined by the SAS and for the interval allowed by the SAS. The SAS mitigates the need for routing all data through the front-end proxy service.
When you copy a blob to another blob that resides in a different storage account, you must use a SAS to authenticate the source blob. With version 2015-04-05, you can optionally use a SAS to authenticate the destination blob as well.
When you copy a file to another file that resides in a different storage account, you must use a SAS to authenticate the source file. With version 2015-04-05, you can optionally use a SAS to authenticate the destination file as well.
When you copy a blob to a file, or a file to a blob, you must use a SAS to authenticate the source object, even if the source and destination objects reside within the same storage account.
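As a hedged sketch of the cross-account case, a copy can be started by addressing the source blob with a SAS URI while the destination uses account credentials. The account names, key, and SAS token below are placeholders, and exact overloads may vary by client library version:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

class CrossAccountCopy
{
    static void Main()
    {
        // Source blob in a different account, addressed by a SAS URI that
        // grants at least read permission (placeholder URI and token).
        Uri sourceSasUri = new Uri(
            "https://sourceaccount.blob.core.windows.net/container/source.txt?sv=2015-04-05&sr=b&sp=r&sig=placeholder");
        CloudBlockBlob sourceBlob = new CloudBlockBlob(sourceSasUri);

        // Destination blob in our own account, authenticated with the account key.
        StorageCredentials destCredentials = new StorageCredentials("destaccount", "account-key");
        CloudBlockBlob destBlob = new CloudBlockBlob(
            new Uri("https://destaccount.blob.core.windows.net/container/copy.txt"),
            destCredentials);

        // The copy request passes the source SAS URI to the service,
        // which uses it to authenticate against the source account.
        destBlob.StartCopy(sourceBlob);
    }
}
```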
Service SAS. The service SAS delegates access to a resource in just one of the storage services: the Blob, Queue, Table, or File service. See Constructing a Service SAS and Service SAS Examples for in-depth information about constructing the service SAS token.
Note that the SAS token is a string generated on the client side (see the SAS examples section below for code examples). The SAS token generated by the storage client library is not tracked by Azure Storage in any way. You can create an unlimited number of SAS tokens on the client side.
When a client provides a SAS URI to Azure Storage as part of a request, the service checks the SAS parameters and signature to verify that it is valid for authenticating the request. If the service verifies that the signature is valid, then the request is authenticated. Otherwise, the request is declined with error code 403 (Forbidden).
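As a hedged sketch of how a client might surface that 403, assuming the .NET storage client library (the container reference passed in is a placeholder for a container addressed via a SAS):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasErrorHandling
{
    static void TryListWithSas(CloudBlobContainer container)
    {
        try
        {
            // Any storage operation made with an invalid or expired SAS
            // fails with HTTP 403 (Forbidden).
            foreach (IListBlobItem item in container.ListBlobs())
            {
                Console.WriteLine(item.Uri);
            }
        }
        catch (StorageException e)
            when (e.RequestInformation.HttpStatusCode == 403)
        {
            Console.WriteLine("SAS was rejected: renew or reissue the token.");
        }
    }
}
```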
API version. An optional parameter that specifies the storage service version to use to execute the request.
Start time. This is the time at which the SAS becomes valid. The start time for a shared access signature is optional; if omitted, the SAS is effective immediately. It must be expressed in UTC (Coordinated Universal Time), with a special UTC designator ("Z"), for example 1994-11-05T13:15:30Z.
Expiry time. This is the time after which the SAS is no longer valid. Best practices recommend that you either specify an expiry time for a SAS, or associate it with a stored access policy. It must be expressed in UTC (Coordinated Universal Time), with a special UTC designator ("Z"), for example 1994-11-05T13:15:30Z (see more below).
Signature. The signature is an HMAC computed over the string-to-sign and key using the SHA256 algorithm, and then encoded using Base64. It's used to authenticate the SAS.
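To make the signature step concrete, here is a minimal sketch of computing an HMAC-SHA256 over a string-to-sign and Base64-encoding the result. The string-to-sign and key below are simplified placeholders, not the exact format Azure Storage uses:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SasSignatureSketch
{
    static string Sign(string stringToSign, byte[] key)
    {
        // HMAC-SHA256 over the UTF-8 string-to-sign, then Base64-encode.
        using (var hmac = new HMACSHA256(key))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            return Convert.ToBase64String(hash);
        }
    }

    static void Main()
    {
        // Placeholder fields: permissions, (empty) start time, expiry in UTC "Z" format.
        string expiry = DateTime.UtcNow.AddHours(1).ToString("yyyy-MM-ddTHH:mm:ssZ");
        string stringToSign = "rw\n" + "\n" + expiry + "\n";

        // Placeholder key ("sample-key-not-real"); a real account key is a Base64 secret.
        byte[] key = Convert.FromBase64String("c2FtcGxlLWtleS1ub3QtcmVhbA==");
        Console.WriteLine(Sign(stringToSign, key));
    }
}
```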
22.6.2
Services. An account SAS can delegate access to one or more of the storage services. For example, you can create an account SAS that delegates access to the Blob and File services. Or you can create a SAS that delegates access to all four services (Blob, Queue, Table, and File).
Storage resource types. An account SAS applies to one or more classes of
storage resources, rather than a specific resource. You can create an account SAS
to delegate access to:
Service-level APIs, which are called against the storage account resource. Examples include Get/Set Service Properties and Get Service Stats.
Container-level APIs, which are called against the container objects for each service: blob containers, queues, tables, and file shares. Examples include Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, and List Blobs/Files and Directories.
Object-level APIs, which are called against blobs, queue messages, table entities, and files. For example, Put Blob, Query Entity, Get Messages, and Create File.
22.6.3
Storage resource. Storage resources for which you can delegate access with a service SAS include blob containers and blobs, file shares and files, queues and queue messages, and tables and table entities.
https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D
The parts of this URI are as follows:
Blob URI: https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt. The address of the blob. Note that using HTTPS is highly recommended.
Storage services version: sv=2015-04-05. For storage services version 2012-02-12 and later, this parameter indicates the version to use.
Start time: st=2015-04-29T22%3A18%3A26Z. Specified in UTC time. If you want the SAS to be valid immediately, omit the start time.
Expiry time: se=2015-04-30T02%3A23%3A26Z. Specified in UTC time.
Resource: sr=b. The resource is a blob.
Permissions: sp=rw. The permissions granted by the SAS include Read (r) and Write (w).
IP range: sip=168.1.5.60-168.1.5.70. The range of IP addresses from which a request will be accepted.
Protocol: spr=https. Only requests using HTTPS are permitted.
Signature: sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D. Used to authenticate access to the blob. The signature is an HMAC computed over a string-to-sign and key using the SHA256 algorithm, and then encoded using Base64 encoding.
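A small sketch of pulling these parameters back out of a SAS URI on the client side, using plain string handling with no storage library required (the sig value below is a placeholder):

```csharp
using System;
using System.Collections.Generic;

class SasParser
{
    // Splits the query string of a SAS URI into its parameter map.
    static Dictionary<string, string> ParseSas(string sasUri)
    {
        var parameters = new Dictionary<string, string>();
        string query = new Uri(sasUri).Query.TrimStart('?');
        foreach (string pair in query.Split('&'))
        {
            int eq = pair.IndexOf('=');
            if (eq > 0)
            {
                parameters[pair.Substring(0, eq)] = pair.Substring(eq + 1);
            }
        }
        return parameters;
    }

    static void Main()
    {
        string sasUri = "https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt"
            + "?sv=2015-04-05&sr=b&sp=rw&spr=https&sig=placeholder";
        var p = ParseSas(sasUri);
        Console.WriteLine(p["sv"]);  // storage services version: 2015-04-05
        Console.WriteLine(p["sp"]);  // permissions: rw
    }
}
```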
And here is an example of an account SAS that uses the same common parameters on the token. Since these parameters are described above, they are not described here. Only the parameters that are specific to account SAS are described below.

https://myaccount.blob.core.windows.net/?restype=service&comp=properties&sv=2015-04-05&ss=bf&srt=s&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=F%6GRVAZ5Cdj2Pw4tgU7IlSTkWgn7bUkkAg8P6HESXwmf%4B
Resource URI: https://myaccount.blob.core.windows.net/?restype=service&comp=properties. The Blob service endpoint, with parameters for getting service properties (when called with GET) or setting service properties (when called with SET).
Services: ss=bf. The SAS applies to the Blob and File services.
Resource types: srt=s. The SAS applies to service-level operations.
Permissions: sp=rw. The permissions grant access to read and write operations.
Ad hoc SAS: When you create an ad hoc SAS, the start time, expiry time, and permissions for the SAS are all specified on the SAS URI (or implied, in the case where start time is omitted). This type of SAS may be created as an account SAS or a service SAS.
22.8.1.1.1 Note
Currently, an account SAS must be an ad hoc SAS. Stored access policies are not yet supported for account SAS.
The difference between the two forms is important for one key scenario: revocation. A SAS is a URL, so anyone who obtains the SAS can use it, regardless of who requested it to begin with. If a SAS is published publicly, it can be used by anyone in the world. A SAS that is distributed is valid until one of four things happens:
1.
The expiry time specified on the SAS is reached.
2.
The expiry time specified on the stored access policy referenced by the SAS is reached (if a stored access policy is referenced, and if it specifies an expiry time). This can either occur because the interval elapses, or because you have modified the stored access policy to have an expiry time in the past, which is one way to revoke the SAS.
3.
The stored access policy referenced by the SAS is deleted, which is another way to revoke the SAS. Note that if you recreate the stored access policy with exactly the same name, all existing SAS tokens will again be valid according to the permissions associated with that stored access policy (assuming that the expiry time on the SAS has not passed). If you are intending to revoke the SAS, be sure to use a different name if you recreate the access policy with an expiry time in the future.
4.
The account key that was used to create the SAS is regenerated. Note that doing this will cause all application components using that account key to fail to authenticate until they are updated to use either the other valid account key or the newly regenerated account key.
22.8.1.1.2 Important
A shared access signature URI is associated with the account key used to create the signature, and the associated stored access policy (if any). If no stored access policy is specified, the only way to revoke a shared access signature is to change the account key.
A connection string that includes a shared access signature uses the following format (include only the endpoints for the services the SAS grants access to):

BlobEndpoint=myBlobEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
FileEndpoint=myFileEndpoint;
SharedAccessSignature=sasToken

Here's an example of a connection string that includes a service SAS for Blob storage:

BlobEndpoint=https://storagesample.blob.core.windows.net;SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D
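As a hedged sketch, such a connection string can be handed straight to the .NET storage client library. The endpoint and SAS token below are placeholders:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasConnectionString
{
    static void Main()
    {
        // Placeholder endpoint and SAS token; substitute real values.
        string connectionString =
            "BlobEndpoint=https://storagesample.blob.core.windows.net;"
            + "SharedAccessSignature=sv=2015-04-05&sr=b&sp=r&sig=placeholder";

        // Parse builds a storage account object whose credentials are the SAS.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        Console.WriteLine(blobClient.BaseUri);
    }
}
```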
22.9.3 Account SAS example
Here's an example of a connection string that includes an account SAS for Blob and File storage. Note that endpoints for both services are specified:

BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl

And here's an example of the same connection string with URL encoding:

BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl
22.9.4 Using a SAS in a constructor or method
Several Azure Storage client library constructors and method overloads offer a SAS parameter, so that you can authenticate a request to the service with a SAS.
For example, here a SAS URI is used to create a reference to a block blob. The SAS provides the only credentials needed for the request. The block blob reference is then used for a write operation:

```csharp
string sasUri = "https://storagesample.blob.core.windows.net/sample-container/" +
    "sampleBlob.txt?sv=2015-07-08&sr=b&sig=39Up9JzHkxhUIhFEjEH9594DJxe7w6cIRCg0V6lCGSo%3D" +
    "&se=2016-10-18T21%3A51%3A37Z&sp=rcw";

// Create a reference to the blob; the SAS in the URI is the only credential used.
CloudBlockBlob blob = new CloudBlockBlob(new Uri(sasUri));

// Create operation: Upload a blob with the specified name to the container.
// If the blob does not exist, it will be created. If it does exist, it will be overwritten.
try
{
    MemoryStream msWrite = new MemoryStream(Encoding.UTF8.GetBytes(blobContent));
    msWrite.Position = 0;
    using (msWrite)
    {
        await blob.UploadFromStreamAsync(msWrite);
    }
}
catch (StorageException e)
{
    Console.WriteLine(e.Message);
    Console.ReadLine();
    throw;
}
```
22.10
When you use shared access signatures in your applications, you need to be aware of two potential risks:
If a SAS is leaked, it can be used by anyone who obtains it, which can potentially compromise your storage account.
If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, the application's functionality may be hindered.
The following recommendations for using shared access signatures will help balance these risks:
5.
Be careful with SAS start time. If you set the start time for a SAS to now, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past, or don't set it at all, which will make it valid immediately in all cases. The same generally applies to expiry time as well: remember that you may observe up to 15 minutes of clock skew in either direction on any request. Note that for clients using a REST version prior to 2012-02-12, the maximum duration for a SAS that does not reference a stored access policy is 1 hour, and any policies specifying a longer term will fail.
7.
Understand that your account will be billed for any usage, including that done with SAS. If you provide write access to a blob, a user may choose to upload a 200 GB blob. If you've given them read access as well, they may choose to download it 10 times, incurring 2 TB in egress costs for you. Again, provide limited permissions to help mitigate the potential of malicious users. Use short-lived SAS to reduce this threat (but be mindful of clock skew on the end time).
8.
Validate data written using SAS. When a client application writes data to your storage account, keep in mind that there can be problems with that data. If your application requires that data to be validated or authorized before it is ready to use, you should perform this validation after the data is written and before it is used by your application. This practice also protects against corrupt or malicious data being written to your account, either by a user who properly acquired the SAS, or by a user exploiting a leaked SAS.
9.
Don't always use SAS. Sometimes the risks associated with a particular operation against your storage account outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your storage account after performing business rule validation, authentication, and auditing. Also, sometimes it's simpler to manage access in other ways. For example, if you want to make all blobs in a container publicly readable, you can make the container Public, rather than providing a SAS to every client for access.
10.
Use Storage Analytics to monitor your application. You can use logging and metrics to observe any spike in authentication failures due to an outage in your SAS provider service or to the inadvertent removal of a stored access policy. See the Azure Storage Team Blog for additional information.
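The clock-skew guidance above (back-dating the SAS start time) can be sketched as follows, assuming the .NET storage client library's SharedAccessBlobPolicy type:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Blob;

class ClockSkewSafePolicy
{
    static SharedAccessBlobPolicy CreatePolicy()
    {
        return new SharedAccessBlobPolicy()
        {
            // Back-date the start time by 15 minutes to tolerate clock skew
            // between the client's clock and the storage service's clock.
            SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-15),
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
            Permissions = SharedAccessBlobPermissions.Read
        };
    }
}
```

Omitting SharedAccessStartTime entirely has the same effect of making the SAS valid immediately in all cases.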
22.11 SAS examples
Below are some examples of both types of shared access signatures, account SAS and service SAS.
To run these examples, you'll need to download and reference these packages:
Azure Storage Client Library for .NET, version 6.x or later (to use account SAS).
Azure Configuration Manager
For additional examples that show how to create and test a SAS, see Azure Code Samples for Storage.
22.11.1 Example: Create and use an account SAS
The following code example creates an account SAS that is valid for the Blob and File services, and gives the client read, write, and list permissions to access service-level APIs. The account SAS restricts the protocol to HTTPS, so the request must be made with HTTPS.

```csharp
static string GetAccountSASToken()
{
    // To create the account SAS, you need to use your shared key credentials. Modify for your account.
    const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);

    // Create a new access policy for the account.
    SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
    {
        Permissions = SharedAccessAccountPermissions.Read |
                      SharedAccessAccountPermissions.Write |
                      SharedAccessAccountPermissions.List,
        Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
        ResourceTypes = SharedAccessAccountResourceTypes.Service,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Protocols = SharedAccessProtocol.HttpsOnly
    };

    // Return the SAS token.
    return storageAccount.GetSharedAccessSignature(policy);
}
```
To use the account SAS to access service-level APIs for the Blob service, construct a Blob client object using the SAS and the Blob storage endpoint for your storage account.

```csharp
static void UseAccountSAS(string sasToken)
{
    // Create new storage credentials using the SAS token.
    StorageCredentials accountSAS = new StorageCredentials(sasToken);

    // Use these credentials and the account name to create a Blob service client.
    CloudStorageAccount accountWithSAS = new CloudStorageAccount(accountSAS,
        "account-name", endpointSuffix: null, useHttps: true);
    CloudBlobClient blobClientWithSAS = accountWithSAS.CreateCloudBlobClient();

    // Now set the service properties for the Blob client created with the SAS.
    blobClientWithSAS.SetServiceProperties(new ServiceProperties()
    {
        HourMetrics = new MetricsProperties()
        {
            MetricsLevel = MetricsLevel.ServiceAndApi,
            RetentionDays = 7,
            Version = "1.0"
        }
    });

    // The permissions granted by the account SAS also permit you to retrieve service properties.
    ServiceProperties serviceProperties = blobClientWithSAS.GetServiceProperties();
    Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
    Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
    Console.WriteLine(serviceProperties.HourMetrics.Version);
}
```
22.11.2 Example: Create a stored access policy
The following code creates a stored access policy on a container. You can use the access policy to specify constraints for a service SAS on the container or its blobs.

```csharp
private static async Task CreateSharedAccessPolicyAsync(CloudBlobContainer container,
    string policyName)
{
    // Create a new shared access policy and define its constraints.
    // The access policy provides create, write, read, list, and delete permissions.
    SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
    {
        // When the start time for the SAS is omitted, the start time is assumed to be
        // the time when the storage service receives the request. Omitting the start time
        // for a SAS that is effective immediately helps to avoid clock skew.
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List |
            SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Create |
            SharedAccessBlobPermissions.Delete
    };

    // Get the container's existing permissions, add the new policy, and set the container's permissions.
    BlobContainerPermissions permissions = await container.GetPermissionsAsync();
    permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
    await container.SetPermissionsAsync(permissions);
}
```
22.11.3 Example: Create a service SAS on a container
The following code creates a SAS on a container. If the name of an existing stored access policy is provided, that policy is associated with the SAS. If no stored access policy is provided, then the code creates an ad-hoc SAS on the container.

```csharp
private static string GetContainerSasUri(CloudBlobContainer container, string storedPolicyName = null)
{
    string sasContainerToken;

    // If no stored policy is specified, create a new access policy and define its constraints.
    if (storedPolicyName == null)
    {
        // Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad-hoc SAS,
        // and to construct a shared access policy that is saved to the container's shared access policies.
        SharedAccessBlobPolicy adHocPolicy = new SharedAccessBlobPolicy()
        {
            // When the start time for the SAS is omitted, the start time is assumed to be
            // the time when the storage service receives the request. Omitting the start time
            // for a SAS that is effective immediately helps to avoid clock skew.
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
            Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.List
        };

        // Generate the shared access signature on the container, setting the constraints directly on the signature.
        sasContainerToken = container.GetSharedAccessSignature(adHocPolicy, null);
    }
    else
    {
        // Generate the shared access signature on the container. In this case, all of the
        // constraints for the SAS are specified on the stored access policy.
        sasContainerToken = container.GetSharedAccessSignature(null, storedPolicyName);
    }

    // Return the URI string for the container, including the SAS token.
    return container.Uri + sasContainerToken;
}
```
22.11.4 Example: Create a service SAS on a blob
The following code creates a SAS on a blob. If the name of an existing stored access policy is provided, that policy is associated with the SAS. If no stored access policy is provided, then the code creates an ad-hoc SAS on the blob.

```csharp
private static string GetBlobSasUri(CloudBlobContainer container, string blobName, string policyName = null)
{
    string sasBlobToken;

    // Get a reference to the blob in the container.
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    if (policyName == null)
    {
        // Create a new access policy and define its constraints.
        // Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad-hoc SAS,
        // and to construct a shared access policy that is saved to the container's shared access policies.
        SharedAccessBlobPolicy adHocSAS = new SharedAccessBlobPolicy()
        {
            // When the start time for the SAS is omitted, the start time is assumed to be
            // the time when the storage service receives the request. Omitting the start time
            // for a SAS that is effective immediately helps to avoid clock skew.
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
            Permissions = SharedAccessBlobPermissions.Read |
                SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Create
        };

        // Generate the shared access signature on the blob, setting the constraints directly on the signature.
        sasBlobToken = blob.GetSharedAccessSignature(adHocSAS);
    }
    else
    {
        // Generate the shared access signature on the blob. In this case, all of the
        // constraints for the SAS are specified on the stored access policy.
        sasBlobToken = blob.GetSharedAccessSignature(null, policyName);
    }

    // Return the URI string for the blob, including the SAS token.
    return blob.Uri + sasBlobToken;
}
```
22.12 Conclusion
Shared access signatures are useful for providing limited permissions to your storage account to clients that should not have the account key. As such, they are a vital part of the security model for any application using Azure Storage. If you follow the best practices listed here, you can use SAS to provide greater flexibility of access to resources in your storage account, without compromising the security of your application.
Note
Additional costs are associated with examining monitoring data in the Azure Portal. For more information, see Storage Analytics and Billing.
Azure File storage currently supports Storage Analytics metrics, but does not yet support logging. You can enable metrics for Azure File storage via the Azure Portal.
1.
In the Azure Portal, click Storage, and then click the storage account name to open the dashboard.
2.
Click Configure, and scroll down to the monitoring settings for the blob, table, and queue services.
3.
In monitoring, set the level of monitoring and the data retention policy for each service:
To set the data retention policy, in Retention (in days), type the number of days of data to retain, from 1 to 365 days. If you do not want to set a retention policy, enter zero. If there is no retention policy, it is up to you to delete the monitoring data. We recommend setting a retention policy based on how long you want to retain storage analytics data for your account, so that old and unused analytics data can be deleted by the system at no cost.
You should start seeing monitoring data on the dashboard and the Monitor page after about an hour.
Until you configure monitoring for a storage account, no monitoring data is collected, and the metrics charts on the dashboard and Monitor page are empty.
After you set the monitoring levels and retention policies, you can choose which of the available metrics to monitor in the Azure Portal, and which metrics to plot on metrics charts. A default set of metrics is displayed at each monitoring level. You can use Add Metrics to add or remove metrics from the metrics list.
Metrics are stored in the storage account in four tables named $MetricsTransactionsBlob, $MetricsTransactionsTable, $MetricsTransactionsQueue, and $MetricsCapacityBlob. For more information, see About Storage Analytics Metrics.
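As a hedged sketch, these metrics tables can also be read programmatically with the .NET storage client library's table API. The account name and key are placeholders, and the query shape is illustrative; in practice you would filter by PartitionKey rather than scan the whole table:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class ReadMetricsTable
{
    static void Main()
    {
        // Metrics tables live in the same storage account, alongside your data.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key");
        CloudTableClient tableClient = account.CreateCloudTableClient();

        // Hourly blob transaction metrics are stored in $MetricsTransactionsBlob.
        CloudTable metricsTable = tableClient.GetTableReference("$MetricsTransactionsBlob");

        // Read entities; each row describes aggregated transactions for a time bucket.
        TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>();
        foreach (DynamicTableEntity entity in metricsTable.ExecuteQuery(query))
        {
            Console.WriteLine(entity.PartitionKey + " / " + entity.RowKey);
        }
    }
}
```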
1.
In the Azure Portal, click Storage, and then click the name of the storage account to open the dashboard.
2.
To change the metrics that are plotted on the chart, take one of the following actions:
To add a new metric to the chart, click the colored check box next to the metric header in the table below the chart.
To hide a metric that is plotted on the chart, clear the colored check box next to the metric header.
3.
By default, the chart shows trends, displaying only the current value of each metric (the Relative option at the top of the chart). To display a Y axis so you can see absolute values, select Absolute.
4.
To change the time range the metrics chart displays, select 6 hours, 24 hours, or 7 days at the top of the chart.
If your storage account has verbose monitoring configured, the metrics are
available at a finer resolution of individual storage operations in addition to the
service-level aggregates.
Use the following procedures to choose which storage metrics to view in the
metrics charts and table that are displayed on the Monitor page.
In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.
2.
Click Monitor.
The Monitor page opens. By default, the metrics table displays a subset of
the metrics that are available for monitoring. The illustration shows the
default Monitor display for a storage account with verbose monitoring
configured for all three services. Use Add Metrics to select the metrics you
want to monitor from all available metrics.
23.5.1.1.1 Note
Consider costs when you select the metrics. There are transaction and egress
costs associated with refreshing monitoring displays. For more information,
see Storage Analytics and Billing.
3.
4.
Hover over the right side of the dialog box to display a scrollbar that you
can drag to scroll additional metrics into view.
5.
Click the down arrow by a metric to expand a list of operations the metric
is scoped to include. Select each operation that you want to view in the
metrics table in the Azure Portal.
In the following illustration, the AUTHORIZATION ERROR PERCENTAGE metric
has been expanded.
6.
After you select metrics for all services, click OK (checkmark) to update the
monitoring configuration. The selected metrics are added to the metrics table.
7.
To delete a metric from the table, click the metric to select it, and then
click Delete Metric.
23.6 How to: Customize the metrics chart on the Monitor page
1.
On the Monitor page for the storage account, in the metrics table, select up
to 6 metrics to plot on the metrics chart. To select a metric, click the check box on
its left side. To remove a metric from the chart, clear the check box.
2.
To switch the chart between relative values (final value only displayed) and
absolute values (Y axis displayed), select Relative or Absolute at the top of the
chart.
3.
To change the time range the metrics chart displays, select 6 hours, 24
hours, or 7 days at the top of the chart.
1.
In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.
2.
Click Configure, and use the Down arrow on the keyboard to scroll down
to logging.
3.
For each service (blob, table, and queue), configure the following:
The types of request to log: Read Requests, Write Requests, and Delete
Requests.
The number of days to retain the logged data. Enter zero if you do
not want to set a retention policy. If you do not set a retention policy, it is up to
you to delete the logs.
4.
Click Save.
The diagnostics logs are saved in a blob container named $logs in your
storage account. For information about accessing the $logs container, see
About Storage Analytics Logging.
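Storage Analytics log entries in the $logs container are semicolon-delimited text. A minimal sketch of pulling the leading fields out of a log line (field positions follow the documented log format; the full schema, including quoted fields that may themselves contain semicolons, is in About Storage Analytics Logging):

```javascript
// Parse the leading fields of a Storage Analytics log line. Per the log
// format, a line starts with:
//   <version>;<request-start-time>;<operation-type>;<request-status>;...
// This sketch does not handle the corner case of quoted fields that
// contain semicolons.
function parseLogLine(line) {
  const fields = line.split(';');
  return {
    version: fields[0],
    requestStartTime: fields[1],
    operationType: fields[2],
    requestStatus: fields[3],
  };
}

const entry = parseLogLine(
  '1.0;2014-06-19T22:59:23.1967767Z;GetBlob;AnonymousSuccess;200');
console.log(entry.operationType); // GetBlob
```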
Azure AD Graph API functionality is also available through Microsoft Graph, a unified API that also
includes APIs from other Microsoft services like Outlook, OneDrive, OneNote, Planner, and Office
Graph, all accessed through a single endpoint with a single access token.
Documentation Overview
Additional Resources
Azure Active Directory Graph API topic on Azure.com: Provides a brief overview of Graph
API features and scenarios.
Quickstart for the Azure AD Graph API on Azure.com: Provides essential details and
introduces resources like the Graph Explorer for those who want to jumpstart their
experience with the Graph API.
Azure AD Graph API reference: Provides explicit examples of Graph API operations
(requests and responses) on users, groups, organizational contacts, directory roles, domains
(preview), functions, actions and others, as well as a reference for the Azure AD entities and
types exposed by the Graph API. The documentation is interactive and many of the topics
contain a Try It feature that you can use to execute Graph API requests against a sample
tenant and see the responses from inside the documentation itself.
An Azure AD Tenant: You need an Azure AD tenant that you can use to develop, configure,
and publish your app. This requires a valid subscription to one of Microsoft's cloud services,
such as Azure, Office 365, Microsoft Dynamics CRM, etc. If you don't already have a
subscription, you can get a free trial for Azure here: Azure Free Trial.
Your App Must be Registered with Azure AD: Your app must be registered with Azure AD.
This can be done through the Azure portal (which requires an Azure subscription), or through
tooling like Visual Studio 2013 or 2015. For information about how to register an app using
the Azure portal, see Adding an Application.
Azure AD Tenant Permissions to Access Directory Data: After your app is registered with
Azure AD, in order to call the Graph API against a directory tenant, you must first configure
your app to request permissions to the Graph API, and then a user or tenant administrator
must grant access to your app (and its configured permissions) during consent. For more
information about Azure AD consent flow and configuring your app for the Graph API, see
Understanding the Consent Framework and Accessing the Graph API in Integrating
Applications with Azure Active Directory.
Azure AD Graph Code Samples: We highly recommend downloading the sample applications
that demonstrate the capabilities of the Azure AD Graph API. For more information about the
code samples available for the Graph API, see Calling Azure AD Graph API.
Graph Explorer: You can use the Graph Explorer to execute read operations against your
own tenant or a sample tenant and view the responses returned by the Graph API. See
Quickstart for the Azure AD Graph API for instructions on how to use the Graph Explorer.
Azure portal: The Azure portal can be used by an administrator to perform administrative
tasks on Azure AD directory entities. An administrator (or a developer with sufficient
privileges) can also use the portal to register an app with Azure AD and to configure it with
the resources and access that it will request during consent. For more information about
registering an app and configuring it using the Azure portal, see the following topic:
Integrating Applications with Azure Active Directory.
Azure AD Graph API Team blog: Keep up with the latest announcements from the Graph
API team on the Microsoft Azure Active Directory Graph Team blog.
2. Run Certmgr.msc, click Personal on the left-hand side, then right-click the certificate you
created and click All Tasks, then Export.
3. Follow the wizard and choose the option to not export the private key. Choose the option to
export a CER cert, and then provide a filename ending with .cer .
4. Repeat the export process, this time choosing to export the private key in a PFX file. Then
select a name ending with .PFX .
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Runtime.Serialization;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel.Syndication;
using System.Text;
using System.Threading.Tasks;
using System.Xml;
namespace telemetry1
{
class Program
{
[DataContract(Name = "properties", Namespace =
"http://schemas.microsoft.com/ado/2007/08/dataservices")]
public class MetricValue
{
[DataMember(Name = "Timestamp")]
public DateTime Timestamp { get; set; }
[DataMember(Name = "Min")]
public long Min { get; set; }
[DataMember(Name = "Max")]
public long Max { get; set; }
[DataMember(Name = "Total")]
public long Total { get; set; }
[DataMember(Name = "Average")]
public float Average { get; set; }
}
try
{
HttpWebResponse response =
(HttpWebResponse)sendNotificationRequest.GetResponse();
// ... remainder of the sample elided ...
}
catch (WebException ex) { Console.WriteLine(ex.Message); }
When you deploy your web app, mobile back end, and API app to App
Service, you can deploy to a separate deployment slot instead of the default
production slot when running in the Standard or Premium App Service plan
mode. Deployment slots are actually live apps with their own hostnames.
App content and configuration elements can be swapped between two
deployment slots, including the production slot. Deploying your application to
a deployment slot has the following benefits:
You can validate app changes in a staging deployment slot before swapping it
with the production slot.
Deploying an app to a slot first and swapping it into production ensures that
all instances of the slot are warmed up before being swapped into production.
This eliminates downtime when you deploy your app. The traffic redirection is
seamless, and no requests are dropped as a result of swap operations. This entire
workflow can be automated by configuring Auto Swap when pre-swap validation
is not needed.
After a swap, the slot that previously held the staged app now holds the previous
production app. If the changes swapped into the production slot are not as you
expected, you can perform the same swap immediately to get your "last known
good site" back.
When your app has multiple slots, you cannot change the mode.
2.
26.1.1.1.1 Note
If the app is not already in the Standard or Premium mode, you will receive
a message indicating the supported modes for enabling staged publishing. At
this point, you have the option to select Upgrade and navigate to the Scale
tab of your app before continuing.
3.
In the Add a slot blade, give the slot a name, and select whether to clone
app configuration from another existing deployment slot. Click the check mark
to continue.
The first time you add a slot, you will only have two choices: clone
configuration from the default slot in production or not at all. After you have
created several slots, you will be able to clone configuration from a slot other
than the one in production:
4.
5.
Click the app URL in the slot's blade. Notice the deployment slot has its own
hostname and is also a live app. To limit public access to the deployment slot, see
App Service Web App block web access to non-production deployment slots.
There is no content after deployment slot creation. You can deploy to the slot
from a different repository branch, or an altogether different repository. You
can also change the slot's configuration. Use the publish profile or
deployment credentials associated with the deployment slot for content
updates. For example, you can publish to this slot with git.
Settings that are swapped between slots include handler mappings and WebJobs
content. Settings that stay with a slot (and are not swapped) include publishing
endpoints, scale settings, and WebJobs schedulers.
Important
Before you swap an app from a deployment slot into production, make sure
that all non-slot-specific settings are configured exactly as you want them to
be in the swap target.
1.
To swap deployment slots, click the Swap button in the command bar of
the app or in the command bar of a deployment slot.
2.
Make sure that the swap source and swap target are set properly. Usually,
the swap target is the production slot. Click OK to complete the operation.
When the operation finishes, the deployment slots have been swapped.
For the Swap with preview swap type, see Swap with preview (multi-phase
swap).
When you use the Swap with preview option (see Swap deployment slots),
App Service does the following:
Keeps the destination slot unchanged so existing workload on that slot (e.g.
production) is not impacted.
Applies the configuration elements of the destination slot to the source slot,
including the slot-specific connection strings and app settings.
Restarts the worker processes on the source slot using these aforementioned
configuration elements.
When you complete the swap: Moves the pre-warmed-up source slot into the
destination slot. The destination slot is moved into the source slot as in a manual
swap.
When you cancel the swap: Reapplies the configuration elements of the
source slot to the source slot.
You can preview exactly how the app will behave with the destination slot's
configuration. Once you complete validation, you complete the swap in a
separate step. This step has the added advantage that the source slot is
already warmed up with the desired configuration, and clients will not
experience any downtime.
Samples for the Azure PowerShell cmdlets available for multi-phase swap are
included in the Azure PowerShell cmdlets for deployment slots section.
When Auto Swap is enabled for a slot, App Service will automatically swap
the app into production after it has warmed up in the slot.
26.5.1.1.1
Important
When you enable Auto Swap for a slot, make sure the slot configuration is
exactly the configuration intended for the target slot (usually the production
slot).
Configuring Auto Swap for a slot is easy. Follow the steps below:
1.
2.
Select On for Auto Swap, select the desired target slot in Auto Swap
Slot, and click Save in the command bar. Make sure configuration for the slot
is exactly the configuration intended for the target slot.
The Notifications tab will flash a green SUCCESS once the operation is
complete.
26.5.1.1.2 Note
To test Auto Swap for your app, you can first select a non-production target
slot in Auto Swap Slot to become familiar with the feature.
3.
Execute a code push to that deployment slot. Auto Swap will happen after a
short time and the update will be reflected at your target slot's URL.
<applicationInitialization>
<add initializationPage="/" hostName="[app hostname]" />
<add initializationPage="/Home/About" hostName="[app hostname]" />
</applicationInitialization>
26.9.3
Initiate a swap with preview (multi-phase swap) and apply
destination slot configuration to source slot
26.9.4
Cancel a pending swap (swap with preview) and restore source
slot configuration
26.10
Azure Command-Line Interface (Azure CLI) commands for
Deployment Slots
The Azure CLI provides cross-platform commands for working with Azure,
including support for managing App Service deployment slots.+
To list the commands available for Azure App Service in the Azure CLI, call
azure site -h .
26.10.1.1.1 Note
For Azure CLI 2.0 (Preview) commands for deployment slots, see az
appservice web deployment slot.
26.10.2
azure site list
For information about the apps in the current subscription, call azure site
list, as in the following example.
azure site list webappslotstest
26.10.3
azure site create
To create a deployment slot, call azure site create and specify the name of
an existing app and the name of the slot to create, as in the following
example.
azure site create webappslotstest --slot staging
To enable source control for the new slot, use the --git option, as in the
following example.
azure site create --git webappslotstest --slot staging
26.10.4
azure site swap
To make the updated deployment slot the production app, use the azure
site swap command to perform a swap operation, as in the following
example. The production app will not experience any downtime, nor will it
undergo a cold start.
azure site swap webappslotstest
26.10.5
azure site delete
To delete a deployment slot that is no longer needed, use the azure site
delete command, as in the following example.
azure site delete webappslotstest --slot staging
26.10.5.1.1 Note
See a web app in action. Try App Service immediately and create a short-lived
starter app: no credit card required, no commitments.
On the Scale page of the Azure classic portal, you can manually scale your
web role or worker role, or you can enable automatic scaling based on CPU
load or a message queue.
27.1.1.1.1
Note
This article focuses on Cloud Service web and worker roles. When you create
a virtual machine (classic) directly, it is hosted in a cloud service, so some of
this information applies to these types of virtual machines as well. Scaling an
availability set of virtual machines really just means turning them on and off
based on the scale rules you configure. For more information about Virtual
Machines and availability sets, see Manage the Availability of Virtual
Machines.
You should consider the following information before you configure scaling
for your application:
Scaling is affected by core usage. Larger role instances use more cores. You
can scale an application only within the limit of cores for your subscription. For
example, if your subscription has a limit of twenty cores and you run an
application with two medium sized cloud services (a total of four cores), you can
only scale up other cloud service deployments in your subscription by sixteen
cores. See Cloud Service Sizes for more information about sizes.
You must create a queue and associate it with a role before you can scale an
application based on a message threshold. For more information, see How to use
the Queue Storage Service.
You can scale resources that are linked to your cloud service. For more
information about linking resources, see How to: Link a resource to a cloud
service.
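The core-usage arithmetic in the first point above can be sketched as follows (a hypothetical helper; the example assumes a medium instance uses 2 cores, matching the "two medium cloud services = four cores" figure in the text):

```javascript
// Cores left for scaling = subscription core limit minus cores already in
// use. In the example above: a 20-core limit with two medium cloud
// services (2 cores each) leaves 16 cores to scale with.
function coresAvailable(coreLimit, instances, coresPerInstance) {
  return coreLimit - instances * coresPerInstance;
}

console.log(coresAvailable(20, 2, 2)); // 16
```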
Weekdays
Weekends
Week nights
Week mornings
Specific dates
In the Azure classic portal, click Cloud Services, and then click the name
of the cloud service to open the dashboard.
27.3.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production
to Staging or vice versa.
2.
Click Scale.
3.
Select the schedule you want to change scaling options for. Defaults to No
scheduled times if you have no schedules defined.
4.
Find the Scale by metric section and select NONE. This is the default
setting for all roles.
5.
Each role in the cloud service has a slider for changing the number of
instances to use.
If you need more instances, you may need to change the cloud service virtual
machine size.
6.
Click Save.
Role instances will be added or removed based on your selections.
27.3.1.1.2
Tip
In the Azure classic portal, click Cloud Services, and then click the name
of the cloud service to open the dashboard.
27.4.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production
to Staging or vice versa.
2.
Click Scale.
3.
Select the schedule you want to change scaling options for. Defaults to No
scheduled times if you have no schedules defined.
4.
5.
Now you can configure a minimum and maximum range of roles instances,
the target CPU usage (to trigger a scale up), and how many instances to scale up
and down by.
27.4.1.1.2
Tip
In the Azure classic portal, click Cloud Services, and then click the name
of the cloud service to open the dashboard.
27.5.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production
to Staging or vice versa.
2.
Click Scale.
3.
4.
Now you can configure a minimum and maximum range of roles instances,
the queue and amount of queue messages to process for each instance, and how
many instances to scale up and down by.
27.5.1.1.2
Tip
In the Azure classic portal, click Cloud Services, and then click the name
of the cloud service to open the dashboard.
27.6.1.1.1 Tip
If you don't see your cloud service, you may need to change from Production
to Staging or vice versa.
2.
Click Scale.
3.
Find the linked resources section and click Manage scale for this
database.
27.6.1.1.2 Note
If you don't see a linked resources section, you probably do not have any
linked resources.
Stability: 2 - Stable
Class: https.Agent#
Added in: v0.4.5
Class: https.Server#
Added in: v0.3.4
server.setTimeout(msecs, callback)#
Added in: v0.11.2
See http.Server#setTimeout().
server.timeout#
Added in: v0.11.2
See http.Server#timeout.
https.createServer(options[,
requestListener])#
Added in: v0.3.4
Or, using a PFX bundle in place of separate key and certificate files:
const https = require('https');
const fs = require('fs');
const options = {
pfx: fs.readFileSync('server.pfx')
};
https.createServer(options, (req, res) => {
res.writeHead(200);
res.end('hello world\n');
}).listen(8000);
28.1.1 server.close([callback])#
Added in: v0.1.90
Example:
const https = require('https');
https.get('https://encrypted.google.com/', (res) => {
console.log('statusCode:', res.statusCode);
console.log('headers:', res.headers);
res.on('data', (d) => {
process.stdout.write(d);
});
}).on('error', (e) => {
console.error(e);
});
28.3 https.globalAgent#
Added in: v0.5.9
Example:
const https = require('https');
var options = {
hostname: 'encrypted.google.com',
port: 443,
path: '/',
method: 'GET'
};
var req = https.request(options, (res) => {
console.log('statusCode:', res.statusCode);
console.log('headers:', res.headers);
res.on('data', (d) => {
process.stdout.write(d);
});
});
req.on('error', (e) => {
console.error(e);
});
req.end();
host : A domain name or IP address of the server to issue the request to.
Defaults to 'localhost' .
family : IP address family to use when resolving host and hostname . Valid
values are 4 or 6 . When unspecified, both IP v4 and v6 will be used.
path : Request path. Defaults to '/' . Should include query string if any, e.g.
'/index.html?page=12' . An exception is thrown when the request path
contains illegal characters. Currently, only spaces are rejected but that may
change in the future.
auth : Basic authentication, i.e. 'user:password' , used to compute an
Authorization header.
pfx : Certificate, Private key and CA certificates to use for SSL. Default
null .
passphrase : A string of passphrase for the private key or pfx. Default null .
ca : Trusted certificates to check the remote host against, in
PEM format. If this is omitted several well-known "root" CAs will be used,
like VeriSign. These are used to authorize connections.
We are excited to introduce some changes to the Copy Blob API with 2012-02-12 version
that allows you to copy blobs between storage accounts. This enables some interesting
scenarios like:
Back up your blobs to another storage account without having to retrieve the
content and save it yourself
Migrate your blobs from one account to another efficiently with respect to cost
and time
NOTE: To allow cross-account copy, the destination storage account needs to have been
created on or after June 7th 2012. This limitation is only for cross-account copy, as
accounts created prior can still copy within the same account. If the account is created
before June 7th 2012, a copy blob operation across accounts will fail with HTTP Status
code 400 (Bad Request) and the storage error code will be
CopyAcrossAccountsNotSupported.
In this blog, we will go over some of the changes that were made along with some of
the best practices to use this API. We will also show some sample code on using the new
Copy Blob APIs with SDK 1.7.1 which is available on GitHub.
29.1.1.1
In versions prior to 2012-02-12, the source request header was specified as /<account
name>/<fully qualified blob name with container name and snapshot time if applicable
>. With 2012-02-12 version, we now require x-ms-copy-source to be specified as a URL.
This is a versioned change, as specifying the old format with this new version will now
fail with 400 (Bad Request). The new format allows users to specify a shared access
signature or use a custom storage domain name. When specifying a source blob from a
different account than the destination, the source blob must either be
A publicly accessible blob, or
A private blob, only if the source URL is pre-authenticated with a Shared Access
Signature (i.e. pre-signed URL), allowing read permissions on the source blob
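The old and new x-ms-copy-source formats can be contrasted with a small sketch (account, container, and blob names here are hypothetical, and a real SAS token would come from your own signing code):

```javascript
// Pre-2012-02-12 versions: x-ms-copy-source was a relative path of the
// form /<account>/<container>/<blob>.
function legacyCopySource(account, container, blob) {
  return `/${account}/${container}/${blob}`;
}

// 2012-02-12 and later: x-ms-copy-source must be a full URL, optionally
// carrying a Shared Access Signature for a private source blob.
function copySourceUrl(account, container, blob, sasToken) {
  const url = `https://${account}.blob.core.windows.net/${container}/${blob}`;
  return sasToken ? `${url}?${sasToken}` : url;
}

console.log(copySourceUrl('srcacct', 'photos', 'pic.jpg'));
// https://srcacct.blob.core.windows.net/photos/pic.jpg
```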
A copy operation preserves the type of the blob: a block blob will be copied as a block
blob and a page blob will be copied to the destination as a page blob. If the destination
blob already exists, it will be overwritten. However, if the destination type (for an
existing blob) does not match the source type, the operation fails with HTTP status code
400 (Bad Request).
Note: The source blob could even be a blob outside of Windows Azure, as long
as it is publicly accessible or accessible via some form of a Signed URL. Source
blobs outside of Windows Azure will be copied to block blobs.
29.1.1.1.2
Making copy asynchronous is a major change that greatly differs from previous versions.
Previously, the Blob service returned a successful response back to the user only when
the copy operation had completed. With version 2012-02-12, the Blob service will
instead schedule the copy operation to be completed asynchronously: a success
response only indicates that the copy operation has been successfully scheduled. As a
consequence, a successful response from Copy Blob will now return HTTP status code
202 (Accepted) instead of 201 (Created).
A few important points:
1. There can be only one pending copy operation to a given destination blob
URL at a time. But a source blob can be a source for many outstanding copies at
once.
2. The asynchronous copy blob runs in the background using spare bandwidth
capacity, so there is no SLA in terms of how fast a blob will be copied.
3. Currently there is no limit on the number of pending copy blobs that can be
queued up for a storage account, but a pending copy blob operation can live in
the system for at most 2 weeks. If longer than that, then the copy blob operation
will be terminated.
4. If the source storage account is in a different location from the destination
storage account, then the source storage account will be charged egress for the
copy using the bandwidth rates as shown here.
5. When a copy is pending, any attempt to modify, snapshot, or lease the
destination blob will fail.
Below we break down the key concepts of the new Copy Blob API.
Copy Blob Scheduling: when the Blob service receives a Copy Blob request, it will first
ensure that the source exists and can be accessed. If the source does not exist or cannot
be accessed, an HTTP status code 400 (Bad Request) is returned. If any source access
conditions are provided, they will be validated too. If the conditions do not match, then an
HTTP status code 412 (Precondition Failed) error is returned. Once the source is
validated, the service then validates any conditions provided for the destination blob (if
it exists). If condition checks fail on destination blob, an HTTP status code 412
(Precondition Failed) is returned. If there is already a pending copy operation, then the
service returns an HTTP status code 409 (Conflict). Once the validations are completed,
the service then initializes the destination blob before scheduling the copy and then
returns a success response to the user. If the source is a page blob, the service will
create a page blob with the same length as the source blob but all the bytes are zeroed
out. If the source blob is a block blob, the service will commit a zero length block blob
for the pending copy blob operation. The service maintains a few copy specific
properties during the copy operation to allow clients to poll the status and progress of
their copy operations.
Copy Blob Response: when a copy blob operation returns success to the client, this
indicates the Blob service has successfully scheduled the copy operation to be
completed. Two new response headers are introduced:
1. x-ms-copy-status: The status of the copy operation at the time the response was
sent. It can be one of the following:
o pending: Copy operation is still pending and the user is expected to poll
the status of the copy. (See Polling for Copy Blob properties below.)
o success: Copy operation completed successfully.
2. x-ms-copy-id: The string token that is associated with the copy operation. This
can be used when polling the copy status, or if the user wishes to abort a
pending copy operation.
Polling for Copy Blob properties: we now provide the following additional properties
that allow users to track the progress of the copy, using Get Blob Properties, Get Blob,
or List Blobs:
1. x-ms-copy-status (or CopyStatus): The current status of the copy operation. It can
be one of the following:
o pending, success, aborted, or failed.
2. x-ms-copy-id (CopyId): The id returned by the copy operation which can be used
to monitor the progress or abort a copy.
3. x-ms-copy-status-description (CopyStatusDescription): Additional error
information that can be used for diagnostics.
4. x-ms-copy-progress (CopyProgress): The amount of the blob copied so far. This
has the format X/Y where X=number of bytes copied and Y is the total number of
bytes.
5. x-ms-copy-completion-time (CopyCompletionTime): The completion time of the
last copy.
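The X/Y format of x-ms-copy-progress makes percent-complete straightforward to derive; a minimal sketch:

```javascript
// Parse an x-ms-copy-progress value ("bytes copied/total bytes") and
// return the completion percentage.
function copyPercentComplete(progress) {
  const [copied, total] = progress.split('/').map(Number);
  if (!Number.isFinite(copied) || !Number.isFinite(total) || total === 0) {
    throw new Error(`malformed progress value: ${progress}`);
  }
  return (copied / total) * 100;
}

console.log(copyPercentComplete('512/2048')); // 25
```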
These properties can be monitored to track the progress of a copy operation that
returns pending status. However, it is important to note that except for Put Page, Put
Block and Lease Blob operations, any other write operation (i.e., Put Blob, Put Block List,
Set Blob Metadata, Set Blob Properties) on the destination blob will remove the
properties pertaining to the copy operation.
Asynchronous Copy Blob: for the cases where the Copy Blob response returns with x-ms-copy-status set to pending, the copy operation will complete asynchronously.
1. Block blobs: The source block blob will be retrieved using 4 MB chunks and
copied to the destination.
2. Page blobs: The source page blob's valid ranges are retrieved and copied to the
destination.
Copy Blob operations are retried on any intermittent failures such as network failures,
server busy etc. but any failures are recorded in x-ms-copy-status-description which
would let users know why the copy is still pending.
When the copy operation is pending, any writes to the destination blob are disallowed
and will fail with HTTP status code 409 (Conflict). One would need to
abort the copy before writing to the destination.
Data integrity during asynchronous copy: The Blob service will lock onto a version
of the source blob by storing the source blob ETag at the time of copy. This is done to
ensure that any source blob changes can be detected during the course of the copy
operation. If the source blob changes during the copy, the ETag will no longer match its
value at the start of the copy, causing the copy operation to fail.
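The check described in the paragraph above happens inside the Blob service, but its logic can be sketched client-side (a hypothetical illustration, not part of any SDK):

```javascript
// The service records the source blob's ETag when the copy starts; any
// later mismatch means the source changed mid-copy, and the copy fails.
function copySourceUnchanged(etagAtCopyStart, currentEtag) {
  return etagAtCopyStart === currentEtag;
}

console.log(copySourceUnchanged('"0x8D2C6B5"', '"0x8D2C6B5"')); // true
console.log(copySourceUnchanged('"0x8D2C6B5"', '"0x8D2C999"')); // false
```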
Aborting the Copy Blob operation: To allow canceling a pending copy, we have
introduced the Abort Copy Blob operation in the 2012-02-12 version of REST API. The
Abort operation takes the copy-id returned by the Copy operation and will cancel the
operation if it is in the pending state. An HTTP status code 409 (Conflict) is returned if
the state is not pending or the copy-id does not match the pending copy. The blob's
metadata is retained but the content is zeroed out on a successful abort.
29.1.1.2
Best Practices
Example: monitoring code (error handling omitted for brevity). NOTE: This sample
assumes that no one else will start a different copy operation on the same
destination blob. If that assumption is not valid for your scenario, see How do I
prevent someone else from starting a new copy operation to overwrite my successful
copy? below.
public static void MonitorCopy(CloudBlobContainer destContainer)
{
    bool pendingCopy = true;
    while (pendingCopy)
    {
        pendingCopy = false;
        var destBlobList = destContainer.ListBlobs(
            true, BlobListingDetails.Copy);
        foreach (var dest in destBlobList)
        {
            var destBlob = dest as CloudBlob;
            if (destBlob.CopyState.Status == CopyStatus.Aborted ||
                destBlob.CopyState.Status == CopyStatus.Failed)
            {
                // Aborted/failed copies can be restarted from the
                // source recorded in the copy state
                pendingCopy = true;
                destBlob.StartCopyFromBlob(destBlob.CopyState.Source);
            }
            else if (destBlob.CopyState.Status == CopyStatus.Pending)
            {
                // Copy is still in flight; poll again
                pendingCopy = true;
            }
            // else this pending copy completed
        }
        Thread.Sleep(waitTime);
    }
}
29.1.1.2.2
How do I prevent the source from changing until the copy completes?
With 2012-02-12 version, we have introduced the concept of lock (i.e. infinite lease)
which makes it easy for a client to hold on to the lease. A good option is for the copy job
to acquire an infinite lease on the source blob before issuing the copy operation. The
monitor job can then break the lease when the copy completes.
Example: Sample code that acquires a lock (i.e. infinite lease) on source.
// Acquire infinite lease on source blob
srcBlob.AcquireLease(null, leaseId);
29.1.1.2.3 How do I prevent someone else from starting a new copy operation to
overwrite my successful copy?
During a pending copy, the blob service ensures that no client requests can write to the
destination blob. The copy blob properties are maintained on the blob after a copy is
completed (failed/aborted/successful). However, these copy properties are removed
when any write command like Put Blob, Put Block List, Set Blob Metadata or Set Blob
Properties is issued on the destination blob. The following operations will however
retain the copy properties: Lease Blob, Put Page, and Put Block. Hence, a monitoring
component which may require providing confirmation that a copy is completed will need
these properties to be retained until it verifies the copy. To prevent any writes on
destination blob once the copy is completed, the copy job should acquire an infinite
lease on destination blob and provide that as destination access condition when starting
the copy blob operation. The copy operation only allows infinite leases on the
destination blob. This is because the service prevents any writes to the destination blob
and any other granular lease would require client to issue Renew Lease on the
destination blob. Acquiring a lease on destination blob requires the blob to exist and
hence client would need to create an empty blob before the copy operation is issued. To
terminate an infinite lease on a destination blob with pending copy operation, you would
have to abort the copy operation before issuing the break request on the lease.
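The workflow just described (create an empty destination blob, lock it with an infinite lease, and present that lease on every write) can be sketched with a small in-memory model. This is only an illustration of the locking logic, not Azure Storage SDK code; the Blob class and its methods here are hypothetical.

```python
import uuid

class Blob:
    """Toy model of a blob whose writes are gated by a lease (hypothetical)."""
    def __init__(self):
        self.content = b""
        self.lease_id = None  # None means the blob is not leased

    def acquire_infinite_lease(self):
        if self.lease_id is not None:
            raise RuntimeError("blob is already leased")
        self.lease_id = str(uuid.uuid4())
        return self.lease_id

    def write(self, data, lease_id=None):
        # A write against a leased blob must present the active lease ID,
        # mirroring how the service rejects writes to a leased destination.
        if self.lease_id is not None and lease_id != self.lease_id:
            raise PermissionError("lease ID missing or mismatched")
        self.content = data

# Copy job: create the destination, lock it, then copy under the lease.
dest = Blob()
lease = dest.acquire_infinite_lease()
dest.write(b"copied data", lease_id=lease)   # succeeds: correct lease
try:
    dest.write(b"overwrite attempt")          # fails: no lease presented
except PermissionError:
    pass
```

In the real service, a write without the active lease ID fails with status 412 (Precondition Failed), which is what the PermissionError stands in for here.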
On top of the standard command-line shell, you can also use the Windows PowerShell ISE
(Integrated Scripting Environment), a graphical user interface that lets you easily create
scripts without having to type all the commands at the command line.
To connect your Azure subscription with PowerShell, follow these steps:
1. Run Get-AzurePublishSettingsFile to open a browser window from which you can download the publish settings file for your subscription.
2. Import the downloaded file with Import-AzurePublishSettingsFile, passing the path to the file.
3. Verify that the management certificate was installed:
Get-ChildItem -Recurse cert:\ | Where-Object {$_.Issuer -like '*Azure*'} | select FriendlyName, Subject
4. Confirm the connection with Get-AzureSubscription -Current, where you will get all available details about your current subscription, or with a cmdlet such as Get-AzureVM.
The publish settings file is just an XML file with your subscription details (id, name, url) as well as a
management certificate for authenticating management API requests. It is available for download
from the Windows Azure Management Portal at:
https://windows.azure.com/download/publishprofile.aspx
32.1 Syntax
Switch-AzureWebsiteSlot [[-Name] <String>] [-Force] [-Confirm] [-Profile <AzureSMProfile>]
[-Slot1 <String>] [-Slot2 <String>] [-WhatIf] [<CommonParameters>]
32.2 Description
The Switch-AzureWebsiteSlot cmdlet swaps the production slot for a
website with another slot. This works only on websites with two slots.
32.3 Examples
32.3.1
C:\PS>Switch-AzureWebsiteSlot -Name MyWebsite
Switches the backup slot of the Azure website MyWebsite with the production slot.
32.4 Parameters
32.4.1 -Name
Type: String
Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
32.4.2 -Force
Type: SwitchParameter
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
32.4.3 -Confirm
Type: SwitchParameter
Aliases: cf
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
32.4.4 -Profile
Type: AzureSMProfile
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
32.4.5 -Slot1
Type: String
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
32.4.6 -Slot2
Type: String
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
32.4.7 -WhatIf
Type: SwitchParameter
Aliases: wi
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
Most of you would be aware of the Deployment Slots in Azure Web Sites. For those who
are not familiar, Deployment Slots provide an option to deploy your changes to a
staging environment instead of moving them directly to the production
environment. This helps you validate your changes in the Azure Web Sites
environment before you can swap the changes with the production environment. For
details about the deployment slots, check http://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/.
The swapping can be achieved easily using the Azure Management Portal using the
SWAP option under DASHBOARD of the website/slots. It is also discussed in
http://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/#Swap.
But there are many instances where people want to make use of the powerful Azure
PowerShell cmdlets to swap the slots.
Switch-AzureWebsiteSlot is the cmdlet to swap between the slots and the actual
production environment. Below are some examples.
When there is only one slot and you want to swap the slot with the production
environment, the example can be as simple as:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName>
When there are two or more slots and you want to swap between the slots, the example
can be as simple as:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName> -Slot1 <slotName>
-Slot2 <slotName>
But when there are two or more slots and you want to swap between one of the slots
and the production environment, and you provide only one slot name in the syntax as
below:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName> -Slot1 <slotName>
It will then throw an error like "The website has more than 2 slots you must specify
which ones to swap".
In this case, it is necessary to provide the slot name for the actual/production
environment which is Production. The syntax would be something like below:
Switch-AzureWebsiteSlot -Name <AzureWebsiteName> -Slot1 Production
-Slot2 <slotName>
This will swap the corresponding slot content with the production slot of the Azure Web
Site.
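The slot-selection behavior described above can be summarized in a small helper. This is a sketch of the cmdlet's documented rules, not its actual implementation; the function name and error text are ours.

```python
def resolve_swap(slots, slot1=None, slot2=None):
    """Decide which two slots to swap, mimicking the documented
    Switch-AzureWebsiteSlot rules. `slots` lists the non-production
    slot names for the site."""
    if slot1 is None and slot2 is None:
        if len(slots) == 1:
            # Only one slot exists: swap it with production.
            return ("Production", slots[0])
        raise ValueError(
            "The website has more than 2 slots you must specify which ones to swap")
    if slot1 is not None and slot2 is None:
        if len(slots) > 1:
            # Ambiguous: the production side must be named explicitly.
            raise ValueError(
                "The website has more than 2 slots you must specify which ones to swap")
        return ("Production", slot1)
    return (slot1, slot2)

print(resolve_swap(["staging"]))  # ('Production', 'staging')
print(resolve_swap(["staging", "test"], "Production", "staging"))
```

Passing -Slot1 Production with an explicit -Slot2 resolves the ambiguity, which is exactly what the last example in the text does.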
Each tier differs in terms of features and pricing. For information on pricing,
see Cache Pricing Details.
This guide shows you how to use the StackExchange.Redis client using C#
code. The scenarios covered include creating and configuring a cache,
configuring cache clients, and adding and removing objects from the
cache. For more information on using Azure Redis Cache, refer to the Next
Steps section. For a step-by-step tutorial of building an ASP.NET MVC web
app with Redis Cache, see How to create a Web App with Redis Cache.+
+
Note
If you don't have an Azure account, you can Open an Azure account for free
in just a couple of minutes.+
+
33.2.1.1.2
Note
In addition to creating caches in the Azure portal, you can also create them
using Resource Manager templates, PowerShell, or Azure CLI.+
To create a cache using Azure PowerShell, see Manage Azure Redis Cache
with Azure PowerShell.
To create a cache using Azure CLI, see How to create and manage Azure
Redis Cache using the Azure Command-Line Interface (Azure CLI).
In the New Redis Cache blade, specify the desired configuration for the
cache.
In Dns name, enter a cache name to use for the cache endpoint. The cache
name must be a string between 1 and 63 characters and contain only numbers,
letters, and the - character. The cache name cannot start or end with the -
character, and consecutive - characters are not valid.
For Subscription, select the Azure subscription that you want to use for the
cache. If your account has only one subscription, it will be automatically selected
and the Subscription drop-down will not be displayed.
In Resource group, select or create a resource group for your cache. For
more information, see Using Resource groups to manage your Azure resources.
Use Pricing Tier to select the desired cache size and features.
Redis cluster allows you to create caches larger than 53 GB and to shard
data across multiple Redis nodes. For more information, see How to configure
clustering for a Premium Azure Redis Cache.
Once the new cache options are configured, click Create. It can take a few
minutes for the cache to be created. To check the status, you can monitor
the progress on the startboard. After the cache has been created, your new
cache has a Running status and is ready for use with default settings.+
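The cache-name rules quoted above (1 to 63 characters; letters, numbers, and hyphens; no leading, trailing, or consecutive hyphens) can be captured in a short validation sketch. The helper name is ours, not part of any Azure SDK.

```python
import re

# One leading alphanumeric character, then optionally-hyphen-separated
# alphanumerics: this forbids leading/trailing hyphens and "--" runs.
_NAME_RE = re.compile(r"^[A-Za-z0-9](?:-?[A-Za-z0-9])*$")

def is_valid_cache_name(name: str) -> bool:
    return 1 <= len(name) <= 63 and _NAME_RE.fullmatch(name) is not None

print(is_valid_cache_name("contoso5"))   # True
print(is_valid_cache_name("-contoso"))   # False: leading hyphen
print(is_valid_cache_name("con--toso"))  # False: consecutive hyphens
```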
+
33.2.2
To access your cache after it's created
Caches can be accessed in the Azure portal using the Browse blade.+
+
To view your caches, click More services > Redis Caches. If you have
recently browsed to a Redis Cache, you can click Redis Caches directly from
the list without clicking More services.+
Select the desired cache to view and configure the settings for that cache.+
+
You can view and configure your cache from the Redis Cache blade.+
+
For more information about configuring your cache, see How to configure
Azure Redis Cache.+
+
Note
For more information, see the StackExchange.Redis github page and the
StackExchange.Redis cache client documentation.+
To configure a client application in Visual Studio using the
StackExchange.Redis NuGet package, right-click the project in Solution
Explorer and choose Manage NuGet Packages. +
+
Type StackExchange.Redis or StackExchange.Redis.StrongName into
the search text box, select the desired version from the results, and click
Install.+
33.3.1.1.2
Note
+
The NuGet package downloads and adds the required assembly references
for your client application to access Azure Redis Cache with the
StackExchange.Redis cache client.+
33.3.1.1.3
Note
When a new version of the package is available, you can update your project to use
the updated version.
Once your client project is configured for caching, you can use the
techniques described in the following sections for working with your cache.+
+
using StackExchange.Redis;
33.5.1.1.1
Note
The connection to the cache is managed by the ConnectionMultiplexer class. This
instance is designed to be shared and reused throughout your client application, and
does not need to be created on a per-operation basis.
To connect to an Azure Redis Cache and be returned an instance of a
connected ConnectionMultiplexer, call the static Connect method and pass
in the cache endpoint and key like the following example. Use the key
generated from the Azure Portal as the password parameter.+
ConnectionMultiplexer connection =
ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
33.5.1.1.2
Important
Warning: Never store credentials in source code. To keep this sample simple,
I'm showing them in the source code. See How Application Strings and
Connection Strings Work for information on how to store credentials.
If you don't want to use SSL, either set ssl=false or omit the ssl parameter.+
33.5.1.1.3
Note
The non-SSL port is disabled by default for new caches. For instructions on
enabling the non-SSL port, see Access Ports.
One approach to sharing a ConnectionMultiplexer instance in your
application is to have a static property that returns a connected instance,
similar to the following example. This provides a thread-safe way to initialize
only a single connected ConnectionMultiplexer instance. In these examples
abortConnect is set to false, which means that the call will succeed even if a
connection to the Azure Redis Cache is not established. One key feature of
abortConnect=false is that the ConnectionMultiplexer will keep retrying in the
background and restore connectivity once the cache becomes reachable again.
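The pattern described here, a static property that lazily creates one shared connection in a thread-safe way, is language-agnostic. Below is a sketch of the same idiom in Python; the FakeConnection class is a placeholder standing in for ConnectionMultiplexer, not StackExchange.Redis code.

```python
import threading

class FakeConnection:
    """Placeholder for a connection object such as ConnectionMultiplexer."""
    def __init__(self, endpoint):
        self.endpoint = endpoint

class Cache:
    _lock = threading.Lock()
    _connection = None

    @classmethod
    def connection(cls):
        # Double-checked locking: only the first caller pays the connection
        # cost, and every subsequent caller shares the same instance.
        if cls._connection is None:
            with cls._lock:
                if cls._connection is None:
                    cls._connection = FakeConnection(
                        "contoso5.redis.cache.windows.net")
        return cls._connection

a = Cache.connection()
b = Cache.connection()
print(a is b)  # True: a single shared instance
```

In C#, the equivalent is typically written with Lazy<ConnectionMultiplexer>, which gives the same single-initialization guarantee.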
33.5.2
Host name and ports
To access the host name and ports click Properties.+
+
33.5.3
Access keys
To retrieve the access keys, click Access keys.+
+
Once the connection is established, return a reference to the Redis cache
database by calling the ConnectionMultiplexer.GetDatabase method. The
object returned from the GetDatabase method is a lightweight pass-through
object and does not need to be stored.
IDatabase cache = connection.GetDatabase();
Now that you know how to connect to an Azure Redis Cache instance and
return a reference to the cache database, let's take a look at working with
the cache.+
+
Redis stores most data as Redis strings, but these strings can contain many
types of data, including serialized binary data, which can be used when
storing .NET objects in the cache.+
When calling StringGet, if the object exists, it is returned, and if it does not,
null is returned. In this case you can retrieve the value from the desired data
source and store it in the cache for subsequent use. This is known as the
cache-aside pattern.+
string value = cache.StringGet("key1");
if (value == null)
{
    value = GetValueFromDataSource(); // hypothetical data-source lookup
    cache.StringSet("key1", value);
}
class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Employee(int id, string name) { Id = id; Name = name; }
}
// Store to cache
cache.StringSet("e25", JsonConvert.SerializeObject(new Employee(25, "Clayton Gragg")));
Note
Additional costs are associated with examining monitoring data in the Azure
Portal. For more information, see Storage Analytics and Billing.
+
Azure File storage currently supports Storage Analytics metrics, but does not
yet support logging. You can enable metrics for Azure File storage via the
Azure Portal.+
Storage accounts with a replication type of Zone-Redundant Storage (ZRS)
do not have the metrics or logging capability enabled at this time. +
For an in-depth guide on using Storage Analytics and other tools to identify,
diagnose, and troubleshoot Azure Storage-related issues, see Monitor,
diagnose, and troubleshoot Microsoft Azure Storage.+
In the Azure Portal, click Storage, and then click the storage account name
to open the dashboard.
2.
Click Configure, and scroll down to the monitoring settings for the blob,
table, and queue services.
3.
In monitoring, set the level of monitoring and the data retention policy
for each service:
To set the data retention policy, in Retention (in days), type the number of
days of data to retain, from 1 to 365 days. If you do not want to set a retention
policy, enter zero. If there is no retention policy, it is up to you to delete the
monitoring data. We recommend setting a retention policy based on how long you
want to retain storage analytics data for your account, so that old and unused
analytics data can be deleted by the system at no cost.
+
1.
You should start seeing monitoring data on the dashboard and the Monitor
page after about an hour.+
Until you configure monitoring for a storage account, no monitoring data is
collected, and the metrics charts on the dashboard and Monitor page are
empty.+
After you set the monitoring levels and retention policies, you can choose
which of the available metrics to monitor in the Azure Portal, and which
metrics to plot on metrics charts. A default set of metrics is displayed at each
monitoring level. You can use Add Metrics to add or remove metrics from
the metrics list.+
In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.
2.
To change the metrics that are plotted on the chart, take one of the
following actions:
To add a new metric to the chart, click the colored check box next to
the metric header in the table below the chart.
To hide a metric that is plotted on the chart, clear the colored check
box next to the metric header.
3.
By default, the chart shows trends, displaying only the current value of each
metric (the Relative option at the top of the chart). To display a Y axis so you can
see absolute values, select Absolute.
4.
To change the time range the metrics chart displays, select 6 hours, 24 hours,
or 7 days at the top of the chart.
If your storage account has verbose monitoring configured, the metrics are
available at a finer resolution of individual storage operations in addition to the
service-level aggregates.
Use the following procedures to choose which storage metrics to view in the
metrics charts and table that are displayed on the Monitor page.
In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.
2.
Click Monitor.
The Monitor page opens. By default, the metrics table displays a subset of
the metrics that are available for monitoring. The illustration shows the
default Monitor display for a storage account with verbose monitoring
configured for all three services. Use Add Metrics to select the metrics you
want to monitor from all available metrics.
34.5.1.1.1 Note
Consider costs when you select the metrics. There are transaction and egress
costs associated with refreshing monitoring displays. For more information,
see Storage Analytics and Billing.
3.
4.
Hover over the right side of the dialog box to display a scrollbar that you
can drag to scroll additional metrics into view.
5.
Click the down arrow by a metric to expand a list of operations the metric
is scoped to include. Select each operation that you want to view in the
metrics table in the Azure Portal.
In the following illustration, the AUTHORIZATION ERROR PERCENTAGE metric
has been expanded.
6.
After you select metrics for all services, click OK (checkmark) to update the
monitoring configuration. The selected metrics are added to the metrics table.
7.
To delete a metric from the table, click the metric to select it, and then
click Delete Metric.
34.6 How to: Customize the metrics chart on the Monitor page
1.
On the Monitor page for the storage account, in the metrics table, select up
to 6 metrics to plot on the metrics chart. To select a metric, click the check box on
its left side. To remove a metric from the chart, clear the check box.
2.
To switch the chart between relative values (final value only displayed) and
absolute values (Y axis displayed), select Relative or Absolute at the top of the
chart.
3.
To change the time range the metrics chart displays, select 6 hours, 24
hours, or 7 days at the top of the chart.
1.
In the Azure Portal, click Storage, and then click the name of the storage
account to open the dashboard.
2.
Click Configure, and use the Down arrow on the keyboard to scroll down
to logging.
3.
For each service (blob, table, and queue), configure the following:
The types of request to log: Read Requests, Write Requests, and Delete
Requests.
The number of days to retain the logged data. Enter zero if you do
not want to set a retention policy. If you do not set a retention policy, it is up to
you to delete the logs.
4.
Click Save.
35.1 Request
35.2
The Update Data Disk request may be specified as follows. Replace <subscription-id> with the
subscription ID, <cloudservice-name> with the name of the cloud service, <deployment-name> with
the name of the deployment, <role-name> with the name of the Virtual Machine, and <lun> with the
logical unit number of the disk.
Method
PUT
Request URI
https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles/<role-name>/DataDisks/<lun>
35.2.1
URI Parameters
None.
35.2.2
Request Headers
Request Header: x-ms-version
Description: Required. Specifies the version of the operation to use for this request. This header
should be set to 2012-03-01 or higher.
Request Body
<DataVirtualHardDisk xmlns="http://schemas.microsoft.com/windowsazure"
xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<HostCaching>caching-mode-of-disk</HostCaching>
<DiskName>name-of-data-disk</DiskName>
<Lun>logical-unit-number-of-data-disk</Lun>
<MediaLink>path-to-vhd</MediaLink>
</DataVirtualHardDisk>
HostCaching
Optional. Specifies the caching behavior of the data disk. Possible values are:
None
ReadOnly
ReadWrite
DiskName
Required. Specifies the name of the data disk to update. This value is only used to
identify the data disk to update and cannot be changed.
Lun
Required. Specifies the Logical Unit Number (LUN) for the data disk. You can use this
element to change the LUN for the data disk. If you do not want to change the LUN,
specify the existing LUN as the value for this element.
Valid LUN values are 0 through 31.
MediaLink
Required. Specifies the location of the VHD that is associated with the data disk. This
value is only used to identify the data disk to update and cannot be changed.
Example:
http://example.blob.core.windows.net/disks/mydatadisk.vhd
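Putting the URI parameters and body elements above together, the PUT request can be assembled as follows. This is an illustrative sketch with placeholder values (the subscription ID, service, deployment, and role names are made up), not SDK code.

```python
import xml.etree.ElementTree as ET

# Placeholder values for the URI parameters described above
subscription_id = "11111111-2222-3333-4444-555555555555"
cloud_service, deployment, role, lun = "mycloudservice", "mydeployment", "myvm", 0

uri = (
    "https://management.core.windows.net/"
    f"{subscription_id}/services/hostedservices/{cloud_service}"
    f"/deployments/{deployment}/roles/{role}/DataDisks/{lun}"
)

body = f"""<DataVirtualHardDisk xmlns="http://schemas.microsoft.com/windowsazure"
    xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <HostCaching>ReadOnly</HostCaching>
  <DiskName>mydatadisk</DiskName>
  <Lun>{lun}</Lun>
  <MediaLink>http://example.blob.core.windows.net/disks/mydatadisk.vhd</MediaLink>
</DataVirtualHardDisk>"""

# The body must be well-formed XML; it is then sent with the PUT method
# and the x-ms-version header set to 2012-03-01 or higher.
root = ET.fromstring(body)
print(uri)
print(root.tag)
```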
35.3 Response
35.4
The response includes an HTTP status code, a set of response headers, and a response body.
35.4.1
Status Code
35.4.2
Response Headers
The response for this operation includes the following headers. The response may also include
additional standard HTTP headers.
Response Header: x-ms-request-id
Description: A value that uniquely identifies a request made against the management service.
Response Body
None.
Note
Get help from Azure experts on the Azure forums. For even higher level of
support, go to the Azure Support site and click Get Support.+
This article is for Azure App Service (Web Apps, API Apps, Mobile Apps, Logic
Apps); for Cloud Services, see Configuring a custom domain name for an
Azure cloud service.+
36.1.1.1.2
Note
If your app is load-balanced by Azure Traffic Manager, click the selector at the
top of this article to get specific steps.
Custom domain names are not enabled for the Free tier. You must scale up
to a higher pricing tier, which may change how much you are billed for your
subscription. See App Service Pricing for more information.+
2.
Create the DNS records that map your domain to your app.
36.3.1
Types of domains you can map
Azure App Service lets you map the following categories of custom domains
to your app.+
Root domain - the domain name that you reserved with the domain registrar
(represented by the @ host record, typically). For example, contoso.com.
Subdomain - any domain that's under your root domain. For example,
www.contoso.com (represented by the www host record). You can map
different subdomains of the same root domain to different apps in Azure.
Wildcard domain - any subdomain whose leftmost DNS label is * (e.g. host
records * and *.blogs ). For example, *.contoso.com.
36.3.2
Types of DNS records you can use
Depending on your need, you can use two different types of standard DNS
records to map your custom domain: +
A - maps your custom domain name to the Azure app's virtual IP address
directly.
CNAME - maps your custom domain name to your app's Azure domain name,
<appname>.azurewebsites.net.
Important
Do not create a CNAME record for your root domain (i.e. the "root record").
For more information, see Why can't a CNAME record be used at the root
domain. To map a root domain to your Azure app, use an A record instead.+
+
2.
3.
4.
5.
Keep this portal blade open. You will come back to it once you create the DNS
records.
Find the page for managing DNS records. Look for links or areas of the site
labeled Domain Name, DNS, or Name Server Management. Often, you can
find the link by viewing your account information, and then looking for a link such
as My domains.
2.
Look for a link that lets you add or edit DNS records. This might be a Zone
file or DNS Records link, or an Advanced configuration link.
3.
+
36.5.1
Create an A record
To use an A record to map to your Azure app's IP address, you actually need
to create both an A record and a TXT record. The A record is for the DNS
resolution itself, and the TXT record is for Azure to verify that you own the
custom domain name. +
Configure your A record as follows (@ typically represents the root domain):
FQDN example: contoso.com (root); A Host: @; A Value: <IP address of your app>
FQDN example: www.contoso.com (sub); A Host: www; A Value: <IP address of your app>
FQDN example: *.contoso.com (wildcard); A Host: *; A Value: <IP address of your app>
Your additional TXT record takes on the convention that maps from
<subdomain>.<rootdomain> to <appname>.azurewebsites.net. Configure
your TXT record as follows:
FQDN example: contoso.com (root); TXT Host: @; TXT Value: <appname>.azurewebsites.net
FQDN example: www.contoso.com (sub); TXT Host: www; TXT Value: <appname>.azurewebsites.net
FQDN example: *.contoso.com (wildcard); TXT Host: *; TXT Value: <appname>.azurewebsites.net
+
36.5.2
Create a CNAME record
If you use a CNAME record to map to your Azure app's default domain name,
you don't need an additional TXT record like you do with an A record. +
36.5.2.1.1
Important
Do not create a CNAME record for your root domain (i.e. the "root record").
For more information, see Why can't a CNAME record be used at the root
domain. To map a root domain to your Azure app, use an A record instead.+
Configure your CNAME record as follows (@ typically represents the root
domain):
FQDN example: www.contoso.com (sub); CNAME Host: www; CNAME Value: <appname>.azurewebsites.net
FQDN example: *.contoso.com (wildcard); CNAME Host: *; CNAME Value: <appname>.azurewebsites.net
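The record conventions above can be summarized in a small helper that, given a host record and app name, lists the DNS records to create. The record rules follow the article; the function itself is just an illustration, and the IP address shown is a placeholder.

```python
def records_for(host, app, ip=None):
    """Return the DNS records to create for one custom-domain mapping.
    host: '@' for the root domain, e.g. 'www' for a subdomain, '*' for wildcard.
    Uses an A + verification TXT pair when an IP is given, else a CNAME."""
    target = f"{app}.azurewebsites.net"
    if ip is not None:
        # A record resolves the name; TXT record proves domain ownership.
        return [("A", host, ip), ("TXT", host, target)]
    if host == "@":
        raise ValueError("a CNAME cannot be used at the root domain; use an A record")
    return [("CNAME", host, target)]

print(records_for("@", "contoso-app", ip="203.0.113.10"))
print(records_for("www", "contoso-app"))
```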
36.6 Step 3. Enable the custom domain name for your app
Back in the Custom Domains blade in the Azure portal (see Step 1), you
need to add the fully-qualified domain name (FQDN) of your custom domain
to the list.+
1.
2.
3.
Click your app, then click Custom domains > Add hostname.
4.
36.6.1.1.1 Note
Azure will attempt to verify the domain name that you use here. Be sure that
it is the same domain name for which you created a DNS record in Step 2.
5.
Click Validate.
6.
Upon clicking Validate, Azure will kick off the domain verification workflow. This
checks for domain ownership as well as hostname availability, and reports
success or a detailed error with prescriptive guidance on how to fix it.
7.
Upon successful validation, the Add hostname button will become active and
you will be able to assign the hostname.
8.
Once Azure finishes configuring your new custom domain name, navigate to
your custom domain name in a browser. The browser should open your Azure
app, which means that your custom domain name is configured properly.
First, create a verification TXT record with your DNS registry by following
the steps at Step 2. Create the DNS record(s). Your additional TXT record takes
on the convention that maps from <subdomain>.<rootdomain> to
<appname>.azurewebsites.net. See the following table for examples:
2.
FQDN example: contoso.com (root); TXT Host: awverify.contoso.com; TXT Value: <appname>.azurewebsites.net
FQDN example: www.contoso.com (sub); TXT Host: awverify.www.contoso.com; TXT Value: <appname>.azurewebsites.net
FQDN example: *.contoso.com (wildcard); TXT Host: awverify.*.contoso.com; TXT Value: <appname>.azurewebsites.net
Then, add your custom domain name to your Azure app by following the
steps at Step 3. Enable the custom domain name for your app.
Your custom domain is now enabled in your Azure app. The only thing left to
do is to update the DNS record with your domain registrar.
3.
Finally, update your domain's DNS record to point to your Azure app as is
shown in Step 2. Create the DNS record(s).
User traffic should be redirected to your Azure app immediately after DNS
propagation happens.
+
36.8.1.1.1
Note
The easiest way to get an SSL certificate that meets all the requirements is to buy
one in the Azure portal directly. This article shows you how to do it manually and
then bind it to your custom domain in App Service.
Elliptic Curve Cryptography (ECC) certificates can work with App Service, but they are
outside the scope of this article. Work with your CA on the exact steps to create ECC
certificates.
Step 1. Get an SSL certificate
Because CAs provide the various SSL certificate types at different price points, you
should start by deciding what type of SSL certificate to buy. To secure a single
domain name (www.contoso.com), you just need a basic certificate. To secure
multiple domain names (contoso.com and www.contoso.com and mail.contoso.com),
you need either a wildcard certificate or a certificate with Subject Alternate Name
(subjectAltName).+
Once you know which SSL certificate to buy, you submit a Certificate Signing
Request (CSR) to a CA. When you get the requested certificate back from the CA, you
then generate a .pfx file from the certificate. You can perform these steps using the
tool of your choice. Here are instructions for the common tools:
Certreq.exe steps - the Windows utility for creating certificate requests. It has been
part of Windows since Windows XP/Windows Server 2000.
IIS Manager steps - The tool of choice if you're already familiar with it.
OpenSSL steps - an open-source, cross-platform tool. Use it to help you get an SSL
certificate from any platform.
subjectAltName steps using OpenSSL - steps for getting subjectAltName certificates.
+
If you want to test the setup in App Service before buying a certificate, you can
generate a self-signed certificate. This tutorial gives you two ways to generate it:+
Self-signed certificate, Certreq.exe steps
Self-signed certificate, OpenSSL steps
Get a certificate using Certreq.exe
Create a file (e.g. myrequest.txt), copy the following text into it, and save it in a
working directory. Replace the <your-domain> placeholder with the custom domain
name of your app.
Copy
[NewRequest]
Subject = "CN=<your-domain>" ; E.g. "CN=www.contoso.com", or
"CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256
[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1
; Server Authentication
For more information on the options in the CSR, and other available options, see the
Certreq reference documentation.
In a command prompt, CD into your working directory and run the following
command to create the CSR:
certreq -new myrequest.txt myrequest.csr
Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.
Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.
Select Password, and then enter and confirm the password. Click Next.
Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.
+
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
+
Get a certificate using the IIS Manager
Generate a CSR with IIS Manager to send to the CA. For more information on
generating a CSR, see Request an Internet Server Certificate (IIS 7).
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.
Complete the CSR with the certificate that the CA sends back to you. For more
information on completing the CSR, see Install an Internet Server Certificate (IIS 7).
If your CA uses intermediate certificates, install them before you proceed. They
usually come as a separate download from your CA, and in several formats for
different web server types. Select the version for Microsoft IIS.
Once you have downloaded the certificates, right-click each of them in Windows
Explorer and select Install certificate. Use the default values in the Certificate
Import Wizard, and continue selecting Next until the import has completed.
Export the SSL certificate from IIS Manager. For more information on exporting the
certificate, see Export a Server Certificate (IIS 7).
Important
In the Certificate Export Wizard, make sure that you select Yes, export the private
key
and also select Personal Information Exchange - PKCS #12, Include all certificates in
the certificate path if possible, and Export all extended properties.
+
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
+
Get a certificate using OpenSSL
In a command-line terminal, CD into a working directory and generate a private key
and CSR by running the following command:
openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey rsa:2048
When prompted, enter the appropriate information. For example:
Please enter the following 'extra' attributes to be sent with your certificate request
Submit server.csr to your CA to get the SSL certificate. Once the CA sends you the
requested certificate, save it to a file named myserver.crt. The file should look like the
following:
-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdS
ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----
In the command-line terminal, run the following command to export myserver.pfx
from myserver.key and myserver.crt:
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.
Get a SubjectAltName certificate using OpenSSL
Create a file named sancert.cnf, copy the following text into it, and save it in a
working directory:
# -------------- BEGIN custom sancert.cnf -----
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
commonName = Common Name (eg, your domain name)
commonName_default = www.mydomain.com
commonName_max = 64

[ v3_req ]
subjectAltName=DNS:ftp.mydomain.com,DNS:blog.mydomain.com,DNS:*.mydomain.com
# -------------- END custom sancert.cnf -----
In the line that begins with subjectAltName, replace the value with all domain names you want to secure (in addition to commonName). For example:
subjectAltName=DNS:sales.contoso.com,DNS:support.contoso.com,DNS:fabrikam.com
You do not need to change any other field, including commonName. You will be
prompted to specify them in the next few steps.
In a command-line terminal, CD into your working directory and run the following
command:
openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey
rsa:2048 -config sancert.cnf
When prompted, enter the appropriate information.
Once the CA sends you the requested certificate, save it to a file named
myserver.crt. If your CA provides it in a text format, simply copy the content into
myserver.crt in a text editor and save it. The file should look like the following:
-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdS
ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----
In the command-line terminal, run the following command to export myserver.pfx from myserver.key and myserver.crt:
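Again, a minimal sketch of the standard OpenSSL export (assuming the myserver.key and myserver.crt files above; OpenSSL prompts you to set an export password):

```shell
# Package the private key and the SubjectAltName certificate into a PFX file
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt
```

Add any CA-supplied intermediate certificates with the -certfile option so the exported file carries the full chain.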
Create a self-signed certificate using Certreq
Create a text file, and copy the following content into it:
[NewRequest]
Subject = "CN=<your-domain>" ; E.g. "CN=www.contoso.com", or "CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048 ; Can be 2048, 4096, 8192, or 16384 (required minimum is 2048)
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256
RequestType = Cert ; Self-signed certificate
ValidityPeriod = Years
ValidityPeriodUnits = 1

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication

Then, create the certificate in your certificate store by running:
certreq -new <path-to-this-file> <path-to-output-certificate-file>
Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.
Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.
Select Password, and then enter and confirm the password. Click Next.
Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.
You are now ready to upload the exported PFX file to App Service. See Step 2: Upload and bind the custom SSL certificate.
Generate a self-signed certificate using OpenSSL
Important
Self-signed certificates are for test purposes only. Most browsers return errors when visiting a website that's secured by a self-signed certificate. Some browsers may even refuse to navigate to the site.
Create a text file named serverauth.cnf, then copy the following content into it, and
then save it in a working directory:
[ req ]
default_bits            = 2048
default_keyfile         = privkey.pem
distinguished_name      = req_distinguished_name
attributes              = req_attributes
x509_extensions         = v3_ca

[ req_distinguished_name ]
countryName             = Country Name (2 letter code)
countryName_min         = 2
countryName_max         = 2
stateOrProvinceName     = State or Province Name (full name)
localityName            = Locality Name (eg, city)
0.organizationName      = Organization Name (eg, company)
organizationalUnitName  = Organizational Unit Name (eg, section)
commonName              = Common Name (eg, your app's domain name)
commonName_max          = 64
emailAddress            = Email Address
emailAddress_max        = 40

[ req_attributes ]
challengePassword       = A challenge password
challengePassword_min   = 4
challengePassword_max   = 20

[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints = CA:false

Then, run the following command:
openssl req -sha256 -x509 -nodes -days 365 -newkey rsa:2048 -keyout
myserver.key -out myserver.crt -config serverauth.cnf
This command creates two files: myserver.crt (the self-signed certificate) and
myserver.key (the private key), based on the settings in serverauth.cnf.
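Before exporting, you can sanity-check the new certificate with standard OpenSSL options (a quick sketch; assumes the myserver.crt just created):

```shell
# Print who the certificate was issued to and when it expires
openssl x509 -in myserver.crt -noout -subject -dates
```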
Export the certificate to a .pfx file by running the following command:
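A minimal sketch of the export, per standard OpenSSL usage (assumes the myserver.key and myserver.crt created above; OpenSSL prompts you to set an export password):

```shell
# Bundle the self-signed certificate and its private key into a PFX file
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt
```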
Select the .pfx file that you exported in Step 1 and specify the password that you created earlier. Then, click Upload to upload the certificate. You should now see your uploaded certificate back in the SSL certificates blade.
In the SSL bindings section, click Add binding.
In the Add SSL Binding blade, use the dropdowns to select the domain name to secure with SSL, and the certificate to use. You may also select whether to use Server Name Indication (SNI) or IP based SSL.
Remap the A record for your custom domain name to this new IP address.
You already have one or more SNI SSL bindings in your app, and you just added an
IP based SSL binding. Once the binding is complete, your
<appname>.azurewebsites.net domain name points to the new IP address.
Therefore, any existing CNAME mapping from the custom domain to <appname>.azurewebsites.net, including the ones that SNI SSL secures, also receives traffic on the new address, which is created for the IP based SSL only. In this scenario, you need to send the SNI SSL traffic back to the original shared IP address by following these steps:
Identify all CNAME mappings of custom domains to your app that have an SNI SSL binding.
Remap each CNAME record to sni.<appname>.azurewebsites.net instead of <appname>.azurewebsites.net.
Step 4. Test HTTPS for your custom domain
All that's left to do now is to make sure that HTTPS works for your custom domain.
In various browsers, browse to https://<your.custom.domain> to see that it serves up your app.
If your app gives you certificate validation errors, you're probably using a self-signed
certificate.
If that's not the case, you may have left out intermediate certificates when you exported your .pfx certificate. Go back to What you need to verify that your certificate meets all the requirements of App Service.
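To check locally whether the intermediate certificates made it into your file, you can list the PFX contents with OpenSSL (a sketch; assumes your file is named myserver.pfx and that OpenSSL is available; you are prompted for the import password):

```shell
# List every certificate bundled in the PFX, without printing the private key;
# each intermediate certificate should appear in the output
openssl pkcs12 -in myserver.pfx -nokeys
```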
Enforce HTTPS on your app
If you still want to allow HTTP access to your app, skip this step. App Service does
not enforce HTTPS, so visitors can still access your app using HTTP. If you want to
enforce HTTPS for your app, you can define a rewrite rule in the web.config file for
your app. Every App Service app has this file, regardless of the language framework
of your app.
Note
There is language-specific redirection of requests. ASP.NET MVC can use the
RequireHttps filter instead of the rewrite rule in web.config (see Deploy a secure
ASP.NET MVC 5 app to a web app).
Follow these steps:
Navigate to the Kudu debug console for your app. Its address is
https://<appname>.scm.azurewebsites.net/DebugConsole.
In the debug console, CD to D:\home\site\wwwroot.
Open web.config by clicking the pencil button.
If you deploy your app with Visual Studio or Git, App Service automatically
generates the appropriate web.config for your .NET, PHP, Node.js, or Python app in
the application root. If web.config doesn't exist, run touch web.config in the webbased command prompt to create it. Or, you can create it in your local project and
redeploy your code.
If you had to create a web.config, copy the following code into it and save it. If you
opened an existing web.config, then you just need to copy the entire <rule> tag
into your web.config's configuration/system.webServer/rewrite/rules element.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Force HTTPS" enabled="true">
          <match url="(.*)" ignoreCase="false" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="true" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
This rule returns an HTTP 301 (permanent redirect) to the HTTPS protocol whenever
the user requests a page using HTTP. It redirects from http://contoso.com to
https://contoso.com.
Important
If there are already other <rule> tags in your web.config, then place the copied
<rule> tag before the other <rule> tags.
Save the file in the Kudu debug console. It should take effect immediately and redirect all requests to HTTPS.
Important
If you would like to set up a network ACL by using the Management Portal, see How to Set Up Endpoints to a Virtual Machine.
Using Network ACLs, you can do the following:
Blacklist IP addresses
Use rule ordering to ensure the correct set of rules are applied on a given
virtual machine endpoint (lowest to highest)
Rule #   Remote Subnet   Endpoint   Permit/Deny
100      0.0.0.0/0       3389       Permit
2. Permit - When you add one or more "permit" ranges, you are denying all other ranges by default. Only packets from the permitted IP range will be able to communicate with the virtual machine endpoint.
3. Deny - When you add one or more "deny" ranges, you are permitting all other ranges of traffic by default.
4. You can combine rules to configure a virtual machine which locks down access for certain IP addresses. The table below shows a way to grant access to public virtual IPs (VIPs) of a certain range to permit access for RDP. All other remote IPs are denied. We follow a lowest takes precedence rule order.
38.4.1 Multiple rules
In the example below, if you want to allow access to the RDP endpoint only
from two public IPv4 address ranges (65.0.0.0/8, and 159.0.0.0/8), you can
achieve this by specifying two Permit rules. In this case, since RDP is created
by default for a virtual machine, you may want to lock down access to the
RDP port based on a remote subnet. The example below shows a way to
grant access to public virtual IPs (VIPs) of a certain range to permit access for
RDP. All other remote IPs are denied. This works because network ACLs can
be set up for a specific virtual machine endpoint and access is denied by
default.
Example: Multiple rules
Rule #   Remote Subnet   Endpoint   Permit/Deny
100      65.0.0.0/8      3389       Permit
200      159.0.0.0/8     3389       Permit
38.4.2 Rule order
Because multiple rules can be specified for an endpoint, there must be a way
to organize rules in order to determine which rule takes precedence. The rule
order specifies precedence. Network ACLs follow a lowest takes precedence
rule order. In the example below, the endpoint on port 80 is selectively
granted access to only certain IP address ranges. To configure this, we have
a deny rule (Rule # 100) for addresses in the 175.1.0.1/24 space. A second
rule is then specified with precedence 200 that permits access to all other
addresses under 175.0.0.0/8.
Example: Rule precedence
Rule #   Remote Subnet   Endpoint   Permit/Deny
100      175.1.0.1/24    80         Deny
200      175.0.0.0/8     80         Permit
39 Cost estimates
The following table summarizes the current rates in U.S. dollars for these services. The prices listed
here are accurate for the U.S. market as of July 2012. However, for up-to-date pricing information
see the Azure Pricing Details. You can find the pricing for other regions at the same address.
1. In/Out Bandwidth - This is the web traffic between the user's browser and the application. Cost: inbound is free.
2. Compute - Cloud Services roles, for the time each role is running.
3. Azure Storage
5. Database - Azure SQL Database, cost per month. Cost: up to 50 GB, first 10 GB $45.954 and each additional GB $1.998; up to 150 GB, first 50 GB $125.874 and each additional GB $0.999.
The methods used most often for tracing are the methods for writing output to listeners: Write,
WriteIf, WriteLine, WriteLineIf, Assert, and Fail. These methods can be divided into two
categories: Write, WriteLine, and Fail all emit output unconditionally, whereas WriteIf,
WriteLineIf, and Assert test a Boolean condition, and write or do not write based on the value of
the condition. WriteIf and WriteLineIf emit output if the condition is true, and Assert emits output
if the condition is false.
When designing your tracing and debugging strategy, you should think about how you want the
output to look. Multiple Write statements filled with unrelated information will create a log that is
difficult to read. On the other hand, using WriteLine to put related statements on separate lines may
make it difficult to distinguish what information belongs together. In general, use multiple Write
statements when you want to combine information from multiple sources to create a single
informative message, and the WriteLine statement when you want to create a single, complete
message.
A carriage return is appended to the end of the message that this method writes, so that the next message written by Write, WriteIf, WriteLine, or WriteLineIf will begin on the following
line:
' Visual Basic
Dim errorFlag As Boolean = False
Trace.WriteLine("Error in AppendData procedure.")
Trace.WriteLineIf(errorFlag, "Error in AppendData procedure.")
// C#
bool errorFlag = false;
System.Diagnostics.Trace.WriteLine ("Error in AppendData procedure.");
System.Diagnostics.Trace.WriteLineIf(errorFlag,
"Error in AppendData procedure.");
The next message put out by a Write, WriteIf, WriteLine, or WriteLineIf will begin on the
same line as the message put out by the Write or WriteIf statement:
' Visual Basic
Dim errorFlag As Boolean = False
Trace.WriteIf(errorFlag, "Error in AppendData procedure.")
Debug.WriteIf(errorFlag, "Transaction abandoned.")
Trace.Write("Invalid value for data request")
// C#
bool errorFlag = false;
System.Diagnostics.Trace.WriteIf(errorFlag,
"Error in AppendData procedure.");
System.Diagnostics.Debug.WriteIf(errorFlag, "Transaction abandoned.");
Trace.Write("Invalid value for data request");
To verify that certain conditions exist either before or after you execute a method:
' Visual Basic
Dim I As Integer = 4
Trace.Assert(I = 5, "I is not equal to 5.")
// C#
int I = 4;
System.Diagnostics.Trace.Assert(I == 5, "I is not equal to 5.");
Note You can use Assert with both tracing and debugging. This example
outputs the call stack to any listener in the Listeners collection. For more
information, see Assertions in Managed Code and Debug.Assert Method.
// C#
System.Diagnostics.Trace.WriteLineIf(dataSwitch.Enabled,
"Starting connection procedure");
A TraceSwitch provides multiple setting levels, and exposes a set of properties that correspond to
these levels. Thus, the Boolean properties TraceError, TraceWarning, TraceInfo, and
TraceVerbose can be tested as part of a WriteIf or WriteLineIf statement. The code in this example
writes the specified information only if your TraceSwitch is set to trace level Error or higher:
' Visual Basic
Trace.WriteLineIf(myTraceSwitch.TraceError, "Error 42 occurred")
// C#
System.Diagnostics.Trace.WriteLineIf(myTraceSwitch.TraceError,
"Error 42 occurred");
The preceding example always calls the WriteLineIf method when tracing is enabled. Therefore, the
example must always execute any code necessary to evaluate the second argument for WriteLineIf.
However, you will usually get better performance by testing a BooleanSwitch first and then calling
the general Trace.Write method only if the test succeeds, using this code:
' Visual Basic
If MyBooleanSwitch.Enabled Then
    Trace.WriteLine("Error 42 occurred")
End If
// C#
if (MyBooleanSwitch.Enabled)
{
    System.Diagnostics.Trace.WriteLine("Error 42 occurred");
}
If you test the Boolean value before calling the tracing method, you avoid executing unnecessary
code, because tracing always evaluates all parameters of WriteLineIf. Note that this technique
would improve performance only if TraceSwitch is off during the application's normal operating
mode. If TraceSwitch is on, the application must evaluate all parameters of WriteLineIf, adding
time to the overall execution of the application.
41 Trace.TraceInformation Method
Writes an informational message to the trace listeners in the Listeners collection.
Namespace: System.Diagnostics
Assembly: System (in System.dll)
Description
42 Trace.WriteIf Method
Writes information about the trace to the trace listeners in the Listeners collection if a condition is
true.
Namespace: System.Diagnostics
Assembly: System (in System.dll)
Description
WriteIf(Boolean, String)          Writes the value of the object's ToString method to the trace listeners in the Listeners collection if a condition is true.
WriteIf(Boolean, String, String)  Writes a category name and the value of the object's ToString method to the trace listeners in the Listeners collection if a condition is true.
internet. The forwarder is the server-end of that connection that listens on the endpoint for incoming requests from VS and forwards incoming traffic to the msvsmon instance running on the same box. Once the connection is established, it will hit the breakpoint in your code in Visual Studio.
44 Enable remote debugging
To enable remote debugging for your cloud service, select Debug as the Build Configuration on the
Common Settings tab of your Cloud Services publish dialog wizard:
Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox:
Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local
source code:
Then use Visual Studio's Server Explorer to select the Cloud Service instance deployed in the cloud, and then use the Attach Debugger context menu on the role or on a specific VM instance of it. You can also attach to multiple instances if available.
Once the debugger attaches to the Cloud Service, and a breakpoint is hit, you'll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real-time, and see exactly how your app is running in the cloud.
45 Limitations
Instances: Publish will fail if the role has more than 25 instances.
Traffic: The debugger communicates with Visual Studio, and Azure charges for outbound data. The amount of data transferred is small and shouldn't be a significant cost.
Native debugging: The CTP tooling does not enable native debugging.
Ports: The debugger uses ports 30400-30424 and 31400-31424. If you use ports that conflict with the debugger ports, you'll see the following message: "Allocation failed. Please retry later, try reducing the VM size or number of role instances, or try deploying to a different region."
VS restart after full deployment: If you do a full deployment and the VIP changes, you need to restart VS to attach the debugger.
how Azure Virtual Machines are set up. When you create an Azure Virtual
Machine, there are two services that work in tandem to create this machine:
Compute and Storage. On the Storage side, a VHD is created in one of your
storage accounts within the Azure Storage Service. The physical node that this
VHD is stored on is located in the region you specified to place your Virtual
Machine. On the compute side, we find a physical node in a second cluster to
place your virtual machine. When the VM starts in that cluster, it establishes a
connection with the Storage Service and boots from the VHD. When creating a
Virtual Machine, we require that the VHD be located in a storage account in the
same region where you are creating the VM. This is to ensure there is
performance consistency when communicating between the Virtual Machine and
the storage account.
With this context in mind, let's walk through the steps to migrate the virtual machine from one region to another:
46.1 Shut down the virtual machine
Go to the Service Management Portal, select the Virtual Machine that you'd like to migrate, and select Shut Down from the control menu.
Alternatively, you can use the Azure Powershell cmdlet to accomplish the same
task:
$servicename = "KenazTestService"
$vmname = "TestVM1"
Get-AzureVM -ServiceName $servicename -Name $vmname | Stop-AzureVM
Stopping the VM is a required step so that the file system is consistent when you
do the copy operation. Azure does not support live migration at this time. This
operation implies that you are migrating a specialized VM from one region to
another. If you'd like to create a VM from a generalized image, sysprep the Virtual Machine before stopping it.
46.2 Copy the VHD to the destination storage account
The Azure Storage Service exposes the ability to move a blob from one storage
account to another. To do this, we have to perform the following steps:
This will initiate the blob copy from your source storage account to your destination storage account. At this point, you'll probably have to wait a while for the blob to be fully copied. In order to check the status of the operation, you can try the following commands.
while(($blobCopy | Get-AzureStorageBlobCopyState).Status -eq "Pending")
{
Start-Sleep -s 30
$blobCopy | Get-AzureStorageBlobCopyState
}
Once the blob is finished copying, the status of the blob copy will be
Success. For a more comprehensive copy VHD example, see "Azure Virtual
Machine: Copy VHDs Between Storage Accounts."
46.3 Alternative: copy the VHD with AzCopy
Another option is to use the AzCopy utility (download here). Here is the
equivalent blob copy between storage accounts:
AzCopy https://sourceaccount.blob.core.windows.net/mycontainer1
https://destaccount.blob.core.windows.net/mycontainer2 /sourcekey:key1 /destkey:key2
abc.txt
For more details on how to use AzCopy for different scenarios, check out
Getting Started with the AzCopy Command-Line Utility.
46.4 Create an Azure Disk from the copied blob
At this point, the blob that you've copied into your destination storage account is still just a blob. In order to boot from it, you have to create an Azure Disk from this blob. Navigate to the Disks section of Virtual Machines and select Create.
NOTE: These instructions are specific to specialized VMs. If you want to use the
VHD as an image, you will need to restart the VM, sysprep it, copy the blob over,
and then add as an Image (not a Disk).
Use the VHD URL explorer to select the blob from the destination container that we copied the blob to. Select the toggle that says The VHD contains an operating system. This indicates to Azure that the disk object you're creating is meant to be used as the OS disk rather than one of the data disks.
NOTE: If you get an error that states "A lease conflict occurred with the blob", go back to the previous step to validate that the blob has finished copying.
Alternatively, you can use the Powershell cmdlets to perform the same
operation:
Add-AzureDisk -DiskName "myMigratedTestVM" `
-OS Linux `
-MediaLocation "https://kenazdestinationsa.blob.core.windows.net/destinationvhds/KenazTestServiceTestVM1-2014-8-26-16-16-48-522-0.vhd" `
-Verbose
Once complete, the Disk should show up under the Disks section of Virtual
Machines.
46.5 Create the Virtual Machine from the new Disk
At this point, you can create the Virtual Machine using the disk object you just created. From the Service Management Portal, select Create Virtual Machine from Gallery and select the Disk that you created under My Disks.
NOTE: If you are moving a VM that has a storage pool configured (or want the drive letter ordering to remain the same), make a note of the LUN number to VHD mapping on the source VM, and make sure the data disks are attached to the same LUNs on the destination VM.
Page blobs, which are optimized for random read/write operations and
which provide the ability to write to a range of bytes in a blob.
For more information about block blobs and page blobs, see Understanding
Block Blobs, Append Blobs, and Page Blobs.
The REST API for the Blob service defines HTTP operations against container
and blob resources. The API includes the operations listed in the following
table.
Operation                     Resource Type
List Containers               Account
Set Blob Service Properties   Account
Get Blob Service Properties   Account
Preflight Blob Request        Account
Get Blob Service Stats        Account
Create Container              Container
Get Container Properties      Container
Get Container Metadata        Container
Set Container Metadata        Container
Get Container ACL             Container
Set Container ACL             Container
Lease Container               Container
Delete Container              Container
List Blobs                    Container
Put Blob                      Block, append, and page blobs
Get Blob                      Block, append, and page blobs
Get Blob Properties           Block, append, and page blobs
Set Blob Properties           Block, append, and page blobs
Get Blob Metadata             Block, append, and page blobs
Set Blob Metadata             Block, append, and page blobs
Delete Blob                   Block, append, and page blobs
Lease Blob                    Block, append, and page blobs
Snapshot Blob                 Block, append, and page blobs
Copy Blob                     Block, append, and page blobs
Abort Copy Blob               Block, append, and page blobs
Put Block                     Block blobs only
Put Block List                Block blobs only
Get Block List                Block blobs only
Put Page                      Page blobs only
Get Page Ranges               Page blobs only
Incremental Copy Blob         Page blobs only
Append Block                  Append blobs only
The result is your Web site will resolve to both [your app].azurewebsites.net and whatever
domain you purchased. The A record needs to point to the IP address you captured in step two.
Replace whatever value is there with the IP address provided. When someone calls up your
site, your registrar will authoritatively answer that request and pass it on directly to the IP
address you provided. For the CNAME, there are three entries you need to make:
Step 4: Enter your custom domain name in the Manage Domains dialog and check for validity.
Pull up the Domain Settings for your Web site again. This time, enter your new domain name.
If you want Azure to respond to both www.(yoursite).com and (yoursite).com, you'll want to create both entries. You'll likely see a red dot indicating that validation and/or CNAME lookup has failed.
This is simply Azure's way of telling you records have not yet propagated. You can happily continue using your Azure Web site using the [your app].azurewebsites.net URL. When you come back to the dialog, the verification should succeed and any request for (yoursite).com should automatically resolve to your Azure app.
Interested in more tips and tricks pertaining to the evolution of .NET development, or creating a custom domain name for Azure Web Sites? Join me for my next presentations at VS LIVE!, Redmond, Washington:
For more details about disks and VHDs in Microsoft Azure, see About Disks and VHDs for Virtual Machines.
Important
Azure has two different deployment models for creating and working with
resources: Resource Manager and Classic. This article covers using the
Classic deployment model. Microsoft recommends that most new
deployments use the Resource Manager model. You can also upload a virtual machine using the Resource Manager model.
49.2 Prerequisites
This article assumes you have:
An Azure subscription - If you don't have one, you can open an Azure
account for free.
The VHDX format is not supported in Microsoft Azure. You can convert the disk to VHD format using Hyper-V Manager or the Convert-VHD cmdlet. For details, see this blog post.
1. Log in to your Azure account by running:
PowerShell
Add-AzureAccount
2. Select your subscription by running:
PowerShell
Select-AzureSubscription -SubscriptionName <SubscriptionName>
3. Create a new storage account. The name of the storage account should be unique and 3-24 characters long. The name can be any combination of letters and numbers. You also need to specify a location, like "East US".
PowerShell
New-AzureStorageAccount -StorageAccountName <StorageAccountName> -Location <Location>
4. Upload the .vhd file to the storage account by running:
PowerShell
Add-AzureVhd -Destination "https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/<vhdName>.vhd" -LocalFilePath <LocalPathtoVHDFile>
This article shows you how to enable HTTPS for a web app, a mobile app backend,
or an API app in Azure App Service that uses a custom domain name. It covers
server-only authentication. If you need mutual authentication (including client
authentication), see How To Configure TLS Mutual Authentication for App Service.
To secure with HTTPS an app that has a custom domain name, you add a certificate
for that domain name. By default, Azure secures the *.azurewebsites.net wildcard
domain with a single SSL certificate, so your clients can already access your app at
https://<appname>.azurewebsites.net. But if you want to use a custom domain, like
contoso.com, www.contoso.com, and *.contoso.com, the default certificate can't
secure that. Furthermore, like all wildcard certificates, the default certificate is not as secure as using a custom domain and a certificate for that custom domain.
What you need
To secure your custom domain name with HTTPS, you bind a custom SSL certificate
to that custom domain in Azure. Before binding a custom certificate, you need to do
the following:
Configure the custom domain - App Service only allows adding a certificate for a
domain name that's already configured in your app. For instructions, see Map a
custom domain name to an Azure app.
Scale up to Basic tier or higher - App Service plans in lower pricing tiers don't support custom SSL certificates. For instructions, see Scale up an app in Azure.
Get an SSL certificate - If you do not already have one, you need to get one from a
trusted certificate authority (CA). The certificate must meet all the following
requirements:
It is signed by a trusted CA (no private CA servers).
It contains a private key.
It is created for key exchange, and exported to a .PFX file.
It uses a minimum of 2048-bit encryption.
Its subject name matches the custom domain it needs to secure. To secure multiple
domains with one certificate, you need to use a wildcard name (e.g. *.contoso.com)
or specify subjectAltName values.
It is merged with all intermediate certificates used by your CA. Otherwise, you may
run into irreproducible interoperability problems on some clients.
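If you generate your CSR with OpenSSL, you can check most of these requirements before submitting it to the CA (a sketch; assumes a CSR file named server.csr, as used later in this article):

```shell
# Show the subject name the CA will see
openssl req -in server.csr -noout -subject
# Confirm the key size (should report at least 2048 bit)
openssl req -in server.csr -noout -text | grep "Public-Key"
```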
Note
The easiest way to get an SSL certificate that meets all the requirements is to buy
one in the Azure portal directly. This article shows you how to do it manually and
then bind it to your custom domain in App Service.
Elliptic Curve Cryptography (ECC) certificates can work with App Service, but they are outside the scope of this article. Work with your CA on the exact steps to create ECC certificates.
Get a certificate using Certreq
Create a text file, and copy the following content into it:
[NewRequest]
Subject = "CN=<your-domain>" ; E.g. "CN=www.contoso.com", or "CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication
For more information on the options in the CSR, and other available options, see the
Certreq reference documentation.
In a command prompt, CD into your working directory and run the following
command to create the CSR:
certreq -new <path-to-inf-file> <path-to-csr-file>
To export your SSL certificate from the certificate store, press Win+R and run
certmgr.msc to launch Certificate Manager. Select Personal > Certificates. In the
Issued To column, you should see an entry with your custom domain name, and the
CA you used to generate the certificate in the Issued By column.
Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.
Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.
Select Password, and then enter and confirm the password. Click Next.
Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.
You are now ready to upload the exported PFX file to App Service. See Step 2: Upload and bind the custom SSL certificate.
Get a certificate using the IIS Manager
Generate a CSR with IIS Manager to send to the CA. For more information on
generating a CSR, see Request an Internet Server Certificate (IIS 7).
Submit your CSR to a CA to get an SSL certificate. For a list of CAs trusted by
Microsoft, see Microsoft Trusted Root Certificate Program: Participants.
Complete the CSR with the certificate that the CA sends back to you. For more
information on completing the CSR, see Install an Internet Server Certificate (IIS 7).
If your CA uses intermediate certificates, install them before you proceed. They
usually come as a separate download from your CA, and in several formats for
different web server types. Select the version for Microsoft IIS.
Once you have downloaded the certificates, right-click each of them in Windows
Explorer and select Install certificate. Use the default values in the Certificate
Import Wizard, and continue selecting Next until the import has completed.
Export the SSL certificate from IIS Manager. For more information on exporting the
certificate, see Export a Server Certificate (IIS 7).
Important
In the Certificate Export Wizard, make sure that you select Yes, export the private
key
and also select Personal Information Exchange - PKCS #12, Include all certificates in
the certificate path if possible, and Export all extended properties.
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
Get a certificate using OpenSSL
In a command-line terminal, CD into a working directory, then generate a private key
and CSR by running the following command:
Copy
openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey
rsa:2048
When prompted, enter the appropriate information. For example:
Copy
Please enter the following 'extra' attributes to be sent with your certificate request
Once the CA sends you the requested certificate, save it to a file named
myserver.crt. If your CA provides it in a text format, simply copy the content into
myserver.crt in a text editor and save it. The file should look like the following:
Copy
-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdS
ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----
In the command-line terminal, run the following command to export myserver.pfx
from myserver.key and myserver.crt:
Copy
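A minimal sketch of this export step, assuming the myserver.key and myserver.crt file names used above; "MyPassword" is a placeholder you should replace with your own password:

```shell
# Bundle the private key and certificate into a single PKCS #12 (.pfx) file.
# "MyPassword" is a placeholder password, not part of the original article.
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt -passout pass:MyPassword
```

You will use this password again when you upload the .pfx file to App Service.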
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
Get a SubjectAltName certificate using OpenSSL
Create a file named sancert.cnf, copy the following text into it, and save it in a
working directory:
Copy
commonName_default = www.mydomain.com
commonName_max = 64
[ v3_req ]
subjectAltName=DNS:ftp.mydomain.com,DNS:blog.mydomain.com,DNS:*.mydomain.com
# -------------- END custom sancert.cnf -----
In the line that begins with subjectAltName, replace the value with all domain
names you want to secure (in addition to commonName). For example:
Copy
subjectAltName=DNS:sales.contoso.com,DNS:support.contoso.com,DNS:fabrikam.c
om
You do not need to change any other field, including commonName. You will be
prompted to specify them in the next few steps.
In a command-line terminal, CD into your working directory and run the following
command:
Copy
openssl req -sha256 -new -nodes -keyout myserver.key -out server.csr -newkey
rsa:2048 -config sancert.cnf
When prompted, enter the appropriate information. For example:
Copy
Once the CA sends you the requested certificate, save it to a file named
myserver.crt. If your CA provides it in a text format, simply copy the content into
myserver.crt in a text editor and save it. The file should look like the following:
Copy
-----BEGIN CERTIFICATE-----
MIIDJDCCAgwCCQCpCY4o1LBQuzANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCV0ExEDAOBgNVBAcTB1JlZG1vbmQxEDAOBgNVBAsTB0NvbnRv
c28xFDASBgNVBAMTC2NvbnRvc28uY29tMB4XDTE0MDExNjE1MzIyM1oXDTE1MDEx
NjE1MzIyM1owVDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdS
ZWRtb25kMRAwDgYDVQQLEwdDb250b3NvMRQwEgYDVQQDEwtjb250b3NvLmNvbTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN96hBX5EDgULtWkCRK7DMM3
enae1LT9fXqGlbA7ScFvFivGvOLEqEPD//eLGsf15OYHFOQHK1hwgyfXa9sEDPMT
3AsF3iWyF7FiEoR/qV6LdKjeQicJ2cXjGwf3G5vPoIaYifI5r0lhgOUqBxzaBDZ4
xMgCh2yv7NavI17BHlWyQo90gS2X5glYGRhzY/fGp10BeUEgIs3Se0kQfBQOFUYb
ktA6802lod5K0OxlQy4Oc8kfxTDf8AF2SPQ6BL7xxWrNl/Q2DuEEemjuMnLNxmeA
Ik2+6Z6+WdvJoRxqHhleoL8ftOpWR20ToiZXCPo+fcmLod4ejsG5qjBlztVY4qsC
AwEAATANBgkqhkiG9w0BAQUFAAOCAQEAVcM9AeeNFv2li69qBZLGDuK0NDHD3zhK
Y0nDkqucgjE2QKUuvVSPodz8qwHnKoPwnSrTn8CRjW1gFq5qWEO50dGWgyLR8Wy1
F69DYsEzodG+shv/G+vHJZg9QzutsJTB/Q8OoUCSnQS1PSPZP7RbvDV9b7Gx+gtg
7kQ55j3A5vOrpI8N9CwdPuimtu6X8Ylw9ejWZsnyy0FMeOPpK3WTkDMxwwGxkU3Y
lCRTzkv6vnHrlYQxyBLOSafCB1RWinN/slcWSLHADB6R+HeMiVKkFpooT+ghtii1
A9PdUQIhK9bdaFicXPBYZ6AgNVuGtfwyuS5V6ucm7RE6+qf+QjXNFg==
-----END CERTIFICATE-----
In the command-line terminal, run the following command to export myserver.pfx
from myserver.key and myserver.crt:
Copy
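As in the previous section, a minimal sketch of the export step, assuming the myserver.key and myserver.crt file names used above; "MyPassword" is a placeholder:

```shell
# Bundle the private key and SAN certificate into a PKCS #12 (.pfx) file.
# "MyPassword" is a placeholder password, not part of the original article.
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt -passout pass:MyPassword
```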
Generate a self-signed certificate using Certreq
[NewRequest]
Subject = "CN=<your-domain>" ; E.g. "CN=www.contoso.com", or "CN=*.contoso.com" for a wildcard certificate
Exportable = TRUE
KeyLength = 2048
; KeyLength can be 2048, 4096, 8192, or 16384 (required minimum is 2048)
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
HashAlgorithm = SHA256
RequestType = Cert
; Self-signed certificate
ValidityPeriod = Years
ValidityPeriodUnits = 1
[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1
; Server Authentication
Right-click the certificate and select All Tasks > Export. In the Certificate Export
Wizard, click Next, then select Yes, export the private key, and then click Next again.
Select Personal Information Exchange - PKCS #12, Include all certificates in the
certificate path if possible, and Export all extended properties. Then, click Next.
Select Password, and then enter and confirm the password. Click Next.
Provide a path and filename for the exported certificate, with the extension .pfx.
Click Next to finish.
You are now ready to upload the exported PFX file to App Service. See Step 2.
Upload and bind the custom SSL certificate.+
Generate a self-signed certificate using OpenSSL
Important
Self-signed certificates are for test purposes only. Most browsers return errors when
visiting a website that's secured by a self-signed certificate. Some browsers may
even refuse to navigate to the site. +
Create a text file named serverauth.cnf, then copy the following content into it, and
then save it in a working directory:
Copy
[ req ]
default_bits           = 2048
default_keyfile        = privkey.pem
distinguished_name     = req_distinguished_name
attributes             = req_attributes
x509_extensions        = v3_ca
[ req_distinguished_name ]
countryName            = Country Name (2 letter code)
countryName_min        = 2
countryName_max        = 2
stateOrProvinceName    = State or Province Name (full name)
localityName           = Locality Name (eg, city)
0.organizationName     = Organization Name (eg, company)
organizationalUnitName = Organizational Unit Name (eg, section)
commonName             = Common Name (eg, your app's domain name)
commonName_max         = 64
emailAddress           = Email Address
emailAddress_max       = 40
[ req_attributes ]
challengePassword      = A challenge password
challengePassword_min  = 4
challengePassword_max  = 20
[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints = CA:false
In a command-line terminal, CD into your working directory and run the following
command:
Copy
openssl req -sha256 -x509 -nodes -days 365 -newkey rsa:2048 -keyout
myserver.key -out myserver.crt -config serverauth.cnf
This command creates two files: myserver.crt (the self-signed certificate) and
myserver.key (the private key), based on the settings in serverauth.cnf.
Export the certificate to a .pfx file by running the following command:
Copy
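A minimal sketch of this export step, assuming the myserver.key and myserver.crt file names created above; "MyPassword" is a placeholder:

```shell
# Bundle the self-signed certificate and its private key into a .pfx file.
# "MyPassword" is a placeholder password, not part of the original article.
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt -passout pass:MyPassword
```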
Select the .pfx file that you exported in Step 1 and specify the password that you
created earlier. Then, click Upload to upload the certificate. You should now see your
uploaded certificate back in the SSL certificates blade.
In the SSL bindings section, click Add Binding.
In the Add SSL Binding blade, use the dropdowns to select the domain name to
secure with SSL and the certificate to use. You may also select whether to use
Server Name Indication (SNI) or IP-based SSL.
Remap the A record for your custom domain name to this new IP address.
You already have one or more SNI SSL bindings in your app, and you just added an
IP based SSL binding. Once the binding is complete, your
<appname>.azurewebsites.net domain name points to the new IP address.
Therefore, any existing CNAME mapping from the custom domain to
<appname>.azurewebsites.net, including the ones that SNI SSL secures, also
receives traffic on the new address, which is created for the IP based SSL only. In
this scenario, you need to send the SNI SSL traffic back to the original shared IP
address by following these steps:
Identify all CNAME mappings of custom domains to your app that have an SNI SSL
binding.
Remap each CNAME record to sni.<appname>.azurewebsites.net instead of <
appname>.azurewebsites.net.
Step 4. Test HTTPS for your custom domain
All that's left to do now is to make sure that HTTPS works for your custom domain.
In various browsers, browse to https://<your.custom.domain> to see that it serves
up your app.+
If your app gives you certificate validation errors, you're probably using a self-signed
certificate.
If that's not the case, you may have left out intermediate certificates when you
exported your .pfx certificate. Go back to What you need to verify that your CSR
meets all the requirements of App Service.
Enforce HTTPS on your app
If you still want to allow HTTP access to your app, skip this step. App Service does
not enforce HTTPS, so visitors can still access your app using HTTP. If you want to
enforce HTTPS for your app, you can define a rewrite rule in the web.config file for
your app. Every App Service app has this file, regardless of the language framework
of your app.+
Note
There are language-specific ways to redirect requests. For example, ASP.NET MVC
can use the RequireHttps filter instead of the rewrite rule in web.config (see Deploy
a secure ASP.NET MVC 5 app to a web app).
Follow these steps:+
Navigate to the Kudu debug console for your app. Its address is
https://<appname>.scm.azurewebsites.net/DebugConsole.
In the debug console, CD to D:\home\site\wwwroot.
Open web.config by clicking the pencil button.
If you deploy your app with Visual Studio or Git, App Service automatically
generates the appropriate web.config for your .NET, PHP, Node.js, or Python app in
the application root. If web.config doesn't exist, run touch web.config in the
web-based command prompt to create it. Or, you can create it in your local project
and redeploy your code.
If you had to create a web.config, copy the following code into it and save it. If you
opened an existing web.config, then you just need to copy the entire <rule> tag
into your web.config's configuration/system.webServer/rewrite/rules element.
Copy
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Force HTTPS" enabled="true">
          <match url="(.*)" ignoreCase="false" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="true" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
This rule returns an HTTP 301 (permanent redirect) to the HTTPS protocol whenever
the user requests a page using HTTP. It redirects from http://contoso.com to
https://contoso.com.
Important
If there are already other <rule> tags in your web.config, then place the copied
<rule> tag before the other <rule> tags.
Save the file in the Kudu debug console. It should take effect immediately and
redirect all requests to HTTPS.
For more information on the IIS URL Rewrite module, see the URL Rewrite
documentation.
51.1.1
Deployment models and methods
It's important to know that Azure currently works with two deployment
models: Resource Manager and classic. Before you begin your configuration,
make sure that you understand the deployment models and tools. You'll need
to know which model you want to work in. Not all networking features
are supported yet for both models. For information about the deployment
models, see Understanding Resource Manager deployment and classic
deployment.+
We update this table as new articles and additional tools become available
for this configuration. When an article is available, we link directly to it from
this table.+
Deployment Model/Method | Azure Portal  | Classic Portal | PowerShell
Resource Manager        | Article       | Not Supported  | Supported
Classic                 | Not Supported | Not Supported  | Article
routing type, make sure that your on-premises VPN gateway supports route-based
VPN configurations.
Compatible VPN hardware for each on-premises location. Check About VPN
Devices for Virtual Network Connectivity to verify that the device you want to
use is known to be compatible.
An externally facing public IPv4 address for each VPN device. The IP
address cannot be located behind a NAT. This is a requirement.
You'll need to install the latest version of the Azure PowerShell cmdlets. See
How to install and configure Azure PowerShell for more information about
installing the PowerShell cmdlets.
The IP address ranges that you want to use for your virtual network (if you
haven't already created one).
The IP address ranges for each of the local network sites that you'll be
connecting to. You'll need to make sure that the IP address ranges for each of
the local network sites that you want to connect to do not overlap. Otherwise,
the Azure Classic Portal or the REST API will reject the configuration being
uploaded.
For example, if you have two local network sites that both contain the IP
address range 10.2.3.0/24 and you have a packet with a destination address
of 10.2.3.3, Azure wouldn't know which site you want to send the packet to
because the address ranges overlap. To prevent routing issues, Azure
doesn't allow you to upload a configuration file that has overlapping ranges.
2.
Configure your new gateway and create your VPN tunnel. For instructions,
see Configure a VPN Gateway in the Azure Classic Portal. First, change your
gateway type to dynamic routing.
51.5.2
1.
2.
<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
<VirtualNetworkConfiguration>
<LocalNetworkSites>
<LocalNetworkSite name="Site1">
<AddressSpace>
<AddressPrefix>10.0.0.0/16</AddressPrefix>
<AddressPrefix>10.1.0.0/16</AddressPrefix>
</AddressSpace>
<VPNGatewayAddress>131.2.3.4</VPNGatewayAddress>
</LocalNetworkSite>
<LocalNetworkSite name="Site2">
<AddressSpace>
<AddressPrefix>10.2.0.0/16</AddressPrefix>
<AddressPrefix>10.3.0.0/16</AddressPrefix>
</AddressSpace>
<VPNGatewayAddress>131.4.5.6</VPNGatewayAddress>
</LocalNetworkSite>
</LocalNetworkSites>
<VirtualNetworkSites>
<VirtualNetworkSite name="VNet1" AffinityGroup="USWest">
<AddressSpace>
<AddressPrefix>10.20.0.0/16</AddressPrefix>
<AddressPrefix>10.21.0.0/16</AddressPrefix>
</AddressSpace>
<Subnets>
<Subnet name="FE">
<AddressPrefix>10.20.0.0/24</AddressPrefix>
</Subnet>
<Subnet name="BE">
<AddressPrefix>10.20.1.0/24</AddressPrefix>
</Subnet>
<Subnet name="GatewaySubnet">
<AddressPrefix>10.20.2.0/29</AddressPrefix>
</Subnet>
</Subnets>
<Gateway>
<ConnectionsToLocalNetwork>
<LocalNetworkSiteRef name="Site1">
<Connection type="IPsec" />
</LocalNetworkSiteRef>
</ConnectionsToLocalNetwork>
</Gateway>
</VirtualNetworkSite>
</VirtualNetworkSites>
</VirtualNetworkConfiguration>
</NetworkConfiguration>
<Gateway>
<ConnectionsToLocalNetwork>
<LocalNetworkSiteRef name="Site1"><Connection type="IPsec"
/></LocalNetworkSiteRef>
</ConnectionsToLocalNetwork>
</Gateway>
To add additional site references (create a multi-site configuration), simply add additional
"LocalNetworkSiteRef" lines, as shown in the example below:
<Gateway>
<ConnectionsToLocalNetwork>
<LocalNetworkSiteRef name="Site1"><Connection type="IPsec"
/></LocalNetworkSiteRef>
<LocalNetworkSiteRef name="Site2"><Connection type="IPsec"
/></LocalNetworkSiteRef>
</ConnectionsToLocalNetwork>
</Gateway>
51.10
6. Download keys
Once your new tunnels have been added, use the PowerShell cmdlet
Get-AzureVNetGatewayKey to get the IPsec/IKE pre-shared keys for each tunnel.
For example:
Copy
Get-AzureVNetGatewayKey -VNetName "VNet1" -LocalNetworkSiteName "Site1"
If you prefer, you can also use the Get Virtual Network Gateway Shared Key
REST API to get the pre-shared keys.+
51.11
Check the multi-site tunnel status. After downloading the keys for each
tunnel, you'll want to verify connections. Use Get-AzureVnetConnection to
get a list of virtual network tunnels, as shown in the example below. VNet1 is
the name of the VNet.+
Copy
Get-AzureVNetConnection -VNetName "VNet1"
ConnectivityState         : Connected
EgressBytesTransferred    : 661530
IngressBytesTransferred   : 519207
LastConnectionEstablished : 5/2/2014 2:51:40 PM
LastEventID               : 23401
LastEventMessage          : The connectivity state for the local network site 'Site1' changed from Not Connected to Connected.
LastEventTimeStamp        : 5/2/2014 2:51:40 PM
LocalNetworkSiteName      : Site1
OperationDescription      : Get-AzureVNetConnection
OperationId               : 7f68a8e6-51e9-9db4-88c2-16b8067fed7f
OperationStatus           : Succeeded

ConnectivityState         : Connected
EgressBytesTransferred    : 789398
IngressBytesTransferred   : 143908
LastConnectionEstablished : 5/2/2014 3:20:40 PM
LastEventID               : 23401
LastEventMessage          : The connectivity state for the local network site 'Site2' changed from Not Connected to Connected.
LastEventTimeStamp        : 5/2/2014 2:51:40 PM
LocalNetworkSiteName      : Site2
OperationDescription      : Get-AzureVNetConnection
OperationId               : 7893b329-51e9-9db4-88c2-16b8067fed7f
OperationStatus           : Succeeded
51.12
Next steps
52 VM Sizes
52.1 Notes: Standard A0 - A4 using CLI and PowerShell
In the classic deployment model, some VM size names are slightly different
in CLI and PowerShell:+
Standard_A0 is ExtraSmall
Standard_A1 is Small
Standard_A2 is Medium
Standard_A3 is Large
Standard_A4 is ExtraLarge
Size                      | CPU cores | Memory: GiB | Local HDD: GiB | Max data disks | Max data disk throughput: IOPS | Max NICs / Network bandwidth
Standard_A0 (extra small) | 1         | 0.768       | 20             | 1              | 1x500                          | 1 / low
Standard_A1 (small)       | 1         | 1.75        | 70             | 2              | 2x500                          | 1 / moderate
Standard_A2 (medium)      | 2         | 3.5         | 135            | 4              | 4x500                          | 1 / moderate
Standard_A3 (large)       | 4         | 7           | 285            | 8              | 8x500                          | 2 / high
Standard_A4 (extra large) | 8         | 14          | 605            | 16             | 16x500                         | 4 / high
Standard_A5               | 2         | 14          | 135            | 4              | 4x500                          | 1 / moderate
Standard_A6               | 4         | 28          | 285            | 8              | 8x500                          | 2 / high
Standard_A7               | 8         | 56          | 605            | 16             | 16x500                         | 4 / high
53 Database tiers
53.1.1
Service tier               | S0     | S1     | S2     | S3
Max DTUs                   | 10     | 20     | 50     | 100
Max database size          | 250 GB | 250 GB | 250 GB | 250 GB
Max in-memory OLTP storage | N/A    | N/A    | N/A    | N/A
Max concurrent workers     | 60     | 90     | 120    | 200
Max concurrent logins      | 60     | 90     | 120    | 200
Max concurrent sessions    | 600    | 900    | 1200   | 2400
53.1.2
Service tier               | P1     | P2     | P4     | P6     | P11   | P15
Max DTUs                   | 125    | 250    | 500    | 1000   | 1750  | 4000
Max database size          | 500 GB | 500 GB | 500 GB | 500 GB | 1 TB  | 1 TB
Max in-memory OLTP storage | 1 GB   | 2 GB   | 4 GB   | 8 GB   | 14 GB | 32 GB
Max concurrent workers     | 200    | 400    | 800    | 1600   | 2400  | 6400
Max concurrent logins      | 200    | 400    | 800    | 1600   | 2400  | 6400
Max concurrent sessions    | 30000  | 30000  | 30000  | 30000  | 30000 | 30000
Note
Startup tasks are not applicable to Virtual Machines, only to Cloud Service
Web and Worker roles.+
and files can be written to local storage that can then be read later by your
roles.+
Your startup task can log information and errors to the directory specified by
the TEMP environment variable. During the startup task, the TEMP
environment variable resolves to the C:\Resources\temp\[guid].
[rolename]\RoleTemp directory when running on the cloud.+
Startup tasks can also be executed several times between reboots. For
example, the startup task will be run each time the role recycles, and role
recycles may not always include a reboot. Startup tasks should be written in
a way that allows them to run several times without problems.+
Startup tasks must end with an errorlevel (or exit code) of zero for the
startup process to complete. If a startup task ends with a non-zero
errorlevel, the role will not start.+
2.
Warning
IIS may not be fully configured during the startup task stage in the startup
process, so role-specific data may not be available. Startup tasks that
The role host process is started and the site is created in IIS.
4.
5.
6.
<Startup>
<Task commandLine="Startup.cmd" executionContext="limited" taskType="simple">
</Task>
</Startup>
In the following example, the Startup.cmd batch file writes the line "The
current version is 1.0.0.0" to the StartupLog.txt file in the directory specified
by the TEMP environment variable. The EXIT /B 0 line ensures that the
startup task ends with an errorlevel of zero.+
Copy
cmd
ECHO The current version is %MyVersionNumber% >> "%TEMP%\StartupLog.txt" 2>&1
EXIT /B 0
54.4.1.1.1
Note
In Visual Studio, the Copy to Output Directory property for your startup
batch file should be set to Copy Always to be sure that your startup batch
file is properly deployed to your project on Azure (approot\bin for Web
roles, and approot for worker roles).+
commandLine - The command, with optional command line parameters, which
begins the startup task.
executionContext - Specifies the privilege level for the startup task. The
privilege level can be limited or elevated:+
limited
The startup task runs with the same privileges as the role. When the
executionContext attribute for the Runtime element is also limited, then user
privileges are used.
elevated
The startup task runs with administrator privileges. This allows startup tasks to
install programs, make IIS configuration changes, perform registry changes, and
other administrator level tasks, without increasing the privilege level of the role
itself.
54.5.1.1.1
Note
The privilege level of a startup task does not need to be the same as the role
itself.+
taskType - Specifies the way a startup task is executed.+
simple
Tasks are executed synchronously, one at a time, in the order specified in the
ServiceDefinition.csdef file. When one simple startup task ends with an
errorlevel of zero, the next simple startup task is executed. If there are no
more simple startup tasks to execute, then the role itself will be started.
54.5.1.1.2 Note
If the simple task ends with a non-zero errorlevel, the instance will be
blocked. Subsequent simple startup tasks, and the role itself, will not start.
To ensure that your batch file ends with an errorlevel of zero, execute the
command EXIT /B 0 at the end of your batch file process.
background
Tasks are executed asynchronously, in parallel with the startup of the role.
foreground
Tasks are executed asynchronously, in parallel with the startup of the role. The
key difference between a foreground and a background task is that a
foreground task prevents the role from recycling or shutting down until the task
has ended. The background tasks do not have this restriction.
Your storage account key is similar to the root password for your storage account.
Always be careful to protect your account key. Avoid distributing it to other users,
hard-coding it, or saving it in a plain-text file that is accessible to others. Regenerate
your account key using the Azure Portal if you believe it may have been
compromised. To learn how to regenerate your account key, see How to create,
manage, or delete a storage account in the Azure Portal.+
A SAS gives you granular control over what type of access you grant to clients who
have the SAS, including:+
The interval over which the SAS is valid, including the start time and the expiry
time.
The permissions granted by the SAS. For example, a SAS on a blob might grant a
user read and write permissions to that blob, but not delete permissions.
An optional IP address or range of IP addresses from which Azure Storage will accept
the SAS. For example, you might specify a range of IP addresses belonging to your
organization. This provides another measure of security for your SAS.
The protocol over which Azure Storage will accept the SAS. You can use this optional
parameter to restrict access to clients using HTTPS.
When should you use a shared access signature?
You can use a SAS when you want to provide access to resources in your storage
account to a client that can't be trusted with the account key. Your storage account
keys include both a primary and secondary key, both of which grant administrative
access to your account and all of the resources in it. Exposing either of your account
keys opens your account to the possibility of malicious or negligent use. Shared
access signatures provide a safe alternative that allows other clients to read, write,
and delete data in your storage account according to the permissions you've
granted, and without need for the account key.+
A common scenario where a SAS is useful is a service where users read and write
their own data to your storage account. In a scenario where a storage account
stores user data, there are two typical design patterns:+
1. Clients upload and download data via a front-end proxy service, which performs
authentication. This front-end proxy service has the advantage of allowing
validation of business rules, but for large amounts of data or high-volume
transactions, creating a service that can scale to match demand may be expensive
or difficult.+
2. A lightweight service authenticates the client as needed and then generates a
SAS. Once the client receives the SAS, they can access storage account resources
directly with the permissions defined by the SAS and for the interval allowed by the
SAS. The SAS mitigates the need for routing all data through the front-end proxy
service.+
Many real-world services may use a hybrid of these two approaches, depending on
the scenario involved, with some data processed and validated via the front-end
proxy while other data is saved and/or read directly using SAS.+
Additionally, you will need to use a SAS to authenticate the source object in a copy
operation in certain scenarios:+
When you copy a blob to another blob that resides in a different storage account,
you must use a SAS to authenticate the source blob. With version 2015-04-05, you
can optionally use a SAS to authenticate the destination blob as well.
When you copy a file to another file that resides in a different storage account, you
must use a SAS to authenticate the source file. With version 2015-04-05, you can
optionally use a SAS to authenticate the destination file as well.
When you copy a blob to a file, or a file to a blob, you must use a SAS to
authenticate the source object, even if the source and destination objects reside
within the same storage account.
Types of shared access signatures
Version 2015-04-05 of Azure Storage introduces a new type of shared access
signature, the account SAS. You can now create either of two types of shared access
signatures:+
Account SAS. The account SAS delegates access to resources in one or more of the
storage services. All of the operations available via a service SAS are also available
via an account SAS. Additionally, with the account SAS, you can delegate access to
operations that apply to a given service, such as Get/Set Service Properties and Get
Service Stats. You can also delegate access to read, write, and delete operations on
blob containers, tables, queues, and file shares that are not permitted with a service
SAS. See Constructing an Account SAS for in-depth information about constructing
the account SAS token.
Service SAS. The service SAS delegates access to a resource in just one of the
storage services: the Blob, Queue, Table, or File service. See Constructing a Service
SAS and Service SAS Examples for in-depth information about constructing the
service SAS token.
How a shared access signature works
A shared access signature is a signed URI that points to one or more storage
resources and includes a token that contains a special set of query parameters. The
token indicates how the resources may be accessed by the client. One of the query
parameters, the signature, is constructed from the SAS parameters and signed with
the account key. This signature is used by Azure Storage to authenticate the SAS.+
Here's an example of a SAS URI, showing the resource URI and the SAS token:+
Note that the SAS token is a string generated on the client side (see the SAS
examples section below for code examples). The SAS token generated by the
storage client library is not tracked by Azure Storage in any way. You can create an
unlimited number of SAS tokens on the client side.+
When a client provides a SAS URI to Azure Storage as part of a request, the service
checks the SAS parameters and signature to verify that it is valid for authenticating
the request. If the service verifies that the signature is valid, then the request is
authenticated. Otherwise, the request is declined with error code 403 (Forbidden).+
Shared access signature parameters
The account SAS and service SAS tokens include some common parameters, and
also take a few parameters that are different.
https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D
Name | SAS portion | Description
Blob URI | https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt | The address of the blob. Note that using HTTPS is highly recommended.
Storage services version | sv=2015-04-05 | For storage services version 2012-02-12 and later, this parameter indicates the version to use.
Start time | st=2015-04-29T22%3A18%3A26Z | Specified in UTC time. If you want the SAS to be valid immediately, omit the start time.
Expiry time | se=2015-04-30T02%3A23%3A26Z | Specified in UTC time.
Resource | sr=b | The resource is a blob.
Permissions | sp=rw | The permissions granted by the SAS include Read (r) and Write (w).
IP range | sip=168.1.5.60-168.1.5.70 | The range of IP addresses from which a request will be accepted.
Protocol | spr=https | Only requests using HTTPS are permitted.
Signature | sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D | Used to authenticate access to the blob. The signature is an HMAC computed over a string-to-sign and key using the SHA256 algorithm, and then encoded using Base64 encoding.
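As a rough illustration of the signature construction described above, the following sketch computes Base64(HMAC-SHA256(key, string-to-sign)) with openssl. The key and the string-to-sign contents are made-up placeholders; the exact field order of Azure's string-to-sign is defined by the Azure Storage SAS reference, not shown here:

```shell
# Placeholder account key (Base64-encoded), NOT a real Azure Storage key.
KEY_B64="c3RvcmFnZS1hY2NvdW50LWtleQ=="
# Hypothetical string-to-sign; the real format is defined by the SAS spec.
STRING_TO_SIGN=$'rw\n2015-04-29T22:18:26Z\n2015-04-30T02:23:26Z\n/myaccount/sascontainer/sasblob.txt'
# sig = Base64( HMAC-SHA256( decoded key, string-to-sign ) )
SIG=$(printf '%s' "$STRING_TO_SIGN" \
  | openssl dgst -sha256 -hmac "$(printf '%s' "$KEY_B64" | base64 -d)" -binary \
  | base64)
echo "sig=$SIG"
```

The resulting value is then URL-encoded and appended as the sig query parameter.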
And here is an example of an account SAS that uses the same common parameters
on the token. Since these parameters are described above, they are not described
here. Only the parameters that are specific to account SAS are described in the
table below.+
Copy
https://myaccount.blob.core.windows.net/?restype=service&comp=properties&sv=2015-04-05&ss=bf&srt=s&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=F%6GRVAZ5Cdj2Pw4tgU7IlSTkWgn7bUkkAg8P6HESXwmf%4B
Name | SAS portion | Description
Resource URI | https://myaccount.blob.core.windows.net/?restype=service&comp=properties | The Blob service endpoint, with parameters for getting service properties (when called with GET) or setting service properties (when called with SET).
Services | ss=bf | The SAS applies to the Blob and File services.
Resource types | srt=s | The SAS applies to service-level operations.
Permissions | sp=rw | The permissions grant access to read and write operations.
Given that permissions are restricted to the service level, accessible operations with
this SAS are Get Blob Service Properties (read) and Set Blob Service Properties
(write). However, with a different resource URI, the same SAS token could also be
used to delegate access to Get Blob Service Stats (read).+
Controlling a SAS with a stored access policy
A shared access signature can take one of two forms:
Ad hoc SAS: When you create an ad hoc SAS, the start time, expiry time, and permissions for the SAS are all specified on the SAS URI (or implied, in the case where start time is omitted). This type of SAS may be created as an account SAS or a service SAS.
SAS with stored access policy: A stored access policy is defined on a resource container - a blob container, table, queue, or file share - and can be used to manage constraints for one or more shared access signatures. When you associate a SAS with a stored access policy, the SAS inherits the constraints - the start time, expiry time, and permissions - defined for the stored access policy.
Note
Currently, an account SAS must be an ad hoc SAS. Stored access policies are not yet supported for account SAS.
The difference between the two forms is important for one key scenario: revocation.
A SAS is a URL, so anyone who obtains the SAS can use it, regardless of who
requested it to begin with. If a SAS is published publicly, it can be used by anyone in
the world. A SAS that is distributed is valid until one of four things happens:
The expiry time specified on the SAS is reached.
The expiry time specified on the stored access policy referenced by the SAS is
reached (if a stored access policy is referenced, and if it specifies an expiry time).
This can either occur because the interval elapses, or because you have modified
the stored access policy to have an expiry time in the past, which is one way to
revoke the SAS.
The stored access policy referenced by the SAS is deleted, which is another way to
revoke the SAS. Note that if you recreate the stored access policy with exactly the
same name, all existing SAS tokens will again be valid according to the permissions
associated with that stored access policy (assuming that the expiry time on the SAS
has not passed). If you are intending to revoke the SAS, be sure to use a different
name if you recreate the access policy with an expiry time in the future.
The account key that was used to create the SAS is regenerated. Note that doing
this will cause all application components using that account key to fail to
authenticate until they are updated to use either the other valid account key or the
newly regenerated account key.
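Of these four conditions, only the first is visible to the client from the token itself. A rough Python sketch (assuming the se parameter uses the ISO 8601 UTC format shown in the example URIs) checks it like this:

```python
# Sketch: check the first revocation condition above - whether the SAS's own
# expiry time (se) has passed. Stored access policy changes and account key
# regeneration happen server-side and cannot be detected from the token.
from datetime import datetime, timezone
from urllib.parse import parse_qs

def sas_expired(token: str, now: datetime) -> bool:
    se = parse_qs(token).get("se", [None])[0]
    if se is None:
        return False  # the token itself carries no expiry time
    expiry = datetime.strptime(se, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return now >= expiry

token = "sv=2015-04-05&se=2016-04-13T03%3A29%3A31Z&sp=rw"
expired = sas_expired(token, now=datetime(2016, 5, 1, tzinfo=timezone.utc))
```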
Important
A shared access signature URI is associated with the account key used to create the
signature, and the associated stored access policy (if any). If no stored access policy
is specified, the only way to revoke a shared access signature is to change the
account key.
Authenticating from a client application with a SAS
A client who is in possession of a SAS can use the SAS to authenticate a request
against a storage account for which they do not possess the account keys. A SAS
can be included in a connection string, or used directly from the appropriate
constructor or method.
Using a SAS in a connection string
If you possess a shared access signature (SAS) URL that grants you access to resources in a storage account, you can use the SAS in a connection string. Because the SAS URI itself carries the information required to authenticate the request, it provides the protocol, the service endpoint, and the credentials needed to access the resource.
To create a connection string that includes a shared access signature, specify the
string in the following format:
Copy
BlobEndpoint=myBlobEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
FileEndpoint=myFileEndpoint;
SharedAccessSignature=sasToken
Each service endpoint is optional, although the connection string must contain at
least one.
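The format above can be assembled mechanically; here is a Python sketch (build_sas_connection_string is a hypothetical helper, not an SDK API, and the sig value is a placeholder) that also enforces the at-least-one-endpoint rule:

```python
# Sketch: build a SAS connection string in the format shown above. The
# endpoint names and the at-least-one-endpoint rule come from the text;
# the helper and the sig value below are illustrative placeholders.
def build_sas_connection_string(sas_token: str, **endpoints: str) -> str:
    allowed = ("BlobEndpoint", "QueueEndpoint", "TableEndpoint", "FileEndpoint")
    parts = [f"{name}={endpoints[name]}" for name in allowed if name in endpoints]
    if not parts:
        raise ValueError("the connection string must contain at least one endpoint")
    parts.append(f"SharedAccessSignature={sas_token}")
    return ";".join(parts)

conn = build_sas_connection_string(
    "sv=2015-04-05&sr=b&sig=abc123",
    BlobEndpoint="https://storagesample.blob.core.windows.net",
)
```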
Note
Using HTTPS with a SAS is recommended as a best practice.
If you are specifying a SAS in a connection string in a configuration file, you may
need to encode special characters in the URL.
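For example, if the configuration file is XML (as App.config files are), the '&' separators in the SAS must be escaped. A quick Python sketch of that idea (which characters need encoding depends on the configuration format you are targeting):

```python
# Sketch: escape a SAS token for inclusion in an XML configuration file.
# XML treats '&' as special, so each '&' in the token becomes '&amp;'.
# The sig value is taken from the example connection string in the text.
from xml.sax.saxutils import escape

sas = "sv=2015-04-05&sr=b&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D"
encoded = escape(sas)
```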
Service SAS example
Here's an example of a connection string that includes a service SAS for Blob
storage:
Copy
BlobEndpoint=https://storagesample.blob.core.windows.net;SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D
And here's an example of the same connection string with encoding of special
characters:
Copy
BlobEndpoint=https://storagesample.blob.core.windows.net;SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D
Account SAS example
Here's an example of a connection string that includes an account SAS for Blob and
File storage. Note that endpoints for both services are specified:
Copy
BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW
%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl
And here's an example of the same connection string with URL encoding:
Copy
BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW
%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl
Using a SAS in a constructor or method
Several Azure Storage client library constructors and method overloads offer a SAS
parameter, so that you can authenticate a request to the service with a SAS.
For example, here a SAS URI is used to create a reference to a block blob. The SAS
provides the only credentials needed for the request. The block blob reference is
then used for a write operation:
Copy
C#
string sasUri = "https://storagesample.blob.core.windows.net/sample-container/" +
    "sampleBlob.txt?sv=2015-07-08&sr=b&sig=39Up9JzHkxhUIhFEjEH9594DJxe7w6cIRCg0V6lCGSo%3D" +
    "&se=2016-10-18T21%3A51%3A37Z&sp=rcw";

// Create the block blob reference directly from the SAS URI; the SAS provides
// the only credentials needed for the request.
CloudBlockBlob blob = new CloudBlockBlob(new Uri(sasUri));
string blobContent = "This blob will be created with a SAS.";

// Create operation: Upload a blob with the specified name to the container.
// If the blob does not exist, it will be created. If it does exist, it will be overwritten.
try
{
    MemoryStream msWrite = new MemoryStream(Encoding.UTF8.GetBytes(blobContent));
    msWrite.Position = 0;
    using (msWrite)
    {
        await blob.UploadFromStreamAsync(msWrite);
    }
}
catch (StorageException e)
{
    Console.WriteLine("Write operation failed: {0}", e.Message);
}
Best practices for using shared access signatures
When you use shared access signatures in your applications, you should be aware of two potential risks:
If a SAS is leaked, it can be used by anyone who obtains it, which can potentially compromise your storage account.
If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, then the application's functionality may be hindered.
The following recommendations for using shared access signatures will help balance these risks:
Always use HTTPS to create a SAS or to distribute a SAS. If a SAS is passed over
HTTP and intercepted, an attacker performing a man-in-the-middle attack will be
able to read the SAS and then use it just as the intended user could have,
potentially compromising sensitive data or allowing for data corruption by the
malicious user.
Reference stored access policies where possible. Stored access policies give you the
option to revoke permissions without having to regenerate the storage account
keys. Set the expiration on these to be a very long time (or infinite) and make sure
that it is regularly updated to move it farther into the future.
Use near-term expiration times on an ad hoc SAS. In this way, even if a SAS is
compromised unknowingly, it will only be viable for a short time duration. This
practice is especially important if you cannot reference a stored access policy. This
practice also helps limit the amount of data that can be written to a blob by limiting
the time available to upload to it.
Have clients automatically renew the SAS if necessary. Clients should renew the SAS
well before the expiration, in order to allow time for retries if the service providing
the SAS is unavailable. If your SAS is meant to be used for a small number of
immediate, short-lived operations that are expected to be completed within the
expiration period, then this may be unnecessary as the SAS is not expected to be
renewed. However, if you have a client that is routinely making requests via SAS, then
the possibility of expiration comes into play. The key consideration is to balance the
need for the SAS to be short-lived (as stated above) with the need to ensure that
the client is requesting renewal early enough to avoid disruption due to the SAS
expiring prior to successful renewal.
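One simple way to act on this guidance is to renew whenever the remaining lifetime drops inside a safety margin. A Python sketch (the 10-minute margin is an arbitrary illustration, not a documented recommendation):

```python
# Sketch: renew the SAS well before expiry so there is time to retry if the
# token-issuing service is temporarily unavailable. The margin is arbitrary.
from datetime import datetime, timedelta, timezone

def should_renew(expires: datetime, now: datetime,
                 margin: timedelta = timedelta(minutes=10)) -> bool:
    return expires - now <= margin

expires = datetime(2016, 4, 12, 4, 0, tzinfo=timezone.utc)
# 55 minutes into a one-hour SAS: only 5 minutes remain, so renew now.
renew = should_renew(expires, now=datetime(2016, 4, 12, 3, 55, tzinfo=timezone.utc))
```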
Be careful with SAS start time. If you set the start time for a SAS to now, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past, or don't set it at all, which will make it valid immediately in all cases. The same generally applies to expiry time as well - remember that you may observe up to 15 minutes of clock skew in either direction on any request. Note for clients using a REST version prior to 2012-02-12, the maximum duration for a SAS that does not reference a stored access policy is 1 hour, and any policies specifying a longer term than that will fail.
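Following that guidance, a start/expiry window can be computed with the skew allowance built in. A Python sketch using the st/se timestamp format from the example URIs (padding the expiry time by the skew is our interpretation of the advice, not a documented rule):

```python
# Sketch: compute a SAS validity window that tolerates up to 15 minutes of
# clock skew in either direction, per the guidance above. The timestamp
# format matches the st=/se= values in the example URIs.
from datetime import datetime, timedelta, timezone

def sas_window(now: datetime, lifetime: timedelta,
               skew: timedelta = timedelta(minutes=15)) -> tuple:
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    start = now - skew            # start in the past to absorb slow clocks
    expiry = now + lifetime + skew  # pad expiry to absorb fast clocks
    return start.strftime(fmt), expiry.strftime(fmt)

st, se = sas_window(datetime(2016, 4, 12, 3, 39, 31, tzinfo=timezone.utc),
                    timedelta(hours=1))
```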
To use the account SAS to access service-level APIs for the Blob service, construct a
Blob client object using the SAS and the Blob storage endpoint for your storage
account.
Copy
C#
static void UseAccountSAS(string sasToken)
{
// Create new storage credentials using the SAS token.
StorageCredentials accountSAS = new StorageCredentials(sasToken);
// Use these credentials and the account name to create a Blob service client.
CloudStorageAccount accountWithSAS = new CloudStorageAccount(accountSAS,
"account-name", endpointSuffix: null, useHttps: true);
CloudBlobClient blobClientWithSAS = accountWithSAS.CreateCloudBlobClient();
// Now set the service properties for the Blob client created with the SAS.
blobClientWithSAS.SetServiceProperties(new ServiceProperties()
{
HourMetrics = new MetricsProperties()
{
MetricsLevel = MetricsLevel.ServiceAndApi,
RetentionDays = 7,
Version = "1.0"
},
MinuteMetrics = new MetricsProperties()
{
MetricsLevel = MetricsLevel.ServiceAndApi,
RetentionDays = 7,
Version = "1.0"
},
Logging = new LoggingProperties()
{
LoggingOperations = LoggingOperations.All,
RetentionDays = 14,
Version = "1.0"
}
});
// The permissions granted by the account SAS also permit you to retrieve service properties.
ServiceProperties serviceProperties = blobClientWithSAS.GetServiceProperties();
Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
Console.WriteLine(serviceProperties.HourMetrics.Version);
}
Example: Create a stored access policy
The following code creates a stored access policy on a container. You can use the
access policy to specify constraints for a service SAS on the container or its blobs.
Copy
C#
private static async Task CreateSharedAccessPolicyAsync(CloudBlobContainer container, string policyName)
{
    // Create a new shared access policy and define its constraints.
    // The access policy provides create, write, read, list, and delete permissions.
    SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
    {
        // When the start time for the SAS is omitted, the start time is assumed to be
        // the time when the storage service receives the request.
        // Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List |
            SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Create |
            SharedAccessBlobPermissions.Delete
    };

    // Get the container's existing permissions, add the new policy, and set the
    // container's permissions.
    BlobContainerPermissions permissions = await container.GetPermissionsAsync();
    permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
    await container.SetPermissionsAsync(permissions);
}
Example: Create a service SAS on a container
The following code creates a SAS on a container. If the name of an existing stored
access policy is provided, that policy is associated with the SAS. If no stored access
policy is provided, then the code creates an ad-hoc SAS on the container.
Copy
C#
private static string GetContainerSasUri(CloudBlobContainer container, string storedPolicyName = null)
{
    string sasContainerToken;

    // If no stored policy is specified, create a new access policy and define its constraints.
    if (storedPolicyName == null)
    {
        // Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad-hoc SAS, and
        // to construct a shared access policy that is saved to the container's shared access policies.
        SharedAccessBlobPolicy adHocPolicy = new SharedAccessBlobPolicy()
        {
            // When the start time for the SAS is omitted, the start time is assumed to be
            // the time when the storage service receives the request.
            // Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
            Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.List
        };

        // Generate the shared access signature on the container, setting the constraints directly on the signature.
        sasContainerToken = container.GetSharedAccessSignature(adHocPolicy, null);
    }
    else
    {
        // Generate the shared access signature on the container. In this case, all of the constraints
        // for the SAS are specified on the stored access policy.
        sasContainerToken = container.GetSharedAccessSignature(null, storedPolicyName);
    }

    // Return the URI string for the container, including the SAS token.
    return container.Uri + sasContainerToken;
}
Example: Create a service SAS on a blob
The following code creates a SAS on a blob. If the name of an existing stored access policy is provided, that policy is associated with the SAS. If no stored access policy is provided, then the code creates an ad-hoc SAS on the blob.
Copy
C#
private static string GetBlobSasUri(CloudBlockBlob blob, string policyName = null)
{
    string sasBlobToken;

    // If no stored policy is specified, create a new access policy and define its constraints.
    if (policyName == null)
    {
        // Note that the SharedAccessBlobPolicy class is used both to define the parameters of an ad-hoc SAS, and
        // to construct a shared access policy that is saved to the container's shared access policies.
        SharedAccessBlobPolicy adHocSAS = new SharedAccessBlobPolicy()
        {
            // When the start time for the SAS is omitted, the start time is assumed to be
            // the time when the storage service receives the request.
            // Omitting the start time for a SAS that is effective immediately helps to avoid clock skew.
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
            Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write |
                SharedAccessBlobPermissions.Create
        };

        // Generate the shared access signature on the blob, setting the constraints directly on the signature.
        sasBlobToken = blob.GetSharedAccessSignature(adHocSAS);
    }
    else
    {
        // Generate the shared access signature on the blob. In this case, all of the constraints
        // for the SAS are specified on the stored access policy.
        sasBlobToken = blob.GetSharedAccessSignature(null, policyName);
    }

    // Return the URI string for the blob, including the SAS token.
    return blob.Uri + sasBlobToken;
}
Conclusion
Shared access signatures are useful for providing limited permissions to your
storage account to clients that should not have the account key. As such, they are a
vital part of the security model for any application using Azure Storage. If you follow
the best practices listed here, you can use SAS to provide greater flexibility of
access to resources in your storage account, without compromising the security of
your application.
Reading and writing page or block blob content, block lists, properties, and metadata