White Paper
Copyright 2009 Dariel Solutions (Proprietary Limited). All rights reserved. No part
of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying,
recording, or otherwise, without the prior permission of the copyright holders.
Building Service Orientated Applications
Using WCF
1. Indexing
1.1 Table of contents
1. Indexing
1.1 Table of contents
1.2 Table of Figures and Examples
2. Revision History
3. Foreword
4. Executive Summary
5. The SOA Egg
6. Nobody I talk to knows what to do, blogs just confuse me
7. A SOA model using WCF
7.1 Managers, Engines and Gateways
7.1.1 Connector Tier
7.1.2 Business Logic Tier
7.1.3 Data Access Layer
7.1.4 Utilities
7.1.5 Data Contracts
7.1.6 Deployable
7.1.7 Hosting
7.1.8 Call patterns
7.2 Technical Requirements
7.2.1 Integrity
7.2.1.1 Data Validation
7.2.2 Flexibility
7.2.3 Maintainability
7.2.4 Performance
7.2.5 Reusability
7.2.6 Scalability
7.2.7 Security
7.2.7.1 Authentication and Authorisation
8. Try it out
9. References and Bibliography
2. Revision History
Name Description Date Version
3. Foreword
I would like to thank Garrick Hensberg and Edrich Landsberg for their ideas,
many of which have been incorporated into this work.
4. Executive Summary
SOA is a big business buzzword tossed into conversations at board meetings and
at executive briefings. At this level, however, SOA really refers to connecting
disparate systems across application, department, corporate, and even industry
boundaries. This is the “Big” SOA concept, and this is the realm of the enterprise
architect.
This is the space of multimillion rand Service Bus applications, SAP systems and
other wonderful products. But the fact remains that a chain is as strong as its
weakest link: if the systems hooked up to the top-of-the-range Service Bus are
not rock solid and cannot be trusted to produce the correct results all the time,
then some of the true potential of the investment is lost.
The path to “Big” SOA begins with a solid base and the ideas presented in this
paper will provide you with the tools you need to achieve this.
This paper offers practical advice for building Service Orientated Applications
using service oriented programming (SO) as an approach, showing that every
component can be a service while still meeting the technical requirements that
modern applications must exhibit, and in most cases surpassing what many
application frameworks offer to date.
By building a solid “Little” SOA base, the platform is set at an enterprise level to
realise the composition and reuse that is the value proposition of SOA; without
a “Little” SOA that is properly partitioned, rock solid and composable, attempts to
realise more will fail.
I hope to share some of the best practices, techniques and tools that I have
learned over the last two years of applying WCF to the construction of service
orientated applications, and to provide insight to application architects on how to
achieve the non-functional specifications that many projects only pay lip
service to.
5. The SOA Egg
I call this the SOA egg: a farmer presented with an egg might envision a chicken,
while a chef may see an omelette and a child a brightly painted Easter egg. SOA
is no different; everyone has their own vision of what SOA is. [MSDN1]
This paper will define a practical set of patterns for building Service Orientated
Applications using service orientation as a methodology. These concepts are so
deeply ingrained in this architectural approach that every component can be a
service while still meeting the technical requirements that modern applications
must exhibit.
It is important to note that in the context of this framework the term service is
not synonymous with the term web service; I view the protocol that the service
implements as an implementation detail of the service, not part of its
architectural design. I advocate that most of the services detailed below be run
as in-process services where it makes sense to do so.
An In-Proc or in process call is one where the service resides in the same process
as the client.
6. Nobody I talk to knows what to do, blogs just confuse me
The trouble is that much of what is written about SOA is sales jargon and
opinionated views that cover only a small subsection of the SOA landscape. One
is hard pressed to find any practical advice on actually building service orientated
applications beyond putting a web service in front of your application and calling
it SOA; other advice pushes you down the slippery path of white box services and
tightly coupled, unreliable and unmaintainable systems.
I maintain that SOA is simply to our current knowledge the best and most
productive way to build software.
Bad SOA, or Just a Bunch of Web Services (JBOWS), is a relic of the SOA past
where the focus was on tooling and technology rather than architecture and
analysis: services are exposed somewhat arbitrarily and do not contribute
towards any defined broader architecture, so service contracts have an arbitrary
level of stability.
Service orientated integration is slightly better than JBOWS; with this approach,
service contracts are centred on applications.
So what about layered service models? In this case, we have atomic business
tasks, business processes and data stores abstracted as services. These concepts
are not at all stable. Businesses very often make changes in business processes
that in turn require changes in how atomic business activities are defined and
how data is represented. With this approach we are very likely to find ourselves
changing our service contracts as business processes are updated.
But what about business capabilities? Business capabilities are by their very
nature incredibly stable. Although a retail organisation may make regular changes
as to how it goes about inventory management, the fact is that it will always have
an inventory management capability. Moreover, other capabilities within the
enterprise don't care how inventory management is performed. They only care
that it is performed. [Poole]
7. A SOA model using WCF
With an understanding of a SOA quality spectrum, the problem then is how one
builds a service orientated application that has a firm grip on the technical
requirements of integrity, flexibility, maintainability, performance, reusability,
scalability and security. Usability is not considered, as it will be a technical
requirement of the client application.
Any application framework also needs to sit at the right of the SOA quality
spectrum and be as simple as possible for a team of individuals to build, by giving
them a set of guidelines of what goes where, thus eliminating many of the
decisions that cause projects to lose their conceptual integrity before they are
out of the development phase.
The framework should build on the lessons learned from the past decades of
software engineering and incorporate the patterns and best practices the
industry has distilled from this journey.
Sounds like a tall order, but it is entirely achievable in a very productive and
repeatable way.
7.1 Managers, Engines and Gateways
Most of these components are services and each has a well defined architecture;
they will differ only in implementation and composition between different projects
and domains.
They have a well defined interaction pattern and a clear set of guidelines as to
their function and responsibility within the model, making the implementer's
decision of what goes where easier.
They are:
1. Service Connector
2. Manager
3. Engine
4. Gateway
5. Capability Bunker
7.1.1 Connector Tier
Service connectors should not contain any business logic; they should simply
delegate down to the business layer components to get the real work of the use
cases done. The service connector exists to isolate the business logic from
needing to know about the service boundary specifics. Service connectors form
the boundary between the client and the application. The service connector may
implement the interface of the manager component.
Thus the relationship particulars of any one client are encapsulated into a single
layer, allowing the layers below to remain reusable and client agnostic. When any
set of managers is reused, they do not carry any of the restrictions of the older
client; we would typically build a connector layer for the new client and enforce
its application specific security demands in that layer.
In the scenario where a client calls a service connector, it is possible to have the
Data Contracts validate the data that is bound to them, if you own the user
interface (UI) components as well as the service connector it is possible to
combine the two techniques to leverage the .Net framework to do client side
validation on input, and then use the same mechanism to reject invalid input at
the connector before any service operations have been invoked.
Validation works best when both sides are .Net, but if some other technology
calls the service, as soon as that call hits the connector and the WCF stack has
translated the contract we essentially have a .Net contract again, and validation
continues as normal. So when something other than .Net calls the service, the
caller won't have client side validation, but the service will always validate.
This technique is valuable as it reduces the amount of error handling code needed
in the layers below to protect against things like parameters exceeding their
maximum length; rather, the data contract asserts the data as incorrect and
rejects the call with a suitable error message, or flags it in the UI, thus preventing
the invocation of the business logic only to have an error thrown in the gateway.
In a development where a Service Bus is being utilised, this service layer may
become optional if the Service Bus assumes the responsibility of resolving the
application specific security requests and validating the data flowing over the
Bus.
7.1.2 Business Logic Tier
The business layer components are broken into two main types: manager and
engine components.
Manager components are the code manifestation of a use case; they should bear
a close resemblance to the implemented use case functionality. This aids system
validation, as it is easy to see what a system does by looking at the managers.
[OperationBehavior(TransactionScopeRequired = true)]
Deduction CreateDeductionForBudgetItem(Deduction deduction, BudgetLineItem item)
{
    using (var gateway = new ProxyHelper<BudgetLineItemGateway,
                                         IRepository<BudgetLineItem>>())
    {
        gateway.Execute(g => g.Delete(item.Id));
    }
    // The remainder of the use case flow is elided in the original.
    return deduction;
}
Notice that in the above example the manager is controlling the flow of events,
not implementing them.
Managers can interact with gateways and engines directly, but never with other
managers; manager to manager interaction is achieved through an event driven
architecture, avoiding the coupling and reduction in performance that direct calls
would introduce into a system.
Managers are prime candidates for the introduction of workflow technology; this
will allow a more visual programming experience, aiding maintenance and
understanding of the system.
The engine components encapsulate the business rules that support the business
process. This separation of managers and engines is intentional for maintenance
reasons.
Business rules are likely to change more frequently than the business processes.
Keeping them isolated in their own component allows the developer to more
easily locate and update the business rule code without needing to check in many
locations throughout the architecture. The engine components can potentially be
deployed and updated separately from the rest of the architecture to minimize
any potential breakage with deploying updates.
Business exceptions are exceptions where the client is able to modify their action
and try again. This may be unnatural for many developers, who are taught to
Try...Catch...Throw exceptions whenever there is a chance of an exception
occurring; this is a waste of effort in our opinion, as nobody ever codes for all
circumstances.
When last did anyone write a disk crash exception? Rather let the framework
catch, log and shield all application exceptions; the client could not do anything
about your server disk crash anyway. This way you have a log of what went
wrong and where, and the client is not exposed to the inner workings of your
service. As a side effect you also reduce the complexity of your code and increase
its maintainability.
Fault contracts are also not recommended at any layer of the model, as they
introduce coupling between your service and client; this is a bad coupling because
they are coupled on exception type, and if that type changes in the service, the
client also needs to change. Rather throw only business exceptions, and when you
do throw them, throw them as FaultException<T> with a message of a suitable
nature so that the client can take corrective action and try again.
[OperationBehavior(TransactionScopeRequired = true)]
void IClientEngine.ValidateModifyClientBusinessRule(Client client)
{
(this as IClientEngine).ValidateAddClientBusinessRule(client);
}
[OperationBehavior(TransactionScopeRequired = true)]
void IClientEngine.ValidateDeleteClientBusinessRule(Client client)
{
    // hasSchedule is derived earlier in the method (elided in the original).
    if (hasSchedule)
    {
        RaiseException(ExceptionMessages.ClientHasRehabSchedule);
    }
}
You should avoid creating references between engines and gateways, as this
increases code coupling. Wherever possible, do the calls to fetch and save data
from the manager and pass just the data to the engine to apply its strategy.
7.1.3 Data Access Layer
The data access components isolate the business layer from the details of data
retrieval in the same way that the connector layer isolates the business layer
from the details of exposing remotely accessible services. No data access code
should reside outside the data access layer.
Gateways are responsible for the data access and contract transformation
between the above layers and the data store. It is here that the impedance
mismatch between schema and domain is resolved.
This is not to be confused with the type of logic found in engines: engines exist
to encapsulate business logic volatility, so that logic is gleaned from the business;
the logic here is a result of our choice of implementation and is unlikely to be as
volatile as the engine logic.
Gateways try to encapsulate their own representation of the domain as they need
it to be for the interaction that they require with it. It is important to use an
explicit name when naming your context to avoid potential naming conflicts later
on.
This eliminates the coupling and dependency problem of sharing a domain across
every gateway in the application; each gateway simply has its own context of the
domain that is just rich enough for what it needs to do.
It is normal to have a gateway per data contract. For contracts that are
composed of multiple contracts we still have a gateway for each contract; we
simply interact with the composed contracts from the base or container contract.
In domain driven design terms, think of this as an aggregate root. We use the
term aggregate to describe the situation, as the entire group is considered one
unit with regard to data changes. Each aggregate has one root, and it is the
service with which the outside manager interacts; the manager holds only a
reference to the root.
An interesting observation here is how the concept of the domain has changed
from object orientation to service orientation: one is no longer passing objects
with state and behaviour; with service orientation you pass only the state as a
message and delegate the behaviour to the service.
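To make the contrast concrete, here is a minimal sketch; Python is used purely for brevity (the paper's own examples are C#), and the account/interest names are illustrative, not part of the framework.

```python
from dataclasses import dataclass

# Object orientation: state and behaviour travel together in one object.
class AccountOO:
    def __init__(self, balance: float):
        self.balance = balance

    def apply_interest(self, rate: float) -> None:
        self.balance *= (1 + rate)

# Service orientation: the message carries only state; the behaviour
# lives in a service that consumes the message and returns a new one.
@dataclass(frozen=True)
class AccountMessage:
    balance: float

class InterestService:
    @staticmethod
    def apply_interest(msg: AccountMessage, rate: float) -> AccountMessage:
        return AccountMessage(balance=msg.balance * (1 + rate))
```

In the second form the message is plain state that can be serialised and passed across a service boundary, which is exactly the property the data contracts in this model rely on.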
7.1.4 Utilities
Not depicted in the SOA Framework diagram are the utilities involved; this would
include class libraries for shared data entity / data contract definitions (one per
service), a logging library, a support library for providing diagnostics and end
user support for deployed applications, and a Utilities library for shared types that
don’t belong in other libraries.
7.1.5 Data Contracts
All communication between components is via well defined data contracts; these
follow the Data Transfer Object pattern [Fowler]. We strive to share data
contracts where possible. These contracts have the responsibility of validating
the data that is bound to them and setting a flag to indicate its acceptability or
lack thereof.
7.1.6 Deployable
The capability bunker is our unit of deployment; it translates to an msi file that
packages all the related services together and allows an IT professional to install
it on the host server. The factoring of services into a capability bunker is a
discipline in its own right; interested parties should read Roger Sessions' works
on the subject. I would suggest starting with Simple Architectures for Complex
Enterprises before progressing to the Software Fortress book.
7.1.7 Hosting
How then do we manage to have every class as a WCF service, but only host the
connector layer?
The technique is to utilise In-Proc services: we have a wrapper class that allows
us to simulate the OO-like programming model by instancing and hosting In-Proc
the service we need to use at that particular time. This allows us to compose a
subsystem of interrelated services that all work together to realise the business
functionality we are exposing via the service layer.
As the connector layer component just implements the interface of the manager
component, it can be said that we realise our system's use cases by composing
the functionality out of a set of new or existing services, which brings us back to
the essence and value proposition of SOA.
Only the functionality that is important to the business is hosted, and as a side
effect system validation also becomes easy to achieve as there should be a close
parity between the functionality exposed on the manager interface and the use
case.
// Fragment of a connector layer operation: the connector simply delegates
// the call, In-Proc, to the manager service (signature truncated in the source).
{
    return manager.Execute(m => m.ManageClient(operationMode, client));
}
Capability bunkers are the fortresses of the application world; they apply the
concepts of simplicity at the application level, and the rules of how to factor use
cases into bunkers will be explained later in this paper.
7.2 Technical Requirements
The above set of components in themselves offer no help in meeting the technical
requirements of integrity, flexibility, maintainability, performance, reusability,
scalability and security.
It is the interaction of the model and the technology used to construct it that will
allow us to achieve the required non-functional aspects of our system. It can be
said that architecture and the process used to build it are really just different
representations of the same thing, and when the two are aligned one is able to
build powerful systems very productively.
WCF can be said to be three minutes and three weeks: three weeks to think
about how to do it, three minutes to do it. The beauty of WCF is that you almost
don't use it. WCF moves the focus of development from implementation to
design, and gives developers a fighting chance to deliver professional software in
productive timeframes.
Using the above mentioned model and patterns I will demonstrate the
architecture and code implementation to achieve each technical requirement; the
level of adherence will depend on your system's requirements. The configuration
below, or a slight variation of it, will suit 90% of business applications.
7.2.1 Integrity
Integrity is the ability to detect and manage invalid data coming into the system,
as well as the imposition of complete transactions or rollbacks.
Transactions
If no isolation level is provided by the client we assume the strictest isolation
level, Serializable; we have found this to be suitable in most systems except read
intensive ones or places where batch-like functionality is required.
In a complex system, transactions are essential for ensuring the integrity and
consistency of the data modified by the call chains. Directly managing the
transactions is error prone and extremely labour intensive. The transaction
support in the .NET Framework and WCF make this considerably easier.
However, Serializable transactions also use the most locking at the database level
to ensure that integrity, so they will have throughput impacts. Those impacts are
generally best addressed by scaling out the application as needed for throughput,
rather than relaxing the isolation, unless a detailed analysis of each call chain is
done by data access experts.
Certain calling patterns, such as a read-only SELECT style call chain do not need
to participate in a transaction. But if there is any potential for database
modifications to be made as part of the overall business process of a call chain,
then starting with a serializable transaction encapsulating all the activity of the
call chain is the safest starting point.
Through analysis, the decision to back off on the span of the transaction scope or
the isolation level may be made if performance challenges dictate and the query
pattern is permissive. These have to be treated on a case by case basis.
Many raise the question of performance and locking issues due to this strict
isolation level. Research has found that in typical enterprise systems between 1%
and 3% of users are concurrent at any one instant in time: 1% is normal, 3% in a
highly transactional system. For example, a system with 1500 users could expect
15 to 45 users to be concurrently accessing the system. This number is further
reduced by assessing whether they are all contending over the same resource at
that instant in time.
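The arithmetic is trivial to check; a throwaway sketch (Python for brevity, the 1-3% band being the rule of thumb quoted above):

```python
def concurrent_users(total_users: int, low: float = 0.01, high: float = 0.03):
    """Band of concurrently active users under the 1-3% rule of thumb."""
    return round(total_users * low), round(total_users * high)

# A 1500-user system: expect roughly 15 to 45 concurrent users.
print(concurrent_users(1500))  # (15, 45)
```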
I recommend starting with Serializable, with the exception of select type call
chains where a transaction is not required; but I would still recommend flowing
the transaction at the interface level, as in a SOA one can never be certain of the
order and actual call chain that your service may participate in, so to be safe we
flow the transaction through to the next call to use if it needs to.
The diagram below depicts an iDesign type model of the default configuration;
the red line in this case represents the transaction boundary.
Figure 5 - Transactions
With this setting we enlist the entire logical unit of work into a single transaction
that will enlist any transactional resource it touches, and if any exception occurs
it will trigger a rollback across the entire call chain, even if the pieces are
physically deployed on separate machines. This distributed transaction is
achieved by using the Distributed Transaction Coordinator packaged into the
Windows operating system. If publish/subscribe mechanisms are used, the
service calls will be at least one transaction removed from the original, causing a
rollback to the sender's queue.
1. Enable transaction flow on the binding
<netTcpBinding>
  <binding
      name="ReliableTcp" maxBufferSize="100000"
      maxReceivedMessageSize="100000" transactionFlow="true">
    <readerQuotas maxArrayLength="100000"
        maxStringContentLength="100000"/>
    <reliableSession enabled="true"/>
  </binding>
</netTcpBinding>
2. Allow the transaction to flow on the operation contract
[OperationContract]
[TransactionFlow(TransactionFlowOption.Allowed)]
void CaptureClientDetails(Client client);
3. Set the transaction scope required on the service implementation to true
[OperationBehavior(TransactionScopeRequired = true)]
void IClientManager.RequestEmployer(Employer employer)
{
…
}
If you want transactions to flow between machines you will need to turn on
DTC: Start >> Run >> mmc >> Component Services >> Computers >> My
Computer >> right click >> Properties >> MSDTC >> Security Configuration >>
tick Network DTC Access >> Allow Inbound >> Allow Outbound >> Mutual
Authentication Required >> OK >> OK.
Open port 135 (port 135 is used by the RPC Endpoint Mapper).
If you are not on the same domain as your server, change the authentication
setting to none for development.
7.2.1.1 Data Validation
To validate whether valid data is being bound to our contract we use our second
extension: the DataMemberValidator will inspect the data bound to the data
contract and compare it to the rules that were set up for valid data for that
contract at design time, by calling the Validate method. If any invalid data is
bound to the contract at runtime, the validator will flag it as invalid data.
By using these two attributes together we thus create a mechanism we can use in
UI controls to validate input against a particular contract at runtime and provide
visual prompts, and to reject any contracts that have invalid data bound to them
at the service boundary, rather than executing all the layers only to have an
exception thrown from the gateway because a string was too long.
[DataContract(Namespace = "http://www.dariel.co.za/WCF/03/2009/01")]
public class Client : IExtensibleDataObject, IDataErrorInfo
{
    [DataMember]
    [DataMemberValidator("Application Date", ValidatorConstraint.IsRequiredProperty)]
    public DateTime ApplicationDate { get; set; }

    [DataMember]
    [DataMemberValidator("Account Number", ValidatorConstraint.IsRequiredProperty,
        MaximumLength = 50)]
    public string AccountNumber { get; set; }

    // Remaining members are elided in the original; only the tail of the
    // IDataErrorInfo.Error getter survives:
    public string Error
    {
        get
        {
            return m_Error;
        }
    }
}
[OperationDataContractValidation]
void IClientManager.ManageClient(OperationMode operationMode, Client client)
{
    CheckArgument(client, "Client");
    // Remainder of the operation elided in the original.
}
7.2.2 Flexibility
As a general heuristic we say that engines and gateways should be reused. This
means that the strategy and the data access should be written once and reused
as much as possible. It is then possible to reconfigure the solution per use case
(manager) by picking the engines and gateways required.
This becomes productive when you need to provide new functionality, which is
conceptually just more use cases that use the existing data in new ways: you
only need to write a new manager and reuse the gateways and engines.
If you need to reuse a manager, it means you have two identical use cases;
eliminate one and delegate functionality to the existing service. Never copy and
re-host a manager.
This is the reason we introduced the concept of the aggregate root and a gateway
per contract, so that even if you require a different view of the data you will be
able to aggregate existing gateways to produce the contract that you require.
7.2.3 Maintainability
So the question is how to partition our application so that we can stop complexity
from spreading. Luckily the answer lies in the realm of set theory and
equivalence relations.
With this basic model of complexity, we can gain some insight into how
complexity can be better organised.
Consider another two systems, C and D, each with three six sided dice; in C all
the dice are together as before, but in D the dice are divided into three
partitions. Let's assume we can deal with the three partitions independently, in
effect three subsystems each like B. We know that the complexity of C is 216
(6^3). The overall complexity of system D is the sum of the complexity of each
partition (6 + 6 + 6), or 18. If you are not convinced of this, imagine inspecting
C and D for correctness: in C you would need to examine 216 different states,
checking each for correctness; in system D you would need to examine only 6
states in each partition.
S = Sides, D = Dice

S    D    S x D (partitioned)    S^D (non-partitioned)
6    1    6                      6
6    2    12                     36
6    3    18                     216
6    4    24                     1296
6    5    30                     7776
6    6    36                     46656
6    7    42                     279936
6    8    48                     1679616
6    9    54                     10077696

The above table demonstrates how much more complex a non-partitioned system
of 9 dice is compared to a partitioned system containing the same number of
dice: a ratio of 10,077,696 to 54, a lot!
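The table can be regenerated with a couple of lines; a quick sketch (Python here, the arithmetic being language-neutral):

```python
def states_to_inspect(sides: int, dice: int, partitioned: bool) -> int:
    """Number of states to examine: sides * dice when each die sits in
    its own partition, sides ** dice when all dice are taken together."""
    return sides * dice if partitioned else sides ** dice

# Reproduce the table for 1 to 9 six-sided dice.
for d in range(1, 10):
    print(d, states_to_inspect(6, d, True), states_to_inspect(6, d, False))
```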
How then do we partition a software system so we too can gain the advantage
from this property of the universe?
A partition is a concept from set theory: a partition is a set of subsets that divide
a larger set so that every item in the larger set lives in one, and only one, of the
subsets. Like our dice example, each die lives in one and only one bucket. One
can also observe that for any set of items there are at least N possible ways to
partition that set.
In software systems it isn't enough just to partition variables into different
subsets; we need to find a partition that honours the dependencies between the
variables.
Consider a small shop. All the items in the shop can be considered as being in
the same set; if we pick the equivalence relation costs the same as, let's see how
this becomes useful.
The shop stocks Cereal for R10.00, Pens for R10.00, Coke R10.00, and Notebooks
R20.00.
Symmetry - costs the same as (Cereal, Pens) = true and costs the same as
(Pens, Cereal) = true.
Transitivity - costs the same as (Cereal, Pens) = true and costs the same as
(Pens, Coke) = true, therefore costs the same as (Cereal, Coke) = true.
And to check our logic, costs the same as (Cereal, Notebooks) = false.
Thus because for a particular equivalence relation every item in the set lives in
only one partition we can use equivalence relations to formulate partitions.
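Because every item falls into exactly one equivalence class, grouping by the compared value mechanically produces a partition; a minimal sketch using the shop's prices (Python, purely illustrative):

```python
from collections import defaultdict

prices = {"Cereal": 10.00, "Pens": 10.00, "Coke": 10.00, "Notebooks": 20.00}

# "costs the same as" compares on price, so bucketing items by price
# yields exactly the partition induced by the equivalence relation.
partition = defaultdict(list)
for item, price in prices.items():
    partition[price].append(item)

for price, items in sorted(partition.items()):
    print(price, items)
# 10.0 ['Cereal', 'Pens', 'Coke']
# 20.0 ['Notebooks']
```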
In practice we take our system and pull out all the required functionality; for
example, in a retail operation the use cases could be:
Calculate total cost,
Calculate change,
Charge credit card,
Stock report
If we apply the synergy test we can see that Calculate total cost is synergistic
with Calculate change; it is hard to imagine one without the other in our retail
set. At first glance Charge credit card looks synergistic with Calculate total cost,
but remember symmetry is a two way street: it is possible to imagine a scenario
where you would Calculate total cost without charging a credit card (cash
payment, for example); therefore Charge credit card is actually autonomous of
Calculate total cost.
If a and b are in the same equivalence class of E then ~E(a, b) is always false,
and if a and b are in different equivalence classes of E then ~E(a, b) is always
true.
For example, costs the same as (Cereal, Pens) = true; think of this as E(a, b).
Then does not cost the same as (Cereal, Pens) = false is ~E(a, b), because
these two items are in the same equivalence class of items that cost ten rand.
Similarly, ~E(Pens, Notebooks) = true.
This leads to a useful trick. It is quite difficult to construct, from scratch,
a relation where does not cost the same is false for all elements within a
subset and true for all elements across subsets; but once you realize that does
not cost the same is simply the negation of costs the same, the exercise becomes
trivial, and does not cost the same automatically has the property of
non-equivalence.
4. The size of the subsets and their importance in the overall partition
must be roughly equivalent.
5. The interaction between the subsets must be minimal and well defined.
[Sessions]
By reducing complexity and ensuring that our applications are well partitioned
and autonomous we effectively sequester complexity and make our systems
maintainable.
7.2.4 Performance
The issue then is to ensure that neither the model nor the technology is the cause
of any unacceptable loss of performance.
The technology in this instance is WCF. The model advocates that every class is
a service in order to leverage the built-in plumbing that comes with WCF, but
one often hears the question: what about performance?
This is the wrong question to ask; in reality nobody except developers cares
about performance as long as it is adequate. So the actual question is: is the
performance adequate?
So let's weigh it all up. With all classes as WCF services we automatically
benefit from:
1. Encrypted calls
2. Authentication
3. Identity propagation
4. Authorization
5. Security audit
6. Transactional propagation
7. Transactional voting
8. Call timeouts
9. Reliability
10. Durability
11. Synchronization
12. Remotability
13. Queuing
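Most of these aspects are switched on declaratively rather than coded by hand. The sketch below shows reliability and transaction propagation being enabled on an ordinary WCF service; the contract name, class name, address and binding values are illustrative assumptions, not part of the model itself.

```csharp
using System.ServiceModel;

// Hypothetical contract, for illustration only.
[ServiceContract]
public interface ICalculatorEngine
{
    // Allow a caller's transaction to flow into this operation.
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    decimal CalculateTotalCost(int orderId);
}

public class CalculatorEngine : ICalculatorEngine
{
    // Join the ambient transaction and vote to commit on success
    // (transactional propagation and voting from the list above).
    [OperationBehavior(TransactionScopeRequired = true,
                       TransactionAutoComplete = true)]
    public decimal CalculateTotalCost(int orderId)
    {
        return 0m; // real work elided
    }
}

class HostSketch
{
    static void Main()
    {
        // Reliability and transaction flow are plain binding settings.
        var binding = new NetTcpBinding(SecurityMode.Transport)
        {
            TransactionFlow = true
        };
        binding.ReliableSession.Enabled = true;

        var host = new ServiceHost(typeof(CalculatorEngine));
        host.AddServiceEndpoint(typeof(ICalculatorEngine), binding,
            "net.tcp://localhost:8000/CalculatorEngine");
        host.Open();
    }
}
```

The point is that none of these capabilities required bespoke plumbing code; they were obtained with a handful of attributes and binding properties.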
So what about performance? WCF is not free, but it is adequate for most
applications; most will never notice the overhead. Raw WCF, with a single client
calling a single service that does no work, running on a laptop, provides some
10,000 calls per second; raw C# offers some 300,000 calls per second.
To put these numbers into perspective, Amazon runs at about 150 calls per
second.
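Figures like these are easy to reproduce for your own services. Below is a minimal measurement sketch; the IClientManager contract, its Ping operation, and the endpoint address are assumptions made purely for the illustration.

```csharp
using System;
using System.Diagnostics;
using System.ServiceModel;

// Hypothetical contract used only for this measurement sketch.
[ServiceContract]
public interface IClientManager
{
    [OperationContract]
    void Ping();
}

class Throughput
{
    static void Main()
    {
        // Point the proxy at wherever the service under test is hosted.
        var factory = new ChannelFactory<IClientManager>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8000/ClientManager"));
        IClientManager proxy = factory.CreateChannel();

        const int calls = 10000;
        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < calls; i++)
        {
            proxy.Ping(); // round trip through the WCF stack
        }
        watch.Stop();

        Console.WriteLine("{0:F0} calls per second",
            calls / watch.Elapsed.TotalSeconds);

        ((ICommunicationObject)proxy).Close();
        factory.Close();
    }
}
```

As always with micro-benchmarks, warm the channel up first and measure against the binding configuration you will actually deploy.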
But raw WCF offers no value. In a real system with every aspect of WCF enabled,
WCF can manage 100 calls per second on a laptop; in contrast, a C# application
with all the equivalent aspects created as bespoke code manages a maximum of 10
calls per second. Strange, you may think; what has happened?
The WCF aspects have undergone years of rigorous testing by some of the most
devious minds in the industry and have been optimized and performance-tweaked.
So if you care about performance you have to use WCF: all its aspects have been
performance tested against hundreds of different scenarios, and they perform
better than components you have written yourself and tested against only the
one or two scenarios that you knew about.
There is a cost to using every WCF aspect; each aspect causes a certain
degradation in performance, summarized in the table below.

Aspect                 Relative cost
Encoding               0
Process Boundary       0
Authorization          0
Authentication         0
Instancing mode        0
Reliability            10
Performance counters   25
Transactions           66
Message Protection     85

[Lowy]
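When measurement shows that one of these aspects genuinely is the bottleneck, it can be switched off selectively rather than abandoning WCF. The fragment below, with an assumed NetTcpBinding, disables the two most expensive aspects from the table:

```csharp
using System.ServiceModel;

class TunedBinding
{
    // Builds a NetTcpBinding with the two costliest aspects in the
    // table (message protection, reliability) turned off. Only do
    // this when measurement shows they are the bottleneck.
    static NetTcpBinding Create()
    {
        var binding = new NetTcpBinding();
        binding.Security.Mode = SecurityMode.None; // drops message protection
        binding.ReliableSession.Enabled = false;   // drops reliable messaging
        return binding;
    }
}
```

Relaxing a setting like this should be a deliberate, recorded decision, for the same reasons given in the security section below.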
7.2.5 Reusability
This is the value proposition of SOA; with SOA we have changed the element of
reuse from the class to the service, which does not sound significant, but with the
move from a reference based call stack to passing messages reuse has changed
from being the reuse of code to the delegation of functionality.
The services that are being reused may never be included in you code base, but
play just as import role in your system as those you have written just for that
project.
The model presented here follows this approach by never making any assumptions
about how services will be called: every service has its own security,
instancing and transactional requirements encoded into its design, allowing it
to be called in any order by any other service without destroying the intent of
its designer.
A useful tool in managing reusability is the service repository, and those
starting out writing service orientated applications should take a serious look
at applications such as the Managed Services Engine from Microsoft as a tool
for creating and managing services.
7.2.6 Scalability
Scalability is the ability of a system to function well when there are changes to
the load or demand. Typically, the system will be able to be extended over more
powerful or more numerous servers as demand and load increase.
Experience has shown us that when building WCF services, all roads lead to
Per-Call instancing, so by default we avoid state at all costs; state is the
sworn enemy of scalability. All services should be per-call and synchronized,
and any movement away from this is an executive decision. This applies to all
levels of the architecture: service connector, manager, engine and gateway. As
synchronized is the WCF default, no attribute is required. This design gives us
the safest and most scalable solution as default and allows for ease of
development, as there is no decision to make about how to set up any piece of
the architecture.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ClientManager : IClientManager

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ClientGateway : IRepository<Client>,
    ITranslate<Client, Model.Client>,
    IClientGateway

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class BudgetEngine : IBudgetEngine
Example 6 – Instancing
7.2.7 Security
Security defines the ways that a system is protected from disclosure or loss of
information, and the possibility of a successful malicious attack. A secure system
aims to protect assets and prevent unauthorized modification of information.
Security in layers is a very prudent approach when thinking about the type of
security to incorporate into your application. I would say at minimum, every call
to every service should be authenticated and every message to every service
should be encrypted and signed to prevent tampering.
All that remains is to decide how to authorise calls, if needed; the best
advice is to turn security all the way up by default and only relax a setting
if there is a reason to do so.
This way, as the designer of the system, you have covered any liability you may
have exposed yourself to by neglecting to include a security setting in your
design.
Authentication is the process for identifying the caller and verifying that they are
who they say they are.
Users will have to authenticate with the UI to use it; a number of options
exist, including Windows integrated security, username/password authentication
against a custom store, or using CardSpace and federated identities.
Authentication at the service and client boundaries will likely be done with a
mixture of ASP.NET providers and Windows integrated security.
/// <summary>
/// Concrete implementation of IClientManager.
/// </summary>
[ErrorHandlerBehavior]
[SecurityBehavior(ServiceSecurity.Internet,
"TestServiceCert",UseAspNetProviders = true, ApplicationName =
"DarielTest")]
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ClientService : IClientManager, IBudgetManager
{
/// <summary>
/// Manages the client.
/// </summary>
/// <param name="operationMode">The operation mode.</param>
/// <param name="client">The client.</param>
[OperationBehavior(TransactionScopeRequired = true)]
[PrincipalPermission(SecurityAction.Demand, Role = "DCO")]
[PrincipalPermission(SecurityAction.Demand, Role ="DC_Principal")]
[PrincipalPermission(SecurityAction.Demand, Role = "DC_Admin")]
[PrincipalPermission(SecurityAction.Demand, Role = "DC_User")]
[OperationDataContractValidation]
Client IClientManager.ManageClient(OperationMode operationMode,
Client client)
…
Example 7 – Security
The calls between the service layer and the client have been well documented,
but what about calls between the other components behind the service connector
layer; how are they protected?
What is important at this level is that the messages are encrypted and signed.
This is the default of the NetNamedPipeBinding we are using to construct our
In-Proc services: WCF both signs the message and encrypts its content,
providing integrity, privacy and authenticity. This mitigates the risk posed by
a compromised process on the same machine eavesdropping on or tampering with
the In-Proc traffic.
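As a sketch of what this In-Proc wiring looks like, a named-pipe endpoint can be hosted and called inside the same process; the service, contract and address below are hypothetical. NetNamedPipeBinding signs and encrypts by default, so no extra configuration is needed for message protection:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract for the In-Proc sketch.
[ServiceContract]
public interface IBudgetEngine
{
    [OperationContract]
    decimal GetBudget(int clientId);
}

public class BudgetEngine : IBudgetEngine
{
    public decimal GetBudget(int clientId)
    {
        return 100m; // real logic elided
    }
}

class InProcHost
{
    static void Main()
    {
        const string address = "net.pipe://localhost/BudgetEngine";

        // Transport security (signing and encryption) is the
        // NetNamedPipeBinding default; nothing extra to switch on.
        var binding = new NetNamedPipeBinding();

        var host = new ServiceHost(typeof(BudgetEngine));
        host.AddServiceEndpoint(typeof(IBudgetEngine), binding, address);
        host.Open();

        // Call the service from the same process over the pipe.
        var factory = new ChannelFactory<IBudgetEngine>(
            binding, new EndpointAddress(address));
        IBudgetEngine proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetBudget(1));

        factory.Close();
        host.Close();
    }
}
```

Because the pipe is restricted to the local machine and the channel is protected by default, the In-Proc tier gets defence in depth essentially for free.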
8. Try it out
A sample project containing all the source code mentioned in this article will
follow soon.