Data Persistence Infrastructure Layer

Figure 1.- Data Persistence Layer location within the N-Layered Architecture
Repository Pattern
Repository is one of the well-documented ways of working with a data source.
Martin Fowler, in his PoEAA book, describes a repository as follows:
A repository performs the tasks of an intermediary between the domain model
layers and data mapping, acting in a similar way to a set of domain objects in memory.
Client objects declaratively build queries and send them to the repositories for
satisfaction.
Rule #: D?.
o Rule
- To encapsulate the data persistence logic, you should design and implement
Repository classes. Repositories are easily implemented over O/RMs.
For example, in the diagram above, the Order entity is the aggregate-root entity
for the aggregate formed by the Order and OrderLines entities.
Rule #: D?.
o Recommendations
- It is common and very useful to have base classes in every layer to group
and reuse common behaviors that would otherwise be duplicated in different
parts of the system. This simple pattern is called Layer Supertype.
It is particularly useful when we have similar data access code for different
domain entities.
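As a minimal sketch of this pattern (the sample application's own Entity base class, used later in this chapter, follows the same idea; the body below is only an illustration, not the sample's code):

C#
//Illustrative Layer Supertype: a base class every domain entity inherits,
//grouping behavior that would otherwise be duplicated in each entity
public abstract class Entity
{
    public Guid Id { get; protected set; }

    //Identity-based equality, written once and reused by all entities
    public override bool Equals(object obj)
    {
        var other = obj as Entity;
        return other != null && other.Id == this.Id;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}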
References
Layer Supertype pattern, by Martin Fowler:
http://martinfowler.com/eaaCatalog/layerSupertype.html
Unit of Work pattern, by Martin Fowler:
http://martinfowler.com/eaaCatalog/unitOfWork.html
The following diagram shows the operation of traditional data access classes (DAL)
without using any UoW:
The following diagram shows the operation of a REPOSITORY class along with a
UoW, which is what we recommend in this Architecture guide:
The Active Record pattern is implemented in many dynamic languages such as Ruby,
and nowadays it is widely used by the developer community. Currently in .NET there
are several implementations, such as Castle ActiveRecord, .netTiers Application
Framework or LLBLGen Pro.
However, one of the most important drawbacks of this pattern comes from its
own definition: it does not conceptually separate data transportation from the
persistence mechanisms. If we think about service-oriented architectures, where the
separation between data contracts and operations is one of the main pillars, we will see
that a solution like Active Record is not suitable, and is, quite often, extremely hard to
implement and maintain. Another example where Active Record would not be a good
choice is a solution without a 1:1 relationship between the database tables and the
Active Record objects in the domain model, since the logic these objects would need
to carry becomes rather complex.
Patterns
Active Record
Data Mapper
Query Object
Repository
Table Module
Additional references
For information on Domain Model, Table Module, Coarse-Grained Lock, Implicit
Lock, Transaction Script, Active Record, Data Mapper, Optimistic Offline Locking,
Pessimistic Offline Locking, Query Object, Repository, Row Data Gateway, and Table
Data Gateway patterns, see:
Patterns of Enterprise Application Architecture (P of EAA) in
http://martinfowler.com/eaaCatalog/
3.- TESTING IN THE DATA PERSISTENCE INFRASTRUCTURE LAYER
Like most elements of a solution, our Data Persistence layer is another area that
should be covered by unit testing, and it should, of course, meet the same requirements
demanded of the rest of the layers or parts of the project. The involvement of an
external dependency such as a database brings special considerations, which should be
treated carefully so as not to fall into certain common anti-patterns when designing
unit tests. In particular, the following defects in the created tests should be avoided.
Anti-patterns to avoid:
- Erratic Tests. One or more tests behave erratically; sometimes they pass
and sometimes they fail. The main impact of this behavior comes from
the treatment such tests are given: they are usually ignored, and so they could
hide an internal code failure that is never dealt with.
- Slow Tests. The tests take too long to run. This symptom usually prevents
developers from running the tests when one or more changes are made.
This, in turn, reduces code quality, because the code is exempted from
continuous testing, while the productivity of the people in charge of
maintaining and running those tests is also reduced.
- Obscure Tests. The real behavior of the test is obscured, very frequently due to
certain test initialization and cleanup processes or initial data reset
processes, and it cannot be understood at a glance.
- Unrepeatable Tests. A test behaves differently the first time it is run than
on subsequent runs.
Some usual solutions for performing tests where a database is involved are listed
below, although, of course, they are not mutually exclusive:
- Restoring the data set upon completion of each test: this consists of
returning the data to its initial state, so that the test can be repeated immediately.
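For example, a minimal sketch of this approach with MSTest and System.Transactions: each test runs inside an ambient transaction that is never committed, so disposing the scope rolls every change back and restores the initial data set.

C#
//Sketch: disposing a TransactionScope without calling Complete() rolls back
//all data changes made by the test, keeping it repeatable and non-erratic
[TestClass]
public class CustomerPersistenceTests
{
    TransactionScope _scope;

    [TestInitialize]
    public void Init()
    {
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void Cleanup()
    {
        //Dispose() without Complete() -> rollback: the data set is restored
        _scope.Dispose();
    }

    [TestMethod]
    public void AddCustomer_DoesNotPolluteTheDatabase()
    {
        //Execute data access code here (e.g. a Repository Add + SaveChanges);
        //all changes are discarded when the scope is disposed
    }
}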
o Recommendations
- When tests are executed against a real database, we should ensure we are not
falling into the Unrepeatable Test or the Erratic Test anti-patterns.
References
MSDN Unit Testing
http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
Unit Testing Patterns
http://xunitpatterns.com/
- Invalid operations that are really code defects the developer will
need to fix, and hence should not be handled at all.
- Consider security risks. This layer should protect against attacks that attempt
to steal or corrupt data, and it should protect the mechanisms used to access
the data sources. For example, take care not to return confidential information
in errors/exceptions related to data access, and access data sources
with the lowest possible credentials (do not use database administrator users).
Additionally, perform data access through parameterized queries
(O/RMs do this by default) and never form SQL statements through
string concatenation, to avoid SQL Injection attacks.
- Consider scalability and performance goals. These goals should be kept in
mind during application design. For example, if an e-commerce application
must be designed for Internet users, data access may become a
bottleneck. For all cases where performance and scalability are critical, consider
strategies based on caching, as long as the business logic allows it. Also,
perform query analysis through profiling tools to determine possible
improvement points. Other considerations about performance are the
following:
o If the database already exists, the O/RM tools usually also allow
generating the entity model from this existing database and then
mapping the domain objects/entities to the database.
o For the sake of security, typed parameters should be used when calling
stored procedures, to avoid SQL Injection. We also need to guard
against having code inside the stored procedure that takes one of the
input string parameters and uses it as part of a dynamic SQL
statement that it later executes. (A sketch illustrating both points
follows this list.)
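As an illustration of both recommendations, the following sketch calls a hypothetical GetCustomerByName stored procedure with a typed parameter; the procedure, column, and variable names are assumptions for the example.

C#
//Sketch: parameterized, typed data access with plain ADO.NET
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("GetCustomerByName", connection))
{
    command.CommandType = CommandType.StoredProcedure;

    //Typed parameter: the value is never concatenated into the SQL text,
    //so it cannot alter the statement (no SQL Injection)
    command.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = customerName;

    connection.Open();

    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            //Map rows to entities here
        }
    }
}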
"Designing Data Tier Components and Passing Data Through Tiers" http://msdn.microsoft.com/en-us/library/ms978496.aspx.
Steps to be followed:
1.- The first step is to identify the type of data we want to access. This
will help us choose between the different technologies available for
implementing the Repositories. Are we dealing with relational
databases? Which DBMS specifically? Or are we dealing with another type of
data source?
2.- The next step is to choose the strategy required to convert domain objects to
persisted data (usually databases) and to determine the data access approach.
Business entities are really Domain entities and have to be defined within the
Domain layer, not in the Data Persistence layer; however, here we are
analyzing the relationship of such Domain entities with the Data Persistence
layer. Many decisions regarding domain entities and their mapping to
persistence should be taken at this point (Persistence layer implementation).
3.- Finally, we should determine the error handling strategy to use in the
management of exceptions and errors related to data sources.
Microsoft P&P Enterprise Library Data Access Application Block: this data
access library is based on ADO.NET. However, if possible, we recommend
using an O/RM instead.
Third-party technologies: there are many other good technologies (O/RMs like
NHibernate, etc.) which are not provided and supported by Microsoft, but they
also help with a DDD approach.
- If low-level support is required for queries and parameters, use plain
ADO.NET objects; even then, you could still implement the Repository pattern.
- If you simply use ADO.NET and your database is SQL Server, use the SQL
Client provider to maximize performance.
- If you use SQL Server 2008 or a higher version, consider using FILESTREAM
to obtain higher flexibility in storage of, and access to, BLOB-type data.
- If you are designing a data persistence layer following the DDD (Domain Driven
Design) architectural style, the most recommended option is an O/RM
framework such as Entity Framework or NHibernate.
Rule #: I?.
o Rule
- Another viable option could be to use any other third-party O/RM, such as
NHibernate or similar.
References
http://msdn.microsoft.com/en-us/data/aa937723.aspx
- Consider using an O/RM that performs the mapping between domain entities and
database objects.
- Make sure that entities are correctly grouped to achieve a high degree of
cohesion. This means you should group entities in Aggregates according to
DDD patterns. This grouping must be part of your own logic; the current EF
version does not provide the concepts of Aggregate and aggregate root, but we
can implement them in our own code.
Important:
Before we can implement REPOSITORIES, we need to define the types/entities to
be used. In the case of N-Layered Domain-Oriented architectures, as mentioned, the
business entities should be located within the Domain Layer. However, when
using the Model First or Database First approaches, the creation of such entities
takes place during EF entity model creation, which is defined in the data persistence
infrastructure layer. But, before choosing how to create the Data Persistence layer,
we should choose what type of EF domain entities will be used (Prescriptive, T4
POCO or STE templates, or the POCO Code-First approach). This analysis is
explained in the Domain Layer chapter, so we recommend reading that chapter first
to learn about the pros and cons of each possible EF entity type before
moving forward in the current chapter.
In our case, we selected the POCO Code-First approach available since EF 4.1,
because it fits better with DDD architecture and design patterns, as explained in the
Domain Layer chapter.
As previously stated, having any domain entity defined in our code (like the
Customer or Order classes we defined in the previous chapter), we could create a
simple EF DbContext, run the following code, and it would simply work. There is no
need to create the database tables or elaborate any kind of pre-established mapping.
// EF DbContext
// This code is still an isolated example; it is not part of the SampleApp
public class MyModelContext : DbContext
{
    public IDbSet<Customer> Customers { get; set; }
    public IDbSet<BankAccount> BankAccount { get; set; }
}
That is all the code we need to write in order to start storing and retrieving
data. Obviously there is quite a bit going on behind the scenes, and we will take a look
at that in the following sections. Also, in our DDD Architecture we won't use a
simple DbContext; instead we will extract a Unit of Work interface and use Dependency
Injection, etc.
What this means is that Code First will assume that your classes follow the
default conventions of the schema that EF uses for a conceptual model. In that case,
EF will be able to work out the details it needs to do its job, and the following code will
simply work. (Take into account that we are executing this code from a
console application's Main method, just as an example. Usually this kind of code would
be within the Application and Domain layers.)
// Using Code First - Simple isolated example. It is not part of the SampleApp
class Program
{
    static void Main(string[] args)
    {
        using (var db = new MyModelContext())
        {
            // Add a Customer
            var customer = new Customer { CustomerId = "ALFKI", Name = "Joe Smith" };
            db.Customers.Add(customer);

            int recordsAffected = db.SaveChanges();

            Console.WriteLine(
                "Saved {0} entities to the database, press any key to exit.",
                recordsAffected);
            Console.ReadKey();
        }
    }
}
This default behavior is defined by a new class named Database, which is available
within the namespace System.Data.Entity.

public class Database
{
    public static IDbConnectionFactory DefaultConnectionFactory { get; set; }

    // Omitted
}

We can also pass an explicit connection string to the context constructor, in a way
similar to the following (the connection string values shown are illustrative):

public class MyModelContext : DbContext
{
    // Illustrative connection string; replace the values with your own
    public MyModelContext()
        : base(@"Server=.\SQLEXPRESS;Database=NLayerApp;User Id=user;Password=password")
    {
    }

    public IDbSet<Customer> Customers { get; set; }
    public IDbSet<Order> Orders { get; set; }
}
Finally, we could also use our XML configuration settings, through the connection
strings section. The only requirement is to use the fully qualified name of our context
class as the connection string name.

<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="[Namespace].MyModelContext"
         connectionString="Server=.\SQLEXPRESS;Initial Catalog=NLayerApp;Integrated Security=true"
         providerName="System.Data.SqlClient"/>
  </connectionStrings>
</configuration>
As we can see, the OnModelCreating() method has a DbModelBuilder parameter, which
represents the main mapping artifact. This class has a Conventions property, which we
can use to customize any convention, or even remove one, through the
ConventionsConfiguration class.
public class DbModelBuilder
{
public virtual ConventionsConfiguration Conventions { get; }
}
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
}
For instance, we can use the Fluent API to specify the primary key of any of our
domain entities, along with other example modifications such as the key generation
option or custom column names.

public class MainBCUnitOfWork
    : DbContext, IMainBCUnitOfWork
{
    public IDbSet<Customer> Customers { get; set; }

    //Omitted code (other entity sets, etc.)

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        //Primary key specification
        modelBuilder.Entity<Customer>()
            .HasKey(c => c.CustomerId);

        //Example modification: the key value is not database-generated
        modelBuilder.Entity<Customer>()
            .Property(c => c.CustomerId)
            .HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);

        //Specifying custom column names
        modelBuilder.Entity<Customer>()
            .Property(c => c.LastName)
            .HasColumnName("Surname");
    }
}
We can also set specific data types and requirements using the methods
HasColumnType(), IsRequired() and IsOptional().
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Entity<Customer>()
.HasKey(c => c.CustomerId);
modelBuilder.Entity<Customer>()
.Property(c => c.Age)
.HasColumnType("bigint")
.IsRequired();
modelBuilder.Entity<Customer>()
.Property(c => c.LastName)
.IsOptional();
}
We can also mark properties as concurrency tokens, for concurrency management:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    //Concurrency management
    modelBuilder.Entity<Customer>()
        .Property(c => c.LastName)
        .IsConcurrencyToken(true);
}
Alternatively, we can group the mapping code for each entity into its own
configuration class and add each of those classes to the model builder. Doing so, we
could end up with a long list of configuration classes, but the code would be much
better structured and readable.
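For instance, a minimal sketch (reusing the Customer mapping shown above) of one such configuration class and its registration:

C#
//Mapping code for Customer grouped into its own configuration class
public class CustomerConfiguration : EntityTypeConfiguration<Customer>
{
    public CustomerConfiguration()
    {
        HasKey(c => c.CustomerId);

        Property(c => c.LastName)
            .HasColumnName("Surname");
    }
}

//Each configuration class is then added to the model builder
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Configurations.Add(new CustomerConfiguration());
}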
For a Complex-Type and Value-Object customization example using a
ComplexTypeConfiguration, see the Domain Model Layer chapter, where we
previously showed code for it.
We can also define validations such as required properties and maximum lengths:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>()
        .Property(p => p.FirstName) //assumed property; the original fragment was truncated
        .IsRequired()
        .HasMaxLength(10);

    modelBuilder.Entity<Customer>()
        .Property(p => p.LastName)
        .IsRequired()
        .HasMaxLength(20);
}
If we try to change any of those properties without complying with those
validations, we will get a DbEntityValidationException, which has an
EntityValidationErrors collection, thrown by our Unit of Work (EF context)
SaveChanges() method.
If we want to catch and manage those errors, we should handle that exception
and flatten the possibly multiple errors we could get, as in the following simple
example.
try
{
    //Omitted
    //...
    unitOfWork.Commit();
}
catch (DbEntityValidationException ex)
{
    DumpErrors(ex);
}
Once we have caught the exception, we need to flatten all the errors; then we could
publish them, store them, or simply display them, as in the following example.
private static void DumpErrors(DbEntityValidationException ex)
{
foreach (var item in ex.EntityValidationErrors)
{
Console.WriteLine("Validation error for entity with this state:{0}",
item.Entry.State);
Console.WriteLine("Validation errors are the following:");
foreach (var ve in item.ValidationErrors)
{
Console.WriteLine("\tProperty:{0} Error:{1}",
ve.PropertyName,
ve.ErrorMessage);
}
}
}
Additionally, if we don't want to wait until the EF context (UoW) tries to save the
data, we can run these validations on demand, in two possible ways.
1. Invoking the GetValidationErrors() method, part of the EF context.
var validationErrors = unitOfWork.GetValidationErrors();
2. Validating entity by entity, accessing the DbEntityEntry for each entity and invoking
the GetValidationResult() method.
var vresult = unitOfWork.Entry(customer)
.GetValidationResult();
5.10.1.- One-to-One Relationships
If we have two entities, like Customer and RewardsInfo, and we want to relate
them to each other, we can establish the relationship using either Data Annotations or
the Fluent API.
As previously said, the Fluent API is less intrusive with regard to our Domain entity
code; precisely because of that, we are going to compare both approaches.
When using Data Annotations, we can establish entity associations using the
ForeignKeyAttribute, in a way similar to the following code. Afterwards, we will see
why we don't like this way of establishing relationships.
public class Customer : Entity
{
    //...
    //Omitted
    //...
    public Guid CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public decimal CreditLimit { get; set; }
    public Guid CountryId { get; set; }
    public virtual Country Country { get; private set; }
    public virtual Address Address { get; set; }
    public virtual Picture Picture { get; set; }

    //Methods with Entity logic
    //...
    //Omitted
    //...
}

public class RewardsInfo
{
    //...
    //Omitted
    //...

    //Relationship established using ForeignKeyAttribute, which is defined
    //within the EntityFramework.dll assembly!!! This is against the PI principle!!
    [ForeignKey("Customer")]
    public Guid RewardsInfoId { get; set; }

    public bool IsPremiumCustomer { get; set; }
    //...
}
But, as mentioned, this approach (Data Annotations) is quite intrusive, especially
since this specific attribute (ForeignKeyAttribute) is defined within
EntityFramework.dll, so we would not be complying with the Persistence Ignorance
principle: we would need to add a direct reference to the EF assembly from the
Domain Layer, where we have defined our domain entities.
But there is a solution: we have the Fluent API to the rescue! We have a few methods
for configuring associations, as shown in the table.

Method          Description
HasMany()       Allows configuring a one-to-many relationship (1..*)
HasOptional()   Allows configuring a relationship to an optional element (1..0,1)
HasRequired()   Allows configuring a one-to-one relationship (1..1)
Regarding our former example, we could write the following Fluent API code.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>()
    //... (the rest of this mapping listing was truncated; see the sketch
    //after the references below)
}
There are many possible combinations when using these methods, so we recommend
checking them out at the following URLs:
EF 4.1 - Managing Relationships
http://msdn.microsoft.com/en-us/library/hh295848(v=VS.103).aspx
EntityTypeConfiguration Class information:
http://msdn.microsoft.com/en-us/library/gg696117(v=vs.103).aspx
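Since the Fluent API listing above is truncated, the following is only a sketch of how the Customer-RewardsInfo association could be mapped with these methods, assuming a RewardsInfo navigation property on Customer and a Customer navigation property on RewardsInfo:

C#
//Sketch: 1..0,1 association mapped with the Fluent API
//(the navigation property names are assumptions)
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>()
        .HasOptional(c => c.RewardsInfo)
        .WithRequired(r => r.Customer);
}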
Table X.- Fluent API is preferred for establishing entity relationships

Rule #: I?.
o Rule
- Prefer the Fluent API over Data Annotations when establishing entity
relationships, so that domain entities do not need to reference the
EntityFramework.dll assembly.
5.10.2.- One-to-Many Relationships
This is a very typical scenario, like the relationship between Customer and
Order. The following code shows a simple scenario with an implicit one-to-many
relationship.
public class Company : Entity
{
    public Guid CompanyId { get; set; }
    public string Name { get; set; }
    public string CompanyType { get; set; }

    //...
    //Omitted
    //...

    //Relationship established implicitly
    //(property reconstructed from the mapping shown later)
    public virtual ICollection<Customer> Customers { get; set; }
}
In this case the association is established even though there is no explicit
foreign key, and the association is not bidirectional. But, underneath, in the
database tables, we would have a CompanyId foreign key within the Customers table.
In case we want a bidirectional relationship, we would update the Customer
class as shown below.
public class Customer
{
    public Guid CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    //...
    //Omitted
    //...

    //Navigation property making the relationship bidirectional (reconstructed)
    public virtual Company Company { get; set; }
}
Finally, in case we need to explicitly specify which property is our foreign key, we can
also do that.

public class Customer
{
    public Guid CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    //...
    //Omitted
    //...

    public Guid CompanyCode { get; set; }

    [ForeignKey("CompanyCode")]
    public virtual Company Company { get; set; }
}
But, again, if we use this kind of attribute, we will need to add a direct reference to the
EF 4.1 assembly from our Domain Model Layer. So, if you need to customize your
relationships, the Fluent API is a better way, as shown in the next code.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Company>()
        .HasMany(c => c.Customers)
        .WithRequired(m => m.Company)
        .HasForeignKey(m => m.CompanyCode)
        .WillCascadeOnDelete(false);
}
5.10.3.- Many-to-Many Relationships
This scenario is where an O/RM becomes powerful and clean, as the developer
does not have to deal with intermediate link tables for the relations (even though they
will exist in the database); from an object-oriented point of view, we deal only with
associations. Here we can see a simple many-to-many association case.
public class Customer
{
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    //...
    //Omitted
    //...

    //Many-to-many association (reconstructed from the mapping shown below)
    public virtual ICollection<Address> Addresses { get; set; }
}
If we want to change the table mappings instead of using the default name
conventions, we can do so using the Fluent API, as follows.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>()
        .HasMany(c => c.Addresses)
        .WithMany(c => c.Customers)
        .Map(c =>
        {
            c.ToTable("CustomerAddressRelationTable");
            c.MapLeftKey("CustID");
            c.MapRightKey("AddID");
        });
}
Other related mapping topics include:
- Entity Inheritance
- Table Splitting
- Entity Splitting
- DbEntityEntry
- DbSet / IDbSet
- References management
Rule #: I?.
o Rule
- It is important to locate the entire persistence and data access logic in well-known
points (Repositories). There should be a Repository for each domain
AGGREGATE. As a general rule, and for our sample Architecture, we will
implement the Repositories using Entity Framework.
References
Using Repository and Unit of Work patterns with Entity Framework 4.0
http://blogs.msdn.com/adonet/archive/2009/06/16/using-repository-and-unit-of-work-patterns-with-entity-framework-4-0.aspx
So far, there is nothing special about this class. It will be a regular class implementing
methods like Customer GetCustomerById(int customerId)
by using LINQ to Entities and POCO domain entities.
In this regard, the persistence and data access methods should be placed in the
proper Repositories, usually related to the data or entity type that will be returned by a
method, i.e. following this rule:
Table 7.- How to distribute methods in Repositories
Rule #: I?.
o Rule
- If, for example, a specific method defined by the phrase "Retrieve Company
Customers" returns a specific entity type (in this case Customer), the method
should be placed in the repository class related to this type/entity (in this case
CustomerRepository; placing it in CompanyRepository would be wrong).
- If the types being used are sub-entities within an AGGREGATE, the method
should be placed in the Repository related to the aggregate-root entity class.
For example, if we want to return all the detail lines for an order, we should
place this method in the Repository of the aggregate-root entity class, which is
OrderRepository.
- In update methods, the same rule should be followed, depending on the
main updated entity.
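For example, following this rule (the method names below are illustrative, not taken from the sample application):

C#
//"Retrieve Company Customers" returns Customer entities,
//so the method belongs in the Customer repository
public interface ICustomerRepository : IRepository<Customer>
{
    IEnumerable<Customer> GetCompanyCustomers(Guid companyId);
}

//OrderLines are sub-entities of the Order AGGREGATE, so the query for an
//order's detail lines belongs in the aggregate-root repository
public interface IOrderRepository : IRepository<Order>
{
    IEnumerable<OrderLine> GetOrderLines(Guid orderId);
}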
The reason this would not make sense is that the methods we could reuse in a base
class must be unrelated to any specific domain entity type. We cannot use a specific
entity class such as Product in the Repository base class methods, because we may
later want to derive classes such as CustomerRepository, which is not related to
Products.
TEntity will be replaced by the entity to be used in each case, that is, Product,
Customer, etc. Thus, we can implement common methods only once and, in each
case, they will work against a different specific entity. Below we partially show the
base Repository class we use in the N-Layered application example:
C#
//Base class for Repositories (partial listing; some member signatures reconstructed)
public class Repository<TEntity> : IRepository<TEntity>
    where TEntity : Entity
{
    //EF Context (Unit of Work)
    IQueryableUnitOfWork _UnitOfWork;

    //Constructor and other members omitted
    //...

    public virtual void Add(TEntity item)
    {
        if (item != (TEntity)null)
            GetSet().Add(item); // add new item in this set
        else
        {
            LoggerFactory.CreateLog()
                         .LogInfo(Messages.info_CannotAddNullEntity,
                                  typeof(TEntity).ToString());
        }
    }

    //Common Repository method
    public virtual IEnumerable<TEntity> AllMatching(
        ISpecification<TEntity> specification)
    {
        return GetSet().Where(specification.SatisfiedBy())
                       .AsEnumerable();
    }

    //Common Repository method
    public virtual IEnumerable<TEntity> GetPaged<KProperty>(int pageIndex,
        int pageCount,
        Expression<Func<TEntity, KProperty>> orderByExpression,
        bool ascending)
    {
        var set = GetSet();

        if (ascending)
        {
            return set.OrderBy(orderByExpression)
                      .Skip(pageCount * pageIndex)
                      .Take(pageCount)
                      .AsEnumerable();
        }
        else
        {
            return set.OrderByDescending(orderByExpression)
                      .Skip(pageCount * pageIndex)
                      .Take(pageCount)
                      .AsEnumerable();
        }
    }

    //... other common methods omitted ...
}
This illustrates how to define certain common methods that will be reused by the
Repositories of different domain entities. Even when a Repository class is very
simple at the beginning, with no explicit method implementations, it already inherits
many method implementations from the Repository base class.
For example, the initial implementation of ProductRepository could be as simple
as the following code:
C#
//Repository class for the Product entity
//It inherits the common methods from Repository<Product>
public class ProductRepository
    : Repository<Product>, IProductRepository
{
    //Constructor and specific methods omitted
}
As you can see, we have not implemented any explicit method in the
ProductRepository class; however, if we instantiate an object of this class, there is
already a whole set of methods we could invoke "without doing anything".
We would therefore have basic query, addition, and deletion methods for the
Product entity without having implemented them specifically for this entity.
In addition, we can add new explicit methods for the Product entity within the
ProductRepository class itself.
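A hypothetical example of such an entity-specific method (the IsDiscontinued property is assumed for the illustration; it is not from the sample application):

C#
//Hypothetical Product-specific query method added to ProductRepository
public IEnumerable<Product> GetDiscontinuedProducts()
{
    return GetSet().Where(p => p.IsDiscontinued)
                   .AsEnumerable();
}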
By default, relationships between entities (established as previously shown) use lazy
loading. If you want eager loading, you should specify it within your Repository code,
writing code similar to the following:

//Repository method code for eager-loading another internal Aggregate entity
//...
//Eager loading with LINQ to Entities
var orders = from order in uow.Orders.Include("OrderLines")
             select order;
//...
//Another way of doing it, using a SPECIFICATION
public override IEnumerable<Order> AllMatching(ISpecification<Order> specification)
{
    //Eager loading using a specification
    var set = _currentUnitOfWork.CreateSet<Order>();

    return set.Include("OrderLines") //continuation reconstructed; the original listing is truncated here
              .Where(specification.SatisfiedBy())
              .AsEnumerable();
}
This would allow us to fully replace the data persistence infrastructure layer, or the
Repositories themselves, through abstractions/interfaces, without impacting the
Domain and Application layers and without having to change Domain dependencies.
Another reason why this decoupling is so important is that it enables mocking of
the repositories, so the domain business classes can dynamically instantiate fake
classes (stubs or mocks) without having to change code or dependencies. They simply
rely on the IoC container which, when asked to instantiate an object for a given
interface, instantiates either a specific class or a fake one (depending on the mapping,
but logically, both meeting the same interface).
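As a minimal sketch of this idea (FakeCustomerRepository is a hypothetical in-memory test double), the container mapping decides which implementation is served for the same interface:

C#
var container = new UnityContainer();

//Production mapping: the real EF-based repository
container.RegisterType<ICustomerRepository, CustomerRepository>();

//Test mapping: a fake meeting the same interface (hypothetical class)
//container.RegisterType<ICustomerRepository, FakeCustomerRepository>();

//Domain/Application classes only ask for the abstraction
var repository = container.Resolve<ICustomerRepository>();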
This decoupled Repository instantiation through IoC containers such as
Unity is further explained in the Application and Distributed Services Layer
implementation chapters, because it is there that the instantiations should be
performed.
For now, the only important thing to emphasize is that we should have an interface
defined for each Repository class, and that these repository
interfaces must be located within the Domain layer, for the aforementioned reasons.
At the interface implementation level, the following would be an example for
ICustomerRepository:

C#
namespace Microsoft.Samples.NLayerApp.Domain.MainBoundedContext.ERPModule.Aggregates.CustomerAgg
{
    //Interface/Contract ICustomerRepository
    public interface ICustomerRepository : IRepository<Customer>
    {
        //Additional method specific to CustomerRepository
        IEnumerable<Customer> GetEnabled(int pageIndex, int pageCount);
    }
}
Note that in the case of repository interfaces we inherit from a base interface
(IRepository) that gathers the common methods of all repositories (Add(), Delete(),
GetAll(), etc.). Therefore, in the previous interface we only define the new/exclusive
methods of the repository for the Customer entity.
The IRepository base interface is similar to this (the original listing is truncated;
the members below are reconstructed from the common methods mentioned in the text):

C#
namespace Microsoft.Samples.NLayerApp.Domain.Core
{
    //Base contract for Repositories (reconstructed sketch)
    public interface IRepository<TEntity>
        where TEntity : Entity
    {
        IUnitOfWork UnitOfWork { get; }

        void Add(TEntity item);
        void Delete(TEntity item);
        IEnumerable<TEntity> GetAll();
        IEnumerable<TEntity> AllMatching(ISpecification<TEntity> specification);
        IEnumerable<TEntity> GetPaged<KProperty>(int pageIndex, int pageCount,
            Expression<Func<TEntity, KProperty>> orderByExpression, bool ascending);
    }
}
Then, we need the Unit of Work implementation, which is really based on the
EF 4.1 DbContext, as shown below.

C#
//UoW implementation: it implements the generic UoW methods
//(class declaration and first property opening reconstructed)
public class MainBCUnitOfWork : DbContext, IMainBCUnitOfWork
{
    IDbSet<Order> _orders;

    //Order entity set
    public IDbSet<Order> Orders
    {
        get
        {
            if (_orders == null)
                _orders = base.Set<Order>();
            return _orders;
        }
    }
    IDbSet<Country> _countries;

    //Country entity set
    public IDbSet<Country> Countries
    {
        get
        {
            if (_countries == null)
                _countries = base.Set<Country>();
            return _countries;
        }
    }

    IDbSet<BankAccount> _bankAccounts;

    //BankAccount entity set
    public IDbSet<BankAccount> BankAccounts
    {
        get
        {
            if (_bankAccounts == null)
                _bankAccounts = base.Set<BankAccount>();
            return _bankAccounts;
        }
    }
    //Generic CreateSet() method omitted
    //...

    //This is really where the UoW saves all pending changes to the database
    //(the method opening was truncated; signature and loop start reconstructed)
    public void CommitAndRefreshChanges()
    {
        bool saveFailed = false;

        do
        {
            saveFailed = false;

            try
            {
                base.SaveChanges();
            }
            catch (DbUpdateConcurrencyException ex)
            {
                saveFailed = true;

                //Refresh original values from the database ("store wins")
                ex.Entries.ToList()
                          .ForEach(entry =>
                          {
                              entry.OriginalValues.SetValues(entry.GetDatabaseValues());
                          });
            }
        } while (saveFailed);
    }
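The listing for rolling back data changes is not shown here; a sketch of a possible implementation, assuming the DbContext change tracker API (not necessarily the sample's exact code), closes the class:

    //Rollback data changes: reset every tracked entry to Unchanged,
    //discarding pending modifications (assumed implementation)
    public void RollbackChanges()
    {
        base.ChangeTracker.Entries()
            .ToList()
            .ForEach(entry => entry.State = EntityState.Unchanged);
    }
}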
Finally, we need to map the UoW interface to the real implementation class. That
should be done in our UNITY container registration code, as in the following code.

C#
//Types registration and mappings
public static class Container
{
    //Omitted
    //Other methods

    static void ConfigureContainer()
    {
        _currentContainer = new UnityContainer();

        //Register the UoW abstraction-to-implementation mapping
        //(the registration call was truncated; reconstructed sketch)
        _currentContainer.RegisterType<IMainBCUnitOfWork, MainBCUnitOfWork>(
            new PerResolveLifetimeManager());
        //...
    }
}
o Note
- Open connections against the data source as late as possible and close them
as soon as possible. This ensures that the limited resources are blocked for
the shortest possible period of time and are available sooner for other
consumers/processes. If non-volatile data is used, the recommendation is
to use optimistic concurrency to decrease the chance of blocking on the
database; this avoids record-locking overhead. Otherwise, with pessimistic
locking, an open connection with the database would also be necessary during
all that time, and records would be locked from the point of view of other
data source consumers.
- For security reasons, do not use System DSNs (Data Source Names) to store
connection information.
- Regarding SQL Server, as a general rule it is better to use Windows
built-in authentication instead of SQL Server standard authentication.
Usually the best model is Windows authentication based on the trusted
sub-system (that is, no impersonation of application users; SQL Server is
accessed with special/trusted accounts). Windows authentication is safer
because, among other advantages, it does not need a password in the
connection string.
- If you use SQL Server standard authentication, you should use specific
accounts (never sa) with complex/strong passwords, limit the permissions of
each account through SQL Server database roles, assign ACLs to the files
used to save connection strings, and encrypt such connection strings in
the configuration files being used.
- Protect confidential data sent through the network to or from the database
server. Consider that Windows authentication only protects credentials, not
application data. Use IPSec or SSL to protect the data on the internal network.
- If you are using SQL Azure for an application deployed in Microsoft's PaaS
cloud (Windows Azure), there is currently an issue regarding SQL Azure
connections that you have to deal with. Check the following information to
correctly handle SQL Azure connections:
Handling SQL Azure Connections issues using Entity Framework 4.0
http://blogs.msdn.com/b/cesardelatorre/archive/2010/12/20/handling-sqlazure-connections-issues-using-entity-framework-4-0.aspx

5.17.1.- Connection Pooling
When a connection to the database is requested, the connection pool is checked for an
equivalent existing connection; if there is no similar connection, a new connection is
created. Therefore, when Windows security is used to reach SQL Server and the
original users' identities are impersonated/propagated, the reuse of connections in the
pool is very low. So, as a general rule (except in cases requiring specific security, and
if performance and scalability are not a priority), it is recommended to follow the
"Trusted sub-system" access model, that is, accessing the database server with only a
few types of credentials. Minimizing the number of credentials increases the chance
that a similar connection will be available when a connection is requested from the
pool.
The following image shows a diagram representing the Trusted Sub-System:
This sub-system model is quite flexible, because it enables many options for
authorization control on the components server (application server), as well as
auditing of accesses at the application server level. At the same time, it allows suitable
use of the connection pool, by using default accounts to access the database server and
properly reusing the available connections of the pool.
On the other hand, and finally, certain data access objects offer very high
performance (such as DataReaders); however, they may offer very poor scalability if
they are not used properly. Because a DataReader keeps the connection open during a
relatively long period of time (it requires an open connection to access the data),
scalability may be impacted. If there are few users, performance will be very good,
but if the number of concurrent users is high, this may cause bottleneck problems
because of the number of open connections being used against the database at the
same time.
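A minimal sketch of using a DataReader in a scalable way (connectionString is assumed to be defined elsewhere): materialize the results and dispose immediately, so the connection returns to the pool as soon as possible.

C#
//Sketch: keep the DataReader (and its connection) open as briefly as possible
var names = new List<string>();

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT Name FROM Customers", connection))
{
    connection.Open();

    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
            names.Add(reader.GetString(0));
    } //reader closed here
}     //connection released back to the pool here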
- Consider implementing retry processes for operations where timeouts may
occur, but do so only where it is actually feasible; this should be analyzed on a
case-by-case basis.
- Design and implement a logging and error notification system for critical
errors and exceptions, one that does not expose confidential information.