.NET Interview Questions

ViewState:

 Viewstate is an ASP.NET feature that enables server controls to retain their state values across form posts.
 For every web form there is a hidden form field called __VIEWSTATE.
 No matter whether the form contains any controls or not, this hidden field exists as long as there is a form tag with the runat attribute set to server.
 The value of this hidden field contains the state values of all the controls that participate in viewstate.
 Viewstate is of type System.Web.UI.StateBag, a dictionary-based class that stores name/value pairs.
 When the user first requests a page, ASP.NET puts the state of all the controls that participate in viewstate into a string and sends it to the client as a hidden form field.
 Upon postback, ASP.NET parses the string from the hidden field and populates all the controls with their corresponding state.
 Storing, saving and initializing the viewstate is taken care of by the framework; the developer need not worry about it.
 Viewstate is controlled at machine level, application level, page level and control level.
 By default viewstate is enabled at all these levels.
 At application and machine level we can disable viewstate with <pages enableViewState="false" />.
 It is not possible to disable state retention for server controls like TextBox, CheckBox, CheckBoxList and RadioButtonList.
 The state of the above controls is maintained through the IPostBackEventHandler and IPostBackDataHandler interfaces and not by the viewstate mechanism, so the EnableViewState setting does not affect these controls.
 The Viewstate object can be used to store values other than control state. It can be used as ViewState["key"] = value, and the value can be read back with string s = (string)ViewState["key"] (see the sketch after this list).
 We cannot access the viewstate of any control through the Viewstate object. For example, ViewState["Textbox1"] does not return the value of the TextBox control.
 By default dynamic controls also participate in viewstate.
 But the information stored in viewstate for these dynamic controls is not completely reliable, because viewstate is populated before the Page_Load event, and by that time the dynamic controls may not even have been created. So the correctness of the viewstate information cannot be assumed where dynamic controls are concerned.
 Always disable viewstate when the values do not need to be retained across form posts.
 Viewstate is case sensitive: ViewState["Name"] and ViewState["NAME"] are treated differently.
 Viewstate is not shared across pages.
 When the TextMode property of a TextBox control is set to Password, it does not participate in viewstate and hence does not retain its value across form posts. This behavior is provided by the framework by design.
 Don't use viewstate when the page does not post back, or when it redirects or transfers to another page on postback.
 Thus Viewstate is the mechanism by which state information of a
page is maintained between the page postbacks.
 In addition viewstate also supports non-postback controls like
label controls. This is where the hidden fields are used to
maintain their state.
 When the page executes, ASP.NET collects the values of all controls and formats them into a single base64-encoded string.
 This string is then stored in a hidden field control named "__VIEWSTATE".
 By default the viewstate of the page is unprotected. Although the
values are not directly visible, it would not be too difficult for an
individual to decode the stored information. However Microsoft
has provided two mechanisms for increasing the security of the
viewstate.
 Machine Authentication Check (MAC) – tamper-proofing: tamper-proofing does not protect against an individual decoding or determining the contents of the viewstate. It instead provides a way of detecting whether someone has modified the contents of the viewstate.
 In this technique the viewstate is encoded with a hash code before being sent to the client. On postback, ASP.NET checks the encoded viewstate to verify it has not been tampered with. This is simply enabled at page level: <%@ Page EnableViewStateMac="true" %>
 Encrypting Viewstate: we can instruct ASP.NET to encrypt the contents of the viewstate using the Triple DES algorithm, a stronger algorithm that makes it difficult for anyone to decode the viewstate. This encryption can be applied at machine.config level: <machineKey validation="3DES" />
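As referenced above, here is a minimal code-behind sketch of using the ViewState dictionary for a value other than control state; the page class name and the "VisitCount" key are purely illustrative.

using System;

public partial class ViewStateDemo : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Read the previously persisted value, if any; entries are stored as
        // objects, so a cast is required when reading them back.
        int visitCount = (ViewState["VisitCount"] != null) ? (int)ViewState["VisitCount"] : 0;

        // Store the updated value; it is round-tripped in the __VIEWSTATE hidden field.
        ViewState["VisitCount"] = visitCount + 1;
    }
}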

Asp Main Objects:


 There are 7 main objects in classic ASP: Application, ASPError, ObjectContext, Request, Response, Server, Session.

Impersonation:

1) Impersonation means executing an ASP.NET application under the identity of a Windows user.
2) If the user is an authenticated Windows user, the user token will be sent to IIS.
3) If not, IIS will use the default anonymous identity IUSR_machinename.
4) By default impersonation is disabled in ASP.NET applications.
5) We can turn it on through web.config with <identity impersonate="true" />.
6) By default ASP.NET pages are executed under a process, or Windows program, and all Windows programs run with a specific security identity.
7) By default the ASP.NET process runs under a predefined Windows identity.
8) Alternatively, by configuring the ASP.NET application to use impersonation, we can make ASP.NET run under a different Windows identity or under the identity of the user who is making the request (see the sketch after this list).
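A minimal sketch (the page class name is illustrative) that shows which Windows identity a request actually executes under; with <identity impersonate="true" /> in web.config the thread runs under the caller's token instead of the worker-process identity.

using System;
using System.Security.Principal;

public partial class WhoAmI : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Windows token the current thread executes under: the worker-process
        // identity without impersonation, the impersonated user with it.
        string windowsIdentity = WindowsIdentity.GetCurrent().Name;

        // Identity ASP.NET authenticated for this request (empty for anonymous).
        string authenticatedUser = User.Identity.Name;

        Response.Write(windowsIdentity + " / " + authenticatedUser);
    }
}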

How the Request from IIS is handed to Asp.net worker process

 Asp.net pages are dynamically compiled on demand at the first request within the application context.
 All resources we can access on an IIS webserver are mapped or
grouped by their file extensions.
 Any incoming request is then assigned to a particular run-time
module for further processing.
 Modules that can handle web requests within the context of IIS are ISAPI extensions.
 ISAPI extensions are plain Win32 DLLs.
 When a request for a resource arrives, IIS first verifies the type of resource.
 Static resources such as images, text files, HTML pages and scriptless ASP pages are handled directly by IIS without the involvement of external modules.
 Resources that require server-side processing are passed on to the registered module.
 ASP pages are processed by an ISAPI extension called asp.dll, and files with the .aspx extension etc. are assigned to an ISAPI extension called aspnet_isapi.dll.
 In general, when a resource is associated with an executable, IIS hands it to that executable for further processing.

Asp.net Page life cycle:

 When a client requests a page, the request goes to IIS; IIS does the filtering and gives control of page processing to the respective engine.
 So when we request an .aspx page, control is given to the ASP.NET engine. This engine starts the HTTP pipeline.
 The engine creates an instance of the HttpRuntime class per appdomain, because each application lives within an appdomain, and the appdomain is hosted within a worker process.
 The HttpRuntime class picks an instance of the HttpApplication class from an internal pool.
 The HttpApplication class checks the web.config file and calls the respective handler to process the request.
 The handler's ProcessRequest method is called.
 The first step in this method calls FrameworkInitialize(), which builds the control tree for the page.
 Next, the ProcessRequest method makes the page transit through several stages (a code-behind sketch of the corresponding events follows this list):
 1) Initialization: the Page_Init event is fired. In this phase a control tree is established with all the static controls present on the page, and all these controls are initialized to their default values. Viewstate is not available in this phase. The controls are referred to through their IDs.
 2) Viewstate Loading: the LoadViewState event is fired. After the Init event the controls can be referred to only through their IDs. At LoadViewState the controls receive their first property, viewstate: the viewstate information that was persisted at the server on the last submission. Thus the controls are loaded with the viewstate information held at the server.
 3) Postback Data Processing: when the form is posted (due to AutoPostBack or a submit button), the current values of all the participating controls are posted to the server in the form post data. IPostBackDataHandler is implemented by all the controls that post data. For all these controls the framework fires the LoadPostData event, which updates their state with the current values from the post data.
 Page Loading: the Page_Load event is fired. The objects take their real shape in this phase. Now the controls can be referred to not only through their IDs but also through relative references, as the page's DOM model has been built. The objects get their client-side properties like width, color, etc.
 Postback Change Notification: this occurs after all the controls that implement IPostBackDataHandler have been updated with the current values from the post data. During this operation the framework checks whether each of these controls has actually changed or remained the same since the last submission, and flags a Boolean value based on that analysis: for those controls whose value has changed it flags true. The framework fires RaisePostDataChangedEvent for all those controls whose state or value has changed.
 Postback Event Handling: the object or control that caused the postback (an AutoPostBack control or the button click that caused the postback) is handled in this phase. RaisePostBackEvent is fired on this control. This is the event in which our postback logic is implemented.
 PreRendering: the PreRender phase is the last point at which the objects can be modified. After this phase the objects cannot be modified further. The viewstate is modified and saved for the last time. The PreRender event is fired.
 Viewstate Saving: the SaveViewState event is fired. In this phase, after all modifications to the objects are done, the current state of all the controls is encoded into a base64 string, put into the hidden field and sent along with the page.
 Page Rendering: the Render event is fired on the page, which in turn fires the Render event on each control of the page to render its HTML. The HTML of all these controls is collected by the page and sent as the response to the client.
 Page Unloading: all the objects created since page initialization, including the page object, are disposed and all server resources are released.
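A minimal code-behind sketch (the class name is illustrative; this is only a page that observes the stages described above, not the framework's internal implementation) showing where the main lifecycle events can be handled or overridden:

using System;

public partial class LifecycleDemo : System.Web.UI.Page
{
    protected override void OnInit(EventArgs e)
    {
        // Initialization: the control tree exists, viewstate is not yet loaded.
        base.OnInit(e);
    }

    protected override void LoadViewState(object savedState)
    {
        // Viewstate loading: state persisted on the last submission is restored.
        base.LoadViewState(savedState);
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Page loading: controls already hold their posted-back values here.
        if (!IsPostBack)
        {
            // first-visit initialization
        }
    }

    protected override void OnPreRender(EventArgs e)
    {
        // PreRendering: last chance to modify controls before viewstate is saved.
        base.OnPreRender(e);
    }

    protected override object SaveViewState()
    {
        // Viewstate saving: the state collected here is base64-encoded into
        // the __VIEWSTATE hidden field.
        return base.SaveViewState();
    }

    protected override void Render(System.Web.UI.HtmlTextWriter writer)
    {
        // Page rendering: each control emits its HTML into the response.
        base.Render(writer);
    }
}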

MSIL
 The compiler translates the source code into MSIL, which is a CPU-independent set of instructions that can be efficiently converted to native code.
 MSIL includes instructions for loading, storing, initializing and calling methods on objects, as well as instructions for arithmetic and logical operations, control flow, direct memory access, exceptions and other operations.
 Before code can be run, MSIL must be converted to CPU-specific
code, by JIT compiler.
 Thus the JIT compiler converts the MSIL to CPU-specific code.
 The CLR supplies one or more JIT compilers for the computer
architecture it supports.
 The JIT compiler takes into account the fact that some code might never get called during execution. Rather than using time and memory to convert all the MSIL to native code, it converts the MSIL as needed during execution and stores the resulting native code in a cache so that it is accessible for subsequent calls.
 The loader creates and attaches a stub to each method when it is loaded.
 On the initial call to a method, the JIT compiler compiles the MSIL of the method to native code and modifies the stub to point to the location where this native code is stored.
 Subsequent calls to the JIT-compiled method proceed directly to the native code.
 In .NET we have 3 types of JIT compilers:
 Pre-JIT: compiles the entire source code into native code in one stretch.
 Econo-JIT: compiles code part by part, freeing it when required.
 Normal JIT: compiles only the part of the code that is called and places it in the cache.
 In all, the role of JIT compiler is to bring higher performance by
placing the once compiled code in cache, so that when a next
call is made to the same method/procedure it gets executed
faster
 Source code -> MSIL by lang compiler (CPU – Independent
instructions)
 MSIL -> native code by JIT compiler (CPU – Specific code).
 Runtime supplies another mode of compilation called install-time
code generation. When using this install-time code generation
mode the entire assembly that is being installed is converted in
to native code.
 As part of compiling MSIL to native code, the code must pass through a verification process. The verification process checks that the code is type safe.
ADO.Net Architecture:

 ADO.NET was designed to meet design goals such as: a disconnected data architecture, tight integration with XML, and a common data representation with the ability to combine data from different data sources.
 ADO remains available to .NET programmers through .NET COM interoperability services.
 The ADO.NET solution for n-tier programming is the DataSet.
 The ADO.NET architecture includes two central components: the Data Provider and the DataSet.
 A Data Provider includes a set of components: the Connection, Command, DataReader and DataAdapter objects (see the sketch after this list).
 The DataSet is the core component of the disconnected architecture of ADO.NET.
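A minimal sketch of the Data Provider objects working together, assuming the SQL Server provider; the connection string, table and column names are illustrative.

using System.Data;
using System.Data.SqlClient;

class ProviderDemo
{
    static void Main()
    {
        // Illustrative connection string; adjust for a real environment.
        string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            conn.Open();

            // Connected, forward-only access through the DataReader.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    System.Console.WriteLine(reader["CustomerID"] + " - " + reader["CompanyName"]);
                }
            }
        }

        // Disconnected access: the DataAdapter fills a DataSet; Fill() opens and
        // closes the connection automatically.
        using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", connStr))
        {
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");
        }
    }
}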

DataRelation

 A DataRelation is used to relate two DataTable objects to each other through DataColumns.
 Relationships are created between matching columns in the parent and child tables.
 DataRelation objects are contained in the DataRelationCollection and are accessed through the Relations property of the DataSet.
 One of the prime functions of a DataRelation is to allow navigation from one DataTable to another within the DataSet (see the sketch after this list).
 Relationships can also cascade various changes from the parent row to the child rows; to control these changes in child rows, add a foreign key constraint to the constraint collection of the DataTable object. The constraint collection determines what action to take when a value in the parent table is deleted or updated.
 A ForeignKeyConstraint restricts the action performed when a column value is changed or deleted. This constraint is intended for primary key columns. In a parent/child relationship between two tables, deleting a value from the parent table can affect the child rows in one of the following ways:
Child rows can also be deleted – Cascading Action
Values in child columns can be set to null
Values in child columns can be set to default values
An exception can be generated

 These rules can be set through the ForeignKeyConstraint.DeleteRule property, e.g. Rule.SetNull. The default is Rule.Cascade, which deletes the child rows when the parent row is deleted.
 ForeignKeyConstraint objects are present in the constraint collection of the DataTable object. Constraints are not enforced unless EnforceConstraints is set to true.
 When we add a DataRelation, which creates a relationship between two tables in a DataSet, both a ForeignKeyConstraint and a UniqueConstraint are created automatically.
 The UniqueConstraint is applied to the primary key column of the parent DataTable and added to the constraint collection of the parent table.
 The ForeignKeyConstraint is applied to the foreign key column and added to the constraint collection of the child table.
 DataRelation also has a Nested property which, when set to true, allows the child rows to be nested within the associated row from the parent table.
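A minimal sketch of the above; the table and column names are illustrative.

using System.Data;

class DataRelationDemo
{
    static void Main()
    {
        DataSet ds = new DataSet();

        // Illustrative parent table.
        DataTable customers = ds.Tables.Add("Customers");
        customers.Columns.Add("CustomerID", typeof(int));
        customers.PrimaryKey = new DataColumn[] { customers.Columns["CustomerID"] };

        // Illustrative child table.
        DataTable orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustomerID", typeof(int));

        // Adding the relation automatically creates a UniqueConstraint on the
        // parent column and a ForeignKeyConstraint on the child column.
        DataRelation rel = ds.Relations.Add(
            "CustomerOrders",
            customers.Columns["CustomerID"],
            orders.Columns["CustomerID"]);

        // Cascading behaviour can be tuned on the generated constraint.
        ForeignKeyConstraint fk = (ForeignKeyConstraint)orders.Constraints[0];
        fk.DeleteRule = Rule.SetNull;

        // Navigation from a parent row to its child rows through the relation.
        customers.Rows.Add(1);
        orders.Rows.Add(100, 1);
        DataRow[] childRows = customers.Rows[0].GetChildRows(rel);
        System.Console.WriteLine(childRows.Length); // prints 1
    }
}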

Dataset

 A DataSet is a memory-resident representation of data.
 To create an exact copy of a DataSet that includes both schema and data, use the Copy method of the DataSet; to create a copy that includes the schema and only the data that has changed, use the GetChanges method; to create a copy that includes only the schema, use the Clone method. We can also add existing rows to the cloned DataSet through the ImportRow method.
 Case sensitivity applies in a DataSet if two or more tables or relations exist with the same name; it does not apply if only one such table or relation exists. For example, table1 and Table1 make no difference if only one table is present, but if two tables named table1 and Table1 are present, then the names are treated case-sensitively.
 Any number of DataAdapters can be used in conjunction with a DataSet; different DataAdapters are used to resolve updates back to the relevant data sources.
 The Update method of the DataAdapter is called to resolve changes in the DataSet back to the data source. The Update method takes an instance of a DataSet as a parameter, the instance that contains the changes (see the sketch after this list).
 If the DataSet contains data from a single table we can make use of a CommandBuilder to automatically generate the commands.
 A CommandBuilder is used to automatically generate commands for a DataAdapter when the appropriate insert, delete and update commands cannot be specified at design time.
 Commands generated automatically through a CommandBuilder hold good only for standalone tables, without taking into consideration the relationships to other tables in the data source. Hence a CommandBuilder should not be used for updating columns involved in a foreign key constraint; instead, specify the statements explicitly.
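A minimal sketch of filling a DataSet, changing it and sending the changes back through a DataAdapter and CommandBuilder; the connection string, table and column names are illustrative.

using System.Data;
using System.Data.SqlClient;

class UpdateDemo
{
    static void Main()
    {
        string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers", connStr))
        {
            // The CommandBuilder derives INSERT/UPDATE/DELETE commands from the
            // SELECT; this only works for a standalone table with a primary key.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");

            // Modify a row in the disconnected DataSet.
            ds.Tables["Customers"].Rows[0]["CompanyName"] = "Renamed Company";

            // Update resolves only the changed rows back to the data source.
            adapter.Update(ds, "Customers");
        }
    }
}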

COM

 COM is basically known as the Component Object Model.
 A COM component can be written in either VB6 or C++ and used from any language; language is not a barrier.
 COM maintains standards for interfaces; a client can invoke a method on a COM class only through such an interface, and thus the client is blissfully ignorant of the language of the component.
 A COM component and its associated interfaces are uniquely identified by GUIDs.
 When a COM component is registered, these GUID values are stored in the registry and mapped to the location of the component.
 A component exposes an interface called IUnknown which maintains the reference count of the component, meaning the number of clients accessing it; if the count goes to 0 the component can unload itself.
 The IUnknown interface consists of three methods: QueryInterface, which deals with casting from one interface type to another; AddRef(), which increments the component's counter when someone references it; and Release(), which decrements the counter when someone dereferences the component.
 .NET applications can use COM components through COM interoperability services.
 .NET creates a wrapper around the component, or creates a wrapper assembly for the component through tlbimp.exe. This tool interrogates the COM class and converts all COM data types to .NET data types.
 The RCW wrapper basically hides the base COM interfaces such as IUnknown and IDispatch when the component is exposed to a .NET client. The CCW, in turn, adds the base interfaces such as IUnknown and IDispatch to a .NET component when it is exposed to a COM client.
 A client that invokes a call on a COM class needs type library information; this information can be obtained at early binding or at late binding.
 Early binding means the client gets the type library information at compile time, whereas with late binding the client gets the type library information at runtime.
Connection Pooling

 Connection pooling significantly enhances the performance and scalability of an application.
 The .NET Framework automatically provides connection pooling for the SQL Server provider.
 When a connection is opened, a connection pool is created; each connection pool is associated with a distinct connection string. When a new connection is opened and its connection string does not match an existing connection pool, a new connection pool is created.
 Once created, connection pools are not destroyed until the active process ends.
 A connection pool is created for each new connection string. When a pool is created, multiple connection objects are created and added to the pool so that the minimum pool size requirement is satisfied; connections are then added to the pool as needed, up to the maximum pool size.
 When a SqlConnection is requested, it is obtained from the pool if a usable connection is available. To be usable, the connection must currently be unused.
 If the maximum pool size is reached and no usable connection is available, the request is queued. The connection pooler satisfies these requests as connection objects are released back to the pool; connection objects are released back into the pool when Close() or Dispose() is called on them (see the sketch after this list). When Close or Dispose is not called, connection objects are not added back to the pool; connection objects that are not closed explicitly are added back to the pool only when the maximum pool size has been reached and the connection is still open.
 The connection pooler removes connection objects whose lifetime has expired (connection lifetime: when a connection is returned to the pool, its creation time is compared with the current time, and if the difference exceeds the value specified by the Connection Lifetime attribute, the connection is removed). If a connection object is found to be no longer connected to the server, it is marked as invalid. The connection pooler periodically scans the connection objects in the pool and removes those that have been released and marked as invalid.
 Close() or Dispose() should not be called on any managed objects in the Finalize method of your class, because objects that have a Finalize() method need at least two garbage collections, which is slow for performance. Finalize methods should be used only in objects that hold unmanaged resources.
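A minimal sketch showing pooling-related connection string keywords and the using block that guarantees the connection is returned to the pool; the connection string values are illustrative.

using System.Data.SqlClient;

class PoolingDemo
{
    static void Main()
    {
        // Connections opened with the exact same string share one pool.
        string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=True;" +
                         "Min Pool Size=5;Max Pool Size=100;Connection Lifetime=120";

        // The using block guarantees Close()/Dispose(), which returns the
        // physical connection to the pool instead of destroying it.
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            // ... execute commands ...
        } // Dispose() runs here and the connection goes back to the pool.
    }
}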
Interfaces

 An interface is simply a collection of function prototypes.
 An interface contains no code and hence cannot be instantiated; however, we can declare object references of the interface type.
 When a method of the interface is implemented explicitly with the interface name, that method is available only through references of the interface type, not through any others.
 An interface method implemented in the derived class can have only public as its access modifier, but if the method is implemented with the interface name in its signature it cannot have any access modifier at all. If the access modifier is changed in the derived class the compiler complains, so the access modifier of the method should always be public.
 Suppose the interface method is implemented by a class, that class is further derived, and in the derived class the interface method definition is added with the new keyword, for example (see the full sketch after this list):
 interface aaa { void a1(); } class bbb : aaa { public void a1() { } } class ccc : bbb { public new void a1() { } }
 In this case, if we create references that look like aaa but assign instances of bbb and ccc, as in aaa a = new bbb(); aaa aa = new ccc();, then in both cases the method in base class bbb is called, because derived classes cannot alter the interface mappings they receive from the base class. The mapping of function a1 is to class bbb, as bbb was derived from the interface; class ccc cannot change this fact.
 The interface method in the class can be defined with the virtual keyword so that it can be further overridden in the derived class. In that case the interface mapping can effectively change, because the method is not new, it is only overridden.
 When the interface is implemented by an abstract class, the abstract class should either provide an implementation for the methods or declare the methods of the interface as abstract methods. For example: interface aaa { void a1(); } abstract class bbb : aaa { public abstract void a1(); }
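A minimal runnable sketch of the interface-mapping behaviour described above, using the same illustrative names aaa, bbb and ccc:

using System;

interface aaa { void a1(); }

// bbb implements the interface; the mapping of a1 is fixed to this class.
class bbb : aaa { public void a1() { Console.WriteLine("bbb.a1"); } }

// ccc hides a1 with 'new'; it cannot alter the interface mapping it inherited.
// If bbb had declared a1 as virtual and ccc overrode it, the calls below
// would dispatch to ccc instead.
class ccc : bbb { public new void a1() { Console.WriteLine("ccc.a1"); } }

class Program
{
    static void Main()
    {
        aaa a = new bbb();
        aaa aa = new ccc();

        a.a1();   // prints "bbb.a1"
        aa.a1();  // also prints "bbb.a1" - the interface mapping stays with bbb
    }
}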

Difference between string s = string.Empty and string s = “”


 String and string are both the same; string is the alias name for System.String.
 string s = "" and s = String.Empty are not the same; there is some difference.
 When s = "" a string object is created for "s", whereas when s = String.Empty no new object is created; s simply refers to the existing String.Empty instance.
Reflection
 Reflection is the ability to read metadata at runtime. A compiler for the CLR generates metadata during compilation and stores this metadata directly into assemblies.
 Through reflection we can read the methods, properties and events of a type at runtime, and we can invoke them dynamically.
 objAssembly = System.Reflection.Assembly.LoadFrom(str) takes the path of the assembly it has to load.
 Thus this LoadFrom method returns the assembly it has loaded dynamically from the specified path.
 After loading, we want to get all the types in that assembly. The Assembly class provides GetTypes(), which returns an array of the System.Type objects present in the assembly: arrOfTypes = objAssembly.GetTypes().
 This arrOfTypes array contains all the type objects present in the assembly.
 Then for each type t in this array of types we can get constructor info, e.g. ConstructorInfo[] cinfo = t.GetConstructors().
 We can get the property info of each type, e.g. PropertyInfo[] pinfo = t.GetProperties().
 Similarly we can get the methods and events of each type.
 Thus through the reflection classes we can extract the entire information of an assembly at runtime (a sketch follows this list).
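A minimal sketch of the steps above; the assembly path is illustrative.

using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // Illustrative path; any managed assembly can be inspected this way.
        Assembly objAssembly = Assembly.LoadFrom(@"C:\temp\SomeLibrary.dll");

        foreach (Type t in objAssembly.GetTypes())
        {
            Console.WriteLine("Type: " + t.FullName);

            foreach (ConstructorInfo c in t.GetConstructors())
                Console.WriteLine("  ctor: " + c);

            foreach (PropertyInfo p in t.GetProperties())
                Console.WriteLine("  property: " + p.Name);

            foreach (MethodInfo m in t.GetMethods())
                Console.WriteLine("  method: " + m.Name);
        }
    }
}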

Constant and Read-Only

 const is applied only to fields and local variables, whereas readonly is applied to fields (a property can be made effectively read-only by providing only a getter).
 A const field cannot also be marked static, because const is implicitly static.
 Once a field is qualified with const it cannot be referenced through an object; it should always be referred to through the type name.
 Unlike in C++, const in C# can't be used to qualify parameters and return values; it is used only for fields and local variables.
 const values are compile-time constants, which means the value of the constant should be known at compile time, whereas readonly fields are runtime constants.
 Properties that expose a constant will only have getters, no setters, because the value cannot be changed.
 A constant can be initialized only at the time of declaration, whereas a readonly field can be initialized at declaration or in a constructor (see the sketch after this list).
 Because a constant is a compile-time constant, if its value is changed the assembly should be compiled again and all its dependencies should also be recompiled; readonly is resolved at runtime, so even if it is changed, neither the assembly nor its dependencies need recompiling. Read-only fields can be changed only by the containing class (in its constructors).
 Constant values are not stored on the stack or the heap at runtime; they are embedded into the IL/metadata at compile time. Storage-wise, read-only fields behave like normal fields.
 Constants are class (static) members; readonly fields are typically instance members.
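A minimal sketch of the difference; the class and field names are illustrative.

using System;

class Circle
{
    // Compile-time constant: implicitly static, accessed through the type name,
    // and its value is baked into the IL of the assemblies that use it.
    public const double Pi = 3.14159;

    // Runtime constant: set at the declaration or in a constructor, never again.
    public readonly double Radius;

    public Circle(double radius)
    {
        Radius = radius;   // allowed only here or at the declaration
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Circle.Pi);               // const: referenced via the type
        Console.WriteLine(new Circle(2.0).Radius);  // readonly: an instance field
    }
}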

Garbage Collection

 The .NET Framework takes care of memory allocation and deallocation.
 However, when we create objects that use unmanaged resources, we must explicitly release those resources, because the .NET Framework does not know how to clean them up. Objects which encapsulate unmanaged resources must implement a Finalize method to clean them up (see the sketch after this list).
 C# and Managed C++ provide destructors as the simplified mechanism for writing finalization code. Destructors automatically generate a Finalize method.
 The .NET Framework keeps track of the objects that have a Finalize method using an internal data structure called the finalization queue. Each time the application creates an object that has a Finalize method, an entry mapping to the address of the object is added to the finalization queue. Thus the finalization queue contains entries for all those objects on the heap whose unmanaged resources need to be cleaned up before the object memory is reclaimed.
 Implementing a Finalize method affects performance and is not advisable unless necessary. Reclaiming the memory of objects that have a Finalize method needs at least two garbage collections (two rounds of survey). In the first collection the memory of all inaccessible objects that do not have Finalize methods is reclaimed; the entries for inaccessible finalizable objects are removed from the finalization queue and placed in another list of objects marked as ready for finalization. The entries in this list point to the objects whose finalization code is ready for execution. A special runtime thread calls the Finalize method of these objects and removes the entries from this list. A future garbage collection then ensures the finalized objects are truly garbage and reclaims them.
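A minimal sketch of the common Dispose/Finalize pattern for a type wrapping an unmanaged resource; the class name and the AllocHGlobal handle are purely illustrative.

using System;
using System.Runtime.InteropServices;

class UnmanagedResourceHolder : IDisposable
{
    private IntPtr handle = Marshal.AllocHGlobal(100); // stand-in unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        // Deterministic cleanup happened, so the object no longer needs to go
        // through the finalization queue.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        Marshal.FreeHGlobal(handle); // release the unmanaged resource
        disposed = true;
    }

    // C# destructor: the compiler turns this into a Finalize method, used only
    // as a safety net when Dispose() was never called.
    ~UnmanagedResourceHolder()
    {
        Dispose(false);
    }
}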
DataProviders

 Data providers are used for connecting to a database, executing commands and retrieving results. A data provider acts as a bridge between an application and a data source.
 The .NET Framework data providers are designed to be lightweight, with a minimal layer between the data source and the application code.
 The .NET Framework data provider for SQL Server uses its own protocol to communicate with SQL Server. It is lightweight and provides optimized access to the database without adding an OLE DB layer or an ODBC layer.
 The SQL provider connects directly to the SQL Server database, whereas the OLE DB provider connects to an OLE DB data source through the OLE DB service component (which provides connection pooling and transaction services) and the OLE DB provider for that data source.
 Thus the connection path is: SQL provider -> SQL Server data source; OLE DB provider -> OLE DB service component -> OLE DB provider -> OLE DB data source.
 The .NET Framework data provider for SQL Server (System.Data.SqlClient) should be used for SQL Server version 7.0 or later; for SQL Server 6.5 or earlier, use the OLE DB provider for SQL Server (SQLOLEDB) through the OLE DB data provider.

Sessions

 HTTP is a stateless protocol, which means that it does not automatically indicate whether a sequence of requests all come from the same client. As a result, building web applications that need to maintain some cross-request state information would be extremely challenging without additional support.
 Session state does not persist across web application boundaries; if an application switches to a new web application at runtime, the session values of the old application are not available to the new application.
 Each active session is identified and tracked by a session ID, which is a string containing only ASCII characters.
 Session IDs are automatically generated using an algorithm which ensures uniqueness, so that sessions cannot collide; the IDs are also generated randomly, so one user cannot calculate another user's session ID.
 Session IDs are communicated between client and server either through cookies or through modified URLs with the session ID string embedded (depending on how we configure the application settings).
 Sessions can be stored in-process or out-of-process. When in-process, the session is maintained as a dictionary-based in-memory object within the IIS/worker process; with in-process storage, the session data is lost if the application restarts.
 Session state can be one of:
o InProc – session state information is stored within the IIS process or the ASP.NET worker process aspnet_wp.exe. This is the default option.
o Off – session state is disabled and nothing is saved.
o SQLServer – session state is out of process and the session state information is stored in SQL Server.
o StateServer – session state is out of process and the session state information is stored in another process called aspnet_state.exe, on the same machine or on another machine.
 Among these modes InProc is the fastest, but the more session data there is, the more memory is consumed on the web server, and that can affect performance. InProc does not work in a web farm scenario.
 A web farm scenario is when a high-capacity application is hosted on multiple servers instead of one; thus in a web farm our application is hosted across multiple servers. A web garden is another scenario where the application is hosted across multiple processes.
 When session state is stored in an out-of-process mode, then when the client returns, the relevant worker process retrieves the information from the state server or SQL Server as binary streams, deserializes it and places it in the session objects.
 Storing session information out of process is useful because the information is stored separately from the worker process: we can recover the information easily if the process crashes, and because the storage is entirely partitioned from the process, multiple processes can use the server efficiently and the scalability of the application also increases.
 Configuring SQL/state server state: in web.config set the session mode accordingly; for out-of-process StateServer mode we need to give the server name with the port number.
 For out-of-process SQL mode we should run one of two scripts, InstallSqlState.sql or InstallPersistSqlState.sql. Both of these scripts create the ASPState database. The only difference between the two is that InstallSqlState.sql creates the ASPStateTempApplications and ASPStateTempSessions tables in tempdb, so they are lost if the computer is restarted, whereas InstallPersistSqlState.sql creates these tables in the ASPState database itself, so they are retained even if the computer is restarted.
 Session_End will be fired only in InProc mode; it fires after n minutes of inactivity or when Abandon() is called.
 When we store session state in out-of-process mode, one of the major cost factors is the serialization/deserialization of the objects stored in session.
 ASP.NET performs serialization/deserialization of the variables stored in session through an optimized internal method, but only if the stored variables are of basic types.
 Basic types means the predefined simple types of all sizes, such as int, byte, string, decimal, TimeSpan, DateTime, etc. If the session includes a variable of a type other than the basic types, ASP.NET performs serialization/deserialization through the binary formatter, which is relatively slower.
 Hence, when performance is taken into consideration, it is always advisable to store variables of basic types in session.
 For example, if we need to store the name and address of an employee in session, we can save them in two ways: a) store the values in two string variables, such as sname and saddress, and put those two variables in session; b) create a class with two fields, name and address, store the values in an instance of this class, and put that object in session (see the sketch after this list).
 Given options like these, performance-wise it is better to go for option a. Thus, when session state is stored in out-of-process mode, the major precaution to take is that all the variables stored in session must be serializable.
 The session ID remains the same even after a timeout or after the session is abandoned; this implies that the same session ID can represent multiple sessions over time if the browser instance remains the same. That is, if the browser is not closed and the user continues activity beyond the session timeout, the same session ID is maintained because the user is active with the same browser instance.
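A minimal sketch of the two storage options from the example above; the Employee class, key names and values are illustrative. Note the [Serializable] attribute, which the custom type needs to survive the StateServer or SQLServer modes.

using System;

[Serializable]
public class Employee
{
    public string Name;
    public string Address;
}

public partial class SessionDemo : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Option a: basic types, serialized by ASP.NET's optimized internal method.
        Session["sname"] = "James";
        Session["saddress"] = "Hyderabad";

        // Option b: a custom object, serialized through the binary formatter
        // (relatively slower) when the session is stored out of process.
        Employee emp = new Employee();
        emp.Name = "James";
        emp.Address = "Hyderabad";
        Session["employee"] = emp;

        // Session stores objects, so reading a value back requires a cast.
        string name = (string)Session["sname"];
        Employee stored = (Employee)Session["employee"];
        Response.Write(name + " / " + stored.Address);
    }
}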

Remoting

 Remoting enables us to build widely distributed applications, where the individual components of the application are all present on one computer or are spread across the entire world.
 Remoting enables us to build client applications which can access components or objects present in another process on the same computer or on another computer on the network.
 Thus remoting enables interprocess communication; not only interprocess, it also allows intraprocess communication, meaning between the appdomains of the same process.
 Thus the .NET Framework provides an abstract method of interprocess communication through remoting, where the remotable object is separated from the client, the server and the communication process.
 Remoting can take place between any kinds of applications: web, Windows, console, Windows services, etc. Any application can host the remotable object.
 The communication protocol between the client and the server can be changed from one protocol to another, or the serialization format can be changed from one to another, without recompiling the client and the server.
 Thus remoting is entirely flexible and customizable, as it is separated from the client, the server and the communication medium.
 The three basic things for remoting are:
1. The remotable object
2. A host application which listens for requests for the object
3. A client application which requests the object
 To enable objects of other application domains to use our objects, our class should be inherited from MarshalByRefObject. That is, for client objects to access our server objects present in a different process or appdomain, our server class should be inherited from MarshalByRefObject.
 To enable client applications to access the remotable object, a host (listener) application needs to be built and configured with the remoting system (see the sketch after this list).
 The listener application can be developed in any managed application domain, such as Windows, web, console or Windows service, because remoting configuration is done on a per-application-domain basis.
 Once the listener application is built and configured with the remoting system, we choose and register a channel, which is an object that takes care of the communication between the client and the server application.
 Now the listener application is ready on a channel to listen to requests from clients. Unlike COM, remoting does not start the server application; the server application should be explicitly started to listen for requests. This is a major difference between COM and remoting.
 There are two types of channels supported by the .NET Framework: HttpChannel and TcpChannel. HttpChannel involves SOAP formatting and TcpChannel involves binary formatting. HttpChannel has certain benefits, such as being able to penetrate firewalls, and it also supports standard security and authentication protocols.
 Remoting configuration of the listener application can be done programmatically or through an application/machine configuration file, for example RemotingConfiguration.Configure("listener.exe.config").
 Build the client application and configure it with the remoting system; the client application can then invoke calls on the remotable object as if it were in its local domain. The remoting system intercepts the call, hands it to the remote object and returns the value to the client.
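A minimal host sketch of the steps above; the RemotableCalculator type, the "Calc" URI and the port number are illustrative.

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

// The remotable object must derive from MarshalByRefObject to be reachable
// from other appdomains.
public class RemotableCalculator : MarshalByRefObject
{
    public int Add(int a, int b) { return a + b; }
}

class HostApp
{
    static void Main()
    {
        // Register the channel the host listens on.
        HttpChannel channel = new HttpChannel(32456);
        ChannelServices.RegisterChannel(channel);

        // Publish the object as a server-activated (well-known) type.
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(RemotableCalculator), "Calc", WellKnownObjectMode.Singleton);

        Console.WriteLine("Listening on http://localhost:32456/Calc ...");
        Console.ReadLine(); // keep the host alive; it must be started explicitly
    }
}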

Remoting

 Remoting is nothing but the exchange of information across application domains. The appdomains can be in the same process, in different processes on the same computer, or in processes on different computers.
 So, based on these three possibilities, we can see that remoting does not always involve two networked computers.
 Regardless of the distance between the objects, the entity which attempts to interact with the remote object is called the client, and the entity which hosts the remote object is called the server.
 Client and server objects do not communicate via a direct connection, but rather through the use of an intermediary called a proxy. As far as the client is concerned, the proxy is the referenced remote object in its local appdomain. The proxy exposes the same interface as the remote object. The client calls methods on the proxy, which in turn forwards the calls to the actual remote object.
 The proxy invoked directly by the client is the transparent proxy. This CLR-autogenerated entity is in charge of ensuring that the client has provided the correct number of arguments to invoke a remote method. The transparent proxy is a fixed interception layer that cannot be modified or extended programmatically.
 The invocation of a method, along with its parameters, on a remote object is packaged up into a message object. Once this message object has been populated, it is passed to a closely related type called the real proxy, also present on the client.
 The real proxy is the one which actually passes the message object on to the channel.
 Unlike the transparent proxy, the real proxy can be extended by the programmer. Unless we want to customize the real proxy's implementation, the only method of interest in the real proxy is Invoke(). Under the hood, the .NET-generated transparent proxy passes the formatted message object to the real proxy through this Invoke method.
 Once the proxies have validated and formatted the client arguments into a message object, the message is put onto the communication channel by the real proxy. The channel is the entity responsible for transporting this message object to the remote object and for carrying any return value from the remote object back to the client.
 .NET provides three channel types for communication between client and server:
a) TCP/IP
b) HTTP
c) IPC
 TCP/IP is the fastest communication channel; the message objects are converted by a binary formatter into a tight binary format. The packets thus obtained are very lightweight and compact and thus help in faster remote access. The downside is that TCP/IP is not firewall-friendly and may require the services of a system administrator to allow messages to cross machine boundaries.
 The HTTP channel converts message objects into SOAP format through the SOAP formatter. The packets thus obtained are quite heavy, as they contain a very detailed message, and hence remote access is slower. The advantage is that the HTTP channel is firewall-friendly, as most firewalls allow text messages on port 80.
 Finally, in .NET 2.0 we have the IPC channel, the inter-process communication channel. IPC is faster than TCP/IP and HTTP. However, this channel can only be used for communication between appdomains on the same computer. Given this, we cannot use the IPC channel for building a distributed application that spans multiple physical computers.
 Client and server applications are allowed to register the channels of their choice, for example: HttpChannel c = new HttpChannel(32456); ChannelServices.RegisterChannel(c); thus a channel is created on port 32456 and registered.
 The HTTP channel is configured by default with the SOAP formatter and the TCP channel is configured by default with the binary formatter. We can customize these channels to use any formatter, including a custom formatter.
 Remotable objects are those which are available and are marshaled outside their appdomain; in contrast, all other objects are non-remotable objects. Objects are available outside the appdomain only if they are inherited from MarshalByRefObject or if the class has the Serializable attribute.
 There are two types of remotable objects: MBV and MBR.
 MBV means marshal by value. For the objects of a class to be available outside the appdomain by value, the class should have the Serializable attribute.
 MBV objects reside on the server. However, when a client invokes a method on an MBV object, the MBV object is serialized, transferred over the network and restored on the client as an exact copy of the server-side object.
 When this happens, the MBV object is no longer a remote object; method calls do not require a proxy object or marshalling, because the object is locally available.
 MBV objects can provide faster performance by reducing the number of network round trips, but in the case of larger objects the time taken to transfer the serialized object from server to client can be very significant.
 MBV object is created by declaring a class with serializable
attribute.
 MBR objects are remote objects; they always reside on the server, and all the methods invoked on these objects are executed at the server. The client communicates with the MBR object on the server using a local proxy object that holds the reference to the MBR object.
 Although the use of MBR objects increases the number of
network roundtrips, they are a good choice when the objects are
large.
 An MBR object is created by deriving the class from
marshalbyrefobject class.
 Remote Object Activation: between two types of objects MBV and
MBR; only MBR objects can be activated remotely. No remote
activation is needed in case of MBV because the MBV object itself
is transferred to client.
 Based on the activation mode, MBR objects are classified into 2 types:
a) server-activated objects
b) client-activated objects
 Server-activated objects (SAO) are those objects whose lifetime is directly controlled by the server.
 When a client requests an instance of the remote object, a proxy to the remote object is created in the client application domain, but the remote object itself is only instantiated (activated) when a method on this proxy is invoked by the client.
 There are 2 possible activation modes for SAO objects (see the sketch after this list):
a) single-call activation
b) singleton activation
 In single-call activation mode, an object is instantiated at the server solely to fulfill just that one client request. After the request is fulfilled, .NET deletes the object and reclaims its memory.
 Objects activated through single-call mode are stateless, because the objects are created and destroyed with each client request; therefore they do not maintain state across requests.
 This behavior makes for more server scalability, as an object consumes server resources only for a small amount of time, therefore allowing the server to allocate resources to other objects.
 Thus single-call activation mode is desirable when: a) there is no great overhead in object creation, b) the object is not required to maintain state, and c) the server needs to support a large number of client requests for the remote object.
 Usually the single-call mode is followed in load-balancing environments, where the application is hosted on multiple servers and a request is served by whichever server is free; because single-call objects are stateless, it does not matter which server processes the request each time.
 Singleton activation mode: in this mode there is at most one instance of the remote object, regardless of the number of clients accessing it.
 The same object serves all the requests from a client and from multiple clients. As the same instance is maintained, it maintains state information, and the information in this object is shared by the multiple clients accessing it.
 The lifetime of this object depends upon its lifetime lease.
 A singleton object is the desired solution when the overhead of instance creation is substantial and when the object has to maintain state.
 Client-activated objects (CAO) are remote objects whose lifetime is controlled by the client.
 Client-activated objects are created as soon as the client requests an instance of the remote object; unlike an SAO, a CAO does not delay object creation until the first method is called on the object.
 When a client attempts to create an instance of the remote object, an activation request is sent to the server, an instance of the requested class is created using the constructor specified by the client, and an ObjRef object is returned to the client. Based on this ObjRef, the proxy is created in the client appdomain.
 An instance of a CAO serves only the client responsible for its creation, and the CAO does not get discarded with each request; thus a CAO can maintain state for each client it is serving.
 The lifetime of a CAO is decided by its lifetime lease.
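A minimal sketch of the activation modes and the client side, continuing the illustrative RemotableCalculator type, "Calc" URI and port from the earlier host sketch:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

class ActivationDemo
{
    static void RegisterOnServer()
    {
        // Server-activated object (SAO): SingleCall gives a stateless instance
        // per call, Singleton gives one shared, stateful instance.
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(RemotableCalculator), "Calc", WellKnownObjectMode.SingleCall);

        // A client-activated object (CAO) would instead be registered with:
        // RemotingConfiguration.RegisterActivatedServiceType(typeof(RemotableCalculator));
    }

    static void CallFromClient()
    {
        ChannelServices.RegisterChannel(new HttpChannel());

        // The client works against a transparent proxy; the real proxy forwards
        // the message object over the registered channel to the server.
        RemotableCalculator calc = (RemotableCalculator)Activator.GetObject(
            typeof(RemotableCalculator), "http://localhost:32456/Calc");

        Console.WriteLine(calc.Add(2, 3));
    }
}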

Views

 Views are known as virtual tables; they are known as virtual tables because the result set of a view is not physically stored.
 Views are created with a SELECT statement; views can be created from a single table, from multiple tables, or from another view.
 The major difference between a view and a table is that the data in a table has physical storage, whereas the data in a view has no storage of its own; it references the data of the underlying tables.
 A view can also be known as a "stored query", because what is actually stored in the database when a view is created is just the query itself; at runtime this is replaced with the result set.
 Views can also be modified, meaning insert, update and delete operations can be performed on a view, but there are certain restrictions: modifications are allowed only if the view is created from a single table and the SELECT statement does not have any column functions, DISTINCT, GROUP BY, HAVING clause, etc.
 Read-only views: a view is said to be read-only if its definition has any one of the following:
1. a FROM clause having more than one table
2. a SELECT statement having DISTINCT
3. a SELECT statement having any column function
4. a GROUP BY clause
5. a HAVING clause
 When views are modified, "WITH CHECK OPTION" is used, which validates the rows before modifying the view.
 WITH CHECK OPTION comes in two flavors: LOCAL, which checks validity against the current view only, and CASCADE, which checks validity against the current view and also the underlying views.
 Views can be nested to any level, but usually this is not recommended, because if the top-level view, a base view or the base table is dropped, all the others become inaccessible.

Assemblies

 Assemblies are the fundamental unit of programming; the MSIL present in portable executable files called assemblies is executed by the CLR. An assembly will not be executed if its associated manifest is not found in the assembly. An executable assembly has only one entry point: DllMain, WinMain or Main.
 Assemblies are designed mainly to simplify application deployment and to solve the version issues seen in component-based applications.
 The .NET Framework addresses these two issues through assemblies. An assembly is a self-describing deployment unit because it does not depend upon any registry entries; assemblies can assure zero-impact installation.
 Assembly contents: an assembly contains the assembly manifest, type metadata, MSIL code and resources. All of these can be present in one file (a single-file assembly) or in separate files (a multi-file assembly). In a multi-file assembly the assembly manifest has references to the other files, such as .netmodules and resource files. (A .netmodule does not have an assembly manifest.)
 Type metadata describes the types that are available or used in the assembly.
 On each computer where the Common Language Runtime (CLR) is installed there is a machine-wide code cache called the Global Assembly Cache (GAC).
 Shared assemblies are kept in the GAC. As a guideline, an assembly should not be kept in the GAC unless needed, because once any assembly that makes up an application is kept in the GAC, the application can no longer be replicated or installed with the xcopy method; the assembly file in the GAC would also have to be moved.
 It is really important to understand that not all the assemblies on our machine are displayed in the .NET tab of the Add Reference dialog box. Our custom assemblies are not displayed there, and it does not display all the assemblies located in the GAC either; rather, this dialog simply displays a list of common assemblies that VS 2005 is preprogrammed to display.
 Technically it is also possible to display our custom assemblies in the Add Reference dialog list by deploying a copy of the assembly to "C:\Program Files\Visual Studio 8\Common7\IDE\PublicAssemblies".
 Not only shared assemblies; private assemblies can also be kept in the GAC area through the Native Image Generator (Ngen.exe). Ngen.exe compiles IL to native code, and this native image is kept in the native image cache of the GAC.
 Assemblies can be kept in GAC through different ways:
1. Windows Installer
2. GacUtil.exe
3. Windows Explorer
 In deployment scenarios the Windows Installer should be used, and in development scenarios GacUtil or Windows Explorer can be used, because the Windows Installer gives additional features like reference counting etc.
 Assemblies installed in the GAC must have a strong name to uniquely identify the assembly; the GAC also performs an integrity check on each file that makes up an assembly (it checks whether there is any change in a file that is not reflected in the manifest).
 A strong name consists of the assembly name, version number, public key and culture.
 A strong name is created using the sn.exe command-line utility. Besides creating public/private key pairs, this utility does a whole lot of other things, like extracting keys from signed DLLs, delay-signing an assembly and other tasks associated with signing an assembly.
 Move to the folder where our source code exists, then run sn -k for the DLL; the utility will create a key file for our DLL in the same directory (see the sketch after this list).
 Once this key file is created, we should update this information in the AssemblyInfo.cs file, which contains an element like [assembly: AssemblyKeyFile("")]; we should provide the path of the key file there.
 Gacutil.exe is used to install our assembly into the GAC. Once the file is added to the GAC, we can add this shared assembly to our project through Add Reference by browsing to the GAC location and selecting the assembly.
 Administrators often protect the WINNT directory through an access control list (ACL); as the GAC is installed in this WINNT directory, the ACL is also applied to it.
 An assembly can be signed either with a strong name or with SignCode.exe. A strong name adds a public-key signature to the file containing the manifest. Strong-name signing uniquely identifies the assembly, but no level of trust is ensured by strong-name signing.
 Signing an assembly with SignCode, on the other hand, ensures a level of trust, because here the publisher of the assembly is required to prove its identity to a third party and obtain a certificate. This certificate is then embedded in the assembly.
 An assembly can be signed with both a strong name and SignCode. The strong name is stored in the file containing the assembly manifest. SignCode, on the other hand, can sign only one file at a time; in the case of a multi-file assembly, the file containing the assembly manifest is signed, and the SignCode signature is stored in a reserved slot in the file containing the assembly manifest.
 When both SignCode and a strong name are used for an assembly, the strong name should be assigned first.
 There are two types of encryption: symmetric and asymmetric encryption.
 Symmetric encryption is where the same key is used for encryption and decryption.
 In asymmetric encryption one key is used to encrypt and the other is used to decrypt; these two keys always come in a pair. There are two forms of asymmetric encryption, public-key and private-key: with public-key encryption the data is encrypted with the public key and decrypted with the private key, and with private-key encryption it is encrypted with the private key and decrypted with the public key.
 For example, James wants to send a mail to Suman and wants only Suman to be able to read it, so James writes the mail and encrypts it with Suman's public key, as the public key is exposed to all. When the mail reaches Suman, he uses his private key to decrypt and read it. As the private key is personal and known only to Suman, it is assured that only Suman has read the mail. But Suman is not assured that James sent the mail. So this can be extended: James signs the mail with his private key and uses Suman's public key to encrypt it. When Suman receives the mail he decrypts it with his private key, sees the digital signature of James and verifies it with James's public key. Thus Suman is assured that the mail came from James, and James is assured that Suman alone has read the mail.
 The CLR also performs hash verification on an assembly: the assembly manifest contains a list of all the files that are referenced and the hash of those files at the time the manifest was built. As each file is loaded, its hash is computed and compared with the hash value in the manifest; if there are any differences, the assembly fails to load.
 Strong names, SignCode digital signatures and hash verification ensure that the files are not altered in any way.
 Versioning of assemblies: the version number of an assembly includes four parts, major : minor : build : revision. A difference in any of these numbers is treated as a different assembly, i.e. an assembly with the same name but a different version.
 With automatic version creation the major and minor parts stay constant (1.0 by default), the build is taken as the number of days since 1 January 2000, and the revision is taken as the number of two-second intervals since midnight 12.00 am. In AssemblyInfo the version element is given as 1.0.*, which automatically creates the version number whenever the assembly is built.
 Once an assembly is strong-named, all the assemblies it references should also have strong names, so that the security of the strong name is not compromised.
 Assemblies can be of two types called Static Assemblies and
Dynamic Assemblies. Static assemblies are those assemblies
which will be stored in disc even before execution. Dynamic
assemblies will be directly executed from memory; these
assemblies are not stored in disc before execution. These
assemblies can be stored in to disc after execution.
Reflection.Emit is the method which will create dynamic
assemblies.
 Assembly manifest contains assembly name, version, culture,
strong name or public key, list of all files in assembly: hash of file
and its name, type reference information, list of all referenced
files.
 Some extra points on assemblies: a code library need not always
have the *.dll file extension. It is perfectly possible for an
executable assembly to make use of types defined within an
external executable file; in this light a referenced *.exe can
also be considered a code library. Before VS 2005 the only way to
reference an external executable file was through the /reference
flag of the compiler, but in VS 2005 we can also add it through
the Add Reference dialog.
 Visual studio allows us to refer the Multifile assembly just in the
manner as we refer the single file assembly. The primary module
of the Multifile assembly is called and referred and the relative
modules of the Multifile assembly are loaded in to process on
demand. The primary module is the one which contains manifest
information. The other modules cannot be directly referenced.
 .NET 2.0 allows us to specify friend assemblies, which allow
internal types to be consumed by specific assemblies, via the
InternalsVisibleToAttribute class.
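 For example (the friend assembly name is illustrative), in the
AssemblyInfo of the library whose internal types are to be shared:
   // internal types of this assembly become visible to MyUnitTests
   // (for strong-named assemblies the friend's full public key must be included)
   [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyUnitTests")]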

Native Image Generator (Ngen.exe)

 Ngen.exe is a tool which increases the performance of managed
applications.
 The tool compiles an assembly to native code ahead of time and
installs the resulting image into the Native Image Cache on the
local computer, so the CLR uses this native image directly instead
of JIT-compiling at runtime. No additional steps are needed to
tell the CLR to use the native image; the runtime uses it
automatically. The native image cache is a reserved area of the
GAC. Running Ngen.exe on an assembly lets it execute faster
because the code is loaded from the native image cache rather than
being generated dynamically.
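 For example, with the .NET 2.0 version of the tool (the assembly
name is illustrative), run from a Visual Studio command prompt:
   ngen install MyApp.exe
   ngen uninstall MyApp.exe
 The first command compiles the assembly and its dependencies into
the native image cache; the second removes the native images again.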
Signing an Assembly
 An assembly can be signed in two different but complementary
ways: with strong name or signcode.
 Signing an assembly with a strong name adds a public key
encryption to the file containing the assembly manifest.
 Strong name signing helps to verify name uniqueness, prevents
name spoofing, and provides callers with some identity when a
reference is resolved or when there is a reference conflict.
 But no level of trust is ensured with strong name, which makes
signcode important.
 Thus through a strong name: a) name uniqueness is guaranteed,
because the key pair (public and private keys) is unique; b) the
strong name protects the version lineage of the assembly, ensuring
that no one other than the author can create a new version of it;
c) the strong name provides an integrity check which ensures the
file was not altered after it was built, by cross-checking against
the hash stored in the manifest.
 Once the assembly is strong-named then all the assemblies it
refers should also be strong named.

 Strong name is created using SN command line utility.
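 For example (the key file name is illustrative), a key pair is
generated with sn.exe and then referenced from AssemblyInfo:
   sn -k MyKeyPair.snk
   [assembly: AssemblyKeyFile("MyKeyPair.snk")]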


 A strong name consists of a public and a private key, which form
a unique pair. The public key becomes part of the assembly's
name; no two publishers have the same public key.
 No one else can generate the same assembly name, because an
assembly signed with one private key has a different name than the
same assembly signed with another private key. No two assemblies
end up with the same strong name.
 A strong name protects the version lineage of an assembly: it
ensures that no one other than the author of the assembly can
generate a new version of it.
 A strong name provides a strong integrity check, which ensures
that the code has not been modified since it was built.
 The important thing to remember is that a strong name does not
provide the level of trust that a digital signature (Signcode)
does.
 Delay signing: it is often the case that the author of an
assembly does not have access to the private key needed to do full
signing. Most companies keep the private key in a well secured
place where only a few people have access to it.
 As a result the .NET framework provides a technique called delay
signing that allows the developer to build an assembly with only
the public key. In this mode the file is not actually signed,
since the private key is not supplied, but space is reserved
within the assembly so that it can be signed later with the Strong
Name (sn.exe) utility.
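 A minimal sketch (file names are illustrative): the assembly is
built with only the public key, verification is skipped on the
development machine, and the full signature is applied later:
   [assembly: AssemblyKeyFile("PublicKeyOnly.snk")]   // file holding only the public key
   [assembly: AssemblyDelaySign(true)]                // reserve space, do not sign yet

   sn -Vr MyLib.dll                  // skip strong-name verification during development
   sn -R  MyLib.dll FullKeyPair.snk  // later: full signing with the private key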

CLR Architecture

 The CLR is made up of the following components: Base Class
Library Support, Threading Support, COM Marshaller, Exception
Manager, Security Engine, Type Checker, Debug Engine, Code
Manager, IL-to-Native-Code Compiler (JIT), Garbage Collector and
Class Loader.

Caching

 One of the important factors in building high-performance,
scalable applications is storing items in memory the first time
they are requested.
 These items can be stored either in web server or any other
software in the request stream (browser, proxy server etc).
 This process of capturing the items and storing them in memory
is called “Caching”.
 There are two types of caching called Output Caching and Data
Caching.
 Output caching is the caching where a page, or part of a page, is
captured and stored, so subsequent requests for that page are
served from the cached output instead of being processed again.
 Data caching is where objects such as a DataSet are stored in
server memory, so the application saves the time and resources
otherwise spent recreating that data.
 The major difference between output caching and data caching is
what gets stored and where. Output caching stores the rendered
page output; depending on the Location attribute it can be kept on
the server, on the client browser or on a downstream proxy. Data
caching always stores objects in server memory. With output
caching the page output is cached when the page is requested for
the first time and subsequent requests are served from the cached
output, so the page is not processed again. With data caching
every request is still processed by the server, but the DataSet
(or other object) is cached on the first request, so subsequent
requests spend no time rebuilding it; the cached object is served
instead.
 Output caching for a page is enabled by adding an @ OutputCache
directive to the .aspx page. The OutputCache directive has several
attributes such as Duration, VaryByParam, Location and
VaryByHeader. The two attributes Duration and VaryByParam are
mandatory; if you do not specify them in the OutputCache
directive, a parser error is thrown.
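 For example:
   <%@ OutputCache Duration="60" VaryByParam="None" %>
 This caches the page output for 60 seconds, with a single cached
version regardless of the request parameters.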
 ASP.NET allows caching multiple versions of a web page. Caching
multiple versions can be done by VaryByParam, VaryByHeader or
VaryByCustom.
 Caching multiple versions of web pages is supported declaratively
by the OutputCache directive and programmatically by the
HttpCachePolicy class.
 VarybyParam caches multiple versions based on http get query
string or post method form post parameters.
 VarybyHeader caches multiple versions based on different
httpheaders associated with the request.
 Varybycustom caches multiple versions based on browser type
or custom string which is implemented in application.
 HttpCachePolicy class supports the above by VaryByParams,
VaryByHeaders property and SetVaryByCustom method.
 If the user is not interested in caching multiple versions of the
page, he can just set the varybyparam to none in the output
directive.
 Multiple versions of a page can be cached for one or more
parameters by delimiting the parameter names with semicolons, and
we can cache versions based on all parameters by giving * for
VaryByParam.
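 For example (the parameter names are illustrative):
   <%@ OutputCache Duration="30" VaryByParam="id;category" %>
   <%@ OutputCache Duration="30" VaryByParam="*" %>
 The first directive caches one version per combination of the id
and category parameters; the second caches one version per
combination of all parameters.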
 Sometimes it is impractical to cache the whole page, because
certain parts of it must be created dynamically. The parts that
are expensive to build are identified, isolated into separate user
controls, and only those parts of the page are cached. This is
also called "Fragment Caching" or partial caching.
 Partial caching is supported in two ways : control caching and
post-cache substitution.
 Fragment caching is supported declaratively by the OutputCache
directive (on the user control) and programmatically by the
PartialCachingAttribute class.
 User controls: we can create our own custom, reusable controls
called user controls. Like web pages, user controls are compiled
when first requested and stored in memory to serve subsequent
requests. But unlike web pages, user controls cannot be requested
directly; they are always added to a page.
 Creating a user control is similar to creating a web page except
for some differences: instead of the Page directive, a user
control has a Control directive, and a user control does not have
<html>, <body> or <form> tags around its content.
 Just as multiple versions of a web page can be cached, multiple
versions of a user control can also be cached. Caching multiple
versions of a user control is done declaratively through the
OutputCache directive and programmatically through the
PartialCachingAttribute class.
 Multiple versions of a user control can be cached through
VaryByParam, VaryByCustom, VaryByControl and the Shared attribute.
 When a page includes a user control, an instance of the user
control is created and stored in memory; each page that contains
the user control creates and caches its own instance. For example,
if a user control is used by 25 pages in an application, then 25
instances of the user control are created and stored in memory,
and if any multi-version caching is used, many more instances are
stored.
 We can save memory and avoid maintaining that many instances by
setting the Shared attribute of the OutputCache directive (or of
the PartialCachingAttribute class) to true. Then all pages that
contain the user control share the same cached instance of it.
 Post-cache substitution is just opposite; here the page as a whole
is cached but parts of this page are dynamic. Mean the page is
not recreated every time but some controls in it will get created
and add to the cached page output.
 We can implement post-cache substitution declaratively by
placing the substitution control on the page or programmatically
using the substitution API or implicitly by using AdRotator control
which by default supports post-cache substitution.
 When the Substitution control is placed on a page, we must
specify the name of a method whose signature matches the
HttpResponseSubstitutionCallback delegate: it takes an HttpContext
object as a parameter and returns a string. The returned string is
the content that is placed in the position of the Substitution
control.
 Programmatic substitution is done by creating a Substitution
control, adding it to the page and specifying its MethodName, for
example: Substitution s = new Substitution(); s.MethodName =
"GetTime";
 The AdRotator control supports post-cache substitution
implicitly.
 The only drawback of the Substitution control is that it accepts
only static methods of the containing page class as MethodName
values; it cannot call instance methods or methods of another
object.
 The WriteSubstitution method of the Response object lets us use
instance methods or methods on other objects instead.
For ex: Response.WriteSubstitution(new Aaa().GetDateTime);
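 A minimal sketch of both approaches (the method name is
illustrative):
   <asp:Substitution ID="Substitution1" runat="server" MethodName="GetTime" />

   // in the page class: must be static and match HttpResponseSubstitutionCallback
   public static string GetTime(HttpContext context)
   {
       // re-evaluated on every request, even when the rest of the page is cached
       return DateTime.Now.ToLongTimeString();
   }

   // programmatic equivalent:
   Response.WriteSubstitution(new HttpResponseSubstitutionCallback(GetTime));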
 Data caching: ASP.NET provides a powerful and easy way of caching
objects that require a lot of server resources to create, called
the "Cache". The Cache is a flexible, easy-to-use store for
objects in memory. Objects belonging to an application are stored
in an instance of the Cache that is private to the application and
tied to its lifetime, which means cached items live at most as
long as the application; if the application is restarted the cache
is recreated as well.
 Unlike sessions which are user specific, cache is application
specific; it is available to all users like application object.
Although application and cache object are almost same, the key
difference between them is additional features in cache object
like expiration policies and dependencies
 Objects are stored in cache as key value pairs, so it is easy to
maintain and retrieve them. Cache also ensures that unused
members in it are removed periodically, such that memory is not
blocked unnecessarily through a method called “Scavenging”.
Cache can also set priorities for some items which are valuable
through “CacheItemPriority”.
 The cache also provides ways to set expiration conditions, though
by default an item lives for the lifetime of the application. The
expiration condition can be absolute expiration or sliding
expiration. With absolute expiration, the time (of type DateTime)
at which the item should be removed from the cache is specified.
Sliding expiration specifies the idle time (of type TimeSpan) that
may elapse since the last access before the item expires.
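 For example (the "ReportData" key and the report object are
illustrative), the Cache.Insert overload takes both expiration
values:
   // absolute expiration: remove the item at a fixed time (10 minutes from now)
   Cache.Insert("ReportData", report, null,
                DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration);

   // sliding expiration: remove the item if it is not accessed for 5 minutes
   Cache.Insert("ReportData", report, null,
                Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(5));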
 Dependency means that an item can be removed from the cache when
a dependent entity changes. There are two types of dependencies:
file dependency and key dependency.
 File Dependency: provides an option to remove an item
automatically from the cache whenever a disk file changes. For
ex: application uses xml file to store error details which is used
to retrieve an error message for a given error number at
runtime. So instead of reading the file each time from the disk
when error occurs, we can load this in to asp.net cache at
application start up.
 So if the error xml file is changed while the application is
running, to add a new item or modify an existing one, the item in
the Cache object is removed. Thus, on any change to errors.xml,
the cached item automatically expires and is removed from the
cache.
 Ex: object errorData = LoadErrorData();  // hypothetical helper that reads errors.xml
CacheDependency fileDependency =
    new CacheDependency(Server.MapPath("errors.xml"));
Cache.Insert("ERROR_INFO", errorData, fileDependency);
 Key dependency is similar to file dependency; the only difference
is that instead of a file, the item depends on another item in the
Cache object and automatically expires or is removed when the item
it depends on changes or is removed. This option is useful when
multiple related items are added to the cache and those items
should be removed if the primary item changes. For example, if an
employee number, name, address and salary are added to the cache
and the employee number is updated or removed, all the related
employee information should be removed as well. In this case a
dependency on the employee number can be used for the other
employee items.
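 A small sketch of a key dependency (keys and variables are
illustrative):
   Cache["EMP_NO"] = empNo;   // primary item

   // dependent items expire automatically when "EMP_NO" changes or is removed
   CacheDependency keyDependency =
       new CacheDependency(null, new string[] { "EMP_NO" });
   Cache.Insert("EMP_NAME", empName, keyDependency);
   Cache.Insert("EMP_SALARY", empSalary, keyDependency);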
 Whenever an item in the cache is removed, a callback delegate of
type "CacheItemRemovedCallback" can be fired to notify us that the
item was removed.
 The cache has no information about the actual content of the
objects in it; it only holds references to the objects in memory.
 Once an item is removed from the cache, attempts to retrieve it
return null.
 We can prevent the browser from caching an aspx page by calling
the SetNoStore method. The Response object has a Cache property
which represents the cache policy of that page. Thus, to stop the
browser from caching a page, we call SetNoStore on the Cache
property of the Response object: Response.Cache.SetNoStore().

Difference between C++ dll and C# dll

 A C++ dll does not include metadata; a C# dll carries metadata in
the manifest of the assembly.
 A C++ dll contains native code, whereas a C# dll contains MSIL.
 A C++ dll is a single file; a C# dll (assembly) can also be
multifile.

State Management - Client – Side State Management

 Client – Side State Management Includes:
1. Cookies
2. Hidden Field
3. View State
4. Query String
 Server – Side State Management Includes:
1. Application Object
2. Session Object
3. Database
Cookies:

 Cookies are one way of maintaining state on the client side in a
web environment.
 A cookie is a simple text file containing a collection of
name/value pairs. Only string information can be stored in a
cookie, and it is encoded when written to the file.
 Cookies are passed between the browser and server along with
requests.
 When a user requests a site for the first time, the application
processes the request and responds to the client with the
requested page and a cookie.
 This cookie is stored on the user's machine on the hard disk.
When the user requests some other page in the same website, the
browser checks whether a cookie exists for this site; if it does,
the browser sends the cookie along with the request. The
application sees this cookie and recognizes that the user has
already visited the site, so no new cookie is created.
 Thus cookies are application specific or website specific, not
page specific. A cookie is created only once for a website, when
the user first requests it; it is not created again as the user
navigates between pages of that website.
 Cookies help website to keep the count of number of visitors to
the website.
 But cookies have some limitations: The cookies are given a space
of 4096 bytes to store values, but usually huge information or
sensitive information is not stored in cookies.
 Usually the number of cookies for site is limited to 20.
 Cookies stored in disc and are identified uniquely with their
names. If a cookie is created with already existing name then it
overrides the old.
 Expiration policies for cookies can also be set explicitly.
 Cookies are usually stored in the Temporary Internet Files
folder, or under Documents and Settings in the user account's
cookie folder.
 Disadvantages of cookies:
i. users can disable cookies in their browsers
ii. size restriction by the browser, around 4 KB to 8 KB
iii. structured data cannot be stored in cookies
iv. sensitive information should not be stored in cookies
v. since the cookie resides on the client machine, any web
browser can read it, so be careful about leaving sensitive
information in a cookie.
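 A minimal sketch of writing and reading a cookie in ASP.NET (the
cookie name is illustrative):
   // writing a cookie in the response
   HttpCookie visitCookie = new HttpCookie("LastVisit", DateTime.Now.ToString());
   visitCookie.Expires = DateTime.Now.AddDays(7);
   Response.Cookies.Add(visitCookie);

   // reading it back on a later request
   HttpCookie incoming = Request.Cookies["LastVisit"];
   if (incoming != null)
   {
       string lastVisit = incoming.Value;
   }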

HiddenField:

 Hidden field is a control through which state can be managed
between round trips.
 Though Hidden field is a form control, it is never visible on form,
even it is not visible on form, we can access its properties as with
other controls.
 This control does not need a closing tag.
 Page specific information can be stored in page through Hidden
field
 The hidden field is also posted to the server when the page is
posted, and the server can access it through the form's
collection.
 In order to use the hidden field, the page should be posted to
the server through the HTTP POST method.
 Though the hidden field is not visible on the page, the value
stored in it can be seen by any user through the browser's View
Source option.

Application State:

 ASP.NET can share information across an application through an
instance of the HttpApplicationState class; this instance is
accessed through the Application property of the HttpContext
class.
 HttpApplicationState class instance is created for the first time
when a client requests any URL in the application.
 HttpApplicationState is a sealed class inherited from
NameObjectCollectionBase Class
 HttpContext class encapsulates all the information of an
individual Http Request. HttpContext class is a sealed class which
inherits IServiceProvider.
 All the class which inherits IHttpHandler, IHttpModules are
provided with a reference to the instance of HttpContext class.
 Storing variables in Application object mean declaring variables
as global because these variables in Application can be accessed
across the application. An application object is a dictionary –
based object.
 The variables stored in application are stored in memory, and
memory is not released until the element is removed from
application object.
 In a multithreaded environment, multiple threads can access the
application object so there should be mechanism to maintain
concurrent synchronized access to it. By default an application
object is free-threaded where inbuilt synchronization is provided.
All objects which are targeted to CLR are by default free-
threaded.
 If we want to write or update this application object in a
multithreaded environment, then we should have explicit locks to
remove inconsistency in data. Locks on global resources will also
be global. Thus the operating system will make the working
thread to wait until the lock is available.
 HttpApplicationState class consists of two collections Contents
and StaticObjects.
 Contents expose all the items that are added to Application
object. The objects in Application could be accessed through
Contents property of Application.
 StaticObjects is a collection of static objects in page, static
objects are those with an object tag and scope as Application
written in Global.asax file
 <object runat = server scope = application id = obj1 progid =
MSWC.MYINFO>
 The HttpApplicationState class has a method called Lock() which
provides synchronized access to its items, and UnLock() is called
to release the lock. Even if UnLock() is not called explicitly,
the .NET framework removes the lock when the request completes,
when the request times out, or when an unhandled exception occurs.
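 A minimal sketch of a synchronized update (the "HitCount" key is
illustrative):
   Application.Lock();
   int hits = (Application["HitCount"] == null) ? 0 : (int)Application["HitCount"];
   Application["HitCount"] = hits + 1;
   Application.UnLock();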

Web.Config

 In classic ASP applications the web site settings or
configuration information is stored in the IIS metabase.
 Because it is stored in the IIS metabase, it is difficult for
remote web developers to change the website's configuration.
 In ASP.NET this configuration has been moved into a separate xml
file called web.config, which resides in the root directory of the
application.
 Web.config is a simple xml file which holds the configuration
information of the website.
 The root tag in this file is <configuration>, which contains
<system.web>, which in turn holds the settings of the site.
 Any application-specific settings can be kept under the
<appSettings> tag.
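 For example (the key and value are illustrative):
   <configuration>
     <appSettings>
       <add key="SmtpServer" value="mail.example.com" />
     </appSettings>
     <system.web>
     </system.web>
   </configuration>
 The value can then be read in code as
ConfigurationManager.AppSettings["SmtpServer"] in .NET 2.0 (or
ConfigurationSettings.AppSettings in earlier versions).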

Datagrid, Datalist, Repeater controls


Datagrid: Header Template, Footer Template, Item Template,
EditItem Template
DataList: Header and Footer Templates, Item Template,
AlternatingItem Template, SelectedItem Template, EditItem
Template, Separator Template
Repeater: Header and Footer Templates, Item Template,
AlternatingItem Template, Separator Template
The datagrid does not have an alternating item template, selected item
template or separator template; thus, display-wise, what we can control
in a datagrid is mainly the header, footer and items, whereas in a
datalist we can control the display of each row through the selected
item and alternating item templates. In a repeater we cannot edit a row,
it is for display only, so the edit item and selected item templates are
not supported.

PublisherPolicy

 When a vendor releases a new version of an assembly, the vendor
can include a publisher policy so that applications or clients
built against the old version are redirected to the new version;
whether it applies is controlled by the publisherPolicy element in
the config file.
 To specify whether publisher policy applies for a particular
assembly, put the <publisherPolicy> element under the
<dependentAssembly> element.
 To specify whether publisher policy applies for all assemblies
that the application uses, put the <publisherPolicy> element
directly under the <assemblyBinding> element.
 The default value for the apply attribute is "yes". If it is set
to "no" at the application level, then any assembly-specific
setting is ignored even if it says yes: <publisherPolicy
apply="yes">.
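 For example, to ignore publisher policies for every assembly the
application uses:
   <configuration>
     <runtime>
       <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
         <!-- ignore publisher policies at the application level -->
         <publisherPolicy apply="no" />
       </assemblyBinding>
     </runtime>
   </configuration>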

Different Timers

 System.Windows.Forms.Timer
 System.Timers.Timer
 System.Threading.Timer

Authentication
 The process of identifying and validating the user credentials
such as user name and password is called Authentication.
 After the process of Authentication, the Authorization process
comes in to picture which is identifying the privileges for these
users. Authorization is a process of allowing an authenticated
user access to resources.
 ASP.NET provides authentication through three providers:
Windows Authentication module
Forms Authentication module
Passport Authentication module
In addition, custom authentication can be implemented.
 In web.config the mode is set to one of these; the default is
Windows. We can also set it to "None"; when the mode is set to
None we can perform custom authentication through application
code.
 Asp.net has two authentication layers, since asp.net is not a
stand-alone it resides on the top of IIS; and all request moves
through this IIS to applications, the IIS can deny any request
even before the application knows about the request.
 Windows Authentication Module provider relies on IIS in
authenticating the user credentials through any of it’s or
combination of its methods such as Anonymous, Basic, Digest,
and windows integrated methods.
 An important use of the Windows authentication provider is to
implement an impersonation scheme.
 Windows authentication removes authentication and authorization
issues from the application code: the application relies on IIS to
authenticate users and on NTFS permissions to authorize access to
protected resources. Impersonation means the application is given
a token for either a valid or an invalid user; for an invalid
(anonymous) user, if anonymous access is selected, this is the
IUSR_machine account.
 If the user credentials are validated, an authenticated token is
passed to the application along with the request; if they are not,
an unauthenticated token is passed instead. In either case, if
anonymous access is allowed, the application impersonates the
token it has received and uses the IUSR_Machine account for
anonymous requests. If anonymous access is not allowed, requests
are executed under the actual user credentials.
 Anonymous users are those having an unauthenticated token.
When IIS receives a request from these users, then the request is
served under the credential of <IUSR_Machine> who has a very
limited or restricted access.
 Basic authentication is one of the authentication methods in IIS.
The user credentials collected at login time are passed to the IIS
server when the user requests the application. The major drawback
of this method is that the credentials are passed to the server as
plain text, which is not secure.
 Digest authentication works in the same manner, but the user
credentials are hashed before being sent to the IIS server when
the user requests a page.
 Integrated Windows authentication works similarly: a token is
generated for the user credentials and exchanged with the IIS
server in encrypted form. It works only on an intranet, whereas
Basic and Digest work over the public Internet.

Form Authentication:

 The Forms Authentication module is a provider for form-based
authentication. Forms authentication is a system where a user with
unauthenticated credentials is redirected to a login form through
client-side redirection.
 Client-side redirection means that when a user requests any page
in the application without an authentication cookie, the request
is redirected to the login page to collect user credentials. Any
request that does not carry the authentication cookie is
redirected to the login page. Once the user is authenticated, he
is sent back to the originally requested page through the
ReturnUrl value, which is stored as a query string in the URL. For
example, the client requests default.aspx; the server checks for
the authentication cookie and does not find it, so the request is
redirected to the login page with the original request stored in
the ReturnUrl query string, e.g.
https://samples.microsoft.com/logon.aspx?RETURNURL=/default.aspx.
 Form authentication is useful in scenario where an application
wants to collect its own credentials through an html form or
page.
 The user submits a page along with his credentials; these
credentials are then validated, then application serves the
request and provides the user with the requested page and also
a cookie that contains his credential information.
 The subsequent requests are issued with this cookie in the
request headers. In the form based authentication much of issue
is with identification than with authentication, hence the
credential information should be stored in a durable cookie. Thus
form authentication module exposes cookie-based authentication
services to its applications.
 The Forms authentication module is configured through the <forms>
element, which must have its loginUrl attribute set.
 We can programmatically read the authenticated user name as
Request.ServerVariables["AUTH_USER"] or User.Identity.Name.
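 A minimal sketch (page names and the ValidateUser helper are
illustrative): forms authentication is enabled in web.config, and
the login page sends the user back to the original request after
validation:
   <authentication mode="Forms">
     <forms loginUrl="Login.aspx" timeout="30" />
   </authentication>
   <authorization>
     <deny users="?" />
   </authorization>

   // Login.aspx code-behind, after the credentials check succeeds
   if (ValidateUser(txtUserName.Text, txtPassword.Text))   // ValidateUser is a hypothetical helper
   {
       FormsAuthentication.RedirectFromLoginPage(txtUserName.Text, false);
   }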

Passport Authentication:

 The Passport Authentication module enables us to use the
Microsoft Passport service. If the user logs in with Passport and
the authentication mode for the application is set to Passport,
then all authentication duties are offloaded to the Passport
server.
 When the user requests a page the server checks for an
authenticated cookie in the request, if it finds, the request is
served else the request is redirected to login form in passport
server where user credentials are validated by this server and
issues a cookie which includes encrypted credential information,
and redirects to the page originally requested for.
 Thus passport authentication module exposes its service through
an encrypted cookie.
 To use passport authentication we have to download passport
authentication software development kit.

Html server controls and Web server controls

 Both are server controls and are processed at the server. In HTML
server controls the tags are plain HTML tags (with
runat="server"), whereas web server controls use the asp: tag
prefix.
 The major difference between HTML server controls and web server
controls is that a web server control detects the browser and
renders the correct HTML for it, whereas an HTML server control
cannot detect the browser.

Steps to restart appdomain

 changing Global.asax file
 changing web.config file
 after certain amount of time as set in machine.config or
web.config file
 after certain number of requests as set in machine.config or
web.config file

Security Policy level in .net


 Security policy levels in .NET include:
1. Enterprise
2. Machine
3. User
4. Application Domain

Symmetric and Asymmetric Encryption

 There are two types of encryption: symmetric and asymmetric
encryption.
 Symmetric encryption is one where the same key is used for both
encryption and decryption.
 In asymmetric encryption one key is used to encrypt and the other
is used to decrypt; these two keys always come as a pair. There
are two usages: public key encryption and private key encryption
(signing). In public key encryption the data is encrypted with the
public key and decrypted with the private key; in private key
encryption it is encrypted with the private key and decrypted with
the public key.

DataReader, Dataset which one to use

 A DataReader is forward-only and read-only, with no updates. A
DataSet is heavier, but it is an in-memory object which allows
updates and can be cached.
 In the case of a DataReader the connection must be closed
explicitly, otherwise it remains open.

Typed Dataset

 A typed DataSet is a class derived from the DataSet class, and as
such it includes all of its methods, events and properties; along
with these it also exposes typed methods, events and properties.
 With a typed DataSet we can access tables and columns by name
instead of through collection-based methods, which improves the
readability of the code.
 With a typed DataSet, type-mismatch errors are caught at compile
time.
 For a given xml schema, the typed DataSet can be generated with
the xsd.exe tool.
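 For example (file and namespace names are illustrative):
   xsd Employee.xsd /dataset /language:CS /namespace:MyCompany.Data
 This generates a C# file containing a typed DataSet class built
from the schema in Employee.xsd.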

Public and Private Queue


 Both public and private queues are created on our computer, but a
public queue requires domain or Enterprise Administrator access,
whereas a private queue has only local access.
 Public and private queues can be created either through Server
Explorer or with the MessageQueue.Create method in code.

Read Text File

 A text file can be read through TextReader, StreamReader or
StringReader.
 TextReader is the abstract base class of StreamReader and
StringReader, which read streams and strings respectively.
 To read a text file through a TextReader, create an instance of a
concrete reader and pass the file to read to its constructor.
 As TextReader is an abstract class, the instance is created from
a derived class such as StreamReader (or StringReader for strings)
but typed as TextReader, so that the methods of TextReader are
used to read the file. This can also be called upcasting, since
the cast is to the base level.
 Ex: TextReader treader = new StreamReader("c:\\txtfile.txt");
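 For example, reading the file line by line (the path is taken
from the example above):
   using (TextReader reader = new StreamReader(@"c:\txtfile.txt"))
   {
       string line;
       while ((line = reader.ReadLine()) != null)
       {
           Console.WriteLine(line);
       }
   }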

Automatic Memory Management – Garbage Collection Algorithm

 Memory management in .NET differs from COM: in COM, memory is
managed by reference counting, whereas in .NET memory is managed
by reference tracing. Thus automatic memory management in .NET is
based on reference tracing.
 The memory management issues in COM are: if the developer forgets
to release an object once done with it, there is a memory leak,
and if the developer decrements the reference count when he is not
supposed to, the memory is reclaimed before its time.
 In .net the developer need not worry about memory leaks, the
memory is handled by CLR, Garbage collector works in
background.
 Garbage collector starts when memory has reached a threshold,
it starts from the root objects identified by JIT compiler, and
traverses the object chain, tracing the references and adding
them to graph. When there’s an attempt to add the object that’s
already present on the graph, the garbage collector stops. Thus
Garbage Collector recursively adds all objects that are linked to
the application root objects. Any other objects that are not part
of graph are unused and ready for garbage.
 All other objects that are not added to the graph are marked and
collected. After collection the memory is compacted, all objects
are moved such that they occupy a contiguous memory block.
 However, there is one loophole with automatic memory management:
the garbage collection algorithm is complex and runs periodically.
Memory is not released immediately when an object goes out of
scope; hence this is called "nondeterministic finalization". When
the memory threshold is reached, the garbage collector runs to
reclaim the unused memory, so the programmer does not have precise
control over when objects are destroyed.
 The CLR divides the heap in to three generations Generation 0,
1, 2. All newly created objects are maintained in Generation 0.
When the references to these objects are held from long time
and survive the collection they are moved to Generation 1, 2.
These classifications increase the performance because
collection can be done on specific generations. Usually when
objects are created the memory is allocated from Generation 0.
The garbage collector performs collection on Generation 0 when
it is full, this happens when a request is made for the creation of
object and there is insufficient memory to allocate for the object.
If the memory reclaimed from generation 0 is sufficient to create
the object, the garbage collector does not perform collection on
other generations.
 If memory allocation is not possible even after collecting up to
generation 2, an OutOfMemoryException is thrown.
 The CLR cannot release memory held by unmanaged resources such as
database connections, window handles, file handles etc. The
developer should provide a mechanism to clean these up explicitly.
This cleanup can be implemented in the Finalize method, which in
C# is written as a destructor. The calling of the Finalize method
is still under the control of the garbage collector.
 Usually developers need a deterministic way to clean up these
unmanaged resources. For ex: we have opened a file handler
once read the contents in file we want to close the file handler.
For this explicit cleanup .net has provided a dispose design
pattern.
 Objects which need explicit cleanup implement the IDisposable
interface. IDisposable has a Dispose() method which, unlike the
Finalize method, is under the control of the developer.
 The Dispose method should call GC.SuppressFinalize(), which
notifies the garbage collector that finalization is not needed for
this object.
 The recommended practice is to implement both Finalize() and
Dispose(): Finalize acts as a backup if Dispose is never called,
releasing the unmanaged resources, while the garbage collector
automatically takes care of the managed ones.
 For example: class A uses file handles, so it implements the
IDisposable interface because a file handle is an unmanaged
resource. The class implements Dispose(), which calls
GC.SuppressFinalize() to tell the garbage collector that the
finalization code does not need to run. The class also implements
a destructor containing the finalization code, to clean up the
unmanaged resources if Dispose is never called.
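 A minimal sketch of this dispose pattern (the class and its
handle field are illustrative):
   class FileWrapper : IDisposable
   {
       private IntPtr handle;            // stands in for an unmanaged resource
       private bool disposed = false;

       public void Dispose()
       {
           Dispose(true);
           GC.SuppressFinalize(this);    // finalizer is no longer needed
       }

       protected virtual void Dispose(bool disposing)
       {
           if (disposed) return;
           if (disposing)
           {
               // release managed resources here
           }
           // release unmanaged resources (the handle) here
           disposed = true;
       }

       ~FileWrapper()                    // finalizer acts as a backup if Dispose is never called
       {
           Dispose(false);
       }
   }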
 The .net framework provides another interesting feature which
can be used in implementing various caches. This is the concept
of weak references implemented in .net as
System.WeakReference class. The ASP.Net cache uses weak
references. If the memory usage becomes too high then cache is
cleaned up.
 The .NET framework exposes the System.GC class to give the
developer some control over the garbage collector. Garbage
collection can be forced by invoking GC.Collect(). The general
advice is not to invoke garbage collection explicitly; in certain
situations a developer can justify calling GC.Collect() because it
gives a performance boost, but great care should be taken, because
GC.Collect suspends all other active threads while the collection
is in progress.
 GC.Collect comes in two flavors: Collect without any parameters
performs collection on all generations, and Collect with a
parameter performs collection up to the specified generation.
 GC.GetGeneration returns the generation number of an object
passed as parameter.

Dispose Method

 When Dispose() is called on a type it releases all of its
resources; the Dispose method must also ensure that the Dispose()
call is propagated up the containment hierarchy.
 When Dispose is called on an object, the implementation must also
call Dispose on its base class if the base class implements the
IDisposable interface.
 If an object's Dispose method is called more than once, the
object must ignore the calls after the first one; it must not
throw an exception when Dispose is called multiple times. However,
other members of the object may throw an ObjectDisposedException
if they are used after the object has been disposed.
 A Dispose() method should call GC.SuppressFinalize() for the
object it is disposing. If the object is currently on the
finalization queue, this prevents its Finalize() method from being
called.
 Dispose(true) means the user has called the Dispose method from
code, and both managed and unmanaged resources are released.
Dispose(false) means the runtime (the destructor/finalizer) has
called it, and only unmanaged resources are released.
 Dispose is called directly by invoking Dispose(), or indirectly
through a using statement, which calls Dispose() implicitly.
 Dispose method gives flexibility to the developer for
deterministic collection of objects.

Checked and Unchecked Context

 C# statements execute in either a checked or an unchecked
context. In a checked context an arithmetic overflow raises an
exception, whereas in an unchecked context the overflow is ignored
and the result is truncated.
 If neither checked nor unchecked is specified, the default
context depends on external factors such as compiler options. By
default, arithmetic operations on non-constant expressions are
unchecked.
 Both the checked and unchecked options can be used at block level
or at expression level: at block level they are declared as a
checked or unchecked block, and at expression level they are
written as checked(expression) or unchecked(expression).
 The checked option can also be set through the project settings
in the Visual Studio editor.
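 For example:
   int max = int.MaxValue;

   int a = unchecked(max + 1);      // wraps around silently, result is int.MinValue
   try
   {
       int b = checked(max + 1);    // overflow is detected at runtime
   }
   catch (OverflowException)
   {
       Console.WriteLine("overflow detected");
   }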

Abstract vs Sealed class

 An abstract class must be inherited and cannot be instantiated.
 A sealed class cannot be inherited but can be instantiated.

Constructors

Constructors are generally used to initialize member variables: 0 for
numeric values, the empty string for strings and false for Booleans.
The compiler by default provides a parameterless constructor.

Static Constructors
 A static constructor is used to initialize a class. It is called
only once in the class's lifetime, before the first instance is
created or any static members are referenced.
 A static constructor does not have access modifiers or
parameters, and it cannot be called explicitly.
 Static constructors are not inherited.
 One cannot predict exactly when a static constructor will run, or
the order in which the static constructors of different classes
get called.

Private Constructors

 Private constructors are instance constructors.
 If a class has only private constructors and no other
constructors, then that class cannot be instantiated by the
outside world.
 This class can be instantiated by only some static functions of
this class.
 The declaration of any constructor explicitly stops the compiler
from providing the default constructor.
 A class that has only private constructors also cannot be used as
a base class, because when we try to create an instance of a
derived class, the derived class constructor runs and tries to
call the base class constructor, which is inaccessible due to its
modifier.
 A class with only private constructors is useful when it has only
static members and is not meant to be instantiated.
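 A minimal sketch of such a class (names are illustrative):
   class Helper
   {
       private Helper() { }                   // prevents instantiation from outside

       private static readonly Helper instance = new Helper();

       public static Helper GetInstance()     // a static member of the class can still create it
       {
           return instance;
       }
   }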

Destructors in c#

 Destructors are generally used to recover heap space, which
implies that destructors can only be used with reference types,
not value types.
 Destructor does not have modifiers or have parameters.
 Destructors in c# are normally used to clean or close
unmanaged resources like files or network connections etc.
 The major difference between a C++ destructor and a C# destructor
is that in C++ one has control over when the destructor runs, but
in C# there is no such control.
 A destructor is not deterministic in C# because it is invoked
when the object is garbage-collected, and the GC is run by the
CLR.
 Destructors are invoked automatically; they cannot be invoked
explicitly.
 Destructors cannot be overloaded, so a class can at most have
only one destructor.
 Destructors are not inherited. So a class can have only one
destructor.
 Destructors cannot be used with structs; they can be only used
with classes.
 When the destructor for a particular instance is called, then the
destructors in its inheritance chain gets called in order from
most-derived to least- derived.

Can I write Asp.net using managed C++

 Yes, but first the C++ language must be configured for ASP.NET
and a compiler for it must be available; then it is possible to
write in managed C++, otherwise not. By default C# and VB.NET are
supported.

DataReader

 A DataReader is a read-only, forward-only object.
 The result set is stored in a network buffer on the client side
until Read() is called on it.
 A DataReader is best for performance because we can start reading
as soon as data is available; we need not wait for the whole
result set to be fetched, and only one row at a time is held in
memory, reducing the overhead on the system.
 We can improve performance while reading a DataReader by using
the typed accessor methods, which reduce the amount of type
conversion performed when retrieving column values. We can use the
typed methods when we know the underlying data type of the column,
e.g. dr.GetString(0) where column 0 is a varchar, or dr.GetInt32(1)
where column 1 is an int.
 We can access the columns in datareader either through ordinal
(index) or through column name.
 Once the job is done, the DataReader needs to be closed
explicitly, because the DataReader exclusively locks the
connection object; the connection cannot be used for other
commands until the DataReader is closed. Hence we call Close() on
the DataReader so that the lock on the connection is released.
 If multiple result sets are fetched into a DataReader, we can
move to the next result set through the NextResult() method of the
DataReader class. We can read the data of two tables into one
DataReader as:
SqlCommand cmd = new SqlCommand(
    "select * from dept; select * from emp", conn);
SqlDataReader dr = cmd.ExecuteReader();
 Now the data from dept and emp is fetched into the DataReader,
but it is not one continuous set of rows; it is two separate
result sets. We can read them as:
do
{
    while (dr.Read())
    {
        Console.WriteLine(dr.GetString(0));
    }
} while (dr.NextResult());

 While the DataReader is open we can read the schema information
of the current result set through its GetSchemaTable() method,
which returns a DataTable loaded with schema information: for each
column in the result set, a row is created in this table.

DataAdapter

 A DataAdapter serves as a bridge between a DataSet and a data
source for retrieving and saving data.
 The DataAdapter provides this bridge through Fill, which copies
data from the database into the DataSet, and Update, which pushes
the changes in the DataSet back to the data source.
 The Fill method does not need the connection to be opened
beforehand: Fill opens the connection automatically and closes it
once its job is done. If the connection is opened explicitly, it
remains open until it is explicitly closed.
 The Fill operation adds rows to the destination table in the
DataSet, creating the DataTable objects if they do not exist. When
creating DataTable objects the Fill operation normally creates
only column-name metadata; however, if the MissingSchemaAction
property is set to AddWithKey, the appropriate primary key and
constraints are created as well.
 Thus, when a DataSet is populated with Fill, only column-name
metadata is added to the destination DataTable; other details such
as the primary key are not added. We can add the constraint
information by setting the MissingSchemaAction property of the
DataAdapter to MissingSchemaAction.AddWithKey, which then adds the
key schema of the data source table to the DataTable in the
DataSet.
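 A small sketch of this (the connection string and table name are
illustrative):
   SqlConnection conn = new SqlConnection(connectionString);   // connectionString is assumed
   SqlDataAdapter adapter = new SqlDataAdapter("select * from emp", conn);

   // bring primary key and constraint information into the DataTable as well
   adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;

   DataSet ds = new DataSet();
   adapter.Fill(ds, "Emp");     // Fill opens and closes the connection itself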
 The primary and unique constraints are added to the constraint
collection, but other constraints are not added to the
constriantcollection.
 Because primary keys can be made up of two or more columns,
hence the primary key property in datatable has an array of
datacolumn objects.
 When a single column makes up a primary key, the AllowDBNull
is automatically set to false and Unique key for that column is set
to true. But in case of multi – columns that make up a primary
key only AllowDBNull is set to false.
 If the select command of dataadapter reads from outer join then
primary key is not set in datatable object of dataset. We can
explicitly set the primary key for this table object.
 We can use Fill method multiple times on datatable object, if this
table has primary key then incoming rows will be merged with
the rows already existing, else mean if primary key is not there
then incoming rows will be appended.
 If the data table is generated from a single database table, we
can take advantage of command builder to automatically
generate insert, delete and update commands of Data adapter.
 As a minimum requirement we must set the select command
property for automatic command generation to work properly.
The table schema retrieved by the select command will
determine the syntax for automatic generated insert, delete, and
update commands.
 The command builder must execute the select command in order to
get the necessary metadata, so an extra round trip to the server
is required, which is a performance cost: the command builder
executes the DataAdapter's select command just to obtain the table
schema.
 The select command must also return at least one primary key or
unique column. If none is present, an exception is thrown and the
commands are not generated automatically.
 When associated with a DataAdapter, the command builder
automatically generates the Insert, Delete and Update command
properties of the adapter if they are null references; if a
command already exists, the existing command is used.
 Database views created from multiple tables are not treated as a
single database table, so the command builder cannot be used to
auto-generate commands in that case; the commands need to be set
explicitly.
 We might want to map output parameters back to the updated
row of a dataset; one common task is to retrieve the
automatically generated identity field value from the datasource.
The command builder will not map output parameters to the
dataset.
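 A minimal sketch of the command builder at work (table and column
names are illustrative):
   SqlDataAdapter adapter = new SqlDataAdapter("select * from emp", conn);
   SqlCommandBuilder builder = new SqlCommandBuilder(adapter);  // generates Insert/Update/Delete

   DataSet ds = new DataSet();
   adapter.Fill(ds, "Emp");

   ds.Tables["Emp"].Rows[0]["ename"] = "James";   // illustrative edit
   adapter.Update(ds, "Emp");                     // uses the auto-generated UpdateCommand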
 Rules for Automatically Generated Commands:
 Insert Command: inserts a row in to the database table for all the
rows where the rowstate is added. Inserts values for all columns
except for columns like identities, expressions or timestamps.
 Update Command: updates a row in to the database for all the
rows where the rowstate is modified. Update all columns except
identities, expressions or timestamps. Updates where the data
field value matches with primary key values and other data field
values map with original values of the row.
 Delete Command: deletes a row from datasource where the row
state is deleted. Deletes a row where primary key matches with
the data field value and other field values map with original
values of the row.
 The logic behind the automatically generated update and delete
commands is "optimistic concurrency": records are not locked
during editing and can be modified by other users or processes at
any time. Suppose user1 has read a row through the select command
with the values "a, b, c", but the row is then modified by user2
to "a, d, c". User1 modifies his copy of the row to "x, y, z", so
the modified values are "x, y, z" and the original values are
"a, b, c". When user1 issues the update, the command generated by
the command builder has a where clause that checks whether the
original values still match the current data field values and the
row has not been deleted. If they match, the row is updated; if
they do not match, no row is affected and a DBConcurrencyException
is thrown.
 If user wanted to update the row regardless of original values
then he can use Update command of data adapter itself than
relying auto generated update command of command builder.
 Limitations for Auto Generated Commands:
 Unrelated tables only; the commands will be generated
automatically for update, delete, and insert commands only if the
datasource table is standalone with out any relations to other
tables.
 Table and column names; automatic generation of commands
through command builder will fail, if the table or column names
contain any special characters such as spaces, periods etc.
 An exception is thrown if the command text of the SelectCommand
property of the DataAdapter is modified after the Insert, Delete
and Update commands have been generated automatically, because the
schema those commands refer to then differs from the schema the
new select command returns. As already noted, the command builder
generates the commands based solely on the schema provided by the
SelectCommand of the DataAdapter. We can avoid this exception by
explicitly calling RefreshSchema on the command builder, which
updates its schema to that of the current SelectCommand.
 We can obtain the reference to automatically generated
commands of command builder by calling GetInsertCommand,
GetUpdateCommand, and GetDeleteCommand of command
builder object and check the command text of the command.

DataGrid Control

 Datagrid supports selection, editing, deleting, paging and
sorting.
 There are 5 different column types in datagrid; different column
types provide different behavior.
 Bound Column: this is the default column type in datagrid, it
displays the values that are bound to the datasource. Each
column bounds to a field in database. The bound column
participates in view state; we can save viewstate and also track
this viewstate. Not all data types can be bound to the datagrid
control. If the field contains unsupported datatypes, a column is
not created for that field. If the data source contains only one
column and that column contains unsupported datatype then an
exception is thrown.
 Button Column: this allows us to create a column of Custom
Button controls to datagrid. It displays a command button for
each item in datagrid. Specify the caption for the button by
setting its Text property. When we set the caption by text all
buttons in this column will have the same text. If we bind this
column to a field in the datasource we can have different
captions. The values in the specified field are used as caption for
buttons by specifying the “DataTextField” property. Clicking on
these command buttons raises an event called “ItemCommand”.
Unlike regular button controls, validation is not performed when
button in button column is clicked. We can add button control
such as button or linkbutton to <Template Column>.
 Edit Command Column: displays a column that contains
editing commands for each item. This is a special column in
datagrid which contains edit, update and cancel buttons for each
row
 If no row is being edited, an Edit command is shown in the
EditCommandColumn for each row; when an Edit button is clicked, an
EditCommand event is raised and the Edit button is replaced with
Update and Cancel buttons. The row being edited is indicated by
the EditItemIndex property.
 We must provide values for the CancelText, EditText and
UpdateText properties, otherwise the buttons will not be shown in
the EditCommandColumn.
 The buttons in this column can be set as hyperlinks or push
buttons by setting the ButtonType property.
 Hyperlink column: displays the content of the column in each
item as hyperlink. This column can be bound to data source or
static text.
 We can set the text and navigate URL property for the hyperlink,
but for all controls in the column the same text will be assigned.
If we want to have different text and navigate URL then we
should set DataTextField and DataNavigateURLField properties.
 But when we want to have querystring, need to be passed with
the requested URL then we need to customize the navigateURL
with DataNavigateUrlFormatString which accepts query string
etc.
 Ex: DataNavigateUrlFormatString = “default.aspx?id={0}”
where this value for {0} comes from the datasource as
DataNavigateURL is bind to database field.
 Template column: displays each item according to a specified
template. This allows us to provide custom controls in datagrid.
 By default “Autogenerate columns” is set true, so bound columns
are added for each field in the datasource. The columns appear
in the order of fields in data source. Each field is then rendered
as a column in datagrid.
 We can programmatically control the columns in datagrid by
setting “Autogenerate columns “ to false, and then adding
columns between opening and closing tag of <Columns>.
 The order the columns appear in datagrid are controlled by the
order the columns are listed in “Columns” collection. We can also
programmatically change the order of columns by manipulating
the columns collection.
 Explicitly declared columns can also be displayed in conjunction
with auto generated columns. When using both, explicitly
declared columns will be rendered first, followed by auto
generate columns. But these automatically generated columns
are not added to the columns collection.
 The appearance of datagrid can be customized by setting the
style properties for different parts of the control. Different style
properties are:
1. Alternating Item style
2. Edit item style
3. Footer style
4. Header style
5. Item style
6. Pager style
7. Selected Item style
 We can also show or hide different parts of the control as
“ShowFooter”, “ShowHeader”.
 We can control the appearance of datagrid programmatically by
adding the attributes or providing code in OnItemCreated or
OnItemDataBound events.
 To add an attribute to a <td>, first get the TableCell which
represents the cell in the datagrid: use the Item property of the
DataGridItemEventArgs to reach the desired table cell and then use
its Attributes.Add method to add an attribute to the cell.
 To add an attribute to a <tr>, first get the DataGridItem which
represents the row in the datagrid: use the Item property of the
DataGridItemEventArgs to reach the desired row and then use its
Attributes.Add method to add an attribute to the row.
 Item created event is raised when an item is created; the event
is raised both in round trips and at time when data is bound to
control.
 Item data bound event is raised after an item is data bound to
the data grid control.

Access Specifiers

 The default access specifier for a class is internal.
 The default access specifier for class members (methods) is
private.
 Null cannot be assigned to value types.

Pagination in Datagrid

 Datagrid supports three types of paging:
1. Default paging with default navigation buttons
2. Default paging with custom navigation buttons
3. Custom paging
 To enable default paging, set AllowPaging to true, set the page size, and choose the style of the paging controls.
 We can set pagination properties at design time using the property builder; programmatically we can control pagination (for example in Page_Load) through DataGrid.PagerStyle, which exposes properties such as Mode and PageButtonCount.
 Page navigation is handled in PageIndexChanged event where
e.NewPageIndex is assigned to Datagrid1.CurrentPageIndex.
 One of the options in the pager tab of the property builder is
"ShowNavigationButtons". If this option is not selected then no
navigation buttons are displayed. In this case we can provide our
own custom navigation controls and manipulate the
CurrentPageIndex property of the datagrid. The datagrid will still
take care of breaking the datasource in to appropriate pages and
displaying the selected page.
 Add server controls to use as custom controls to navigate the
pages, in the event handlers of this controls set the
currentpageindex of datagrid to navigate.
 We can also do this more neatly with command buttons, setting their CommandName to values like next, prev, first and last, and handling all of them in a single event handler that adjusts the CurrentPageIndex of the datagrid accordingly.
 Custom Paging: by using the “AllowCustomPaging” of datagrid
we can have complete control over which records are displayed.
Custom paging improves the performance by reducing the
amount of data moving around the system, since we can retrieve
one page of data at a time from datasource.
 Normally a datagrid control is loaded with all rows in the
datasource every time the datagrid control moves from one page
to the other or navigates between the pages. This can consume
a lot of resources when datasource is very large.
 Custom paging allows us to load that segment of data needed to
display a single page. To enable custom paging set
“AllowPaging” and “AllowCustomPaging” to true.
 In custom paging, set the VirtualItemCount of the datagrid to the total number of records in the data source table. VirtualItemCount together with the PageSize of the datagrid determines how many pages are required.
 In the PageIndexChanged event we get the new page index, so the start index of the rows to display is "e.NewPageIndex * PageSize" and the end index is "startIndex + PageSize". We therefore fetch only the records for that one page, which saves server resources when the data source has many rows (see the sketch after this list).
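 A minimal custom-paging sketch along the lines above (DataGrid1 and the data-access helpers GetTotalRecordCount and GetCustomersPage are assumptions):

protected void DataGrid1_PageIndexChanged(object source, DataGridPageChangedEventArgs e)
{
    DataGrid1.CurrentPageIndex = e.NewPageIndex;
    int startIndex = e.NewPageIndex * DataGrid1.PageSize;

    DataGrid1.VirtualItemCount = GetTotalRecordCount();                       // total rows in the table (assumed helper)
    DataGrid1.DataSource = GetCustomersPage(startIndex, DataGrid1.PageSize);  // fetch just one page (assumed helper)
    DataGrid1.DataBind();
}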

Asp.net controls that have CommandName property


 There are three Asp.net controls that support CommandName
property they are:
 LinkButton, Button, ImageButton

Why can we store multiple objects in Arraylist and not in Array


 In an ArrayList we can store objects of different types because the Insert and Add methods take object as their argument. Thus we can store multiple data types in an ArrayList, including custom or user-defined types.
 When we extract an item from the ArrayList we need to cast it back to its original type in order to use it (a short example follows).
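 A short example of both points (a sketch):

using System;
using System.Collections;

class ArrayListDemo
{
    static void Main()
    {
        ArrayList list = new ArrayList();
        list.Add(10);        // int (boxed)
        list.Add("hello");   // string

        // Items come back as object; cast to the original type to use them.
        int number = (int)list[0];
        string text = (string)list[1];
        Console.WriteLine("{0}, {1}", number, text);
    }
}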

DataList Control

 Repeater and DataList are examples of templated data-bound controls.
 DataList control is used to display rows of data on web page.
 At the very minimum, itemtemplate needs to be defined to
display the items in datalist control
 Header, Footer and Separator templates cannot be data-bound through DataBinder.
 Item layout: the layout (RepeatLayout) can be Flow or Table. With Flow the HTML is rendered inline, but with Table layout the HTML is rendered as a table, which gives more control over the look of the items.
 Regardless of whether the items are ordered vertically or
horizontally, we can specify how many columns the list control
will have, usually so that horizontal scrolling is avoided. This
feature can be set at design time through property builder –
Number of Columns.
 To programmatically work with the collection of items in a DataList control we can use the Items property. The Items collection does not support adding or removing items. To get at the text in an item we can use Item.Controls[]. If the text is bound directly in the ItemTemplate, Controls[0] can be cast to DataBoundLiteralControl to read its Text property, i.e. ((DataBoundLiteralControl)item.Controls[0]).Text. If the text is bound to some other control, cast to that control type and extract the text from it.
 ItemCreatedEvent is fired in every round trip, and when data is
bound to control. The ItemCreatedEvent is normally used to
control the content and appearance of a row in the control in
every round trip.
 ItemCommandEvent is fired when a button control is clicked, and
that doesn’t have predefined command name such as edit or
delete. We can use this event for custom functionality by setting
button’s command name to a value we need and test in
ItemCommandEvent.
 ItemDataBoundEvent is fired only when data is bound to the control, i.e. when DataBind() is called.
 AccessKey: AccessKey is the property supported for every
control to specify the keyboard shortcut to web server control.
This accesskey is supported only at runtime. The AccessKey
allows us to navigate to that control quickly by pressing Alt +
character specified for that control. For ex for TextBox if we
specified access key as d, then Alt+d will focus to the textbox.
 Attributes: Each web server control is associated with some
properties or attributes we can see them in the opening tag of
the control. Attributes of the control is used to programmatically
control these properties for that control.
 DataKeyField: this is used to specify the keyfield in the
datasource. This value is accessed through DataKeys which is a
collection of the Datakeyfield values. These values are used to
identify the exact row in case of delete and modify queries.
 DataMember: Dataset is used as Datasource to bind to controls,
if these dataset has more than one table, we can say the
particular table as data member for the control. The default
name for table in Dataset is Table, Table1 etc. so we can simply
say DataGrid1.DataMember = “Table”.
 ExtractTemplateRows property: usually the contents of a DataList are specified by templates, in which we list the controls to be displayed. We can also place an asp:Table control inside each template and have its rows displayed in the DataList. When this property is set to false, the table in each template is rendered as a separate table, so every template's rows end up in their own table, which is not good for appearance or control. When a table is present in each template we can set this property to true so that the rows from all the templates are extracted and combined into a single table, which is better for appearance and control.
 RepeatColumns: specifies in how many columns the total rows should be displayed. Suppose there are 10 rows and RepeatColumns is 2; then each column will have 5 rows. If RepeatDirection is Vertical, rows 1 to 5 go in column 1 and rows 6 to 10 in column 2. RepeatLayout specifies whether the layout is Flow or Table.
 Editing can be done in a DataList through the EditItemTemplate. A typical edit event handler sets the EditItemIndex property to the current item index and rebinds the list; the cancel handler sets EditItemIndex to -1 and rebinds (a sketch follows).
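 A typical pair of handlers for this (a sketch; DataList1 and the BindList helper are assumptions):

protected void DataList1_EditCommand(object source, DataListCommandEventArgs e)
{
    DataList1.EditItemIndex = e.Item.ItemIndex;  // switch this item to the EditItemTemplate
    BindList();                                  // assumed helper that sets DataSource and calls DataBind()
}

protected void DataList1_CancelCommand(object source, DataListCommandEventArgs e)
{
    DataList1.EditItemIndex = -1;                // leave edit mode
    BindList();
}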

Repeater Control
 Repeater is a basic templated data – bound list control.
 Repeater is the only control that allows developers to split html
tags across the templates. To begin with start the table tag in
header template, use its row tag in itemtemplate and end the
table tag in footer template.
 These HTML tags (table, td, tr) should not have the runat attribute.
 The designer window of the aspx page provides only limited editing for the Repeater control, because the control allows incomplete HTML tags in its templates (the table tag can be opened in the header template and closed in the footer template). Since the design view always checks the HTML for correctness, template editing for this control is done only in the HTML view of the designer.
 Repeater has no built-in editing or selection support. At a bare minimum an ItemTemplate needs to be specified to get data displayed.
 Repeater can be databound to itemtemplate and AlternatingItem
template through datasource property. But Header, Footer and
separator template are not databound.
 If the data source of the Repeater control is set but returns no rows, the Header and Footer templates are still rendered on the page; but if the data source returns a null reference, the Repeater control is not rendered at all.
 Controls in the templates may be bound to the data source of the Repeater control or to a separate data source. Binding controls to the Repeater's data source ensures that all controls display values from the same data row. The binding syntax uses DataBinder.Eval(Container.DataItem, ...), where Container refers to the repeater item that contains the controls, so the controls draw from the Repeater's data source. If we want to use some other data source, we name it in place of Container; this might be done to display a row of data from a source different from the Repeater control's data source.
 Unlike the DataList, where we use asp:Table controls (not HTML tables) in each template and set the "ExtractTemplateRows" property to true to have the rows of those per-template tables extracted into a single table, in the Repeater control we can use HTML tables that are split across the templates. Hence there is no "ExtractTemplateRows" property here: all rows naturally end up in the one table, because only one table tag is opened and closed. An asp:Table cannot be split across templates.

Delegates and Events:

 Delegates are classes used within the .NET Framework to build event-handling mechanisms.
 Delegates roughly equate to function pointers commonly used in
C++ and other object oriented languages.
 Unlike function pointers, delegates are object-oriented, type-safe and secure; a function pointer holds a reference to a single method, whereas a delegate can hold references to one or more methods along with the object instances they belong to.
 The event model binds events to delegates, which in turn are bound to the methods that handle them.
 Delegates allow other classes to register for these events so that they are notified when the events are raised. When an event is raised, the registered handler methods are invoked.

using System;

public delegate void DSPLineEventHandler(object sender, EventArgs e);
public delegate void DSPLevelEventHandler(object sender, EventArgs e);

public class aaa
{
    public event DSPLineEventHandler DSPLineClick;
    public event DSPLevelEventHandler DSPLevelClick;   // declared but not raised in this sample

    public void RaiseEvent()
    {
        if (DSPLineClick != null)           // raise only if someone has subscribed
        {
            EventArgs e = new EventArgs();
            DSPLineClick(this, e);
        }
    }
}

class bbb
{
    public static void Main()
    {
        aaa a = new aaa();
        bbb b = new bbb();
        a.DSPLineClick += new DSPLineEventHandler(b.hello);  // subscribe the handler
        a.RaiseEvent();                                      // prints "Hai Jyothi"
    }

    public void hello(object sender, EventArgs e)
    {
        Console.WriteLine("Hai Jyothi");
    }
}

DatagridTableStyle

 Datagrid control displays data in the form of a grid.


 DatagridTablestyle class represents this drawn grid only; this
class controls the appearance of grid for each datatable.
 To specify which DatagridTablestyle to be used when displaying
data from a particular datatable we can set the MappingName to
that tablename of datatable.
 We can also bind a DataGrid to an ArrayList, but an ArrayList can hold objects of multiple types, whereas a DataGrid can only be bound to a list in which all items are of the same type as the first item. This means every item must either be of the same type as the first item or inherit from its class.
 For example, if the first item in the list is a Control, then the second item could be a TextBox (which inherits from Control); the reverse ordering would not be valid for binding to the DataGrid.
 Further, the ArrayList must contain items when it is bound; an empty ArrayList results in an empty grid.
 Datagrid control automatically creates a collection of
DatagridColumnStyle objects when we set the datasource
property to an appropriate datasource. These objects are
actually the instances of one of the classes that are inherited
from DatagridColumnStyle class: Datagridboolcolumn or
DatagridTextboxcolumn.
 We can also create our own Datagridcolumnstyle objects and add
them to GridColumnStylesCollection; when we do so we must set
the mappingname of each columnstyle object to the
columnname of a datacolumn in datasource to synchronize the
display of columns with actual data.
 To create our own column classes we inherit from DataGridColumnStyle; we do this in order to create special columns that host controls, for example the DataGridTextBoxColumn class, which hosts a TextBox. In the same way we can host a ComboBox or a user control in a DataGridColumnStyle-derived class.
 When we inherit from DataGridColumnStyle to host a control, we must override Abort, Commit, Edit and both overloads of Paint.
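 A short sketch of mapping a table style to a DataTable in a Windows Forms DataGrid (the table and column names are assumptions):

using System.Windows.Forms;

// Sketch: apply a table style to a DataGrid bound to a DataSet that contains a "Customers" table.
void ApplyCustomerStyle(DataGrid grid)
{
    DataGridTableStyle tableStyle = new DataGridTableStyle();
    tableStyle.MappingName = "Customers";            // must match the DataTable name

    DataGridTextBoxColumn nameColumn = new DataGridTextBoxColumn();
    nameColumn.MappingName = "Name";                 // must match the DataColumn name
    nameColumn.HeaderText = "Customer Name";
    tableStyle.GridColumnStyles.Add(nameColumn);

    grid.TableStyles.Add(tableStyle);
}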

Difference between Webserver and Application server

 Web server – basically, a web server serves HTML content for display to the end user; it can also run code written in a special way called CGI – Common Gateway Interface (through scripts).
 The Application server is the server where the business logic is
present, the request from the webserver is actually processed
here and the response is sent to webserver to get rendered on
the page.
 When a browser requests http://www.yahoo.com, the request reaches Yahoo's web server, which may hand it over to the application server for processing, and the response that comes back is displayed in the browser.
 The web server can also run code through scripts, meaning the request is processed there itself without delegating to the application server. But performance suffers in this case because the code is embedded within the HTML and is processed by the web server itself for every request.
 But when this code lives in the application server, completely demarcated from the web server, several users can use the application server simultaneously and performance is better.
 The application server includes a webserver.
 Thus the web server's delegation model is fairly simple: when a request comes in, the web server simply passes it to the application best able to handle it. The web server does little more than provide an environment in which the server-side program can execute and pass back the response.

Namespaces and Root Namespace

 Namespaces are used as an organizational system, a way of presenting components exposed to outside programs.
 Namespaces are always public, hence no access modifiers are
allowed for it.
 Root Namespace property: it sets the base namespace for all files within a project. Suppose the project name is "Project1"; then the root namespace property value is Project1. If we have a class Class1 outside any namespace, directly under the project, its fully qualified name would be Project1.Class1. If we have Class2 under a namespace "Orders" in Project1, then its fully qualified name will be Project1.Orders.Class2.
 We can clear the value of the root namespace property, so that Class2 is referred to as Orders.Class2.

RCW and CCW

 The CLR exposes COM components to the .NET environment through a proxy called the RCW (Runtime Callable Wrapper).
 Prime function of this proxy is to marshal calls between .net
client and com object. It basically marshals the method
arguments and method return values whenever the client and
server have different representations of data passed between
them.
 For ex: .net client sends string as an argument, the wrapper
converts or marshals it to BSTR type. COM returns BSTR which
proxy marshals it to string and gives to caller.
 Thus both client and server send and receive data that is familiar
to them.
 Runtime creates exactly one RCW for each COM object,
regardless of the number of references that exist on this object.
 Using metadata derived from type library the runtime creates
both the COM object that is called and a wrapper for this object.
 Runtime performs garbage collection on this RCW.
 Thus, for a COM component to be available to a .NET client, it needs to implement the standard COM interfaces, IUnknown and (for late binding) IDispatch.
 When a COM client refers a .net object, the CLR will create a
proxy CCW for the object.
 The runtime creates exactly one CCW for .net object irrespective
of number of COM clients requesting its services.
 Multiple COM clients can hold references to the CCW, which exposes IUnknown (and IDispatch) on behalf of the managed object.
 The CCW in turn holds a single reference to the .NET object it wraps.
 Both .net and COM clients can make requests on the same
managed object simultaneously.
 Thus, for a .NET object to be available to a COM client, the CCW exposes the standard COM interfaces (IUnknown and IDispatch) on its behalf.
 The CCW itself is not garbage collected; it is reference-counted, as described below.
 The runtime allocates memory for the .net object from its
garbage-collected heap. In contrast, the runtime allocates
memory for the CCW from a non collected heap.
 CCW is reference-counted as in traditional COM fashion. When
the reference count on the CCW reaches 0, the wrapper releases
its reference on the managed object. A managed object with no
remaining references is collected during the next garbage-
collection cycle.

How can we make window API calls in .net?


 Windows API’s are dynamic link libraries (dll’s) that are part of
windows operating system.
 The .NET Framework has wrapped a majority of these APIs into managed code.
 Still, certain APIs are left over; we can make use of that functionality through the "Platform Invocation Services", also called "PInvoke".
 Windows API’s are part of unmanaged world. They are not COM
objects.
 We use them to perform tasks when it is difficult to write
equivalent procedures of our own.
 To make use of windows API functionalities, we have to make use
of PInvoke which enables managed code to call the unmanaged
functionality provided by windows dll’s.
 PInvoke effectively offers a managed method call for the functions that are exported from an unmanaged DLL.
 Behind the scenes, PInvoke locates the DLL that contains the function, loads it into memory, finds the address of the function, marshals the arguments and calls the exported function.
 Advantage of using the Windows API: development time is saved because we are reusing existing code (a sketch follows).
 Disadvantages: if we get the signature of the PInvoke declaration wrong we are in trouble; the Win APIs are merciless when things go wrong, and a mistake can corrupt memory. Many PInvoke calls can become a performance issue due to data marshalling. The major drawback is that it requires permission to call unmanaged code.
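 A classic PInvoke sketch, calling MessageBox from user32.dll:

using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // Declare the unmanaged function; PInvoke locates user32.dll, finds the
    // exported MessageBox entry point and marshals the arguments.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        MessageBox(IntPtr.Zero, "Hello from PInvoke", "Demo", 0);
    }
}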

SqlException class

 This exception is thrown when the .NET data provider for SQL Server encounters an error.
 A SqlException contains at least one SqlError object; the SqlError objects are held in a SqlErrorCollection.
 Error messages have severity levels ranging from 1 to 25. When the severity level is 19 or less the SqlConnection remains open, but when the severity level is 20 or more the SqlConnection is closed. In either case the SqlException is thrown from the method that executed the SqlCommand.
 Errors is a property of the SqlException class that holds the collection of SqlError objects generated by the data provider; Errors.Count gives their number. Each SqlError object gives detailed information about the error (see the sketch below).
 Number is a property that gives the error number, while the Class property gives the severity level.
 Procedure is a property that gives the name of the stored procedure or RPC that generated the error.
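 A sketch of walking the Errors collection (the command passed in is assumed to be ready to execute):

using System;
using System.Data.SqlClient;

// A method inside some data-access class.
void RunAndReport(SqlCommand cmd)
{
    try
    {
        cmd.ExecuteNonQuery();
    }
    catch (SqlException ex)
    {
        foreach (SqlError err in ex.Errors)
        {
            Console.WriteLine("Number={0} Severity={1} Procedure={2} Message={3}",
                              err.Number, err.Class, err.Procedure, err.Message);
        }
    }
}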

What is the difference between Namespace and Assembly?


 Namespace is a logical grouping of related functionality.
 This naming scheme is completely under the control of
developers.
 Namespaces are not related to the assembly.
 Many different namespaces are seen in one assembly and these
namespaces can span across multiple assemblies. They form the
logical boundary for group of classes.
 Assemblies are the output units.
 It forms the unit of deployment and unit of versioning.
 Assemblies contain MSIL code they are self-describing and form
the primary building block of the application.
 Thus namespaces are a logical grouping of types, while assemblies are the physical packaging and form the boundary that, together with access modifiers such as internal, determines whether those types can be accessed from outside.

What is the difference between system exceptions and application exceptions?
 The Exception is the base class for all exception classes.
 Exception class is present under System namespace.
 When an error occurs, either the system or currently executing
application reports it, by throwing an exception containing the
information about the error.
 Once thrown it is handled by the application or by the default
exception handler.
 SystemException is the derived class from Exception class.
 This class does not provide information as to the cause of
exception.
 Systemexceptions are thrown by CLR when a nonfatal error
occurs in programs.
 These errors occur when runtime check fails such as array out of
boundary.
 In many scenarios, throwing instances of this type directly is not advisable because they do not give any information about the cause. In cases where this class is instantiated, at least a human-readable message describing the error should be passed to its constructor.
 ApplicationException this exception is thrown by the program not
the CLR.
 If we are designing an application that needs to create its own
exception classes then we need to inherit from the
ApplicationException class.
 ApplicationException extends Exception class but does not add
new functionality. This class is provided as means to differentiate
between exceptions defined by applications vs exceptions
defined by the system.
 ApplicationException likewise does not provide information as to the cause of the exception.
 In many scenarios, throwing instances of this type directly is not advisable; where this class is instantiated, a human-readable message describing the error should be passed to its constructor.

Remote Debugging in Asp.net applications


 We will use remote debugging to minimize the impact on the
production server or on client machine.
 Remote debugging is a scenario where the application will be
running on one machine and this application is debugged on
some other machine such that there is no performance impact
on the client machine.
 For remote asp.net debugging the aspnet_wp.exe process is
debugged.
 For remote debugging to happen, the target machine where the application is running should have the Remote Debugging Components installed.
 In the project's debug configuration, give the path and file name of the executable to be debugged.
 We can then attach to the process and step through the code while the application is running on the remote machine.
 Thus remote debugging is mainly used when the application is already hosted at the client's site and errors are later found in the pages. To find out why the errors are happening, we debug the application from our own machine while it runs on the client machine, so there is little performance impact on the client; debugging directly on the client machine would slow it down.
Exception Class
 Exception is any error condition or unexpected behavior
encountered by an executing program. An exception is thrown
from an area of code where problem has occurred.
 Exception class is the base class for all exceptions.
 When an error occurs either the system or currently executing
application reports it by throwing an exception containing
information about the error.
 Once thrown the exception is handled by the application or by
default exception handler.
 When code in a try block throws an exception, it is caught by the catch block of that try block.
 If there is no catch for that try block, the runtime searches for a catch block in the method that called this one, and so on up the call stack.
 If there are no matching catch blocks at all, the exception is handled by the runtime's default handler and the program terminates.

using System;

class aaa
{
    public static void Main()
    {
        aaa a = new aaa();
        a.abc();
    }

    public void abc()
    {
        try
        {
            pqr();
            Console.WriteLine("Exception was Thrown");
        }
        catch
        {
            Console.WriteLine("Exception in abc");
        }
    }

    public void pqr()
    {
        try
        {
            throw new Exception();
        }
        catch
        {
            Console.WriteLine("Exception in pqr");
        }
    }
}
output: Exception in pqr
Exception was thrown

 The output of the above program is "Exception in pqr" followed by "Exception was Thrown".
 Only the catch block of the try block in which the exception was raised is hit. If that try block has no catch, the next catch block up the call hierarchy is searched for and hit instead.

using System;

class aaa
{
    public static void Main()
    {
        aaa a = new aaa();
        a.abc();
    }

    public void abc()
    {
        try
        {
            pqr();
            Console.WriteLine("Exception was Thrown");
        }
        catch
        {
            Console.WriteLine("Exception in abc");
        }
    }

    public void pqr()
    {
        try
        {
            throw new Exception();
        }
        finally
        {
            Console.WriteLine("Finally in pqr");
        }
    }
}

output: Finally in pqr
Exception in abc

 Here, because pqr has no catch block, the method executes its finally block before leaving and then lets the exception propagate; the runtime finds the catch block of abc, the method that called pqr, and hands the exception to it. Note that in the previous example the statement "Exception was Thrown" was displayed, but this time it is not, because the exception is caught in abc, so the remaining statements in abc's try block are not executed.

HttpContext Class
 With every request the client makes, the server receives an
instance of HttpContext class.
 This instance is exposed through the static property of this class
called “current”.
 This object accompanies every request to the server. Every class that implements IHttpModule or IHttpHandler receives a reference to the HttpContext instance for the request it is processing, and this instance holds all the information about that request. Since every web page derives from the Page class, which implements IHttpHandler, a page also receives the HttpContext object for every request.
 The HttpContext holds intrinsic objects like Request, Response,
Application, Cache, Server and Session etc.
 All this properties hold references to their objects respectively
mean Application holds reference to the instance of
HttpApplicationState class and Request holds reference to
HttpRequest class etc.
 In the page's code we can use Application, Request, Session, etc. directly because the Page class exposes them as properties backed by the current HttpContext instance.
 We therefore do not need to refer to these objects with the fully qualified form HttpContext.Current.Application inside a page; outside a page, HttpContext.Current gives access to the same objects.
 The instance of the HttpApplicationState class is created when any page in the application is requested for the first time. This instance is referred to through the Application property of the HttpContext object.
 The current user who made the request can be identified through User.Identity.Name; User is a property exposed by the context object (a sketch follows).
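 A small sketch of reaching the intrinsic objects from an ordinary class through HttpContext.Current:

using System.Web;

public class RequestInfo
{
    public static string Describe()
    {
        HttpContext ctx = HttpContext.Current;
        string user = ctx.User.Identity.Name;       // the current user
        string url = ctx.Request.Url.AbsoluteUri;   // the requested URL
        ctx.Items["visited"] = true;                // per-request storage
        return user + " requested " + url;
    }
}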

Command Behavior
 When calling the ExecuteReader method, which belongs to the IDbCommand interface, we can pass a value of the CommandBehavior enum.
 CommandBehavior.CloseConnection closes the connection object when the associated datareader is closed.
 CommandBehavior.Default means no flag is applied; it is the same as calling the parameterless ExecuteReader() overload.
 CommandBehavior.KeyInfo returns column and primary key information.
 CommandBehavior.SingleRow means the query is expected to return only a single row.
 CommandBehavior.SchemaOnly returns only the schema (column) information of the result.
 CommandBehavior.SequentialAccess is used when a row holds large streams of data of which only part may be needed; instead of loading the entire row, we load just the portion of the stream that is useful, specifying the start point and the number of bytes or characters through the GetBytes() or GetChars() methods, and the data must then be read sequentially.
 CommandBehavior.SingleResult means the query returns only a single result set.
 Thus CommandBehavior is the enum that specifies how the associated datareader object behaves while executing ExecuteReader (an example follows).
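 For example, closing the connection automatically when the reader is closed (the query is an assumption):

using System;
using System.Data;
using System.Data.SqlClient;

// A method inside some data-access class; the connection is created by the caller.
void PrintNames(SqlConnection conn)
{
    SqlCommand cmd = new SqlCommand("SELECT Name FROM Customers", conn);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }   // disposing the reader here also closes the connection
}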

Custom HttpHandlers
 When a client requests a page, the request goes to IIS
webserver.
 IIS communicates with .net framework through unmanaged ISAPI
extensions.
 Based on the file extension of the requested page, IIS hands the request to the respective engine.
 If the page requested was for .html then IIS will itself handle it.
 If the page requested was for any asp.net related stuff like .aspx
or .asmx, it delegates the processing of request to asp.net
engine
 If the page requested was for any asp stuff like .asp, it delegates
it to asp engine.
 This delegation of work to other engines has two advantages: a) it provides a clean division of labor, so IIS can concentrate on its own job, i.e. serving static content to clients; b) dynamic server-side technologies are added to IIS in a pluggable manner. If IIS were responsible for handling ASP.NET pages itself rather than relying on an external engine, then each time a new version of ASP.NET came out or file extensions changed, a new version of IIS would be needed to support them. With delegation we can simply map the new extension to the corresponding engine in the IIS metabase through Internet Services Manager.
 When the request is routed from IIS to Asp.net engine, the
asp.net engine first examines the requested file extension and
then invokes the httphandler associated with that extension
whose job is to render the requested file’s markup
 Technically asp.net engine can invoke either httphandler or
httphandlerfactory which returns an instance of httphandler
 The Http handler receives the request processing call from
asp.net engine and process the request and returns the
appropriate mark up response to the IIS.
 This mark up is served back to the client by IIS.
 The HTTP handler actually receives an HttpContext object for the current request, and it provides the response through this context object, e.g. context.Response.Write("<html><b>hi</b></html>").
 We can write custom handlers to process the request of user
choice like .fm or .cs etc. So in order to process this request we
must create handlers and configure them in IIS and our
application.
 To create a custom handler, take a class library project and create a class that implements IHttpHandler. This interface has one method and one property: the method ProcessRequest(HttpContext context), which processes the request, and the property IsReusable, which indicates whether the current instance of the handler can be reused for further requests (true if it can, false otherwise).
 Once the custom handler is created we should first configure it for our application. In web.config, under the <httpHandlers> section, put an entry like:
 <add verb="*" path="*.cs" type="Namespace.ClassName, AssemblyName"/>
 The type attribute contains the fully qualified class name of the handler followed by the name of the assembly (DLL) that contains it. With this, our application is configured to handle the request.
 Now IIS should be configured bcos when the request comes for
.cs IIS does not know for whom it should delegate this request.
So we will configure the IIS metabase through Internet Service
manager.
 Add a new item, when a file extension .cs comes delegate this
request to asp.net engine.
 Once the request reaches the ASP.NET engine, it checks the application's web.config to see whether a handler is registered for it. The request is finally handed over to the custom handler, which processes it and gives the response markup back to IIS.
 Thus a custom handler, written by implementing IHttpHandler, must be configured in both IIS and the application (a sketch follows).
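 A minimal handler sketch (the namespace, class and assembly names are assumptions; the matching web.config entry is shown in the comment):

using System.Web;

// web.config:
//   <httpHandlers>
//     <add verb="*" path="*.cs" type="MyHandlers.CsBlockHandler, MyHandlers" />
//   </httpHandlers>
namespace MyHandlers
{
    public class CsBlockHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Refuse to serve source files; any custom markup could be written here instead.
            context.Response.Write("<html><b>Source files cannot be downloaded.</b></html>");
        }

        public bool IsReusable
        {
            get { return true; }   // the same instance may serve further requests
        }
    }
}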

What is Code Access Security?


 Every application that targets the CLR must interact with
runtime’s security system.
 When an application executes, it is automatically evaluated and
given a set of permissions by the runtime.
 Depending upon the permissions that the application receives, it
performs the execution of that part of code.
 Thus CAS is the CLR’s security system that enforces security
policies by preventing unauthorized access to protected
resources and operations.

If you set AutoGenerateColumns=True and still provide custom column definitions, will the DataGrid render both?
Yes, the DataGrid will render both. Explicitly declared (custom) columns can be used in conjunction with auto-generated columns. The explicit columns render first, followed by the auto-generated columns. The auto-generated columns are not added to the Columns collection.

What is the C# compiler option to create an XML documentation file?
csc /doc:newfiledoc.xml newfile.cs

What is the comment syntax for XML based documentation


/// is the syntax, all the lines which starts with this syntax will be
added to XML documentation.

When creating a C# Class Library project, what is the name of the supplementary file that Visual Studio .NET creates that contains general information about the assembly?
AssemblyInfo.cs is the file created by default by the IDE. It describes the assembly and provides version information.

What is the C# escape character for Null?


\0 is the escape sequence for Null

When do we decide to go for a DLL or an EXE?
A DLL is much faster to call than an EXE because it loads into the caller's application domain and is accessed directly in memory, whereas an EXE runs as a separate process with its own thread, so calls must cross process boundaries. If the component needs to run on its own or present a user interface, go for an EXE; otherwise a DLL is preferred.

When is MissingMethodException thrown?
When an application tries to access a method that does not exist in code it compiles against, the compiler reports an error. But when a method is accessed dynamically from another assembly and that method has been renamed or deleted, a MissingMethodException is thrown at runtime. This typically involves a private assembly rather than a shared one, because a shared assembly would normally get a new version when its code changes.

Boxing and UnBoxing:


Boxing and UnBoxing is an essential concept in C# type system.
Converting a value type to reference type is called Boxing.
Converting a reference type to a value type is called UnBoxing. C#
provides a unified system, all types including value types are
derived from object. Hence we can call as
console.writeline(3.ToString()).

During boxing, memory is allocated on the heap and the value is copied into it; during unboxing, the value is copied back into a value-type variable on the stack. The runtime type of the boxed object is the value type it holds.
For ex:
int j = 5;
object o = j;
if (o is int)
{
    Console.WriteLine("yes");
}

This prints "yes" because the runtime type of the boxed object is the value type it holds, which the "is" operator can test. Note that boxing copies the value into a new object on the heap, and unboxing copies the value back out of that object into a value-type variable.
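A small unboxing sketch to go with the example above:

int j = 5;
object o = j;         // boxing: the value is copied into a new object on the heap
int k = (int)o;       // unboxing: the value is copied back out into a value-type variable
// long l = (long)o;  // would throw InvalidCastException: we must unbox to the exact value type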

What is user control and custom control?


 User controls offer an easy way to partition and reuse user
interface functionality.
 Like any webform we can develop them with any text editor or
using code-behind class.
 Also like webform, user controls are compiled when first
requested and stored in server memory to reduce the response
for subsequent requests.
 However, unlike webforms, user controls cannot be requested directly. They can only be used inside other pages.
 A webform can have multiple instances of the same user control
or it can also have different multiple user controls developed in
different languages. Mean a page can have a user control
developed in vb it can also have a user control developed in c#.
 The user controls have control directive @control rather than
page directive @page.
 A web form can be converted in to user control by changing page
directive to control directive and removing html, body and form
tags.
 The user controls have its own events and are handled with in
the user control itself. The event handlers are written in the
code-behind file of user control.
 Like other web server controls, user controls can also be added dynamically at runtime. The LoadControl method of the page is used to load a user control dynamically; it takes the virtual path of the .ascx file as its argument (an overload also accepts the control's Type).
 The control thus loaded is added to the Controls collection of the page. LoadControl returns a Control reference, so it should be cast to the user control's class in order to access its properties.
 For ex: Control cl = LoadControl("usercontrol.ascx");
 ((MyUserControl)cl).BackColor = Color.Blue; // MyUserControl is the control's class; BackColor is assumed to be one of its properties
 Page.Controls.Add(cl);
 When the user control is added to the web page declaratively
mean at design time we have to use @register directive; but
when the control is loaded dynamically then we have to use
@reference directive.
 Custom controls are the compiled components that work on
server. They encapsulate the UI and the related information as
reusable packages.
 The custom controls include all the design time features of
standard asp.net controls
 There are several ways to create a custom control: a) combine the functionality of two or more existing controls; b) when an existing control provides most of the functionality but a few features need to be added, derive from it and add new features or override existing ones; c) when no existing server control satisfies the requirements, derive from the base Control (or WebControl) class and develop a new server control.
 Differences: if none of the existing controls meet our
requirements, we can go for user control or custom controls.
 The major difference between the two lies in: the ease of
creation and ease of use at design time.
 The user control is very easy to create, it’s as simple as creating
a page, but the user control is compiled dynamically at runtime;
hence it cannot be added to toolbox. Moreover all the controls in
user control are added in one placeholder; hence their properties
cannot be handled individually. There is a less designer support.
 But the custom control is compiled code; hence it can be added
to the tool box. It’s difficult to create but very easy to use. The
custom control provides complete designer support.
 If our control has a lot of static layout, a user control makes
sense. If our control is mostly dynamically generated – for
instance rows of data-bound table, nodes of a tree view, or tabs
of a tab control – custom control would be a better choice.
 Thus the basic differences are: a) user controls are easier to create, custom controls are harder; b) user controls have limited design-time support, custom controls have full support in the visual design tools; c) a separate copy of a user control is added to each application that uses it, while a single copy of a custom control can be kept in the GAC and shared across applications; d) user controls cannot be added to the toolbox, custom controls can; e) user controls suit static layout, custom controls suit dynamically generated layout.

Validation Controls:
 Validation server controls are a collection of controls that validate an associated input server control (such as a textbox) and display a custom message when validation fails.
 Each validation control performs a specific type of validation, for example against a specific value (CompareValidator) or against a range of values (RangeValidator).
 We can collect all the error messages of validation controls on a
page in to ValidationSummary control.
 We can also have user specific or custom validation through
custom validator control.
 Validation occurs when button controls such as Button, LinkButton and ImageButton are clicked. We can prevent validation from happening when a particular button is clicked by setting its "CausesValidation" property to false. Validation does not normally run on other postbacks; for example, if the AutoPostBack property of a dropdown list is true, validation does not happen when that control posts the page back.
 Every Validation control share some basic validation properties
like a) controltovalidate, b) display, c) EnableClientScript,
d)Enabled, e)ErrorMessage, f)isvalid, and g) Text.
 Display is the property which has three options None, Static,
Dynamic.
 None is used when the error message is captured in to validation
summary control.
 Static: every validation control placed on the form reserves space for its error message. Even when the input control passes validation, the space remains reserved, and if multiple validation controls target the same input control, multiple spaces are reserved. Because the space is reserved up front, the layout of the page does not change when error messages are displayed.
 Dynamic: here validation controls do not reserve any space; space is taken only when a message is displayed, so multiple validation controls can share the same space. Because the space is allocated dynamically, the page layout may shift when messages appear, so enough room should be allowed for the validation messages.
 A validation control has two message properties, ErrorMessage and Text. When a ValidationSummary control is used, the content of the ErrorMessage property is shown in the summary, while the Text property is what the validation control itself displays inline. If no Text is specified, the content of ErrorMessage is displayed inline as well.
 Comparevalidator and Rangevalidator are the two validators that
perform typed comparisons. The datatypes that are supported
are specified in validationdatatype enum. This enum contains
types like: currency, date, double, integer, and string.
 Comparevalidator: the compare validator is used to compare
two input controls through controltovalidate and
controltocompare properties.
 Instead of comparing two input controls, Comparevalidator
control is also used to compare with a constant value. Specify
the value to compare by setting valuetocompare property with
the text.
 The operator property is used to specify the type of comparison
to be performed on the input controls.
 If ValidationCompareOperator.DataTypeCheck is selected, the validation control ignores both the ControlToCompare and ValueToCompare properties and just checks whether the value entered in the input control can be converted to the type specified by the Type property of the CompareValidator. If the input control is empty, no validation functions are called and validation succeeds, so a RequiredFieldValidator should be used to prevent the user from skipping the input control.
 CustomValidator: this control will allow us to create a
validation control with customized validation logic specific to our
application.
 Validation controls by default does their validation processing at
server; but we can also make this happen at client side. This
allows the validation to be performed at client before sending the
form to server.
 To perform the validation on the server side, we provide a handler for the ServerValidate event. The handler receives a ServerValidateEventArgs object containing a Value property (the value submitted by the client) and an IsValid property through which the handler reports the validation result.
 If validation is also to be performed on the client side, EnableClientScript is set to true and the ClientValidationFunction property is used to specify the client-side script function associated with the CustomValidator.
 It is possible to use a CustomValidator control without setting the ControlToValidate property. This is generally done when we want to validate multiple controls, or controls that standard validators cannot target, such as a CheckBox. In this case the Value property of ServerValidateEventArgs will be empty, so to retrieve the value we refer programmatically to the appropriate control. For example, in the ServerValidate handler we can write (see the sketch after this list):
 args.IsValid = CheckBox1.Checked;
 RangeValidator control: the control allows us to check whether the user's entry is within a specified upper and lower boundary. We can check ranges of numbers, characters and dates; the boundaries are expressed as constants.
 Controltovalidate property is used for input control and
minimumvalue and maximumvalue properties are used to
specify the range.
 The type property is used to specify the datatype of the values to
compare. The types that are supported are: Currency, date,
string, int and double.
 If the minimumvalue and maximumvalue property values cannot
be converted to the specified datatype of Type Property then
Rangevalidator will throw an exception.
 RegularExpressionValidator: this control determines whether the value entered in the input control matches the pattern defined by a regular expression.
 Validationexpression property is used to specify the regular
expression used to validate the input control. Both client and
server side validations can be used. But the validation logic
differs in both, if at all it is client side then jscript regular
expression syntax is used. On the server Regex syntax is used.
Jscript is actually a subset of regex.
 RequiredFieldValidator: this control is used to make the input
control a mandatory field.
 The input control fails its validation if the associated control does
not change from its initial value when validation is performed.
This prevents the user to leave the associated input control
unchanged.
 Extra spaces at the beginning and end of the input value are removed before validation is performed.
 The InitialValue property specifies the value that the input control must differ from for validation to pass; by default it is an empty string.
 ValidationSummary control: validation summary control
display the error messages of all the controls on the webpage in
a single location. The display could be a list, a bulleted list, or a
paragraph based on the display mode selected.
 The summary includes the errormessages of all the validation
controls that failed.
 Whether the summary is displayed is controlled through the ShowSummary property.
 The summary can be given a custom title through the HeaderText property.
 The summary information can also be shown in a message box, enabled through the ShowMessageBox property.
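 A server-side validation sketch along these lines (the control names are assumptions):

// ServerValidate handler for a CustomValidator with no ControlToValidate,
// checking a CheckBox that the standard validators cannot target.
protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args)
{
    // args.Value is empty here, so read the control directly.
    args.IsValid = CheckBox1.Checked;
}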

What are different types of @directives?


 @page directive used for every webpage
 @control directive used for user controls or custom controls
 @implements directive indicates that the page or user control implements the specified .NET Framework interface. When we use this directive, the interface members must be implemented in a code declaration block (a script tag) in the page itself; the implementation cannot be placed in the code-behind file. For ex: <%@ Implements Interface="System.Web.UI.IPostBackEventHandler" %>
 @import directive used to import a namespace into a page. Thus
all the classes and interfaces under this namespace will be
available to the page.
 @outputcache directive used for caching the pages or user
controls
 @reference directive is used to associate a user control or page
source file dynamically at runtime to the page.
 @register directive is used to associate the user or custom
controls
Global.asax file:
 Global.asax is an application file residing in the root directory of
the application.
 The global.asax file is an optional file. If we do not define the file
the asp.net framework assumes we have not defined any event
handlers for application or session events.
 It contains code that responds to application level events raised
by asp.net or by httpmodules.
 The global.asax file is configured so that any direct request for it is rejected; clients cannot request, download or view its contents.
 The global.asax file at runtime is parsed and compiled in to a
class derived from base HttpApplication class.
 When the global.asax file is modified and saved, the framework detects that the file has changed: it completes all current requests, raises the Application_OnEnd event for the application's listeners, and restarts the application domain, effectively rebooting the whole application.
 The next request from a client causes the global.asax file to be reparsed and recompiled, and the Application_OnStart event is raised.
 During the lifetime of an application the asp.net creates a pool of
global.asax-derived httpapplication instances. The asp.net
framework will assign one of the instances when ever client
requests any page in the application. This instance is responsible
for managing the entire lifetime of the request. The same
instance can be reused after the completion of current request.
This pool of application objects are created when the user
requests any page for the first time in the application. Later
further request will use these objects.
 The HttpApplication class has an Init stage that runs on every HttpApplication instance of the application after it is created.
 Application_OnStart fires only once in the lifetime of an application, when the first HttpApplication instance is created.
 Thus, initialization that must run for every HttpApplication instance belongs in the Init stage, because it executes for all instances.
 Conversely, state that should be set up only once for the whole application, by the first instance, belongs in Application_OnStart.
 Similarly the HttpApplication_Dispose event is called on all the
instances of HttpApplication class. Application_onend is called
only once in the lifetime of an application when the last instance
of the application is torn down.
 Asp.net provides several modules that participate in the
HttpRequest. These modules can be customized or extended or
new modules can be created. All these modules must implement
IHttpModule interface. The events in this module can be handled
in the global.asax file.
 IHttpModule has two methods, Init and Dispose, which every HTTP module implements. The events a module raises can be handled in the global.asax file.
 For example, if we have created an authentication module and registered it for our application, the module validates the credentials given by the user, and the handlers for its events can be written in global.asax (a sketch of typical global.asax handlers follows).
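 A sketch of typical handlers as they appear in the Global.asax code-behind (the bodies are assumptions):

protected void Application_Start(object sender, EventArgs e)
{
    // Runs once, when the first HttpApplication instance is created.
    Application["StartedAt"] = DateTime.Now;
}

protected void Session_Start(object sender, EventArgs e)
{
    // Runs once per user session.
    Session["Visits"] = 0;
}

protected void Application_Error(object sender, EventArgs e)
{
    // Last-chance handler for unhandled exceptions in the application.
    Exception ex = Server.GetLastError();
    System.Diagnostics.Trace.WriteLine(ex.Message);   // log it somewhere (assumed)
}

protected void Application_End(object sender, EventArgs e)
{
    // Runs once, when the application is torn down.
}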

What does AspCompat = true mean and when should we use it?
 One of the attribute in page directive is AspCompat.
 By default its value is false.
 It is an important tool when converting asp pages to aspx pages.
 This should be set to true when the page creates unmanaged
components whose threadmodel is apartment.
 It should also be set to true when page creates unmanaged
components which access intrinsic Asp objects like response or
request.
 What AspCompat does here is it creates a wrapper around the
equivalent objects of asp.net and provides the services to these
unmanaged components.

Serialization Concepts:
 Serialization in basic terms is converting object in to a format
which can be readily passed on to the other process through
communication media.
 For an object to be serialized explicitly by the user, its class
should have a serializable attribute.
 We can serialize an object with a formatter and deserialize the
object with the same formatter. The deserialization does not
invoke a constructor; it always creates a clone of the object
which it has serialized.
 .net supports several types of Serialization like Binary
Serialization, XML Serialization, SOAP serialization, Selective
Serialization, Custom Serialization etc.
 For the object to serialize, we need a stream object to store the
serialized format of the object and a formatter to serialize it.
 Basically .net supports two formatters they are Binary formatter
and Soap formatter.
 The namespace that supports serialization is System.Runtime.Serialization. For XML serialization a separate namespace, System.Xml.Serialization, is used.
 Binary Serialization is the fastest of all and more memory
efficient as it produces the most compact stream of data after
serialization which occupies limited amount of space.
 Binary Serialization serializes all public and private data of the
object; it also serializes the name of the class along with
assembly name. As it serializes the entire information of the
object along with its class and assembly name; the type fidelity
info is preserved. As a result we can deserialize the object any
where else with full type identity.
 XML serialization is relatively slower when compared with binary
bcos it has an overhead of creating an XML document in this
process. For an object to get XML serialize its class should have a
default constructor.
 The XML serializes only public data of the object; it doesn’t even
serialize the read-only properties of the object. As a result the
type fidelity is not maintained properly in this type of
serialization.
 In case of XML serialization as only public data of the object is
serialized; the object type fidelity is not maintained.
 SOAP serialization it’s almost same as XML serialization but the
classes which both utilizes comes from two separate
namespaces.
 Selective Serialization: in a class we might not want certain member variables to be serialized; we can prevent them from being serialized by marking those fields with the NonSerialized attribute.
 Custom Serialization: since .NET ships with only two formatters, we can write our own formatter to do custom serialization. A class implements the IFormatter interface to become a custom formatter, which then serializes objects in whatever manner we specify.
 In Asp.net sessions have binary serialization. The framework
itself does the binary serialization when session mode is outproc.
 In the case of viewstate, a special serializer called LosFormatter is used. It performs a compact text serialization, producing a small ASCII string so that the page consumes less bandwidth. This serialized state persists only for a short duration, across a single page round trip.
 Webservices uses soap serialization.
 Configuration files in .net like web.config etc are never serialized
they are read by special handlers called configuration section
handlers.
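 A binary-serialization sketch (the Customer type and file name are assumptions):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Customer                 // assumed example type
{
    public string Name;               // public and private fields are both serialized

    [NonSerialized]
    public string Scratch;            // selective serialization: this field is skipped
}

class SerializationDemo
{
    static void Main()
    {
        Customer c = new Customer();
        c.Name = "Asha";

        BinaryFormatter formatter = new BinaryFormatter();
        using (FileStream fs = File.Create("customer.bin"))
        {
            formatter.Serialize(fs, c);                           // object -> stream
        }

        using (FileStream fs = File.OpenRead("customer.bin"))
        {
            Customer copy = (Customer)formatter.Deserialize(fs);  // stream -> clone of the object
            Console.WriteLine(copy.Name);
        }
    }
}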

Master Pages

 Master pages are a new concept in ASP.NET 2.0.
 Master pages usually define the page layout that most of the pages in the site have to follow. The layout includes menus, logos, copyright information, etc.
 The master page is the class derived from user control class.
 A website can have more than one master page.
 The master page differ from webpage in: a) master page has
@master directive instead of page directive, b) Master pages
cannot be requested, c) instead of .aspx extension master pages
have .master as extension, d) master pages always includes a
special control called “Content Place holder control”.
 When our webform wants to have a layout as master page, then
while creating the webform itself we can select the option master
page and then select the appropriate master page.
 Then our webform with its associated master page will now
called as content page.
 A webform does not have to be tied to a master page at creation time; it can also be attached afterwards, through the @Page directive or through the web configuration file.
 When we want an existing web page to become a content page, we set
the MasterPageFile attribute of its @Page directive to the master
page we want, remove all the other HTML that came along with the page
(so the page contains only the @Page directive), and then add an
<asp:content> tag whose ContentPlaceHolderID attribute refers to the
ContentPlaceHolder control present in the master page.
 Thus the page we have just converted into a content page contains
only Content controls. The Content control is always the top-level
control in a content page; it contains all the other controls we
place on the page.
 The other way is to set the master page in the web configuration
file: add the <pages> element and set its masterPageFile attribute,
and all the pages in that folder become content pages.
 Setting it through the web.config file is very effective because we
can make all the pages in that folder content pages in one go.
 Pages that have to use a different master page can be put in a
different folder with one more web configuration file in that folder.
 When the <pages> element in the web configuration file sets the
master page, we configure multiple pages in one go (as sketched
below). But if a page's @Page directive has its own MasterPageFile
attribute, it overrides the web.config setting.
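A minimal web.config sketch of this setting, assuming a master page named Site.master at the application root (the file name is illustrative):

<configuration>
  <system.web>
    <!-- Every page in the folder governed by this web.config becomes a
         content page of Site.master, unless its own @Page directive
         specifies a different MasterPageFile. -->
    <pages masterPageFile="~/Site.master" />
  </system.web>
</configuration>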
 Every master page can have one or more ContentPlaceHolder controls;
these are the places where content pages put their controls.
Similarly, a content page can have more than one Content control. The
ContentPlaceHolderID attribute of the Content tag is very important,
because it binds the content to its corresponding ContentPlaceHolder
control in the master page. It is not necessary that every
ContentPlaceHolder in the master page has an associated Content
control in the content page.
 The master page and content page are treated differently only at
design time; at runtime they both merge together to form a single
page.
 When a request comes in for a web page that is a content page, the
master page replaces the control hierarchy of the content page: the
content page gets the master page as its own child, and all the
content page's content is moved into the respective positions of the
master page.
 The first step that happens when a web request comes in is parsing.
 While parsing, the master page is searched for a ContentPlaceHolder
matching the ID given in each Content control; when the appropriate
control is found, all the content from that Content control is moved
into the corresponding ContentPlaceHolder of the master page.
 The master page's ContentPlaceHolder control thus takes the place
of the Content control of the content page.

When a web request arrives for an ASP.NET web form using a master
page, the content page (.aspx) and master page (.master) merge their
content together to produce a single page. Let’s say we are using the
following, simple master page.

<%@ Master Language="VB" %>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Untitled Page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
<asp:ContentPlaceHolder ID="ContentPlaceHolder1"
runat="server">
</asp:ContentPlaceHolder>
</div>
</form>
</body>
</html>

The master page contains some common elements, like a head tag.
The most important server-side controls are the form tag (form1) and
the ContentPlaceHolder (ContentPlaceHolder1). Let’s also write a
simple web form to use our master page.

<%@ Page Language="C#" MasterPageFile="~/Master1.master"


AutoEventWireup="true" Title="Untitled Page" %>
<asp:Content ID="Content1" Runat="Server"
ContentPlaceHolderID="ContentPlaceHolder1" >
<asp:Label ID="Label1" runat="server" Text="Hello, World"/>
</asp:Content>

The web form contains a single Content control, which in turn is the
proud parent of a Label. We can visualize what the object hierarchies
would look like at design time with the following diagram.

At this point, the page and master page are two separate objects, each
with their own children. At runtime, when the master page has to do its
job, it replaces the page's children with itself.
The master page’s next step is to look for Content controls in the
controls formerly associated with the page. When the master page
finds a Content control that matches a ContentPlaceHolder, it moves
the controls into the matching ContentPlaceHolder. In our simple
setup, the master page will find a match for ContentPlaceHolder1, and
copy over the Label.

All of this work occurs after the content page’s PreInit event, but
before the content page’s Init event. During this brief slice of time, the
master page is deserving of its name. The master page is in control -
giving orders and rearranging controls. However, by the time the Init
event fires the master page becomes just another child control inside
the page. In fact, the MasterPage class derives from the UserControl
class. Thus master pages are just masters during design time. When
the application is executing, it’s better to think of the master page as
just another child control.

 So far we have been setting the master page for the web form at
design time, either through the @Page directive, through the web
configuration file, or at the time of creating the web form.
 But we can also set the master page at runtime, and this must
happen in the Page_PreInit event.
 The PreInit event was newly introduced in ASP.NET 2.0.
 If we set the MasterPageFile anywhere other than in the PreInit
event, we end up with an exception.
 This is because, as we now know, the master page and the content
page merge together at runtime to produce one single page, and in
this phase the master page replaces the entire control hierarchy of
the content page. In the page life cycle, the entire control
hierarchy of the page must be built by the Init event, so PreInit is
the first and earliest event where we can set the MasterPageFile
property.
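A minimal sketch of setting the master page at runtime, with a hypothetical page class and master file name:

using System;
using System.Web.UI;

public partial class ProductsPage : Page
{
    protected void Page_PreInit(object sender, EventArgs e)
    {
        // Must be assigned here; assigning it later in the life cycle throws.
        this.MasterPageFile = "~/Site.master";
    }
}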
 When several pages need the MasterPageFile attribute set at
runtime, we need not duplicate the PreInit code in every content
page. We can create a base page class, override the PreInit event
there, and set the MasterPageFile attribute in that event.
 All the content pages whose MasterPageFile needs to be set at
runtime then derive from this base class; the other pages simply do
not derive from it.
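A minimal sketch of the base-class approach (BasePage, ReportsPage and Site.master are hypothetical names):

using System;
using System.Web.UI;

public class BasePage : Page
{
    protected override void OnPreInit(EventArgs e)
    {
        MasterPageFile = "~/Site.master";   // set once for every derived page
        base.OnPreInit(e);
    }
}

// A content page that needs the runtime master simply inherits from BasePage.
public partial class ReportsPage : BasePage
{
}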
 The above logic holds good when we create all the content pages
ourselves. But when pages are uploaded from an external source, those
pages should still match our look and appearance, so we can force the
external pages to use the same template by introducing a custom
HttpModule.
 In the Init event of this HttpModule we hook the
PreRequestHandlerExecute event, which fires just before the ASP.NET
runtime begins page execution. Because this event fires for .asmx,
.ashx and .aspx requests alike, we first check whether the current
handler is a page. If it is, we hook its PreInit event and specify
the master page there (see the sketch below).
 Thus, through a custom HttpModule, pages uploaded dynamically into
our site are also forced onto the same master template.
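A minimal sketch of such a module (the module name and master file name are hypothetical); it would still need to be registered under <httpModules> in web.config:

using System;
using System.Web;
using System.Web.UI;

public class MasterPageModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Fires just before ASP.NET starts executing the handler.
        app.PreRequestHandlerExecute += new EventHandler(OnPreRequestHandlerExecute);
    }

    private void OnPreRequestHandlerExecute(object sender, EventArgs e)
    {
        // The handler may be an .asmx, .ashx or .aspx handler;
        // only pages have a master page.
        Page page = HttpContext.Current.Handler as Page;
        if (page != null)
        {
            page.PreInit += delegate { page.MasterPageFile = "~/Site.master"; };
        }
    }

    public void Dispose() { }
}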
 Page events: when both the master page and the content page have a
Page_Load event, the content page's Page_Load fires first, followed
by the master page's.
 But if the same case is considered with the Init event, the master
page's Page_Init fires first, followed by the content page's Init.
 This is because life-cycle events like Load, Render, etc. fire down
the control hierarchy, while the Init event fires up the hierarchy;
initialization works from the inside out. Since the master page sits
inside the content page, the master page's Init event fires first,
then the content page's.
 The master page can have ContentPlaceHolder controls in the head
tag, i.e. outside the form tag. The problem is that VS 2005 believes
all ContentPlaceHolder controls should live inside the form tag, so a
ContentPlaceHolder inside the head tag produces an error message; the
project will still compile and run, though.
 Name mangling: all the controls in the content page are
name-mangled, i.e. prefixed with the master page ID and the
ContentPlaceHolder ID, e.g. ctl00_ContentPlaceHolder1_Label1, where
ctl00 is the master page ID and ContentPlaceHolder1 is the
placeholder ID.
 Another good feature available in the ASP.NET runtime is URL
rebasing, for example:
<div>
<img src="logo.gif" alt="Company Logo" />

<asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">


</asp:ContentPlaceHolder>
</div>

 As long as the master page and content page reside in the same
directory, the company logo will display in the browser. But if the
master page and content pages are moved to different directories, the
logo is not displayed, because the browser knows nothing about master
pages; it assumes the relative path for this image is with respect to
the content page. If the image happens to be in the same directory as
the content page it displays, otherwise the result is file not found.
 The runtime will try to rebase relative URLs it finds on
server-side controls. Thus the ASP.NET runtime rebases URLs only if
they are on server controls with the runat attribute, and it cannot
rebase URLs that are embedded, e.g. style="background-image:
url('logo.gif');".

Cross –Page Posting in ASP.net Pages

 In ASP.NET we know that web forms post back to themselves.

 There are three ways to navigate to other pages: a)
Response.Redirect, b) Server.Transfer and c) the cross-page postback
feature.
 By default, buttons and other controls that cause a postback submit
the page back to itself.
 Under certain conditions we might want the page to post to a
different page; in this scenario we can configure certain controls on
the page to post to a different target page. This is called
cross-page posting.
 In 2.0, controls that cause a postback inherit a new property,
PostBackUrl, to do cross-page posting.
 Cross-page posting has some advantages over the transfer method.
 In cross-page posting the target page can access all the values
that the source page has posted, and it can also access all the
public properties of the source page.
 The Page class contains a new property called PreviousPage, which
holds a reference to the source page, provided both source and target
pages are in the same application. If the current page is not a
cross-page target, or the source page is not in the current web
application, the PreviousPage property is not initialized. By default
the PreviousPage property is typed as Page.
 When the source and target pages are not in the same application,
the target page cannot directly access the values of the controls on
the source page; it can only access the posted data through the Form
property.
 It cannot even access the viewstate, because the viewstate is
hashed. If we want values to be available to a target page in a
different application, we can store them as strings in hidden fields.
 For the target page to access the public members of the source
page, the target page needs a strongly typed reference to the source
page.
 We can get this strongly typed reference to the source page in a
number of ways; one of them is the @PreviousPageType directive in the
target page, which lets us specify the source page.
 The @PreviousPageType directive has two attributes, TypeName and
VirtualPath. TypeName is used when we know the source page's type and
can specify its name directly; VirtualPath is used to specify the
path to the source file. If both are specified, the @PreviousPageType
directive fails.
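A minimal sketch of cross-page posting with weakly typed access (Source.aspx, Target.aspx, the control IDs and the lblResult label assumed on the target page are all hypothetical names):

Source.aspx (the button posts to a different page):

<asp:TextBox ID="txtSearch" runat="server" />
<asp:Button ID="btnGo" runat="server" Text="Go" PostBackUrl="~/Target.aspx" />

Target.aspx.cs (reading the posted values on the target page):

protected void Page_Load(object sender, EventArgs e)
{
    if (PreviousPage != null && PreviousPage.IsCrossPagePostBack)
    {
        // Weakly typed access: find the control on the source page.
        TextBox txt = (TextBox)PreviousPage.FindControl("txtSearch");
        if (txt != null)
        {
            lblResult.Text = "You searched for: " + txt.Text;
        }
    }
}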
Deploying Web Applications

 The web application can be deployed in multiple ways :


 XCopy – It’s done by simply copy our application in to the
production server, create a virtual directory with IIS and map the
virtual directory to the application folder. And start executing the
application by requests to the page.
 Copy Website – this is the new technique found in asp.net 2.0,
in visual studio 2005 gives an option to copy the website to the
production server directly. Here the original source code is
actually copied to the production server. So it’s at high risk from
developer point of view as source code is directly exposed in the
production server. The other disadvantage is less error checking
and low startup. Both this reasons is due to no compilation
before deploying in to production server. As it is not compiled up
to now, so errors were not checked previously. And as it is not
compiled at runtime when a page is requested entire application
should be compiled so slow in initial startup. The only advantage
of copy website over xcopy is through this technique we can
deploy file system, ftp sites or any remote websites.
 PreCompilation – the earlier two suffer from no compilation
before deployment in to production server. And hence there is no
proper security to code, slow startup and less error checking.
These issues were addressed in precompilation. Here the
application is precompiled and the precompiled application is
moved to production security. As such no source code is copied
to production server this includes markup, aspx and ascx pages.
Hence the users cannot modify anything even markup after
deployment. Thus the application or website is highly secured.
Error free code in production server; because as the application
is compiled and so errors will be rectified and once cleared only
the precompiled would be created. So no errors are assured in
precompiled website. High speed initial startup because as the
application is compiled before deployment so need of compiling
the pages at runtime. Thus high speed initial startup is assured
in precompilation.
 Precompilation comes in two flavors: in-place precompilation and
precompilation for deployment. With in-place precompilation the site
is compiled where it already resides (for example on the production
server), so pages are compiled before any user requests them, but the
source files remain in place. With precompilation for deployment the
application is compiled before it is deployed, and only the compiled
output is copied, so no source code is found on the production
server.
 Precompilation can improve the performance of your website on
first request because there will be no lag time while asp.net
compiles the site. Precompilation also helps to find errors that
might otherwise be found when a user requests the page.
 Finally if we compile the application before we deploy, then we
can deploy assemblies instead of source code.
 The precompilation of a website can be achieved from the command
line with the aspnet_compiler tool, by requesting the precompile.axd
handler at the root URL of the website in the browser (which produces
the precompiled output), or through the Build menu's Publish Web Site
option.
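A minimal command-line sketch of the aspnet_compiler tool (the paths and virtual directory names are illustrative):

rem In-place precompilation of the IIS application at virtual path /MySite:
aspnet_compiler -v /MySite

rem Precompilation for deployment: compile the source in C:\Dev\MySite and
rem write the deployable output (without source) to C:\Deploy\MySite:
aspnet_compiler -v /MySite -p C:\Dev\MySite C:\Deploy\MySite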
 Setup – it’s always desirable to package our web application
such that it would be easy to deploy our application in the
production server. The setup project can be added to our web
application solution. Now we have to build our setup project
which will create the MSI file. Just running the MSI file in the
production server will deploy our application successfully.

Do I need IIS to run web applications?


If we are using Visual Studio, we can use the ASP.NET Development
Server built into Visual Studio to test our web pages. This server
functions as a local web server.

How do we create pages for Mobile Devices?


We need not create special pages for mobile devices. Asp.net will
detect the type of browser making the request. This information is
used by the page and individual controls to render appropriate
markup for that browser. We therefore need not use a special set of
pages or controls for mobile devices.

Can we hide the source code from page?


Server-side code is processed on the server and is not sent to the
browser, so users cannot see it. However, client script is not
protected; any client script that we add to the page or that is
injected in to the page by server processing is visible to users. If we
are concerned about protecting our source code, we can precompile
our application and deploy the compiled version.

Is it better to write the code in C# or VB ?


We can write our code in any language supported by .net
framework. Although the languages have different syntax, they all
compile to the same object code. The languages have small
differences in how they support different features. For ex: C#
provides access to unmanaged code, while VB supports implicit
event binding through handles clause. However the differences are
minor.

Why do we get errors when we try to serialize a Hashtable with the
XML serializer?

The XmlSerializer will not work with objects that implement
IDictionary, e.g. Hashtable. The SOAP and binary serializers do not
have this restriction.

Do I have to use one programming language for all my web pages?

No, each page can be written in a different programming language if
we want, even within the same application. If we are creating source
code files and putting them in the \App_Code folder to be compiled at
runtime, all the code in it must be in the same language. However, we
can create subfolders in the \App_Code folder and use those
subfolders to store components written in different programming
languages.

Is the code in a single file and code-behind files identical?


Almost; a code-behind file contains an explicit class declaration,
which is not required for a single-file page.

Why is there no DataGrid control in the toolbox?


The DataGrid control is superseded by the GridView control; the
GridView does everything the DataGrid does and more. The new features
include automatic data binding, auto-generation of buttons for
selecting, editing and deleting, automatic sorting and automatic
navigation. There is full backward compatibility for the DataGrid
control; pages that use the DataGrid will continue to work as they
did in 1.0.

What’s the difference between login controls and form


authentication?
Login controls are easy way to implement form authentication
without having to write any code. For ex: the login control performs
same functions, which form authentication class will do like prompt
for credentials, validation of them etc but with all functionality
wrapped up in a control that we can just drag from the toolbox. We
can still use form authentication class and write the code.

Compilation Models in Asp.net 2.0

 Compiled ASP.NET web applications are much faster than the
interpreted ASP applications and provide many more advantages, one of
them being dynamic compilation.
 Dynamic compilation lets ASP.NET automatically detect changes,
compile the changed files and store the compiled output for future
use. It also ensures that applications are up to date.
 ASP.NET 1.x supports the code-behind model, which means every .aspx
page has a code-behind page with the user-defined logic. There is a
close relationship between the .aspx page and its code-behind page:
every code-behind class inherits from System.Web.UI.Page, and every
.aspx page in turn inherits from its code-behind class.
 The Visual Studio 2003 IDE supports only the code-behind model; the
designer finds the code-behind page associated with an .aspx page
through the @Page directive. The CodeBehind attribute in the
directive says which class is the code-behind for that .aspx page.
Visual Studio pre-compiles all the code-behind classes into a project
DLL and puts it in the bin directory. The Inherits attribute in the
@Page directive names the class from the project DLL that the .aspx
page should inherit from.
 Instead of compiling all the code-behind pages into a single
project DLL, we can also have each page compiled separately; in that
case, instead of the CodeBehind attribute, the @Page directive should
have the Src attribute. When the Src attribute is present in the
@Page directive, each page's code file is compiled into a separate
assembly. The link between the .aspx page and its corresponding code
file is maintained by the Src attribute, while the Inherits attribute
directly specifies the name of the code-behind class.
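A sketch of the two @Page directive styles (the file and class names are illustrative):

Code-behind compiled into the project DLL (CodeBehind + Inherits):
<%@ Page Language="C#" CodeBehind="Default.aspx.cs" Inherits="MyApp.Default" %>

Code file compiled separately at runtime (Src + Inherits):
<%@ Page Language="C#" Src="Default.aspx.cs" Inherits="MyApp.Default" %>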
Preprocessor Directives
 #define and #undef. #define lets us create a symbol (an
identifier). The #define and #undef directives must appear only at
the top of the file, before any code; they cannot be written
elsewhere.
 The use of these preprocessor symbols is that they can be added or
removed at compile time with the (/d) compiler option. This cannot be
done with normal variables.
For ex:

#define vijay
using System;

class aaa
{
    public static void Main()
    {
#if vijay
        Console.WriteLine("Defined");
#else
        Console.WriteLine("UnDefined");
#endif
    }
}
 Output will be “Defined”
using System;

class aaa
{
    public static void Main()
    {
#if vijay
        Console.WriteLine("Defined");
#else
        Console.WriteLine("UnDefined");
#endif
    }
}
 Output will now be "UnDefined"
 But if at compile time we build this second program as
csc /d:vijay aaa.cs, the output will be "Defined", because the /d
option has defined the symbol vijay. Thus the /d compiler option lets
us create identifiers at the time of compiling the program, which
cannot be done with a normal variable.
 Thus the advantage of preprocessor variables is that they can be
added or removed at compilation time which is not possible with
other normal variables.
Structures

 Structures are value types in .NET; they are derived from the
System.ValueType class.
 Like classes, structures can have constructors, provided they take
parameters. A parameterless constructor cannot be explicitly created
in a structure.
 This is because the default constructor of a structure is reserved
and cannot be redefined.
 When a value type contains a reference type within it, a shallow
copy takes place, meaning the assignment of such value types results
in a copy of the references.
 For example, if a struct contains a field of a class type, then the
assignment of two struct variables results in both referencing the
same memory location of the class object (see the sketch below).
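A minimal sketch of this shallow copy, using hypothetical Tag and Item types:

using System;

class Tag                 // reference type
{
    public string Name;
}

struct Item               // value type containing a reference-type field
{
    public int Id;
    public Tag Tag;
}

class StructDemo
{
    static void Main()
    {
        Item a;
        a.Id = 1;
        a.Tag = new Tag();
        a.Tag.Name = "original";

        Item b = a;               // copies Id, but only copies the Tag reference

        b.Id = 2;                 // independent: a.Id stays 1
        b.Tag.Name = "changed";   // shared: a.Tag points to the same object

        Console.WriteLine(a.Id);        // 1
        Console.WriteLine(a.Tag.Name);  // "changed"
    }
}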

OOPs Rule:

 When a reference type is sent by value, the called function can
change only the state of the object; it is not possible to reassign
what the reference is pointing to.
 The golden rule is that when a reference type is passed by
reference, the called function can change the state of the object and
can also change the location the reference points to (see the sketch
below).
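A minimal sketch of this rule, assuming a hypothetical Person class:

using System;

class Person
{
    public string Name;
}

class PassingDemo
{
    static void ByValue(Person p)
    {
        p.Name = "changed";      // changes the caller's object state
        p = new Person();        // reassignment is local only
        p.Name = "new object";
    }

    static void ByReference(ref Person p)
    {
        p = new Person();        // reassignment is visible to the caller
        p.Name = "replaced";
    }

    static void Main()
    {
        Person person = new Person();
        person.Name = "original";

        ByValue(person);
        Console.WriteLine(person.Name);   // "changed"

        ByReference(ref person);
        Console.WriteLine(person.Name);   // "replaced"
    }
}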

SQL Bulk Copy:

 The SqlBulkCopy class is used to copy data from a source to a
destination.
 The source could be a SQL database, XML, Excel or whatever it is.
 We usually want to transfer or migrate the data between two tables
in the same SQL Server, between two different SQL Servers, or even
between two different types of database servers.
 The SqlBulkCopy class is very useful in this scenario because
instead of migrating or transferring row by row it transfers a batch
of rows at a time, based on the size specified in the BatchSize
property of the SqlBulkCopy class.
 The SqlBulkCopy class has an instance method WriteToServer() that
can read from a DataRow[], a DataReader or a DataTable. Of these,
using a data reader is a good idea because a data reader is
forward-only and read-only, and moreover it does not buffer anything
in memory, so it is fast.
 Sometimes we need to transfer data between tables with different
schemas, e.g. table 1 has taskid, taskname and taskstatus while table
2 has tno and tname; we can still use bulk copy in this case by using
the ColumnMappings of the SqlBulkCopy object.
 For ex: SqlBulkCopy obj = new SqlBulkCopy(destinationConnection);
 obj.ColumnMappings.Add("taskid", "tno");
 obj.ColumnMappings.Add("taskname", "tname");
 Thus SQL BulkCopy is a new concept in ADO.net 2.0 to bulk
transfer data from source to destination.
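A fuller sketch, streaming rows through a SqlDataReader (the connection strings, table and column names are illustrative):

using System.Data.SqlClient;

class BulkCopyDemo
{
    static void Main()
    {
        using (SqlConnection source = new SqlConnection(
                   "Data Source=.;Initial Catalog=SourceDb;Integrated Security=true"))
        using (SqlConnection destination = new SqlConnection(
                   "Data Source=.;Initial Catalog=DestDb;Integrated Security=true"))
        {
            source.Open();
            destination.Open();

            SqlCommand command = new SqlCommand("SELECT taskid, taskname FROM Tasks", source);
            using (SqlDataReader reader = command.ExecuteReader())
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destination))
            {
                bulkCopy.DestinationTableName = "TaskArchive";
                bulkCopy.BatchSize = 1000;                      // rows sent per round trip
                bulkCopy.ColumnMappings.Add("taskid", "tno");   // source -> destination
                bulkCopy.ColumnMappings.Add("taskname", "tname");

                bulkCopy.WriteToServer(reader);                 // streams the rows across
            }
        }
    }
}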

GetHashCode():

 GetHashCode returns an integer value that identifies an object
based on its internal state data.
 Thus two objects of a class holding the same data should have the
same hash code (see the sketch below).
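A minimal sketch of overriding Equals and GetHashCode consistently, using a hypothetical Point class:

using System;

class Point
{
    public int X;
    public int Y;

    public override bool Equals(object obj)
    {
        Point other = obj as Point;
        return other != null && other.X == X && other.Y == Y;
    }

    public override int GetHashCode()
    {
        // Derived only from the state that Equals compares.
        return X.GetHashCode() ^ (Y.GetHashCode() * 31);
    }
}

class HashDemo
{
    static void Main()
    {
        Point a = new Point();
        a.X = 1; a.Y = 2;
        Point b = new Point();
        b.X = 1; b.Y = 2;

        Console.WriteLine(a.Equals(b));                        // True
        Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // True
    }
}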

Pillars for OOPs

 The three pillars of OOP are a) Encapsulation, b) Inheritance and
c) Polymorphism.
 Encapsulation – basically the Encapsulation explains the
language ability to hide unnecessary implementation details
from the object user.
 Inheritance – it explains the language ability to allow to build new
class definitions from existing class definitions
 Derivation from an existing class is referred to as an "is-a"
relationship.
 If class bbb is derived from class aaa, then we can say bbb is an
aaa.
 There is another form of code reuse in the world of OOP the
containment/delegation model also known as “has a”
relationship.

Providers

 A provider is a software module that provides a uniform interface
between a service and the data store.
 The services are the components that store their state information
in the database, and this storage happens through a provider.
 If today we use SQL Server as the database and tomorrow we happen
to use Oracle, no code other than the provider module needs to
change, and even this change of provider happens declaratively
through the configuration element of the web.config file.
 Thus storage can be just changed by simply changing the
element in web.config file

 Provider model: the membership provider has login controls and
other UI controls on top; below those controls sits the Membership
API, and the controls talk to this API.
 The membership service stores its data in the relevant data store,
but instead of talking to the data store directly it stores the data
through the provider. Thus the controls and the services themselves
are abstracted from the underlying data store; the only mode of
communication is through the provider.
 Thus we can plug and place any provider to store data in to any
relevant datasource.

 All providers derive from a common base class called ProviderBase.
This class contains Name and Description properties and an Initialize
method, which is called by ASP.NET when the provider is loaded.
 Developers typically derive from ProviderBase when they want to
create a brand new kind of provider. If they only want to customize
the membership provider, for example to store data in a different
data store or format, they derive from MembershipProvider instead of
ProviderBase.
 Providers are registered in the <providers> section under the
service they offer, for example under <membership>:
<membership>
<providers>
<add …../>
</providers>
</membership>
 The <add> element inside <providers> registers the provider and
makes it available for use.
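A hedged sketch of what such a registration might look like with the built-in SqlMembershipProvider (the provider name and connection string name are illustrative):

<membership defaultProvider="MySqlProvider">
  <providers>
    <add name="MySqlProvider"
         type="System.Web.Security.SqlMembershipProvider"
         connectionStringName="MembershipDb" />
  </providers>
</membership>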

Threading

 The operating system uses processes to isolate the applications it
is executing. Threads are the basic units to which the operating
system allocates processor time; more than one thread can run inside
a process.
 .NET further divides each process into lighter-weight managed
sub-processes called appdomains. One or more managed threads can run
in one or more appdomains of the same managed process.
 Though each appdomain starts with one thread, the code inside the
appdomain can create more appdomains or threads. Thus a single thread
can span multiple appdomains in the same process.
 An operating system supports multithreading by time-slicing the
processor time. When the currently executing thread's time slice
elapses it is suspended, the thread context of the preempted thread
is saved, and the saved thread context of the next thread in the
thread queue is reloaded.
 Multiple threads are mainly used when the product should be highly
responsive to the end user.
 Multithreading has both advantages and disadvantages. The
advantages are programs that are highly responsive to the end user
and multiple tasks being completed within roughly the same time.
 The disadvantage is that every thread has to have its context saved
and reloaded on each switch, which occupies system memory; with too
many threads there is heavy usage of system resources.
 With too many threads in one process, the threads of other
processes hardly get their time slice of the processor, and within
the same process the other threads hardly get their chance either.
 In multithreaded programming we can create our own threads, or we
can use threads from the system thread pool rather than creating
them; but we should use pool threads only for short-duration work.
 The advantages of creating our own thread over a pool thread are:
we can set its priority, we can run it for a long time, and we can
place it in either a single-threaded or multithreaded apartment,
whereas a pool thread will only be in the multithreaded apartment. We
also have control over a thread we create ourselves: we can suspend
it, wait on it or abort it (see the sketch below).
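A minimal sketch contrasting a dedicated thread (priority, apartment state, joinable) with a thread-pool work item for short tasks; the method names are illustrative:

using System;
using System.Threading;

class ThreadingDemo
{
    static void LongRunningWork()
    {
        Console.WriteLine("Dedicated thread: " + Thread.CurrentThread.ManagedThreadId);
        Thread.Sleep(2000);   // simulate long-running work
    }

    static void ShortWork(object state)
    {
        Console.WriteLine("Pool thread: " + Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        // Our own thread: we can set priority and apartment state, and join it.
        Thread worker = new Thread(LongRunningWork);
        worker.Priority = ThreadPriority.BelowNormal;
        worker.SetApartmentState(ApartmentState.STA);
        worker.Start();

        // Pool thread: suitable only for short-duration work.
        ThreadPool.QueueUserWorkItem(ShortWork);

        worker.Join();        // wait for the dedicated thread to finish
    }
}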

Debugging Webservices

 If we are developing the web service ourselves, we can debug it by
using the test page of the .asmx file.
 But if we are using an external service, the debugging options are
limited, since it is unlikely we can get debugging information from a
remote service; not all problems can be debugged.
 If we are writing both the service and the application the
debugging options are wider. We can debug and test the service
itself, as well as the interface between application and service.
 If we want to debug a web service on a remote server, that server
should have the Remote Debugging components installed.
 To start debugging a web service on a remote server, first set a
breakpoint in the web service code.
 Then start the application in debug mode, click Processes on the
Debug menu, select the machine running the web service from the Name
drop-down list, select aspnet_wp.exe from the Available Processes
list box and click Attach.
 Then, on the front end, once we click the button that calls a web
method, we will step into the debugging of the web service.

Visual Studio Team Foundation System

 Team Foundation Server (TFS) integrates source control, work-item
tracking, reporting, project management and build automation, and
thus enables the development team to work together more effectively.
 The source code is checked into the source control system, and
builds are made automatically from it by the build server; the build
server drops the build at a drop-point location.
 The testing team picks the build up from this drop point and
performs manual/automated testing. Test results are stored by TFS (in
its test results database) to provide quality information about the
build.
 The test team can also create work items and bugs on which
development team needs to work. These work items allow the
test team to track the work of development team
 Development environment in TFS includes: a) Team Foundation
Server, b) Build Server, c) A server to store builds from the build
server, d) Developer work stations.
 Test Environment in TFS includes: test work stations with visual
studio team edition for software testers installed.
 TFS plays a vital role in the collaboration of development and test
teams: the development team interacts with TFS for work items, bugs,
etc., while the test team interacts with TFS to run tests, upload
test results, open work items or bugs, and track them.

Agile Processes/Methodologies

 Extreme Programming is one of the agile processes for faster,
lighter and iterative software development.
 Instead of writing a long SRS, the process begins with the
collection of user requirements as short user stories (use cases).
 After getting these stories, the development team estimates the
time to complete them; this leads to plans for small releases and
iterative estimation and development.
 Continuous integration and TDD (test driven development) where
unit test cases are written first before coding.

AJAX
 AJAX – Asynchronous javascript and XML
 Traditionally web applications follow a request/response life
cycle: any interaction with the page causes a postback to the server,
where a series of events occur, and the response comes back to the
browser as HTML.
 But there is a perceptible lag between this request and the
response coming back.
 AJAX is a technique that uses JavaScript and the XMLHttpRequest
object to make lightweight HTTP requests to the web server from
client-side script.
 AJAX-enabled web pages provide a faster user experience.
 The first control that needs to be on the page to have AJAX
functionality is the ScriptManager.
 The EnablePartialRendering attribute of the ScriptManager is set to
true, which allows partial-page rendering.
 Posting partial data back to the server in AJAX is handled through
the UpdatePanel control. The UpdatePanel contains a ContentTemplate
in which the server controls are placed.
 The UpdateMode of the UpdatePanel is set to Conditional, which
means its content is updated only when an update is triggered for
that panel (a markup sketch follows below).
 With Conditional mode the content of the UpdatePanel is not
refreshed even if the rest of the page goes to the server.
 If the mode is set to Always instead of Conditional, the content of
the UpdatePanel is refreshed every time the page goes to the server.
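A minimal markup sketch of a ScriptManager with a conditional UpdatePanel (the control IDs and event handler are illustrative):

<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" />

<asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <asp:Label ID="lblTime" runat="server" />
        <asp:Button ID="btnRefresh" runat="server" Text="Refresh"
            OnClick="btnRefresh_Click" />
    </ContentTemplate>
</asp:UpdatePanel>

Clicking btnRefresh updates only the panel's content, without the whole page flickering.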
 The general misconception everybody has about AJAX is that in
AJAX-enabled web pages only a part of the page goes to the web server
when an update is triggered; but this is not true. Irrespective of
whether it is a normal postback or an AJAX postback, the whole page
goes to the server.
 But there is a difference in how the request is handled. In a
normal page postback the whole page hits the server and the rendered
HTML of the whole page is given back to the browser, which causes a
full page refresh (as the whole page's HTML is received by the
browser).
 In the case of an AJAX postback the request first hits the AJAX
script engine on the client side, which consists of the client-side
AJAX libraries. The central nerve of AJAX is the ScriptManager
control; its client-side library registers for the page's submit
event.
 In this event it recognizes whether the submit is a normal page
postback or an AJAX postback (by keeping track of the update events
of all the UpdatePanel controls on the page).
 If the request is an AJAX post back then it appends a key word
“AJAX” to the request for the easy identification of AJAX postback
on the server.
 Thus the whole page hits the server even in the case of an AJAX
call; in fact, some extra information is added to the normal page
content to identify it as an AJAX call.
 The only remaining difference is in the way the page renders. The
ScriptManager control on the page overrides the PreRender event; in
this PreRender event it calls a method of the page request manager
class that renders only the part of the content that caused the
postback, instead of carrying the whole page's rendered output.
 Thus only that control is updated, without the whole page
flickering.
 Thus the functional difference between page post back and ajax
post back is how the page renders.

New features ASP.Net 2.0

 In ASP.NET 2.0 we can set the default button on a form, which is
not possible in earlier versions of ASP.NET.
 We can set it in the form tag: <form id="form1"
defaultbutton="btn1" runat="server">
 Setting the focus on a particular control needed JavaScript code in
earlier versions of ASP.NET, but this can easily be achieved by
calling the Focus() method on that particular control in ASP.NET 2.0.
 Multiple validation groups are another new feature in ASP.NET
2.0. In version 1.1 the entire page is invalidated if one control
fails validation. For example, if the page contains a textbox with a
RequiredFieldValidator and a button, clicking the button while the
textbox is empty will not post the page back; in other words, the
entire page is invalidated when one control fails validation.
 In ASP.NET 2.0 this can be avoided by using multiple validation
groups: if the controls within a button's validation group pass, the
page can be posted to the server even if controls in other validation
groups fail.
 For ex: <asp:button id=”btnsearch” ValidationGroup =
“searchgroup”/>
 <asp:button id=”btnget” ValidationGroup=”booksgroup”/>
 So when we click btnsearch, the page posts back only if the
controls in its validation group pass; it does not depend on the
controls in other validation groups, even if they are empty and have
required field validators attached to them (a fuller sketch follows
this list).
 Thus the validation of a page can be split into multiple groups
through validation groups.
 If we don't specify any group, all the controls belong to one
default group, for backward compatibility.
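A slightly fuller markup sketch of two independent validation groups (the control IDs are illustrative):

<asp:TextBox ID="txtSearch" runat="server" />
<asp:RequiredFieldValidator ID="rfvSearch" runat="server"
    ControlToValidate="txtSearch" ValidationGroup="searchgroup"
    ErrorMessage="Enter a search term" />
<asp:Button ID="btnsearch" runat="server" Text="Search"
    ValidationGroup="searchgroup" />

<asp:TextBox ID="txtBook" runat="server" />
<asp:RequiredFieldValidator ID="rfvBook" runat="server"
    ControlToValidate="txtBook" ValidationGroup="booksgroup"
    ErrorMessage="Enter a book title" />
<asp:Button ID="btnget" runat="server" Text="Get Book"
    ValidationGroup="booksgroup" />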
 Cross-page posting is posting back to a different page. In earlier
versions a page could never post to a different page; the only way
was to navigate to another page through Server.Transfer or
Response.Redirect. In ASP.NET 2.0 the page can be posted to a
different page.
 Controls such as Button inherently support posting to another page
via the PostBackUrl property.
 In cross-page posting the first page and second page should not be
coupled: the second page should not access the controls of the first
page through code like PreviousPage.TextBox1.Text. This is wrong
because if the first page later changes TextBox1 to something else,
the code in our second page also has to change.
 Hence the pages should not be coupled; the second page should
always access the content of the first page through properties:
expose the control's value as a property on the source page and read
it from there. This sort of decoupling is good programming practice
and it facilitates strong typing (see the sketch below).
 But if multiple pages post to the same page then we have to
depend upon dynamic binding.
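A minimal sketch of the decoupled, strongly typed approach (the page, property and control names are illustrative; lblResult is a Label assumed to exist on the target page):

Source page code-behind (expose the value, not the control):

public partial class SearchPage : System.Web.UI.Page
{
    public string SearchText
    {
        get { return txtSearch.Text; }
    }
}

Target page markup declares the source page type:

<%@ PreviousPageType VirtualPath="~/SearchPage.aspx" %>

Target page code-behind reads the public property:

protected void Page_Load(object sender, EventArgs e)
{
    if (PreviousPage != null && PreviousPage.IsCrossPagePostBack)
    {
        lblResult.Text = "Searching for: " + PreviousPage.SearchText;
    }
}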

Project Reference vs File Reference

 When we need to use a type present in another assembly, we need to
make a reference to that other assembly.
 This creates an assembly reference in the client assembly's
manifest, recording the name and version of the dependent assembly.
 visual studio .net supports two types of assembly references:
project references and file references
 the projects page with in the visual studio .net Add Reference
dialog box lists all the other projects in the current solution.
 This allows to create a project reference to another project
present in the same solution. Project references are
recommended way to set references as long as we are
referencing to the projects with in the solution.
 The project references work in all workstations where the
solution and project are loaded. This is because a GUID is placed
in the project file, which uniquely identifies the referenced
project in the context of current solution.
 Project references enable the visual studio .net build system to
track the project dependencies and determine the correct build
orders. If a solution has 3 projects say project A,B,C. and project
A depends upon B and B depends upon C. so the build order is C
first followed by B and then A.
 If we use file references we need to explicitly specify the build
order of the projects; when we use project references the build order
is set automatically.
 They avoid the potential of referenced assemblies to be missing
on a particular computer.
 They automatically track the project configuration changes. If we
compile our solution in debug mode than all the debug
assemblies generated by this referenced assemblies are
referred; if we compile the solution in release mode than all the
release mode assemblies generated by this referenced
assemblies are referred. This means we can automatically switch
from debug to release builds across the projects with out having
to reset references.
 They enable visual studio .net to detect and prevent circular
references.
 If we can’t use a project reference because we need to refer an
assembly present out side of our current solution then use file
reference.
 If we set a file reference the path to the assembly is stored in the
project file. A relative path is stored to local assemblies, while
the full network path is stored for server-based assemblies
 Automated Dependency Tracking – each time we build our local
project the build system compares the date and time of the
referenced assembly file with the local working copy. If the
referenced assembly is more recent, the new version is copied in
to local. One of the benefits is the project reference established
by a developer does not lock the assembly or dll and doesn’t
interfere in any way with the build process.

WCF vs Webservices:

 Web services depend upon the XML serializer.
 WCF uses the DataContract serializer.
 The key issues with the XML serializer are: only public fields or
properties can be serialized; only IEnumerable types (classes or
fields) can be serialized; Hashtable/dictionary elements cannot be
serialized.
 The DataContract serializer addresses all these issues: it can
serialize public as well as private fields and properties, and it can
also serialize dictionary types such as Hashtable.
 The DataContract serializer also performs better.
 Xmlserializer doesn’t indicate which field is serialized into xml,
where as datacontract serializer clearly states which field
serialized to which field in xml
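A minimal sketch of a WCF data contract, using a hypothetical Customer type: [DataMember] marks exactly what gets serialized, including private fields.

using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember]
    public string Name;

    [DataMember]
    private int creditLimit = 1000;   // private, but still serialized

    public string Notes;              // no [DataMember]: not serialized
}

class DataContractDemo
{
    static void Main()
    {
        DataContractSerializer serializer = new DataContractSerializer(typeof(Customer));
        using (MemoryStream stream = new MemoryStream())
        {
            Customer c = new Customer();
            c.Name = "Vijay";
            serializer.WriteObject(stream, c);
            Console.WriteLine(System.Text.Encoding.UTF8.GetString(stream.ToArray()));
        }
    }
}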

WPF Overview

 With WPF we can develop a wide range of applications, from rich-UI
standalone applications to browser-hosted applications.
 The core of WPF is resolution-independent, vector-based rendering
built to take advantage of modern graphics hardware. The core
functionality of WPF is exposed through XAML (Extensible Application
Markup Language).
