
THE TEAM

Editor In Chief: Suprotim Agarwal (suprotimagarwal@dotnetcurry.com)
Art Director: Minal Agarwal

Contributing Authors: Yacoub Massad, Ravi Kiran, Keerti Kotaru, Gerald Versluis, Daniel Jimenez Garcia, Damir Arh
Technical Reviewers: Damir Arh, Daniel Jimenez Garcia, Keerti Kotaru, Mayur Tendulkar, Ravi Kiran, Yacoub Massad

Next Edition: May 2019
Copyright @A2Z Knowledge Visuals Pvt. Ltd. Reproductions in whole or part prohibited except by written permission. Email requests to "suprotimagarwal@dotnetcurry.com". The information in this magazine has been reviewed for accuracy at the time of its publication, however the information is distributed without any warranty expressed or implied.

Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies. 'DNC Magazine' is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by Microsoft Corporation. Microsoft is a registered trademark of Microsoft Corporation in the United States and/or other countries.

LETTER FROM THE EDITOR

@suprotimagarwal
Editor in Chief

Hello Friends!

It's been 17 years since Microsoft released ASP.NET with version 1.0 of the .NET Framework.
Since then, ASP.NET has come a long way! How did ASP.NET reach this point? What were
some of the defining moments and what influenced some of the design decisions? In this
edition, Daniel charts the rise of ASP.NET, a popular server-side web application framework.

For our C# and .NET Core fans, Yacoub and Damir have a bunch of goodies covering Global
State in C#, .NET Core Global Tools, as well as some new C# 8 features in VS 2019.

For our Angular fans, Keerti and Ravi show how to create Template Driven Forms as well as
control Change Detection in Angular.

Last but not least, Gerald delivers a useful and elaborate tutorial on working with Barcodes in Xamarin.Forms.

...and a heartfelt thank you to our patrons for purchasing “The Absolutely Awesome Book on C# and .NET”. Your feedback and support have been overwhelming and encouraging!

How was this edition? Reach out to me directly with your comments and feedback at suprotimagarwal@dotnetcurry.com or via my twitter handle @suprotimagarwal.
CONTENTS

06  USING AND DEVELOPING .NET CORE GLOBAL TOOLS
18  TEMPLATE DRIVEN FORMS IN ANGULAR
30  WORKING WITH BARCODES IN XAMARIN.FORMS
42  THE HISTORY OF ASP.NET
70  GLOBAL STATE IN C# APPLICATIONS
80  CONTROLLING CHANGE DETECTION IN ANGULAR
90  NEW C# 8 FEATURES IN VISUAL STUDIO 2019 PREVIEW

CHECK OUT OUR LATEST BOOK ON C# AND .NET (COVERING C# 6, C# 7, .NET CORE). AVAILABLE AT A 25% DISCOUNT.
.NET CORE

Damir Arh

USING AND DEVELOPING .NET CORE GLOBAL TOOLS

The article provides an overview of .NET Core global tools: how to use them, how to create them and what to expect of them in future versions of .NET Core.

WHAT ARE GLOBAL TOOLS?


.NET Core global tools were introduced as part of .NET Core 2.1.
In essence, they are console applications, i.e. they are meant to be used from
the command line, or from scripts. Their main distinguishing feature is that they
are distributed as NuGet packages and that the .NET Core SDK includes all the
necessary tooling for installing, running, updating and uninstalling them.

This concept might sound familiar to readers who are also JavaScript developers.
The idea is very similar to globally installed npm (Node.js package manager)
packages that are designed to be used as command line tools, e.g. Apache
Cordova, Angular CLI, Vue CLI, etc.

Since global tools are distributed as NuGet packages, they are published in
the official NuGet Gallery just like any other NuGet package containing a class
library. Unfortunately, there’s currently no way to restrict the gallery search to
only global tools, which makes it difficult to find them and to recognize them in
the search results.



At the time of writing this article, the best available method for discovering global tools was a list
maintained by Nate McMaster. It is implemented as a GitHub repository and new entries can be easily
contributed to it as pull requests or issues. The maintainer seems to be doing a good job at processing new
submissions quickly.

Global tools primarily target developers.

That’s the reason why most of the tools in the list provide functionalities which are useful in some part of
the development process. To give you an idea of what to expect or what to look for, here’s a short selection
of the tools from the list:

• dotnet-ignore can download a .gitignore file of your choice from the repository of templates for
different types of projects. It is most useful when you are creating a new project to get you started with
an initial .gitignore file before you make your first commit to a Git repository. If your IDE or text editor
doesn’t do that for you already, it can make your life much easier.

• dotnet-serve is a simple web server for static files in your current folder. It allows you to open local files
in the browser via the HTTP protocol. This way you will avoid restrictions that the browsers impose on
JavaScript files loaded from the local file system (e.g. local JavaScript files aren’t allowed to load other
resources dynamically).

• dotnet-cleanup cleans up a project folder, i.e. it deletes files that were downloaded by a package manager or generated during build, e.g. files in bin and obj folders.

• dotnet-warp is a wrapper tool around Warp, simplifying its use for .NET Core projects. It can be used to
generate stand-alone executable files for applications developed in .NET Core.

• Amazon.ECS.Tools, Amazon.ElasticBeanstalk.Tools and Amazon.Lambda.Tools are official global tools from AWS (Amazon Web Services) to make it easier to deploy applications to Amazon Elastic Container Service, AWS Elastic Beanstalk and AWS Lambda, respectively.

This is by no means an exhaustive list. I encourage you to look at the full list on GitHub and see if it
contains any tools that could improve your development process. It’s not a long read, so you should be ok.

When deciding to install a global tool, keep in mind that you will be running it outside a sandboxed environment, with the same permissions as your own user account.

Therefore, you always have to trust the tool you are about to install. Seeing the tool in the above-mentioned list lends it some legitimacy. Nevertheless, it is still prudent to check the package download numbers in the NuGet Gallery, and any other packages published by the same author, before installing a global tool.

Most, if not all, global tools are open source, so you can also check their source code to be extra safe.

MANAGING AND USING GLOBAL TOOLS


The only prerequisite for using .NET Core global tools is having .NET Core SDK 2.1 or newer installed. If
a global tool targets a newer version of .NET Core, you need to have a compatible version of the runtime
installed.
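If you're not sure which SDK is installed on your machine, a quick check from the command line looks like this (assuming the dotnet CLI is already on your PATH):

dotnet --version
dotnet --list-sdks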

The management of global tools on the local machine is done using the dotnet tool command. To install
a global tool, its NuGet package name must be specified, e.g.:

dotnet tool install -g dotnet-ignore

The above command always installs the latest stable version of the package. If you want to install a specific
version (an older one or a prerelease one), you must explicitly specify the version:

dotnet tool install -g dotnet-ignore --version 1.0.1

The install process puts the tool in a folder inside your profile (the .dotnet/tools subfolder to be exact).
The folder is automatically added to the PATH user environment variable. This makes it specific to your user
account on the machine, but available from any folder without having to specify the path to it.

The name of the command does not necessarily match the name of the NuGet package (although this is
often the case). The name of the command to invoke is printed out when you install it, e.g.:

You can invoke the tool using the following command: dotnet-ignore
Tool 'dotnet-ignore' (version '1.0.3') was successfully installed.

You can always get a list of all the global tools installed and their corresponding commands by invoking
the dotnet tool list command:

> dotnet tool list -g


Package Id        Version      Commands
---------------------------------------------
dotnet-ignore     1.0.3        dotnet-ignore

You can control the behavior of commands with command line arguments. Each command has its own set
of supported arguments. You can usually list them if you invoke the command with the -h option, e.g.:

dotnet-ignore -h

The output should be enough for you to learn how to use a command. Some global tools have more
detailed documentation published on their NuGet page or on their project site (the NuGet package always
includes a link to it).

Figure 1: NuGet page of dotnet-serve global tool



In the case of dotnet-ignore, two subcommands are available:

• List prints out a list of all available .gitignore files:

dotnet-ignore list

• Get downloads the specified .gitignore file:

dotnet-ignore get -n VisualStudio

You can use the dotnet tool command to perform several other actions:

• List all global tools installed for your user account (the list includes package name, version and
command to invoke):

dotnet tool list -g

• Update a global tool to its latest stable version, e.g.:

dotnet tool update -g dotnet-ignore

• Uninstall a global tool from your user account, e.g.:

dotnet tool uninstall -g dotnet-ignore

To install a different version (not the latest stable one) of an already installed global tool, you must
uninstall it first, and then install the selected version, e.g.:
dotnet tool uninstall -g dotnet-ignore
dotnet tool install -g dotnet-ignore --version 1.0.1

DEVELOPING YOUR OWN GLOBAL TOOL


As already mentioned, global tools are .NET Core console applications. Therefore, to create one, you need
to select the Console App (.NET Core) template when creating a new project in Visual Studio. You can use the
dotnet new console command to create a suitable project from the command line.

The project generated from the template is already a very simple global tool which prints out “Hello
World!” when you invoke it:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World!");
    }
}

Of course, you will want to affect the behavior of your global tool with command line arguments. If you
need to support multiple subcommands, each with several options, argument parsing can quickly become
a complex task. To make your life easier, you should use a class library to handle that and focus on the core
functionality of the global tool instead.

A good choice would be the CommandLineUtils class library, which is used by many other global tools as well. To use it, you need to install the McMaster.Extensions.CommandLineUtils NuGet package in your project.
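If you prefer the command line over the NuGet package manager UI, the package can be added with the dotnet CLI:

dotnet add package McMaster.Extensions.CommandLineUtils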

You can then implement each subcommand as a separate class with a Command attribute. In the following
example, the Print command will print out the contents of a file to the console:

[Command(Description = "Prints out the contents of a file.")]
class Print
{
    // ...
}

Options are defined as properties with an Option attribute. Data annotation attributes can be used to
implement value validation:

[Required]
[Option(Description = "Required. File path.")]
public string Path { get; }

The OnExecute method should contain the actual command. The injected instance of IConsole should be
used to output text to console:

public void OnExecute(IConsole console)
{
    try
    {
        var contents = File.ReadAllText(Path);
        console.WriteLine(contents);
    }
    catch (Exception e)
    {
        console.WriteLine(e.Message);
    }
}

A command for listing the contents of a folder can be implemented in a similar manner:

[Command(Description = "Lists files and folders.")]
class List
{
    [Option(Description = "Folder path.")]
    public string Path { get; } = ".";

    public void OnExecute(IConsole console)
    {
        try
        {
            var dir = new DirectoryInfo(Path);

            foreach (var subdir in dir.GetDirectories())
            {
                console.WriteLine($"{subdir.Name} (DIR)");
            }

            foreach (var file in dir.GetFiles())
            {
                console.WriteLine($"{file.Name}");
            }
        }
        catch (Exception e)
        {
            console.WriteLine(e.Message);
        }
    }
}

All supported subcommands must be declared with the Subcommand attribute of the Program class:

[Command(Description = "Text file printer")]
[Subcommand(typeof(List), typeof(Print))]
class Program
{
    // ...
}

Since the Program class is also a command, it requires its own OnExecute method which will be invoked
when no subcommand command line argument is specified. This sample global tool will simply notify the
user that a subcommand must be specified, and print out the help text:

public int OnExecute(CommandLineApplication app, IConsole console)
{
    console.WriteLine("You must specify a subcommand.");
    console.WriteLine();
    app.ShowHelp();
    return 1;
}

Notice that this OnExecute method returns a non-zero return value which by convention indicates an error
condition. The OnExecute methods with void return type are treated as if they always returned 0. This
matches the behavior of the Main method in a console application.

In the Main method, the input arguments must simply be relayed to the class library:

static int Main(string[] args)
{
    return CommandLineApplication.Execute<Program>(args);
}

For the global tool to return an exit code, the Main method must return the value returned by the
CommandLineApplication.Execute static method which will match the return value of the OnExecute
method invoked.

With almost no plumbing code, we have created a fully functional command line tool. If we invoke it with
no arguments, the autogenerated help text will be displayed:

> dotnet-fs
You must specify a subcommand.

Text file printer

Usage: dotnet-fs [options] [command]

Options:
-?|-h|--help Show help information

Commands:
list Lists files and folders.
print Prints out contents of a file.

Run 'dotnet-fs [command] --help' for more information about a command.

Help text is also autogenerated for each subcommand:

> dotnet-fs print -h


Prints out contents of a file.

Usage: dotnet-fs print [options]

Options:
-p|--path <PATH> Required. File path.
-?|-h|--help Show help information

Missing required parameters are correctly handled:

> dotnet-fs print


The --path field is required.
Specify --help for a list of available options and commands.

Of course, the core functionality of printing out the file contents works as well:

> dotnet-fs print -p .\bin\Debug\netcoreapp2.2\dotnet-fs.runtimeconfig.json


{
  "runtimeOptions": {
    "tfm": "netcoreapp2.2",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "2.2.0"
    }
  }
}

This example is only a small part of what the CommandLineUtils class library can do for you in a console
application. To learn more, you should read the official documentation.

PUBLISHING YOUR GLOBAL TOOL


To distribute the global tool (and also to install it locally), it must be packed into a NuGet package. If you’re
using Visual Studio, you can fill out most of the NuGet package metadata on the Package pane of the
project Properties window.
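If you're not using Visual Studio, you can set the same metadata directly in the .csproj file (shown below) and produce the package from the command line; a minimal sketch:

dotnet pack -c Release

By default, this places the generated .nupkg file in the bin\Release folder.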



Figure 2: NuGet package metadata

Unfortunately, Visual Studio does not yet have any user interface for setting the most important NuGet
configuration option. So to do that, the .csproj project file must be edited manually.

Right-click the project in the Solution Explorer window and click the Edit <project-name>.csproj menu item
to open the project file in the editor window. Add the <PackAsTool> element to the <PropertyGroup>
element which already contains all the other NuGet metadata:

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <PackAsTool>true</PackAsTool>
  <RootNamespace>dotnet_fs</RootNamespace>
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  <Authors>Damir Arh</Authors>
  <Company />
  <Description>.NET Core global tool for printing file contents to console</Description>
  <PackageProjectUrl>https://github.com/damirarh/dotnet-fs</PackageProjectUrl>
  <RepositoryUrl>https://github.com/damirarh/dotnet-fs</RepositoryUrl>
  <PackageLicenseUrl>https://github.com/damirarh/dotnet-fs/blob/master/LICENSE</PackageLicenseUrl>
  <RepositoryType></RepositoryType>
</PropertyGroup>

If you checked the Generate NuGet package on build checkbox in the Properties window, the NuGet package will be generated automatically in the bin\Debug folder the next time you build the project.

To test the global tool before publishing it to the NuGet Gallery, you can use the --add-source option of the dotnet tool install command to point it to the location where the NuGet package containing the global tool can be found:

dotnet tool install --add-source .\bin\Debug --tool-path tools dotnet-fs

I also used the --tool-path option to install the tool into the tools subfolder, instead of installing it
globally. This option allows you to specify a folder (relative or absolute) where the tool will be installed,
instead of in the default folder (i.e. .dotnet/tools inside your profile). It can be used with all the other
dotnet tool commands as well.

This destination folder is not automatically added to your PATH user environment variable. So, to test your
command, you must specify the full path to it:

.\tools\dotnet-fs.exe

Once you’re satisfied with your new global tool, you can publish it to the NuGet Gallery like any other
NuGet package so that others will be able to use it as well. You should also consider submitting a pull
request to the global tools list repository to add your tool to the list, and increase its discoverability.

UPCOMING CHANGES IN .NET CORE 3.0


In .NET Core 3.0, global tools will be expanded with additional support for locally installed tools specific to
a code repository. When such tools are required by the code in the repository (e.g. during the build process),
this approach has the following advantages over globally installed tools:

• A specific version of a tool can be installed for the repository, independently of the globally installed
version of the same tool. This can ensure predictable and reproducible behavior even when there are
breaking changes between versions of the tool.

• The tools required by the code in the repository are listed in a manifest file which is included in the
repository. When a new developer retrieves the code from the repository, she/he can easily set up the
environment by restoring the tools listed in the manifest file.

This local-tools scenario should be familiar to JavaScript developers just like the global-tools one. When
npm packages are installed locally, they can be added to the package.json manifest as development
dependencies. They are then restored along with all the other dependencies and can be used from scripts
which are also defined in the same package.json file.

Initial support for local tools is included in the currently available .NET Core 3.0 Preview 2. Some
implementation details can still change before the final release, although this doesn’t prevent you from
trying out the feature if you have .NET Core SDK 3.0 Preview 2 installed.

Just don’t rely on it yet for your projects!

Before installing any tools locally, you should first create the manifest file in the root of your repository
using the dotnet new command:

dotnet new tool-manifest

This will create an empty .config\dotnet-tools.json manifest file with the following content:



{
  "version": 1,
  "isRoot": true,
  "tools": {}
}

You can now install a tool locally by invoking the dotnet tool install command without the -g option:

dotnet tool install dotnet-serve

A different syntax must be used for invoking locally installed tools. This is explained to you when the
installation succeeds:

You can invoke the tool from this directory using the following command: dotnet tool run dotnet-serve
Tool 'dotnet-serve' (version '1.1.0') was successfully installed. Entry is added to the manifest file D:\Temp\local-tools-sample\.config\dotnet-tools.json.

Using this syntax, the command can be invoked from any folder (directly or indirectly) inside the one
containing the manifest file in the .config subfolder. This usually makes it a good idea to create the
manifest file in the root folder of the repository.

If you check the previously created manifest file, you can also see the entry for the locally installed tool as
mentioned above:

{
  "version": 1,
  "isRoot": true,
  "tools": {
    "dotnet-serve": {
      "version": "1.1.0",
      "commands": [
        "dotnet-serve"
      ]
    }
  }
}

There’s no need to check the manifest file to see which local tools are installed. The dotnet tool list
command can be used for that instead:

> dotnet tool list


Package Id      Version     Commands        Manifest
------------------------------------------------------------------------------------------
dotnet-serve    1.1.0       dotnet-serve    D:\Temp\local-tools-sample\.config\dotnet-tools.json

The command will list the same tools when another developer downloads a repository with this manifest
file. If she/he tries to immediately invoke the tool, she/he will be instructed to run the dotnet tool
restore command first:

> dotnet tool run dotnet-serve


Run "dotnet tool restore" to make the "dotnet-serve" command available.

After running the command as instructed, the dotnet serve command will become available as well:

> dotnet tool restore

Tool 'dotnet-serve' (version '1.1.0') was restored. Available commands: dotnet-serve

Restore was successful.


> dotnet tool run dotnet-serve
Starting server, serving .

Listening on:
http://localhost:8080

Press CTRL+C to exit

Despite their name, local tools aren’t installed inside the repository folder. When invoked using the dotnet
tool run command, they are run from the shared global NuGet packages folder (by default, the .nuget
folder inside the user profile). This means that multiple repositories with the same version of a local tool
installed will share the same install location. Also, the dotnet tool restore command might not
even be necessary if the developer already has the required tools because they are also used by other
repositories.

Conclusion:

.NET Core global tools are a useful addition to the .NET Core ecosystem. They provide an easy way for
distributing and deploying command line tools.

As more developers become familiar with them, the selection of tools available will also grow. As I have
shown, creating a command line tool is not a daunting task if you take advantage of available class
libraries.

.NET Core 3.0 will support even more use cases with the addition of local tools. Having the required tools specified inside a project's code repository simplifies the process of ensuring a working environment for all developers, and could encourage projects to take dependencies on specific local tools.

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his knowledge
by speaking at local user groups and conferences, blogging, and answering questions on
Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.

Thanks to Daniel Jimenez Garcia for reviewing this article.



ANGULAR

Keerti Kotaru

Template Driven Forms in Angular
The most common aspect of any web application is handling user input. Traditionally, Forms facilitate user input which creates data for a system.

Angular has two approaches to forms:

a. Reactive Forms
b. Template Driven Forms

Reactive forms are scalable, sophisticated, synchronous and immutable. For creating a complex form, Reactive is the better approach. Download the 40th edition of the DNC Magazine to learn about Reactive Forms.

However, template driven forms are simple, easy to build, asynchronous and mutable. This article focuses on creating template driven forms.

TEMPLATE DRIVEN FORMS - USE CASE
Let’s take the example of a web application for booking travel. As developers of this application, we will create a form to let the user create a new travel destination. Imagine an administrator adding a new destination to the system. The user will key in the destination city name, country, travel season etc. The data will be saved in the database of the system and will allow travelers (users) to book flights and hotels to this place.

Saving the destination to the database is not in scope for this article. However, we will accept input from the user and validate it for sanctity of data.
GETTING STARTED WITH TEMPLATE DRIVEN FORMS
Following are the steps to create template driven forms in Angular.

1. Import forms module

2. Create a component for the form.


- Create the component HTML template that will include all fields to accept input from the user.

3. Create a class representing form fields. It will have all the fields as that of the form.

4. Bind class fields with the form in the template.

5. Perform validations to maintain sanctity of data.

IMPORT FORMS MODULE


When we create an Angular project with the Angular CLI, the @angular/forms package is installed as part of the project. In a relevant Angular module, import FormsModule from @angular/forms.

Installing and importing the forms module is a prerequisite for using template driven forms. It enables the application to use ngModel and a couple of additional features covered in later sections.

Consider the following code snippet. In the sample application, we are importing FormsModule in the
primary root module, AppModule.

import { FormsModule } from '@angular/forms';

// Removed code for brevity.
@NgModule({
  declarations: [
    ...
  ],
  imports: [
    FormsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

CREATE A COMPONENT FOR THE FORM


Using Angular CLI, create a component for the form with the following command.

ng generate component create-destination

The CLI command will create the component, and add it to the module declarations. The component is now
ready to use.

Let’s create the form to add a new destination. To begin with, add the following fields to the form.

Figure 1: Create destination form

Consider this HTML template to create the form.

<form style="margin-left:20px; margin-top:20px">

  <div class="form-group">
    <label for="cityName">
      <strong>Destination</strong>
      <input class="form-control" type="text" name="cityName" id="cityName" >
    </label>
  </div>

  <div class="form-group">
    <label for="country">
      <strong>Country</strong>
      <input class="form-control" type="text" id="country" name="country" >
    </label>
  </div>

  <div>
    <label for="timezone">
      <strong> Timezone </strong>
      <input type="text" class="form-control" id="timezone" name="timezone" />
    </label>
  </div>

  <div>
    <label for="distFrmCapital">
      <strong> Distance from capital </strong>
      <input type="text" class="form-control" id="distFrmCapital" name="distFrmCapital" />
    </label>
  </div>

  <div>
    <label for="language">
      <strong> Local Language </strong>
      <input type="text" class="form-control" id="language" name="language" />
    </label>
  </div>

  <button class="btn btn-success"> Create Destination </button>
</form>

CREATE A CLASS REPRESENTING FORM FIELDS.


We will need a model object representing the form fields. It will hold the form data. We will use this in the
component’s TypeScript code.

To create a class representing the model, use the following Angular CLI command.

ng generate class destination

It will create a class with the name Destination. Add the following class fields representing the above
created form.

export default class Destination {
  constructor(
    public name: string,
    public country: string,
    public timezone: string,
    public distanceFromCapital?: number,
    public language?: string
  ) {
  }
}

Notice the parameterized constructor. When we create an instance, we will have to provide values for each field. The last two fields, distanceFromCapital and language, are optional. Hence the TypeScript syntax that postfixes the variable name with a question mark.
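For example, the optional fields can simply be left out when creating an instance (a small illustrative snippet, not part of the sample's code):

// distanceFromCapital and language are optional and can be omitted.
const paris = new Destination('Paris', 'France', 'CET');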

Next, import and instantiate the class in the create-destination.component.

import { Component, OnInit } from '@angular/core';
import Destination from '../destination';

@Component({
  selector: 'app-create-destination',
  templateUrl: './create-destination.component.html',
  styleUrls: ['./create-destination.component.css']
})
export class CreateDestinationComponent implements OnInit {

  private destinationModel: Destination;

  constructor() { }

  ngOnInit() {
    this.destinationModel = new Destination(
      "Hyderabad",
      "India",
      "IST",
      930,
    );
  }
}

In this sample, we are instantiating the model with default values in the ngOnInit function. The model object is named destinationModel.

BIND CLASS FIELDS WITH THE FORM IN THE TEMPLATE


Use two-way data binding to show and retrieve model values in the template. Use banana-in-a-box syntax
with ngModel for two-way data binding. Use destinationModel (instantiated in the component class) in
the template.

<form style="margin-left:20px; margin-top:20px">

  <div class="form-group">
    <label for="cityName">
      <strong>Destination</strong>
      <input class="form-control" type="text" name="cityName" id="cityName"
        [(ngModel)]="destinationModel.name" />
    </label>
  </div>
  <div class="form-group">
    <label for="country">
      <strong>Country</strong>
      <input class="form-control" type="text" id="country" name="country"
        [(ngModel)]="destinationModel.country" />
    </label>
  </div>
  <div>
    <label for="timezone">
      <strong> Timezone </strong>
      <input type="text" class="form-control" id="timezone" name="timezone"
        [(ngModel)]="destinationModel.timezone" />
    </label>
  </div>
  <div>
    <label for="distFrmCapital">
      <strong> Distance from capital </strong>
      <input type="text" class="form-control" id="distFrmCapital" name="distFrmCapital"
        [(ngModel)]="destinationModel.distanceFromCapital" />
    </label>
  </div>
  <div>
    <label for="language">
      <strong> Local Language </strong>
      <input type="text" class="form-control" id="language" name="language"
        [(ngModel)]="destinationModel.language" />
    </label>
  </div>
  <button class="btn btn-success"> Create Destination </button>
</form>

Notice the name attribute on all input elements. It uniquely identifies the field on an Angular form. In a form, it is mandatory to provide a name attribute with a unique value when an element is used with ngModel.



If you do not wish to register a text field with the form, use ngModelOptions.standalone. With it, we do not need to provide a name attribute.
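For example, a standalone control could look like the following sketch (an illustrative field, not part of the destination form):

<!-- No name attribute needed; this control is not registered with the form. -->
<input type="text" [(ngModel)]="someValue" [ngModelOptions]="{standalone: true}" />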

Note: NgModel is part of FormsModule in @angular/forms. Hence importing FormsModule (in the Angular module) is a prerequisite to use ngModel. See the section Import Forms Module above for details.

Now that data binding is set up on the input fields, the component class can access the data as the user types in values, and the form displays the values set in the component.

In the sample we just saw, we already initialized the model object in the component with default values. The form shows these model values. See Figure 2.

// Code initializing model object
ngOnInit() {
  this.destinationModel = new Destination(
    "Hyderabad",
    "India",
    "IST",
    930,
  );
}

Figure 2: Form with initialized values

Notice that Distance from capital and Local Language are optional parameters to the constructor (see the "?" postfix on the constructor parameters shown earlier). Local Language hasn't been given a value to initialize.

To capture the values as the user types, add an onChange handler to the form.

<form (change)="onChange()">

The change handler in the sample prints the model object to the console. It serves no real purpose other than demonstrating that the model values change as the user types values into the form. See Figure 3.

The updated value is printed to the console on change.

onChange(){
console.log(this.destinationModel);
}

Figure-3: Model updated on change

We can use the onClick event on the Create Destination button to invoke a component method, which in
turn submits form data to a server-side API.

<button (click)="onClick()" [disabled]="!destinationForm.form.valid" class="btn btn-success"> Create Destination </button>

As mentioned earlier, all form fields are using two-way data binding. Values typed by the user are set
automatically on the model object, destinationModel. We can use this object as an input to the server-
side API. See the code snippet below.

onChange() {
  console.log(this.destinationModel);
  // The following service instance makes an HTTP POST call to create a destination.
  serviceInstance.createDestination(this.destinationModel);
}

A form can have more than one button. Each button will have its own handler in the TypeScript class. For a form, it's preferred to handle the submit event at the form level. Hence, let's change the button to the submit type.

<button type="submit" [disabled]="!destinationForm.form.valid" class="btn btn-success"> Create Destination </button>

We will remove the click event (click)="onClick()" from the button and move the handler to the form level. The ngSubmit event will be invoked when the form is submitted.

<form style="margin-left:20px; margin-top:20px" (change)="onChange()"
  (ngSubmit)="onClick()" #destinationForm="ngForm" ></form>



VALIDATIONS AND FEEDBACK TO USER

One of the important aspects of a form in a web page is to maintain the sanctity of data. In the above example, the model object defines all fields, but only language and distance from capital are optional; the remaining fields are mandatory. The template needs to validate this and show an error message to the user when he/she doesn't provide an input for a required field.

To mark a field as required, we can use the text field's required attribute. See the code below.

<form style="margin-left:20px; margin-top:20px" (change)="onChange()">

  <!-- removed code for brevity -->
  ...
  <input class="form-control" type="text" name="cityName" id="cityName" required
    [(ngModel)]="destinationModel.name" />

  ...
  <input class="form-control" type="text" id="country" name="country" required
    [(ngModel)]="destinationModel.country" />

  ...
  <input type="text" class="form-control" id="timezone" name="timezone" required
    [(ngModel)]="destinationModel.timezone" />

  ...
  <input type="text" class="form-control" id="distFrmCapital" name="distFrmCapital"
    [(ngModel)]="destinationModel.distanceFromCapital" />

  ...
  <input type="text" class="form-control" id="language" name="language"
    [(ngModel)]="destinationModel.language" />

</form>

We still need to do two more things:

1. Prevent the user from submitting the form if it contains incorrect data
2. Show error messages when needed, so that the user can correct the form.

PREVENT FORM SUBMISSION WITH INCORRECT DATA


To achieve this, we can take advantage of the ngForm directive features. The directive is available as part of
@angular/forms module. Set it on the form element as shown here.

<form (change)="onChange()" #destinationForm="ngForm" >
  <!-- form fields’ code goes here -->
</form>

Notice the template reference variable #destinationForm. Angular automatically adds the ngForm directive to every form element. A reference variable is useful to get access to the form.

The ngForm directive also adds a form field to the reference variable, with additional properties describing the status of the form. Refer to the submit button template code below: it uses the destinationForm.form.valid field to identify an invalid form and disable the button.

<button (click)="onClick()" [disabled]="!destinationForm.form.valid" > Create Destination </button>

See Figure-4 with the submit button (Create Destination) disabled when a required field doesn’t have a
value. This shows that the Form is invalid.

Figure-4: Create Destination submit disabled for an invalid form.

SHOW ERROR FEEDBACK TO THE USER


It may not be enough to just prevent the user from submitting an invalid form. We need to show errors, so
that corrections can be made to the form. These error messages may be next to the text field, dropdown or
any other control.

To achieve this, let’s add a template reference variable at the text field or control level. Consider the
following required text field.

<input class="form-control" type="text" name="cityName" id="cityName" required
  #cityName="ngModel" [(ngModel)]="destinationModel.name" />

Similar to the form, ngModel adds a valid property on the text field's template reference variable. When a required field doesn't have a value, it is set to false. Temporarily, let's show this value on the screen. See the code snippet and Figure 5.

<!-- show the value with the following temporary code snippet in the template -->
<strong> Valid: {{cityName.valid}} </strong>

Figure-5: Valid property on template reference variable.



We can use this field to show an error message. See the following code snippet. The div element is hidden
if the text field is valid.

<label for="cityName">
  <strong>Destination</strong>
  <input class="form-control" type="text" name="cityName" id="cityName" required
    [(ngModel)]="destinationModel.name" #cityName="ngModel"/>
</label>
<div [hidden]="cityName.valid">
  <strong style="color: #a94442; font-size: 9pt" > Please enter a city name. </strong>
</div>

See Figure 6 showing an error message for an invalid text field. We can code specific error messages for
each form field.

Figure 6: Show error message when the field is invalid.

We can provide additional visual feedback along with the error message. For the code sample, let's show a red border indicating a problem with the text field.

ngModel adds CSS classes indicating the status of the form field. When the text field is invalid, it adds ng-invalid; otherwise it adds ng-valid. We can override these styles to show the status of the text field.

For example, look at the following code snippet. Here, we show a red colored border on an invalid field, and a green border for a field that has been entered by the user and is valid. ngModel adds the ng-touched CSS class indicating that the field received mouse or keyboard focus at some point. See Figure 7 for the result.

.ng-invalid {
  border-color: #a94442;
}

.ng-valid.ng-touched {
  border-color: #42A948;
}

Figure-7: Visual feedback

If the user attempts to edit a field, the ng-dirty CSS class will be added to the field and the form.

Most validations in template driven forms are performed with HTML attributes; required, minlength and maxlength are examples of such validations, and custom validator directives (such as a forbiddenName validator) can be added as well. We may use the *ngIf or [hidden] attribute to show/hide error messages, as in the sketch below.
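For example, a minimum-length check on the country field could look like the following sketch (the minlength value and the message text are only illustrative):

<input class="form-control" type="text" id="country" name="country" required minlength="3"
  [(ngModel)]="destinationModel.country" #country="ngModel" />
<div *ngIf="country.errors?.minlength && country.touched">
  <strong style="color: #a94442; font-size: 9pt"> Country must be at least 3 characters long. </strong>
</div>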

Conclusion

Forms are an important aspect of building web applications. Angular has two approaches to building forms: Reactive Forms and Template Driven Forms. To read about Reactive Forms, which are scalable, synchronous and sophisticated, download the 40th Edition of the DNC Magazine.

This article talked about template driven forms, which are simple, and are managed primarily in the HTML
template of the component with two-way data binding.

This article also demonstrated the steps to create template driven forms. It began with the prerequisite of importing FormsModule, then covered creating the HTML template, creating a model TypeScript class, binding form fields to the class instance with two-way data binding, and finally performing validations.

Download the entire source code from GitHub at bit.ly/dncm41-templateforms

Keerti Kotaru
Author

V Keerti Kotaru has been working on web applications for over 15 years now. He started his career
as an ASP.Net, C# developer. Recently, he has been designing and developing web and mobile apps
using JavaScript technologies. Keerti is also a Microsoft MVP, author of a book titled ‘Material Design
Implementation using AngularJS’ and one of the organisers for vibrant ngHyderabad (AngularJS
Hyderabad) Meetup group. His developer community activities involve speaking for CSI, GDG and
ngHyderabad.

Thanks to Ravi Kiran for reviewing this article.

XAMARIN

Gerald Versluis

Working with Barcodes in Xamarin.Forms

For as long as I can remember, Barcodes have been a thing!

It all started with the 1D barcodes that give you a string value of max 48 characters. With QR codes (2D), you can store more information. These days they are often used to guide people to a certain URL, or for two-factor authentication.

It is evident that barcodes are still very much alive and can be used for a great variety of solutions. Barcodes are especially well suited for mobile, since a camera can read a barcode and extract the data, which can then be processed further.

In this tutorial, I will show you how you can scan barcodes as
well as generate them in your Xamarin.Forms mobile app.



DIFFERENT TYPES OF BARCODES AND USAGES
Over time, numerous barcode formats have been invented. The most famous ones are the simple 1D
barcode, the one with just the vertical stripes, and the QR code, the square, black and white barcode. You
can see these two formats in Figure 1.

Figure 1: 1-dimensional barcode(left) and QR code(right). Both hold the same data.

Depending on the application, a certain format might be preferred. Some benefits of the QR code are that it can be read even when it is damaged, and that it can hold much more data than the traditional barcode.

As I have already mentioned, although the technique is quite old, barcodes still play a big role in today’s
world. You can see barcodes on every book, every product from the supermarket, you might see QR codes on
stickers on advertisements on bus stops or you can even connect guests to your Wi-Fi connection with a QR
code.

The latter is what we will be focussing on in our sample app.

CONNECTING TO WI-FI WITH A QR CODE


In the app that we are building in this tutorial, we will read and generate QR codes that will allow people
to connect to a certain Wi-Fi network.

We are not actually connecting people to Wi-Fi with our app. While that technically is possible, that is not
the scope of this article.

For this app, we will focus on generating a QR code that can connect a user to a Wi-Fi network, and scan a
QR code to see what it contains.

The QR code is nothing more than a string in a certain format that has been standardized and implemented in the operating systems of Android and iOS. When this string is recognized in a QR code, the OS will suggest that you connect to the network with the settings encoded in that QR code. The string format looks like this: WIFI:S:<SSID>;T:<WPA|WEP|>;P:<password>;;

The string has to start with WIFI:. The order of the parameters doesn't really matter. All parts of the string are key/value pairs separated by a colon. Each letter stands for the following: S is the SSID (network name), T is the security type (WPA, WEP, or empty for an open network), P is the password, and H (used later in our sample) indicates whether the SSID is hidden. For example: WIFI:S:MyHomeNetwork;T:WPA;P:MySecretPassword;;

The parts between angle brackets will be replaced with actual values.

BARCODE LIBRARIES
There are multiple libraries out there that allow you to work with barcodes. The most notable ones are
ZXing (short for Zebra Crossing) and Scandit. Both can do mostly the same thing and have libraries for
usage in Xamarin apps. I will use the ZXing library for this sample. You can find background information
on their GitHub page: https://github.com/zxing/zxing. The Xamarin.Forms specific port can be found here:
https://github.com/Redth/ZXing.Net.Mobile and is created by Jonathan Dick. This version is also available
through NuGet as we will see in a minute.

CREATING THE SAMPLE APP


Let’s get started on the actual app. I will be using Visual Studio for Mac, but everything should work on
Windows as well. I have created a new Xamarin.Forms project and am targeting iOS and Android. All code
can be found on my GitHub page for it: https://github.com/jfversluis/WifiBarcodeSample.

Since I am not a great designer, the UI is going to be super simple, I mean minimalistic. A first version will
look like the one shown in Figure 2.

Figure 2: A very simple UI for our Wi-Fi QR code app



As you can imagine, the code for this UI is not very complicated. I am a big fan of using XAML for the UI. You
can see the XAML markup for this page in the following code. It doesn’t really do much right now, but it will
help you understand where I will be placing certain other elements on the page later on.

<?xml version="1.0" encoding="utf-8"?>
<ContentPage Title="Wi-Fi QR Code"
             xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:WifiBarcodeSample"
             x:Class="WifiBarcodeSample.MainPage">
  <StackLayout VerticalOptions="CenterAndExpand">
    <Label Text="SSID" HorizontalOptions="Center" VerticalOptions="Center" />
    <Entry x:Name="Ssid" WidthRequest="200" HorizontalOptions="Center" VerticalOptions="Center" />
    <StackLayout Orientation="Horizontal" HorizontalOptions="Center">
      <Switch x:Name="HiddenSsid" HorizontalOptions="Center" VerticalOptions="Center" />
      <Label Text="SSID not broadcasted" HorizontalOptions="Center" VerticalOptions="Center" />
    </StackLayout>
    <Label Text="Security" HorizontalOptions="Center" VerticalOptions="Center" />
    <Picker x:Name="Security" WidthRequest="200" HorizontalOptions="Center" VerticalOptions="Center">
      <Picker.ItemsSource>
        <x:Array Type="{x:Type x:String}">
          <x:String>WPA/WPA2</x:String>
          <x:String>WEP</x:String>
          <x:String>None</x:String>
        </x:Array>
      </Picker.ItemsSource>
    </Picker>
    <Label Text="Password" HorizontalOptions="Center" VerticalOptions="Center" />
    <StackLayout Orientation="Horizontal" HorizontalOptions="Center">
      <Entry x:Name="Password" WidthRequest="200" IsPassword="true" HorizontalOptions="Center" VerticalOptions="Center" />
      <Button Text="Show/Hide" Clicked="ShowHidePassword" />
    </StackLayout>
    <Button Text="Scan QR Code" WidthRequest="200" HorizontalOptions="Center" VerticalOptions="Center" />
    <Button Text="Generate QR Code" WidthRequest="200" HorizontalOptions="Center" VerticalOptions="Center" />
  </StackLayout>
</ContentPage>

ADDING THE ZXING LIBRARY


As I have already mentioned, the ZXing library for Xamarin.Forms is available through NuGet, so you can
simply add it to your projects.

Notice how I say project(s) i.e. plural.

If you have been working with Xamarin.Forms before, you might know that a lot of the libraries are to be
installed in all projects. This means: your shared code project as well as your platform projects. This is
needed because the library needs an abstraction at the shared code level and the actual implementation
lives in the platform project. It’s basically how all of Xamarin.Forms is setup.

Add the NuGet package to each of the projects by opening the Add package dialog and finding the ZXing.Net.Mobile.Forms package. Repeat this for each project in your solution; in my case it was three. You can see the project structure in Figure 3.

Figure 3: The project structure of the sample app. You can see that the ZXing library is installed on all projects.

At the very root, you see the solution. Directly under that you see the WifiBarcodeSample project. This is a .NET Standard library that contains our shared code. You will want to have most of your code in this library, since that will allow us to share it across all platforms.

Then there are the WifiBarcodeSample.Android and WifiBarcodeSample.iOS projects, which are bootstrap projects for the Android and iOS platforms. If we need to write platform-specific code, this is where we would do it.

iOS

In the case of the ZXing library, we do need some initialization code per platform. On iOS, go into the
AppDelegate.cs file and in the FinishedLaunching method, add this line:

ZXing.Net.Mobile.Forms.iOS.Platform.Init();

Android

For Android we need to do something similar. Go into the MainActivity.cs file and in the OnCreate
method, place this line:

ZXing.Net.Mobile.Forms.Android.Platform.Init();

For some reason, you also need to add another NuGet package to your Android project: be sure to add the ZXing.Net.Mobile package manually. This is not needed for iOS.

Other

If you do not add this initialization code, the scanner and barcode images will not work. There is also support for Windows Phone and UWP. Please refer to the ZXing GitHub page to see the code for those platforms.

GENERATING BARCODES
Besides scanning barcodes, this library can also generate barcodes. Since that is quite simple, let’s get that
out of the way first. For that, ZXing has an object named ZXingBarcodeImageView. This is what we will
add to our layout first.

Because this is in a different namespace, we will also need a new namespace declaration in XAML. The
resulting XAML (snippet) will then look like the following.

<ContentPage Title="Wi-Fi QR Code" ...
             xmlns:zxing="clr-namespace:ZXing.Net.Mobile.Forms;assembly=ZXing.Net.Mobile.Forms"
             xmlns:zxcm="clr-namespace:ZXing.Common;assembly=zxing.portable">

  <StackLayout VerticalOptions="CenterAndExpand">
    <zxing:ZXingBarcodeImageView x:Name="BarcodeImageView" BarcodeFormat="QR_CODE"
                                 IsVisible="false" BarcodeValue="Foo">
      <zxing:ZXingBarcodeImageView.BarcodeOptions>
        <zxcm:EncodingOptions Width="300" Height="300" />
      </zxing:ZXingBarcodeImageView.BarcodeOptions>
    </zxing:ZXingBarcodeImageView>
    <Label Text="SSID" HorizontalOptions="Center" VerticalOptions="Center" />
    ...
</ContentPage>

I have omitted some code to make it more readable. The things to note here are the xmlns:zxing and
xmlns:zxcm attributes on the ContentPage node and the zxing:ZXingBarcodeImageView.

The first two attributes are used to import the right namespace, so the XAML page knows where to look
for the control. Then, we can actually use the image view which will show our barcode. You can see that we gave it a name; with this we can access the control from code, as we will see in a bit. Also, I specified a BarcodeFormat. With this property you tell the control which type of barcode is to be rendered.

Because of some quirks in the ZXingBarcodeImageView, I also specified an initial value for the BarcodeValue. And since I have to set an initial value, I set IsVisible to false to only show the QR code when it holds a proper value.

The last thing that stands out is the ZXingBarcodeImageView.BarcodeOptions. If we do not specify
this options object with a width and a height, the barcode will look very fuzzy. Probably because the actual
image is rendered smaller than the image view and is thus stretched.

To make it all work, we need to take the values entered by the user in the input fields and generate the
actual barcode.

Let’s start with adding a Clicked parameter to our Generate QR Code button. Make sure it looks like this:

<Button Text="Generate QR Code" WidthRequest="200" HorizontalOptions="Center"
  VerticalOptions="Center" Clicked="Generate_Barcode" />

I have added the Clicked parameter and specified a name of the method that is going to handle the event
when someone taps the button. While I am using simple code-behind code right now, everything can also
be done with data-binding if you want to use an architectural pattern like MVVM.

In the code-behind, create a new method that looks like the code shown here.

public void Generate_Barcode(object sender, EventArgs e)
{
    // TODO Implement error handling and validation
    var security = "";
    var ssidHidden = "";

    switch (Security.SelectedIndex)
    {
        case 0:
            security = "WPA";
            break;
        case 1:
            security = "WEP";
            break;
        default:
            security = "";
            break;
    }

    if (HiddenSsid.IsToggled)
        ssidHidden = "H:true";

    BarcodeImageView.BarcodeValue = $"WIFI:S:{Ssid.Text};T:{security};P:{Password.Text};{ssidHidden};";

    BarcodeImageView.IsVisible = true;
}



Here you can see how I compose the string that is needed to connect to the Wi-Fi network. Depending on
the input, I set different parts to different values. At the end, I set the string value to the BarcodeValue of
the image view and set the image view to be visible. A sample output on iOS can be seen in Figure 4. Go
ahead, try to scan it with your camera app and see if it responds!

Figure 4: Barcode image view in iOS.

As indicated by the todo in the comment, I did not take into account any validation or error handling in this
case so we can just focus on the relevant part.

SCANNING BARCODES
Now for the reverse part. Let’s scan barcodes!

For this piece of functionality, we will allow the user to scan a barcode with the camera and put the values
in the input boxes.

There are a couple of things we need to do before we can start implementing the code for this. Since we will be using the camera, we need to request the permissions for that in both iOS and Android.

IOS
The way to do it in iOS is pretty simple. You will have to provide an entry in the info.plist file. With this
entry, you specify a description that the user will see whenever the camera is accessed for the first time.

By adding this entry, you are letting iOS know that you intend to use the camera and you get the ability to
explain to the end-user what you want to use the camera for. If you open the info.plist file, you will see it

is just an XML file. Add the entry shown here between the <dict></dict> tags and give it a meaningful
description.

<key>NSCameraUsageDescription</key>
<string>Please allow the camera to be used for scanning barcodes</string>

This concludes all we need to do for iOS.

ANDROID
For Android it is roughly the same, but also different. By adding the ZXing package, the permission for the
camera should have already been added. You can double-check this by right-clicking the Android project
and go to Options (or Properties on Windows). There, find the permissions list and see that Camera is
checked. Additionally, you will need to add the following method to your MainActivity.cs file to support
the new permissions model for newer Android versions.

public override void OnRequestPermissionsResult(int requestCode, string[] permissions, Permission[] grantResults)
{
    global::ZXing.Net.Mobile.Android.PermissionsHandler.OnRequestPermissionsResult(requestCode, permissions, grantResults);
}

IMPLEMENT BARCODE SCANNING


Finally, time to implement the real magic.

Go back to the shared project and to the MainPage.xaml file. Under the barcode image view, we will now
add a scan view. This is nothing more than simply adding this line of XAML:

<zxing:ZXingScannerView x:Name="BarcodeScanView" IsVisible="false"
  HeightRequest="200" OnScanResult="Handle_OnScanResult" />

We give it a name so we can reference it from code-behind and just like the image view, set the IsVisible
to false to only let it show up when we need to.

Set a HeightRequest to make sure it will show up in our layout. With the OnScanResult, set up the event
that will be triggered whenever the camera discovers a barcode. While I am not doing this now, you can
also provide an options object to the scanner view which lets you specify which barcode formats to detect,
amongst other things. Let’s have a look at this method.

In our code-behind, first let’s add the code for the “Scan QR Code” button. With this button we will show the
camera view and start scanning. To hook it up, add the Clicked attribute to the button and implement the
event handler. I trust you know how to do this by now, else have a look at the button to generate the QR
code. The implemented event handler can be seen underneath.

public void Scan_Barcode(object sender, EventArgs e)
{
    BarcodeImageView.IsVisible = false;
    BarcodeScanView.IsVisible = true;
    BarcodeScanView.IsScanning = true;
}

Notice that I hide the image view – in case the user generated a barcode before this – and how I show the scanner view. Additionally, I set the IsScanning property to true. This tells the scanner view to start the camera feed and start looking out for barcodes. Each time a barcode is recognized within the camera's viewport, the scanner view's event will be invoked.

Now we will add the method we mentioned in the OnScanResult attribute earlier. This is the event handler
invoked whenever a barcode is spotted. Underneath you will find the code for this method; I will walk you
through the implementation.

public void Handle_OnScanResult(Result result)
{
    if (string.IsNullOrWhiteSpace(result.Text))
        return;

    if (!result.Text.ToUpperInvariant().StartsWith("WIFI:", StringComparison.Ordinal))
        return;

    var ssid = GetValueForIdentifier('S', result.Text);
    var security = GetValueForIdentifier('T', result.Text);
    var password = GetValueForIdentifier('P', result.Text);
    var ssidHidden = GetValueForIdentifier('H', result.Text);

    Device.BeginInvokeOnMainThread(() =>
    {
        Ssid.Text = ssid;

        switch (security)
        {
            case "WPA":
                Security.SelectedIndex = 0;
                break;

            case "WEP":
                Security.SelectedIndex = 1;
                break;

            default:
                Security.SelectedIndex = 2;
                break;
        }

        Password.Text = password;
        HiddenSsid.IsToggled = !string.IsNullOrWhiteSpace(ssidHidden) && ssidHidden.ToUpperInvariant() == "TRUE";
    });
}

The Handle_OnScanResult method is invoked each time a barcode is detected. Note that this can happen multiple
times in quick succession. On the scanner view, you can set a delay to wait between scans; by default this
delay isn't long, so the event might fire several times in a row. If that is not desirable, you can increase the
delay, set the IsScanning property to false after the first result, or implement a check to see whether the
incoming result differs from the last one. This is totally up to you; a minimal sketch of the last-result check
is shown below.
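
As an example of the last approach, a minimal guard at the top of Handle_OnScanResult could look like this.
It assumes a private string field named lastScannedText on the page; treat it as a sketch rather than part of
the sample.

// Ignore results identical to the previous scan so the same barcode isn't handled repeatedly.
if (result.Text == lastScannedText)
    return;

lastScannedText = result.Text;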

In my implementation, I start with two simple validation checks: is the incoming string value not empty, and
does it start with "WIFI:"?

If these checks pass, I know it is a Wi-Fi connection string. Then, I try to extract all the different sections
from the connection string and put them in the right UI controls. For this, I wrote a helper method called
GetValueForIdentifier. This is not really important for barcode scanning, but it is important for the sample
to work. I won't go into its code in detail here (you can find it on the repository), but a rough sketch of the
idea follows.
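
To give you an idea, a simplified version could look like the snippet below. This is a sketch assuming the
common WIFI:S:<ssid>;T:<type>;P:<password>;H:<hidden>;; layout; the actual helper in the repository may differ
and, for example, handle escaped characters.

// Simplified sketch: returns the value following "<identifier>:" in a Wi-Fi QR payload
// such as "WIFI:S:MyNetwork;T:WPA;P:secret;H:false;;". Escaped ';' characters are not handled.
private string GetValueForIdentifier(char identifier, string barcodeText)
{
    var payload = barcodeText.Substring("WIFI:".Length);

    foreach (var segment in payload.Split(';'))
    {
        if (segment.Length > 1 &&
            char.ToUpperInvariant(segment[0]) == char.ToUpperInvariant(identifier) &&
            segment[1] == ':')
        {
            return segment.Substring(2);
        }
    }

    return string.Empty;
}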

After I have retrieved all the values, I start assigning them to the controls. This needs to be wrapped in a
Device.BeginInvokeOnMainThread call. Because the event is fired in a background thread and the UI
can only be updated from the main thread, I need to use this Xamarin.Forms built-in helper method. This
method will invoke all code inside it on the main thread. In Figure 5, you can see that a QR code being
scanned has filled the UI with the actual values.

Figure 5: Scanning a QR code containing a Wi-Fi connection string now fills our UI with the values

That covers the core of the implementation. There are some options which allow you to scan with the front camera,
toggle the torch for better lighting, and a few others, but I'll leave those for you to discover on your own.

One important thing to note is that besides the scanner view, there is also a scanner page. Instead of a small
camera view that fits in your design, you can scan with a full page. Here you can also supply an overlay with
instructions and indicators telling the user how to work with your barcode scanner; a short sketch of using it
is shown below.
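
Navigating to such a page could look roughly like this, assuming the ZXingScannerPage type from the
ZXing.Net.Mobile.Forms package; again, treat it as a sketch rather than code from the sample.

// Push a full-screen scanner page instead of embedding a scanner view in the layout.
var scanPage = new ZXing.Net.Mobile.Forms.ZXingScannerPage();

scanPage.OnScanResult += result =>
{
    // Stop scanning and report the result back on the UI thread.
    scanPage.IsScanning = false;

    Device.BeginInvokeOnMainThread(async () =>
    {
        await Navigation.PopAsync();
        await DisplayAlert("Scanned barcode", result.Text, "OK");
    });
};

await Navigation.PushAsync(scanPage);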

Summary

In this article, I have shown you what barcodes are and how they can be used. More importantly, we have
learned how to generate and scan barcodes in our own apps. Through an example that lets us scan and
generate barcodes that connect us to a Wi-Fi network, we have seen how easy it is to integrate this
functionality. While there are a number of different things you can still do, the gist of it will be the same;
just the format of the barcode will differ.



The library we used, ZXing, has some strange bugs. I'm not sure whether they are in the original ZXing library
or in the Xamarin wrapper. Luckily, there are no critical bugs that we cannot work around. Another option
would be to use an alternative library like Scandit. I do not have any hands-on experience with that one yet,
but it seems it can do practically the same thing. If you choose to use that library, please let me know how
it works for you.

I hope you can use this knowledge to your advantage and I am always curious to see what you build. Be
sure to reach out to me on Twitter (@jfversluis) with any questions or anything else you want to let me
know. All code from this article can be found on: https://github.com/jfversluis/WifiBarcodeSample.

This solution and a couple more can be found in my latest book, full of solutions for Xamarin.Forms
applications. So, if you want to know more, please find it on Amazon: https://www.amazon.com/Xamarin-
Forms-Solutions-Gerald-Versluis/dp/1484241339

Gerald Versluis
Author
Gerald Versluis (@jfversluis) is a full-stack software developer and Microsoft MVP
(Xamarin) from Holland. After years of experience working with Xamarin and .NET
technologies, he has been involved in a number of different projects and has been building
several apps. Not only does he like to code, but he is also passionate about spreading his
knowledge - as well as gaining some in the bargain. Gerald involves himself in speaking,
providing training sessions and writing blogs
(https://blog.verslu.is) or articles in his free time.
Twitter: @jfversluis | Email: gerald@verslu.is | Website: https://gerald.verslu.is

Thanks to Mayur Tendulkar for reviewing this article.

ASP.NET

Daniel Jimenez Garcia

THE HISTORY OF

ASP.NET
The first version of ASP.NET was released 17 years ago in early 2002 as part of .NET
Framework V1.0. Microsoft initially designed it to provide a web platform better than classic
ASP and ActiveX, one that would give a sense of familiarity to existing Windows developers.

During these years, we have seen ASP.NET evolve, often struggling to cope with the changes
happening both on the web and its surrounding technologies. However, as we look back,
it might be surprising to see how concepts like MVC, web services, JSON or JavaScript
types were considered, discussed and/or introduced into ASP.NET earlier than you might
remember!

It is fascinating to see how the ASP.NET team and Microsoft constructively reacted through these years to the
major shifts happening on the web. Initially a closed platform that tried to hide and abstract the web, ASP.NET
was transformed by Microsoft into an open source, cross-platform framework - one that fully embraces the
nature of the web.



To me, this article has been great fun to research and write. I have particularly enjoyed working through the
early years, since I first used ASP.NET around 2009, but it wasn't until 2011, using MVC3, that I fully
transitioned from desktop development into web development.

I hope you too will enjoy reading it!

THE WEB FORMS ERA (2002-2008)


Microsoft released ASP.NET to the world as part of the 1.0 release of the .NET framework during January
2002. XML was king, and the future was based around XML web services. As Rob Howard, Program Manager
for ASP.NET at that time wrote:

In the last several months, the industry has awakened to the promise of Web Services.
Did I mention that XML Web Services represents the underpinning of Microsoft's .NET
strategy, and is a core feature of ASP.NET?

In the era of XML, Microsoft wasn’t shy to push forward with IE in areas like XML data islands or the
XMLHTTP ActiveX control. Few anticipated that other browsers would implement the latter as the standard
XMLHttpRequest API and that it would form the basis for what would be known as Asynchronous JavaScript
and XML, or AJAX. Of course, browser standards were only a dream and techniques like JavaScript forking for
multi-browser support were commonly promoted.

These were also the years when DHTML was promoted as the next big thing (before being made obsolete
by standardized DOM, JavaScript and CSS). Enterprise developers who were used to creating desktop
applications were increasingly moving towards web applications, often deployed on corporate intranets.

It was then that Microsoft released the .NET framework 1.0 alongside Visual Studio.NET, with ASP.NET Web
Forms being a core part of the package. This gave developers of the Microsoft platform a much better way
of building web applications than the previous mixture of classic ASP, ActiveX controls and VB6 DLLs.

The design of Web Forms


The turn of the millennium saw Microsoft with a few significant fronts open:

• It had failed in its embrace, extend and extinguish strategy against Java (which ended up being settled in a
lawsuit), and it had now embarked on providing its own managed language alternative that could
compete with Java.

• It needed a better solution for building and hosting web applications in Windows, so it could keep
competing in the context of the dot.com bubble.

• Its RAD (Rapid Application Development) platform, the aging Visual Basic 6, needed a replacement. A lot
of buzz was being generated around visual tools and designers for developers, as the new silver bullet
for developer productivity!

To overcome these challenges, Microsoft finally came up with its own managed platform,
the .NET Framework, and the languages C# and VB.NET (simply known as Visual Basic today). The 1.0
release of the framework came with specific tools for desktop and web development, named Win Forms
and ASP.NET Web Forms. As we will see, the similarity in the name of their desktop and web frameworks
wasn’t a coincidence!

Win Forms was designed to be the successor to VB6 for developing Windows desktop applications,
providing a RAD experience around a forms designer in Visual Studio that would be familiar to VB6
developers. ASP.NET Web Forms would then provide a very similar experience for developing web
applications, with a similar forms designer in Visual Studio and a programming model that would truly
resonate with their existing desktop developers. Its name also suggested this new framework was the
successor to the classic ASP scripting framework.

Figure 1, The Web Forms designer in Visual Studio 2003


This was magical at the time and made the transition to the web smoother for developers used to Windows
applications. Or at least on paper, because this was only possible through some clever abstractions around
which the Web Forms framework was built, abstractions that would hide the reality of the web and would
end up causing the shift to MVC!

Web Forms was designed following the Page Controller pattern, which is one of the classic web
presentation patterns. Each page in the application is created as a Web Form and associated with an .aspx
URL. The page is loaded through a GET request and sent to the browser. Controls within the page such as
buttons would cause a POST to the same URL, which is handled by the same Web Forms page (or page
controller).

Each Web Form had two distinct parts. The aspx template that ultimately defined the HTML of the page,
and the class that implemented the Page Controller and provided the necessary logic. This would be the
HelloWorld.aspx template:

<%@ Page language="c#" Codebehind="HelloWorld.aspx.cs" AutoEventWireup="false" Inherits="HelloWorldPage" %>
<HTML>
  <body>
    <form id="Form1" runat="server">
      Name:<asp:textbox id="name" runat="server" />
      <p />
      <asp:button id="MyButton" text="Click Here" OnClick="SubmitBtn_Click" runat="server" />
      <p />
      <span id="mySpan" runat="server"></span>
    </form>
  </body>
</HTML>

Notice that although the view engine used in aspx files allowed for code to be mixed inside <% %> blocks, this
was frowned upon. The ASP.NET community had just moved away from classic ASP and its spaghetti code issues!

...and its corresponding code-behind HelloWorld.aspx.cs file:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.HtmlControls;

public class HelloWorldPage : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.TextBox name;
    protected System.Web.UI.WebControls.Button MyButton;
    protected System.Web.UI.HtmlControls.HtmlGenericControl mySpan;

    public void SubmitBtn_Click(Object sender, EventArgs e)
    {
        mySpan.InnerHtml = "Hello, " + name.Text + ".";
    }
}

The ASP.NET framework provided a set of controls that could be used on each page.

The asp:textbox and asp:button tags are examples of Server Controls, which are not standard HTML
tags. The final HTML and JavaScript sent to the client is generated by the framework when rendering the
page and depends on both its properties and potentially, the client. Each of them gets a variable added to
the code-behind class.

There are also HTML Server Controls, which are directly related with HTML elements and whose intention
is for the code-behind class to have access to them. There is one in the earlier example, the span mySpan,
with its corresponding variable of the code-behind class.

The asp:button in the example shows another interesting characteristic. Server Controls can render the
necessary HTML and JavaScript to process an event on the server through PostBack requests, which is
nothing more than a POST request to the same page, alongside its current state and the name of the control
and event that was triggered. A method in the code-behind class can be associated with each of these
events, so it can be executed on the server during the handling of the PostBack request. The click event of
MyButton associated with the SubmitBtn_Click method is an example of this.

As you can see, each page contained a single HTML form, which would cause the browser to submit it as a
POST request to the same URL. The server would then instantiate the same page controller class, which can
then react to the post back.

This model would be immediately familiar to developers used to working with desktop applications where
the view and page controller run as part of the same process. However, it hid from them the fact that client
and server were separated over a network, one stateless by nature.

The trick was possible by combining:

• Its state management, which combined different alternatives for keeping data. Apart from the usual
Application and Session State, ASP.NET introduced the View State. Before the page HTML was sent to the
browser, the server encoded the state of the page controls into a Base64 encoded string which was
included as a hidden form field. This hidden field would then be automatically included by browsers in
POST requests (a sketch of such a field is shown after this list).

• A page lifecycle with events that would be automatically invoked by the framework at different points
during the processing of the request. These would allow the page state to be initialized from scratch
(as in fetching from the database) or rehydrated from the posted View State (as in properties of form
controls)
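
For reference, the rendered page carried that encoded state in a hidden input similar to the one below (the
value shown is just a placeholder for the Base64 payload):

<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="...Base64-encoded page state..." />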

The following diagram shows this flow in action:

Figure 2, Flow of a typical page in ASP.NET 1.0

As you can see in Figure 2, the framework is designed assuming most of the work would be done by the
server, with the client mostly rendering the HTML documents and issuing new GET/POST requests. This was
a sign of the times, since clients were not as powerful, nor browsers as advanced as they are today.

Overall, ASP.NET succeeded in its objectives of providing a model closer to desktop development and a
RAD platform for web development. However, its stateful model over a stateless network combined with an
abstraction that hid the web from developers, ended up hurting the framework in the long term.

Of course, the years elapsed since the ASP.NET launch gave us an insight people didn’t have back then and
it is easy to criticize or mock something done 17 years ago.

However, you would be surprised by what was possible to achieve in ASP.NET!

I found it particularly interesting reading through the patterns provided by Microsoft’s Patterns and
Practices team back in June 2003, all of them implemented in ASP.NET:

• They provided an implementation of the MVC pattern, years before the first release of ASP.NET MVC

• The implementation for the Front Controller pattern is nothing but an early attempt at what would
become the routing component of ASP.NET MVC

• The intercepting filter pattern provides similar functionality to the Filters in ASP.NET MVC or even the
Middleware in ASP.NET Core.

It is true that most developers never followed these patterns. And yes, guidance mainly focused on the
RAD aspects and the ease of building applications by dragging and dropping controls on the designer and
writing event handlers in the code-behind.

But it is also true that developers could build clean applications with a little more effort, as we have seen
with those patterns!

Before we move on, let’s consider what else was going on around that time in the web development world.

• The main external competitors to ASP.NET were Java frameworks like Struts and, later, Spring. It seems
Microsoft recognized them as such and their early documentation has content aimed for people familiar
with Struts and J2EE web technologies in general, as per the JSP (JavaServer pages) migration articles.

• As mentioned earlier, XML in general and XML-based Web Services were a huge deal back then. ASP.NET
had a web services offering over HTTP and the .NET framework came with a more general purpose .NET
Remoting, the predecessor to WCF. Web Services interaction between different platforms like .NET and
Java was a big deal.

• Microsoft still had a large base of VB6, ActiveX and classic ASP programmers around that had to move
into their new .NET platform. As you can imagine, abundant materials were produced to help with the
transition, like this series of articles collected in their official docs.

It is fair to say that the introduction of ASP.NET was a huge success for Microsoft. It became one of the
dominant web development platforms and saw huge adoption in the enterprise world.

ASP.NET Web Forms gets perfected


Even though it was a success, the .NET framework was still in its infancy and in much need of maturing and
refining. Microsoft adopted a schedule that released major new framework versions every two years, with
all the pieces including ASP.NET upgraded at the same time.

In 2003, the .NET Framework 1.1 was released and ASP.NET received the Mobile Controls amongst other
minor updates. It was with the release in 2005 of the .NET Framework 2.0 that the framework took a big
step forward and came of age.

At that moment, .NET received some of its most useful and widely adopted features. It is amazing looking
back and imagining yourself writing .NET code without generics or nullable value types. Let’s briefly revisit
some of these features:

• The introduction of generic types made it possible to adopt patterns that increased reusability and
type safety, like the generic collections. Before generics, it was relatively common for runtime casts to
cause exceptions and to incur a performance penalty due to boxing/unboxing of value types. Microsoft
themselves would add generic methods and types across many areas of the framework.

• Nullable types were also introduced, so it was now possible to assign null to value types when using
the Nullable<T> structure (another example of a generic type)

• C# received support for static classes. That’s right, C# didn’t have static classes in its first release!

• Partial classes were now possible in both C# and VB.NET. This would greatly help Win Forms and Web
Forms to separate the designer code generated by Visual Studio from the actual code-behind written by
the user.

• A new model for managing transactions that was able to transparently enlist transactions in existing
ambient transactions and promote between the local transaction manager and the Microsoft
Distributed Transaction Coordinator (MSDTC). This greatly benefitted many using ADO.NET for data
access and/or integrating with existing COM+ components.

The importance of the 2.0 release is even more obvious when looking specifically at ASP.NET 2.0. Apart
from benefitting from new .NET features such as generics, ASP.NET really received a major overhaul and
became a more powerful and easy-to-work-with framework.

A long list of new server controls was included in this release. In fact, controls became one of the preferred
ways to expand ASP.NET in the community, and started an industry of 3rd party control vendors.

But including more controls wasn’t everything.

There were many other additions and improvements to the framework that changed the development
experience:

• The View State was improved and reduced in size, one of the criticisms towards ASP.NET. The concept
of “Control State” was introduced to separate the absolute minimum data required for a control to
function (which cannot be disabled) from the rest of the View State.

• Achieving a consistent look and feel was easier after the introduction of Master Pages, a concept
close to the layouts seen in ASP.NET MVC and ASP.NET Core. The styling of the entire website could be
centralized through themes and skins, where a theme is made by a number of CSS and skin files, and a
skin is an XML take on CSS that could set properties for ASP.NET Server Controls as well.

• It was now possible to create a Site Map, an xml file describing the location and names of pages in your
website. This file could then be directly used with new Server Controls added for navigation purposes,
able to render breadcrumbs, menus or tree menus from the Site Map.

• The page lifecycle was updated with new events.

• Cross-page postbacks were now allowed. That is, a page could now send a POST to a different page and
not just to itself.

• The caching features were significantly improved. New dependencies based on SQL server were
introduced, developers could now write their own cache dependency implementations and dynamic
parts were allowed in cached pages.

• Web Parts allowed users to personalize websites, an idea like the iGoogle or My Yahoo! portals of the
time. Server Controls could be used as Web Parts (and custom ones could be created), which users could
add, remove and customize from their portal-like page.

• Finally, the factory design pattern was combined with generics in order to create the so-called Providers.
This is nothing but a contract that decoupled ASP.NET features like Membership or Profiles from the
actual source of the underlying data. The framework already contained providers for typical sources
like SQL Server or XML, while developers could create their own. These could then be wired to their
features as part of the providers configuration section.

I am sure that even with a brief summary like the one above, you would agree with me that ASP.NET 2.0
overall was a significant step forward.

However, you might have noticed that there was almost nothing new for the client side. Most of the new
features were still geared towards the server, with the browser mostly dedicated to render the pages
generated on the server.

This was about to change, and it would happen fast!

The importance of JavaScript and AJAX


Web 2.0 introduced the need for more dynamic websites that increasingly needed to take advantage of
client-side scripting. This trend continued through the first half of the decade and entered an exponential
growth when these features became a common and expected feature of modern websites, particularly after
its adoption in Google products. The term AJAX (Asynchronous JavaScript and XML) coined in early 2005
quickly became a buzzword mentioned everywhere and the next big thing in web development.

Prior to ASP.NET 2.0, the term AJAX hadn’t been coined nor adopted, but the XMLHttpRequest API it relies
upon had been available in browsers for some years. In fact, nothing really stopped an ASP.NET developer
from creating server side endpoints that could then be called from JavaScript, as described in articles such
as this one which introduced the pattern to ASP.NET developers.

ASP.NET 2.0 recognized the growing importance of JavaScript interaction with XMLHttpRequest (still
not commonly identified as AJAX) and introduced a feature called script callbacks. This allowed a server-
side method to be called from JavaScript through an XMLHttpRequest. This release also introduced the
ClientScriptManager, an early attempt at managing and bundling the JavaScript code needed on each
page!

Around the time ASP.NET 2.0 was about to be released, the AJAX craze had already started. Microsoft had
already taken notice and announced that they were working on a project codenamed Atlas which would bring
first class AJAX support to ASP.NET.

Atlas was finally released as Microsoft AJAX 1.0 during January 2007. Apart from bringing AJAX to the
forefront of ASP.NET, this release marked two interesting firsts for ASP.NET:

• It was released separately from the .NET Framework, as a standalone installer that added Microsoft
AJAX to ASP.NET 2.0. Before it, developers had to wait for the 2-year release cycle of the .NET
Framework in order to get new features.

• The source code of the new controls part of the AJAX Control Toolkit was open sourced and available
in CodePlex. The client-side JavaScript code was released under the MS-PL (Microsoft Public License)
which was similar to the MIT license. The server-side code used the more restrictive MS-RsL (Microsoft
Reference Source License) instead, with the aim to facilitate development and debugging.

Microsoft AJAX's focus wasn't limited to AJAX requests; instead, it covered all the client-side aspects
needed for the modern dynamic websites of the time:

• The controls included in the Controls Toolkit, such as date pickers, accordions or dropdowns, contained
rich client-side functionality. The inclusion of control extenders of regular ASP.NET controls made it
easy to add some of the new functionality into existing Server Controls.

• Server-side controls like the UpdatePanel simplified the task of partially updating parts of the page
without the need for any JavaScript code to be written (see the markup sketch after this list).

• An entire type system for JavaScript was introduced, which allowed developers to write JavaScript code
using types. As we all know, this idea would be later implemented again and perfected with the release
of TypeScript. Seeing the early attempt, it is interesting nonetheless, and I wonder what lessons were
learned that were later used during the TypeScript development (if any).

• Both ASMX and WCF web services could be exposed to client-side scripting using the new
ServiceReference object, which automatically generated a JavaScript proxy to call the service.
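
To illustrate the UpdatePanel mentioned above, a typical usage looked roughly like the markup below (a
sketch, not taken from any specific sample):

<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="RefreshPanel" runat="server">
  <ContentTemplate>
    <%-- Everything inside the ContentTemplate is refreshed through an asynchronous postback,
         without a full page reload --%>
    <asp:Label ID="TimeLabel" runat="server" />
    <asp:Button ID="RefreshButton" runat="server" Text="Refresh" OnClick="RefreshButton_Click" />
  </ContentTemplate>
</asp:UpdatePanel>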

While the work on Microsoft AJAX neared its initial release, .NET Framework 3.0 was released in November
2006. The sole purpose of this release was to introduce the trio of XML-based frameworks: WCF (Windows
Communication Foundation), WPF (Windows Presentation Foundation) and WF (Windows Workflow Foundation).

It wasn’t until a year later when .NET Framework 3.5 was released in November of 2007, that ASP.NET
received new features. The most significant one was the fact that Microsoft AJAX was now part of the
framework. Another major feature was the inclusion of the ListView control, which allowed to easily build
CRUD-like functionality for any data source through the customization of different templates.

We also shouldn’t forget that .NET 3.5 introduced LINQ (Language Integrated Query) to the delight of many,
including ASP.NET developers, who even got a specific LinqDataSource control.

It was the end of the year 2007 and we had a mature .NET framework after three major revisions, which
included a very capable ASP.NET framework.

However, the cracks were starting to show.

The abstraction provided by ASP.NET did a reasonable job at providing client-side functionality without the
need for a deep understanding of HTML, HTTP and JavaScript; but the same abstraction leaked and got in
the way when things didn’t work, or custom functionality was needed.



The focus on server-side controls and declarative programming through bindings hid the realities of the
web even more from developers. When developers tried to keep up with the increasing expectations for
web applications, they could find themselves working around the framework rather than in the way the
framework intended.

At the same time, things were beginning to change for both developers and web users.

Ruby on Rails had been released in 2004 and had a huge impact on the web development world. Its
approach was considerably different from ASP.NET, providing a lightweight MVC framework that accepted
the strengths and limitations of the web and explicitly tried not to get in the developer’s way. In
comparison, it made ASP.NET look old and heavy, criticized by those seeking a better way of developing web
applications and often cited as the counter-example.

A NEW DIRECTION (2008-2014)


Changes in the web were happening fast and ASP.NET was somewhat ill prepared for them. We have
already discussed how the Web Forms abstraction meant the reality of the web and HTTP, HTML and
JavaScript was hidden from developers. While ASP.NET worked well in the old 2002 world of server-centric
web applications and XML-based web services, advances in web development meant the abstraction
became even more leaky.

On one hand, increasing amounts of functionality were being moved to the client side as JavaScript
code. While ASP.NET had tools aimed at JavaScript, particularly once Microsoft AJAX became part of the
framework, the focus was still predominantly on controls.

It was possible to write JavaScript and add it to a page using the ScriptManager but it wasn’t easy to
work around the PostBack model unless building SPA applications using ASMX or WCF services, something
for which JavaScript wasn’t fully ready. It was also possible to add custom JavaScript into custom Server
Controls, but the process was cumbersome.

In the end, many developers mostly used controls like the UpdatePanel, and the controls/extenders part of
the AJAX Control Toolkit, instead of writing JavaScript.

On the other hand, JSON became the most common payload format for AJAX requests, due to its smaller
size compared to XML and the ease of integration with the JavaScript code that typically consumed them.
Both ASP.NET and WCF focused on XML-based SOAP services, with support for JSON over HTTP added later.
Both ASMX web services and WCF services needed to be specifically configured to handle JSON over HTTP.

But that wasn’t all.

Since Web Forms abstracted the web for ASP.NET developers, directly working with JSON was a
cumbersome afterthought without first class support in the framework.

In summary, we had a leaky Web Forms abstraction combined with the increasing importance of JavaScript
and JSON, at the time when frameworks such as Ruby on Rails exploded.

Introducing ASP.NET MVC
Microsoft was aware of the situation, announcing in late 2007 that they were working on a new framework
called ASP.NET MVC where they were trying to address these concerns. The framework would be open
sourced and explicitly designed with testability and pluggability in mind. Microsoft went as far as sharing
the following list of key design goals as part of the announcement:

• Follows the separation of concerns design principle

• Grants full control over the generated HTML

• Provides first class support for TDD (test driven development)

• Integrates with existing ASP.NET infrastructure (Caching, Session, Modules, Handlers, IIS hosting, etc.)

• Pluggable. Appropriate hooks to be provided so components like the controller factory or the view
engine can be replaced

• Uses the ASPX view engine (without View State or postbacks) by default, but allows other view engines
to be used like the one from MonoRail, etc.

• Supports IoC (inversion of control) containers for controller creation and dependency injection on the
controllers

• Provides complete control over URLs and navigation

It is worth highlighting that this wasn’t the first attempt at a cleaner web framework for .NET. We have
already seen that Microsoft themselves provided guidance on implementing patterns like MVC or the front
controller back in 2003. The .NET community also worked on providing its own alternatives, with MonoRail
(part of the Castle Project, and inspired by Ruby on Rails) being the most prominent one. In fact, this is one
of the usual criticisms Microsoft faced for its attitude towards open source. Rather than embracing and
supporting open source initiatives, they would build and promote their own.

In the words of Steve Naidamast:

MVC had been around for a long time as a pattern, and the .NET Community had
it as well in the guise of the “Castle Project” where it was known as “MonoRail”. It
was simply never regarded by anyone as anything substantial until Microsoft began
promoting it with their own version of MVC, which appeared to be an exact copy of
what the “Castle Project” people had already developed.

Eventually, version 1.0 of the framework was released in March 2009. It shipped as a standalone release
separate from the .NET framework, which stayed on its 2-year release cycle. Its code was available in
CodePlex and used the MS-PL license, with broad rights to modify and redistribute the code. This marked
an important milestone in Microsoft’s attitude towards open sourcing.



Developers now had an alternative from Microsoft that embraced the web and didn’t try to hide details
such as HTML, HTTP, JavaScript or its stateless nature. Instead, it provided the tools and flexibility needed
to work with them.

An HTTP request would now be served by a specific Action method of a Controller class. The URL was
matched to the controller and action through a new routing module that allowed convention-based routes
with the default /controller/action. The action method would then perform the necessary logic needed to
serve the request and return an Action Result, where rendering a View was only one of the results. Views
would be ASPX templates located in a different folder from the controller.

Figure 3, Request lifecycle of an ASP.NET MVC application

The customary Hello World example could contain a HelloWorldController class located in the
Controllers folder:

public class HelloWorldController : Controller
{
    public ViewResult Index()
    {
        return View((string)null);
    }

    [HttpPost]
    public ViewResult Index(String name)
    {
        return View(name);
    }
}

A corresponding view Index.aspx would be in the Views/HelloWorld folder. Initially the framework used
the ASPX view engine, which made mixing C# code and HTML not as nice as current Razor code can be:

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<String>"
    MasterPageFile="~/Views/Shared/Site.Master" %>

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
  <% if (Model != null) { %>
    <p>
      Hello <%=Html.Encode(Model) %> !
    </p>
  <% } %>
  <% using (Html.BeginForm()) { %>
    <p>
      <label for="name">Name:</label>
      <input id="name" name="name" type="text">
    </p>
    <input type="submit" value="Say Hi" />
  <% } %>
</asp:Content>

Common layout and HTML structure were achieved through Master Pages, with Views specifying the Master
Page in their @Page directive. In this example, there would be a shared master page Site.Master in
the Views/Shared folder:

<%@ Master Language="C#" AutoEventWireup="true" Inherits="System.Web.Mvc.ViewMasterPage" %>
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Hello World</title>
</head>
<body>
  <h1>ASP MVC 1.0</h1>
  <asp:ContentPlaceHolder ID="MainContent" runat="server">
  </asp:ContentPlaceHolder>
</body>
</html>

The framework didn’t need to be told where the controllers or the views were located. The idea of
conventions was built into it and used for the routing and controller/views location.

The default route /controller/action was provided, and the framework would automatically look for a
controller class inside the Controllers folder that matched the name from the URL. It would then look for a
method of that controller whose name matched the one in the URL. Similarly, the framework would look for
views with the right name inside the Views folder. Developers were also free to replace these conventions
with their own, customizing the routes or wiring their own controller factory and/or view engine.
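
As a rough sketch of what that default convention looked like in code, the route registration lived in
Global.asax.cs and resembled the following (details varied slightly between template versions):

using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // The default convention: /{controller}/{action}/{id}, falling back to Home/Index.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" }
        );
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}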

Overall, a Hello World example like this, and even simple CRUD-like functionality looked more complex
and harder to implement than its Web Forms counterpart. This is why scaffolding, tooling to automatically
generate pages and controllers (an idea imported from Ruby on Rails) became so important.

MVC or Web Forms?


Development in MVC continued and its 2.0 version was released in March 2010, a month before .NET
Framework 4.0 was released. ASP.NET 4.0 further refined Web Forms: amongst other new features, it was
now possible to disable View State except on those controls where you explicitly enabled it, and the routing
module could now be used with Web Forms pages.

Microsoft faced a tough challenge. It had a significant investment and user base with Web Forms, while at
the same time, it had to shift towards MVC as their future strategy. The position adopted since MVC 1.0 was
that both frameworks were compatible and had different strengths. Web Forms was positioned as a RAD
platform due to its set of controls and stateful nature, allowing developers to quickly build applications
without dealing with the complexities of the web.

Regardless of how Microsoft positioned both frameworks, the reactions in the community of ASP.NET
developers were mixed. Some were enthusiastic about it since its announcement and saw it as the change
ASP.NET needed, something akin to what the ALT.Net movement had been preaching about.

For many who worked regularly with Web Forms, skepticism was the initial reaction, considering Web Forms
the mature and stable framework to build anything cost-efficiently and MVC as the experiment for purists
and experts. We can read an example here and another one here:

The problem with MVC is that even for "experts" it eats up a lot of valuable time and
requires lot of effort. Businesses are driven by the basic thing "Quick Solution that
works" regardless of technology behind it. WebForms is a RAD technology that saves
time and money. Anything that requires more time is not acceptable by businesses.

Time/money are the greatest reasons why webforms would be chosen over MVC. If
most of your team knows webforms, and you don't have the time to get them up to
speed on MVC, the code that will be produced may not be quality. Learning the basics
of MVC then jumping in and doing that complex page that you need to do are very
different things. The learning curve is high so you need to factor that into your budget.

The complexity and learning curve of MVC was highlighted as one of its disadvantages, since ASP.NET
developers would need to write much more code to achieve the same functionality they got using the
Server Controls and View State. In the words of Rick Strahl or in this answer:

I’ve seen an early build of the framework and it’s not really clear to me how to
effectively handle more complex pages […]. I have built several Web frameworks in the
past that use a similar approach to MVC and while you can easily do many simple
things, in my experience this approach really starts falling apart when you have pages
that contain many different and somewhat independent components. Managing this
complexity through a controller-based approach that manages no state for you is
much more difficult than what Web Forms provides today.

Since, you are not using WebForms you cannot use any ASP.NET control. It means
if you want to create a GridView you will be running a for loop and create the table
manually. If you want to use the ASP.NET Wizard in MVC then you will have to create
on your own. […] you need to keep in mind that would you benefit from creating all
the stuff again or not? In general, I prefer Webforms framework due to the rich suite of
controls and the automatic plumbing

Another point that generated some push back was the move back to have code mixed with markup in the
views, seen as a step back to the classic ASP days that Web Forms moved away from. We can find some of
these examples in the answers to this Stack Overflow question:

Now someone in their great wisdom (probably someone who never programmed
classic ASP) has decided it’s time to go back to the days of mixing code with content
and call it "separation of concerns".

The biggest downside of MVC is we are going back to the days of ASP. Remember
the spaghetti code of mixing up Server code and HTML??? Oh my god, try to read an
MVC aspx page mixed with javascript, HTML, JQuery, CSS, Server tags and what not...

There were also concerns about giving up the existing investment in Web Forms like knowledge, Server
Controls, tooling and infrastructure in favor of a completely new framework. For companies that had
invested heavily in Web Forms, this wasn’t an easy decision.

To me, these initial reactions, and particularly the consensus amongst them about the complexity of MVC
and the web without the Web Forms abstraction, prove that Microsoft had to make the move into MVC. This
question contains the perfect example:

How would that work? No viewstate? No events?

These are the symptoms of a community of developers about to fall behind with the web, right at the
time when a good understanding of HTML, HTTP and JavaScript would become vital skills for any web
developer!

And let me tell you something, I was one of them.

Around this time, I had been working with Microsoft technologies for about 5 years. I did desktop development
with VB6 and Win Forms, WCF SOAP-based services and the occasional internal site in ASP.NET.

I remember clearly going through the MVC 2 docs while trying to build an example site and being annoyed at
the complexity of it. I had a particularly hard time understanding that the web was stateless and managing state
between browser and server was hard! My instinct told me that when some UX event fired you should be able
to run code in response to it, code which had access to the current state. This worked well in Win Forms and was
simulated in Web Forms but clashed frontally with MVC and its unashamed exposure of the HTML, HTTP and
JavaScript realities of the web.



At the same time, I hated the few Web Forms applications that I had seen at work (which we managed to
transform into piles of spaghetti code-behind messes that were impossible to follow) and knew there must be a
better way. Even after I was a convert to MVC, it still took me years to adapt to the harsh realities and limitations
of the web, after years of Windows development!

Outside of Microsoft, after jQuery was released in 2006, it quickly became the de-facto client-side library
for DOM manipulation and AJAX, so much so that even Microsoft ended up including it in ASP.NET project
templates.

Initially, client-side code consisted of a collection of jQuery event handlers tied together more or less
elegantly. But as the shift of logic to the client continued, client-side JavaScript frameworks emerged and
tried to provide some structure. By 2010 we already had 3 major frameworks competing for our attention:
Backbone, Knockout and AngularJS. Even Microsoft themselves caused another stir by releasing the open
source TypeScript project in 2012.

Having a server-side framework that played nicely with these client-side frameworks, giving full control
over HTML, CSS and JavaScript, and with first class support for AJAX requests was necessary for ASP.NET to
remain relevant.

It seems that MVC arrived at the right time!

ASP.NET MVC is here to stay


As MVC continued its way forward, the skepticism among ASP.NET developers morphed into confusion. Ruby
or PHP developers used to dealing with HTML and HTTP would have an easier time picking up the framework,
but a traditional Web Forms developer was confused by its apparent complexity and lack of functionality
compared to Web Forms. And as soon as the framework lost its “experimental” tag in the eyes of many,
forums, blogs and magazines would start discussing the right approach as to when and how to adopt MVC.

The development cycle of MVC continued to be independent of the .NET framework, with MVC 3 released in
early 2011. This was a major milestone for MVC that improved the scaffolding features and introduced the
Razor view engine, finally replacing the old ASPX view engine. Another highlight was the release of NuGet
which finally brought a package manager to the .NET community, similar to the existing RubyGems or NPM
which made the Ruby and Node.js open source ecosystems thrive.

Releasing and using open source libraries for .NET had never been easier!

Sure, it was possible to use open source libraries earlier, but most companies relied either on their in-house
libraries or Microsoft ones, with Web Forms controls such as Telerik being the one common exception.

NuGet changed this scenario!

Furthermore, NuGet was the perfect match for MVC and its pluggable nature.

Discussions raged about which dependency injection library was the best, or which mocking framework
should one use, or whether NHibernate was better than Entity Framework (although Microsoft would keep
creating their own libraries, like Unity or EF, rather than supporting the open sourced ones). One could read
the NuGet package of the week column from Scott Hanselman, run the NuGet install command and have it
working on your website in minutes.

# This was the future!
Install-Package MiniProfiler

It was around this time, with MVC 3 as the latest iteration, that many developers and companies started
to realize MVC was here to stay. Slowly, companies tiptoed into MVC with low-risk projects or proof of
concepts, and developers were getting used to it. As both companies and developers gained experience
with it, MVC began to consolidate as the default choice for new developments.

Even more importantly, MVC succeeded in enabling separation of concerns and TDD. The SOLID principles,
dependency injection and unit testing became part of the common vocabulary of ASP.NET developers!
These were not new, and certainly not impossible in the past, but they became relevant and broadly
accepted whereas before they were a struggle of an informed minority.

But that wasn’t everything. Microsoft released Web Pages as an attempt to merge the concept of Web
Forms with the Razor view engine and other MVC features, together with WebMatrix, a free tool bringing a
simplified IDE and IIS Express together.

Meanwhile the ever-increasing popularity of client-side JavaScript frameworks and the advent of mobile
applications multiplied the need for HTTP services, with REST (Representational State Transfer) becoming
the new buzzword.

It’s no surprise then that mobile and REST services were the major topics when ASP MVC 4 was released in
mid-2012. A new mobile template that used jQuery mobile was introduced, as well as the Display Modes,
which allowed the framework to automatically select between desktop/mobile versions of the same view
based on the browser’s user agent. However, the real star of this release was a new framework called Web
API, introduced to simplify the creation of REST HTTP services by adapting the MVC architecture (but still
being a different framework). Controllers, Filters and Routes were all known building blocks by then, which
meant anyone using MVC would find themselves at home with Web API:

public class ProductsController : ApiController
{
    public IEnumerable<Product> GetAllProducts()
    {
        return repository.GetAll();
    }

    public Product GetProduct(int id)
    {
        Product item = repository.Get(id);
        if (item == null) throw new HttpResponseException(HttpStatusCode.NotFound);
        return item;
    }

    public Product PostProduct(Product item) { ... }
    public void PutProduct(int id, Product product) { ... }
    public void DeleteProduct(int id) { ... }
}

Shortly after, an update to MVC was released that introduced SPA (Single Page Applications) project
templates using frameworks like Knockout, AngularJS or Backbone. The template combined both MVC and
Web API with different client-side libraries, further moving the logic to the client.

By then, the ASP.NET team was well used to frequent releases introducing updates and new libraries.
During 2012, they further improved ASP.NET by releasing the SignalR library as a NuGet package. This gave
developers a tool that allowed them to easily implement real-time functionality in ASP.NET applications.



And with SignalR, the extended ASP.NET family was complete.

One ASP.NET
The refinements to ASP.NET continued in late 2013 with the release of version 4.5.1 of the .NET Framework.
In this release Microsoft repositioned all the technologies now part of ASP.NET (i.e. Web Forms, MVC, Web
API and SignalR) as components under the One ASP.NET umbrella. A unified project template in Visual
Studio now acted as the entry point for all the different components of ASP.NET, with the idea being that
developers could mix and match the different frameworks and find the right fit for their requirements.

Figure 4, All the frameworks rebranded as One ASP.NET

Web API reached its version 2.0 with this release. Amongst other features, the support for OData (Open Data
Protocol) was improved, a specific package to implement CORS (Cross-Origin Resource Sharing) was added,
and above all, the now ubiquitous attribute routing was introduced:

[Route("api/books")]
public IEnumerable<Book> GetBooks() { ... }
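
The CORS support mentioned above was delivered through a separate package (Microsoft.AspNet.WebApi.Cors);
enabling it typically looked roughly like the following sketch, with the origin used here purely as an example:

using System.Collections.Generic;
using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Turn on support for the [EnableCors] attribute.
        config.EnableCors();
        config.MapHttpAttributeRoutes();
    }
}

// Allow browsers on a specific origin to call this controller cross-domain.
[EnableCors(origins: "http://www.example.com", headers: "*", methods: "*")]
public class BooksController : ApiController
{
    [Route("api/books")]
    public IEnumerable<string> GetBooks()
    {
        return new[] { "Book 1", "Book 2" };
    }
}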

At the same time, MVC 5 was released. It received the same attribute routing as Web API, scaffolding
support was improved, a new library called ASP.NET Identity for authentication was released and the
request lifecycle was refined with separated filters for authentication and authorization.

The fact that the default project template was updated with Bootstrap 3 is a great excuse to take a look at
the evolution of the MVC home page:

Figure 5, the evolution of the MVC home page from MVC 3 (top) to MVC 5 (bottom)

Even Web Forms continued receiving new features. Interestingly, most of these features were aimed at
bringing Web Forms closer to its MVC cousin. Developers could now use model binders, data annotations,
routing or unobtrusive JavaScript, all features that began their lives in MVC.

ASP.NET reached 2014 in good shape, with several libraries offered under the One ASP.NET umbrella to suit
different web application needs.

Outside of ASP.NET, React arrived in 2013 and instantly became one of the SPA frameworks to consider,
while Node.js and the NPM ecosystem had experienced an explosive growth. In fact, many were enjoying
building web applications with Node.js thanks to lightweight frameworks like Express, based around the
idea of a request pipeline of middleware functions. So much so that the MEAN stack (MongoDB, Express,
AngularJS and Node.js) became another buzzword of the era!

THE CORE PRESENT (2014-TODAY)


It seemed like the ASP.NET team had finally achieved its long-term vision with the One ASP.NET idea and its
different frameworks.

But this was far from the truth.

The team was busy taking a much deeper look at the framework and even at its fundamental .NET roots.
They even used the internal code name ASP.NET vNext instead of its (then) public name ASP.NET 5.
This took many by surprise, but the roots could be seen in Project Katana, Microsoft's OWIN (Open
Web Interface for .NET) implementation for .NET. OWIN and Katana were Microsoft's initial attempt at
modernizing ASP.NET by taking inspiration from the likes of Ruby and Node. It tried to provide a lightweight
web platform (no System.Web) built on best practices learned from other frameworks (a request pipeline
composed of middleware functions) that could be hosted on any server.
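
To give a flavour of that model, a minimal Katana-style pipeline looked roughly like the sketch below (using
the Owin/Microsoft.Owin packages of the time; illustrative rather than actual template code):

using System.Diagnostics;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Middleware: runs for every request, then hands over to the rest of the pipeline.
        app.Use(async (context, next) =>
        {
            var watch = Stopwatch.StartNew();
            await next();
            watch.Stop();
            Trace.WriteLine(context.Request.Path + " handled in " + watch.ElapsedMilliseconds + " ms");
        });

        // Terminal middleware: produces the response.
        app.Run(context => context.Response.WriteAsync("Hello from an OWIN pipeline!"));
    }
}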

Project Katana was released in 2013 with two of the existing ASP.NET libraries, Web API and SignalR
becoming compatible with it. For example, it was possible to host Web API as a console app thanks to its
OWIN compatibility.

As of today, OWIN has been superseded by ASP.NET Core, since it achieves the same goals and more.

LOOKING BACK, IT’S EASY TO SEE THE ASP.NET TEAM TESTING THE DIRECTION AND GAINING KNOWLEDGE
BEFORE EMBARKING ON THE ASP.NET CORE JOURNEY.

Early development stages, ASP.NET vNext


As early as May 2014, David Fowler announced vNext was in the works:

For the past few months I've been working on what we're now calling ASP.NET vNext
[…] We took a look at some of the common problems that exist in our ecosystem
today, and took best practices learned around .NET, ASP.NET and web development
over the years and combined them to come up with the following requirements.

And right at the end of his blog post one could read the following:

Bonus: We are working with the Mono team to make it work with *nix, osx

Although Mono had been around for quite some time (version 1.0 released in 2004), this was the first time
Microsoft would build a cross-platform .NET product! In a follow-up blogpost he shared preliminary details
of the new architecture, including its new runtime (back in those days known as the KRuntime), new HTTP
abstractions, simplified hosting and the plans to unify MVC/Web API.

Around the same time Scott Hanselman also posted on his blog about vNext. He showed the new runtime
in action, highlighted how NuGet was used to manage all the dependencies and gave us a glimpse of the
new project.json project file.

{
  "webroot": "wwwroot",
  "version": "1.0.0-*",
  "exclude": [
    "wwwroot"
  ],
  "packExclude": [
    "**.kproj",
    "**.user",
    "**.vspscc"
  ],
  "dependencies": {
    "Microsoft.AspNet.Mvc": "1.0.0-beta1",
    "Microsoft.AspNet.Hosting": "1.0.0-beta1",
    "Microsoft.AspNet.Diagnostics": "1.0.0-beta1"
  },
  "frameworks": {
    "aspnet50": { },
    "aspnetcore50": { }
  },
  "commands": {
    "web": "Microsoft.AspNet.Hosting --server Kestrel --server.urls http://localhost:50"
  }
}

While the change to project.json was sadly dropped in Core 1.1.1, in favor of a simplified .csproj (to remain
compatible with existing tooling), it’s another great example of the bold approach taken by the team. Scott
Hanselman also gave an important hint of the open source mentality that was driving the team:

There’s some really cool stuff going on on the ASP.NET and Web Tools team. The team
has been pushing open stuff at Microsoft for a few years now and we've joined forces
with the amazing innovators from the .NET core team and beyond!

This had been a very slow revolution that took them years to accomplish. Remember when we looked back
at Microsoft AJAX and its MS-PL license? It’s great to see that from those humble beginnings, the team
managed to fully adopt open source. With vNext they were planning a complete rewrite of ASP.NET with a
new CLR while doing development in the open: using GitHub for code and issue tracking, holding community
standups, and eventually releasing under the Apache license. I recommend reading Scott Hunter's excellent
entry Starting the .NET open source revolution.

As the work on vNext continued, the team was keen on sharing information and receiving feedback. For
example, Daniel Roth wrote for the MSDN magazine in Oct 2014 an entry about the recently released
ASP.NET 5 preview. This would restate the goals of a cross-platform framework, flexible hosting, unified
MVC/Web API models, built-in dependency injection, a new request pipeline based on the middleware
concept and with NuGet packages as the unit of dependency.

ASP.NET shipped as part of the Microsoft .NET Framework 1.0, released in 2002 along
with Visual Studio 2002. It was an evolution of Active Server Pages (ASP) that brought
object-oriented design, the .NET Base Class Libraries (BCLs), better performance and
much more. ASP.NET was designed to make it easy for developers used to writing
desktop applications to build Web applications with ASP.NET Web Forms. As the Web
evolved, new frameworks were added to ASP.NET: MVC in 2008, Web Pages in 2010,
and Web API and SignalR in 2012. Each of these new frameworks built on top of the
base from ASP.NET 1.0.

With ASP.NET 5, ASP.NET is being reimagined just like ASP was reimagined to
ASP.NET in 2002.

After years of focus on GUI-based tools for Visual Studio, it was refreshing to see a new take on command
line tools as part of the new framework. The initial previews used the so called KRuntime, with three
different CLI tools:

• kvm – used to install and switch between versions of the runtime

• kpm – used to restore project dependencies and package (or build) a project into a self-contained image

• k – used to run commands defined in the project.json file, starting the application

One would kvm list existing versions of the runtime, kpm restore && kpm pack your project and
finally k web to run it on localhost. If these commands sound alien to you, it is because they went through
a couple of renames. The KRuntime became the DNX runtime and the commands became dnvm, dnu and
dnx instead of kvm, kpm and k. They would change once more with ASP.NET Core 1.0 as part of the unified
dotnet CLI, becoming subcommands like dotnet list, dotnet restore, dotnet publish, dotnet run,
etc.

ASP.NET vNext vs Node.js


It is amusing to see how similar vNext appears to Node.js, and this isn't a coincidence since Node.js is
admittedly one of its influences, alongside Ruby, Go, the learnings from Project Katana and others.

The development process would continue, with up to 8 beta public releases before the first RC (Release
Candidate) was released in November 2015. There would still be a second RC release in May 2016 when
ASP.NET 5 was officially renamed as ASP.NET Core (ASP.NET vNext had always been an internal name)
alongside a renamed .NET Core framework. Before RC2 was released, Scott Hanselman had already
announced the rename and explained the reasons for it (since it caused some pain for early RC1 adopters):

Why 1.0? Because these are new. The whole .NET Core concept is new. The .NET Core
1.0 CLI is very new. Not only that, but .NET Core isn't as complete as the full .NET
Framework 4.6. We're still exploring server-side graphics libraries. We're still exploring
gaps between ASP.NET 4.6 and ASP.NET Core 1.0.

Armed with a new name that better conveyed the difference between ASP.NET Core and the existing
ASP.NET, the team was ready for the initial launch.

ASP.NET Core launch


ASP.NET Core 1.0 was finally released in late June 2016. It took more than two years of development, but
the result was certainly promising. As Microsoft would put it in its announcement:

We challenged everything instead of delivering an incremental update so you can
have an extremely modular, fast and lightweight platform perfect for the new era
of software development where monolithic applications are replaced by small,
autonomous services that can be deployed individually. All of that while keeping and
extending what .NET is best for: developer productivity, and modern languages and
libraries. […] The end result is an ASP.NET that you’ll feel very familiar with, and which
is also now even more tuned for modern web development.

Creating a web application in ASP.NET Core felt very similar to creating an MVC/Web API application. A
controller wasn't much different from the earlier model; in fact, this could very well be ASP.NET MVC code:

public class HelloWorldController: Controller
{
    public IActionResult Index()
    {
        return View();
    }

    [HttpPost]
    public IActionResult Index(String name)
    {
        return View((object)name);
    }
}

Its corresponding Index.cshtml view using Razor also appeared familiar, even though it hinted at one of the
main changes in Views, the Tag Helpers:

@model String
@if (Model != null) {
<p>Hello @Model</p>
}

<form asp-action="Index">
<p>
<label for="name">Name:</label>
<input id="name" name="name" type="text">
</p>
<input type="submit" value="Say Hi" />
</form>

However, once one started scratching the surface, the differences became evident.

The first one is the new request pipeline based on middleware functions. Common functionality like
authentication is implemented as middleware, and developers can easily create and register their own
middleware. This way a pipeline gets defined, in which a request flows through the different middleware
until it reaches the routing middleware that delegates to the controller action. And that's only when the
traditional MVC model is being used; the framework is flexible enough that replacing routing with a simple
request handler written by you is very straightforward, as the sketch below illustrates.

Figure 6, Unified request pipeline in ASP.NET Core
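
To make the idea more concrete, here is a minimal sketch of what registering middleware looks like. The header name and the response text are made up for illustration; this is not code from the article's samples:

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Inline middleware: do something with the request/response,
        // then hand control to the next middleware in the pipeline.
        app.Use(async (context, next) =>
        {
            context.Response.Headers["X-Request-Id"] = Guid.NewGuid().ToString();
            await next();
        });

        // A terminal request handler registered instead of MVC routing.
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from a plain request handler!");
        });
    }
}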

Looking past the middleware concept, we also had a dependency injection framework built-in, with the
dependency inversion pattern applied throughout the framework. There was also a completely different
project startup and setup that reflected the new lightweight hosting approach and the middleware-based
pipeline.

Some of the most immediately obvious improvements were the additions made to Razor for building
cleaner views. I am referring to Tag Helpers and View Components which replace the old MVC HtmlHelpers
and Partial Views. Check out one of my earlier articles to see how these could be used to create cleaner
views.

When we got our hands on the 1.0 version, the ASP.NET team was busy working on the following releases.
In November 2016, version 1.1.0 was released with several improvements and refinements like the usage
of middleware as filters or the possibility to render any View Component with a Tag Helper. The latter
would make the usage of View Components very similar to the usage of components in any JavaScript
SPA framework. That is, to render a View Component called LatestArticles it was now possible to simply
write <vc:latest-articles></vc:latest-articles> instead of @await Component.InvokeAsync("LatestArticles").
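
For readers who haven't used View Components, a hypothetical LatestArticles component could look roughly like this (the class and its data are invented purely for illustration):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Rendered with <vc:latest-articles></vc:latest-articles> or
// @await Component.InvokeAsync("LatestArticles")
public class LatestArticlesViewComponent : ViewComponent
{
    public async Task<IViewComponentResult> InvokeAsync(int count = 5)
    {
        var articles = await GetLatestArticlesAsync(count);
        // Renders Views/Shared/Components/LatestArticles/Default.cshtml
        return View(articles);
    }

    // Dummy data source standing in for a real repository or service.
    private Task<string[]> GetLatestArticlesAsync(int count) =>
        Task.FromResult(new[] { "Article 1", "Article 2" });
}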

There would be two more minor releases, 1.1.1 and 1.1.2 before the next major release was publicly
announced.

ASP.NET Core 2.0 and the future roadmap


After two previews, ASP.NET Core 2.0 was finally released in August 2017.

One of its biggest features was the inclusion of Razor Pages, which in a way brings back the Page Controller
pattern we discussed during the design of Web Forms in 2003!

However, this time there is no forms abstraction, and instead we are fully in control of the HTTP requests
and the generated HTML. Our HelloWorld example can be implemented as a new HelloWorld.cshtml page:

@page
@model HelloWorldModel

@if (Model.Name != null) {
    <p>Hello @Model.Name</p>
}

<form method="post">
    <p>
        <label asp-for="Name"></label>
        <input class="form-control" asp-for="Name" />
    </p>
    <input type="submit" value="Say Hi" />
</form>

Alongside its code-behind file (yes, we got back the code-behind. Isn’t it fun how it all comes around?)
implementing a Page Model:

public class HelloWorldModel: PageModel
{
    [BindProperty]
    public String Name { get; set; }

    public IActionResult OnPost()
    {
        return Page();
    }
}

This added to the framework the possibility to use the MVVM (Model View View-Model) pattern that
developers can mix and match with the traditional MVC approach according to their needs.

Another important addition in ASP.NET Core 2.0 was the introduction of IHostedService. This brings a
very convenient and straightforward way to run background processes alongside your web application. This
was great for scenarios that don't require complex distributed architectures. You can read more about these
and other features in one of our earlier articles on ASP.NET Core 2.0.
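
As a rough sketch of the idea (the cleanup task is made up; in ASP.NET Core 2.0 the service is registered with AddSingleton<IHostedService, ...>, while 2.1 added the AddHostedService<T> shortcut):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Hypothetical background job running alongside the web application.
public class CleanupHostedService : IHostedService, IDisposable
{
    private Timer _timer;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Kick off periodic work every five minutes.
        _timer = new Timer(_ => DoCleanup(), null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop the timer when the host shuts down.
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    private void DoCleanup() { /* periodic work goes here */ }

    public void Dispose() => _timer?.Dispose();
}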

Version 2.1.0 was released in May 2018 and amongst many features and improvements, it finally brought
the rewritten SignalR framework into ASP.NET Core. This is a simpler, more performant and lightweight
version of SignalR, designed to scale and take advantage of .NET Core.
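
To give a flavour of the rewritten SignalR, a hub is now just a class deriving from Hub. This ChatHub is a made-up example, not from the article:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients invoke SendMessage; the hub broadcasts it to everyone connected.
public class ChatHub : Hub
{
    public Task SendMessage(string user, string message) =>
        Clients.All.SendAsync("ReceiveMessage", user, message);
}

// Mapped in Startup.Configure (ASP.NET Core 2.1/2.2 syntax):
// app.UseSignalR(routes => routes.MapHub<ChatHub>("/chat"));
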
The broader .NET Core received global tools as a means to install global command-line tools, similar to
globally installed NPM packages. It is still too early to tell whether this will cause a boom of tools written in
.NET as it did with Node.js, but the curated list maintained by Nate McMaster is quickly getting longer.

With SignalR and Razor Pages, the original set of libraries under One ASP.NET had finally been ported to
.NET Core! (Assuming Razor Pages as the counterpart for both Web Forms and Web Pages)

Looking ahead, we can see two exciting features in the imminent release of .NET Core 3.0:

• Support for Windows applications will finally be added, allowing WinForms, WPF and UWP applications
to be supported in Core 3.0

• The experimental Blazor framework will partially ship with .NET Core 3.0. This is explained by the fact
that the server-side model of Blazor will be extracted and renamed as Razor Components. The client-side
model based on WebAssembly, which can run .NET in the browser, will become the sole model of Blazor
and will remain an experimental release not ready for production. Check our article about Blazor to
understand more about both modes.

Let’s not forget all the performance improvements that have been made in .NET Core, and the introduction
of constructs like Span<T> that have made ASP.NET Core 2.2 the 3rd fastest web server (in plain text
responses) according to TechEmpower benchmarks. The team has promised even better performance for
its next release.

Seems like the future is bright for ASP.NET, even 17 years after its first release!

And funnily enough, with Razor Components being part of ASP.NET Core 3.0, we have come full circle
again towards a page model encapsulating template and logic, able to react to UX events. Although this
time the logic actually runs in the server process, with only a small client in the browser rendering the DOM
updates pushed through a SignalR connection.

Conclusion

17 years is a long time. During these years we have seen ASP.NET rise after its initial launch, struggle with
the pace of changes in the web, correct its course and finally re-invent itself as one of the fastest servers
available by running on Linux.

Along the way, a closed-source Microsoft product with an ecosystem that suffered from the “not invented
here” syndrome has turned into an open source community, in a huge way, thanks to the efforts of the ASP.
NET team.

I hope you have enjoyed this nostalgic trip back in time. I can only wonder what the future might bring!

Daniel Jimenez Garcia


Author

Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience.
He started as a Microsoft developer and learned to love C# in general and ASP MVC in
particular. In the latter half of his career he worked on a broader set of technologies
and platforms while these days is particularly interested in .Net Core and Node.js. He
is always looking for better practices and can be seen answering questions on Stack
Overflow.

Thanks to Damir Arh for reviewing this article.

PATTERNS AND PRACTICES

Yacoub Massad

Global State in
C# Applications
– Part 1
In this article, I will discuss global state in C#
applications. I will talk about the problems of global
state and discuss a solution.

Introduction
When writing code, we often use mutable variables to store changing data.

Consider this method for example:

public static void ProcessDocuments(string[] documentIds)
{
    int countSuccess = 0;
    int countFailedToFetch = 0;
    int countFailedToTranslate = 0;
    int countFailedToStoreInDatabase = 0;

    foreach (var documentId in documentIds)
    {
        var processingResult = ProcessDocument(documentId);

        if (processingResult == DocumentResult.Success)
            countSuccess++;
        else if (processingResult == DocumentResult.FailedToFetch)
            countFailedToFetch++;
        else if (processingResult == DocumentResult.FailedToTranslate)
            countFailedToTranslate++;
        else
            countFailedToStoreInDatabase++;
    }

    string report = $@"Processing complete. Total Success: {countSuccess}
Total failed to fetch: {countFailedToFetch}
Total failed to translate: {countFailedToTranslate}
Total failed to store in database: {countFailedToStoreInDatabase}";

    SendEmailToAdministrator(report);
}

The ProcessDocuments method goes through an array of document ids, and calls a method called
ProcessDocument on each document id. The ProcessDocument method processes a single document. It
obtains each document from some repository, translates it, and then stores it in some database.

The return value of the ProcessDocument method which is of type DocumentResult indicates whether
the processing was successful or not. In case of failure, it indicates one of three kinds of failure.

Based on the return value of the ProcessDocument method, we increment one of the four counters
declared in the code above. There is one counter for each possible value of DocumentResult. Such
counters are used later to create a report that will be sent to the administrator via email.

These four counters hold state.

It is called state because the counters don’t keep the original values they are initialized with (0 in this case),
but change as the method executes.

In this article, I am going to talk about state in two different scopes.

The first scope is the method scope. In this scope, mutable variables are defined, mutated, and used inside a
single method.

The second scope is the multi-method scope. In this scope, mutable fields (or properties) can be defined in
one place, mutated in another place (e.g. one method) and used in yet another place (e.g. another method).

I will start by talking about techniques for eliminating state in the method scope.

State in the method scope
The ProcessDocuments method I talked about earlier, is relatively readable. Because the counters are
defined and used only inside this method, and because this method is relatively small, using state in this
method is not really an issue.

Usually, state in the method scope doesn’t need to be eliminated. Still, we can eliminate it if we want to.

Consider this method that uses LINQ’s Count method:

public static void ProcessDocumentsViaLinqCount(string[] documentIds)
{
    var results = documentIds
        .Select(id => ProcessDocument(id))
        .ToList();

    int countSuccess = results.Count(x => x == DocumentResult.Success);
    int countFailedToFetch = results.Count(x => x == DocumentResult.FailedToFetch);
    int countFailedToTranslate = results.Count(x => x == DocumentResult.FailedToTranslate);
    int countFailedToStoreInDatabase = results.Count(x => x == DocumentResult.FailedToStoreInDatabase);

    string report = $@"Processing complete. Total Success: {countSuccess}
Total failed to fetch: {countFailedToFetch}
Total failed to translate: {countFailedToTranslate}
Total failed to store in database: {countFailedToStoreInDatabase}";

    SendEmailToAdministrator(report);
}

In this updated method, we use LINQ. We use the Select method to call the ProcessDocument method
and obtain a DocumentResult for each document id. We call the ToList method to force execution and
to put the results inside a List<DocumentResult>. For each possible value of DocumentResult, we
define a count* variable to hold the corresponding number of documents.

To count the documents, we use the Count method. We use an overload of this method that takes a
predicate. We use this predicate to specify which values to count.

Although we see no variables mutated inside the ProcessDocumentsViaLinqCount method, the Count
method internally uses a mutable variable to keep track of the count. This is how the method looks, based
on the source code of the .NET Framework version 4.7.2 (modified to remove code irrelevant to the
discussion here):

public static int Count<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)
{
    int count = 0;
    foreach (TSource element in source)
    {
        if (predicate(element)) count++;
    }
    return count;
}

The count variable is mutated every time the predicate returns true. So, by using the Count method, we
simply move state from ProcessDocumentsViaLinqCount to Count.

Note: the ProcessDocumentsViaLinqCount is not very optimized when it comes to performance. There are ways to
make it more performant while keeping it state-free. This is outside the scope of this article, however.

Recursion is another technique that can be used to eliminate method-scoped state. Consider this method
for example that calculates the factorial of a number:

public static int Factorial(int number)
{
    int factorial = 1;
    for (int i = 2; i <= number; i++)
    {
        factorial = factorial * i;
    }
    return factorial;
}

There are two variables that get mutated in this method: factorial and i.

Here is how the recursive version looks like:

public static int FactorialRecursive(int number)
{
    if (number <= 1)
        return 1;
    return number * FactorialRecursive(number - 1);
}

No variables are mutated in this version. It is easy to understand this version of the method if we see that
n! = n * (n - 1) * (n - 2) * … * 1 = n * (n - 1)!

Let’s now talk about a more important type of state.

State in the multi-method scope


As I mentioned before, method-scoped state is usually not an issue and doesn’t need to be eliminated,
especially if methods are small. It is state that is defined outside the scope of a single method that needs
special treatment. Consider this example:

public static class GlobalServer1State
{
    public static DateTime Server1DownSince { get; set; }
    public static bool Server1IsDown { get; set; }
}

public static class GermanTextTranslationModule
{
    public static Text TranslateFromGerman(Text text)
    {
        bool useServer1 = true;

        if (GlobalServer1State.Server1IsDown)
        {
            if (DateTime.Now - GlobalServer1State.Server1DownSince < TimeSpan.FromMinutes(10))
            {
                useServer1 = false;
            }
        }

        if (useServer1)
        {
            try
            {
                var result = TranslateFromGermanViaServer1(text);

                GlobalServer1State.Server1IsDown = false;

                return result;
            }
            catch
            {
                GlobalServer1State.Server1IsDown = true;
                GlobalServer1State.Server1DownSince = DateTime.Now;
            }
        }

        return TranslateFromGermanViaServer2(text);
    }

    //...
}

public static class SpanishTextTranslationModule
{
    public static Text TranslateFromSpanish(Text text)
    {
        //Same logic as in TranslateFromGerman
        //...
    }
    //...
}

The complete example is available in the UsingGlobalVariables project in the StateExamples solution. You
can see this solution here: https://github.com/ymassad/StateExamples

Note that for simplicity, all the projects use dummy data instead of real data.

The TranslateFromGerman method takes some text as input and translates it from German to English
using some remote servers.

Based on customer requirements, the application should use translation server 1 to translate. In case there
are errors using server 1, server 2 should be used. Also, if there is an error with server 1, server 2 should be
used for any translation requests for the next 10 minutes. Only after that should the application try to use
server 1 again.

The Server1IsDown and Server1DownSince properties of the GlobalServer1State class are used
to store the state required to make this work. When there is an error talking to server 1, the code sets
Server1IsDown to true and Server1DownSince to the current time. The values of these two properties
are used at each invocation of TranslateFromGerman to determine whether to use server 1 or server 2.
After a successful call to server 1, Server1IsDown is set to false.

The TranslateFromSpanish method works in the same way as TranslateFromGerman. They both share
the state stored in GlobalServer1State. If the application detects that server 1 is down when invoking
TranslateFromGerman, TranslateFromSpanish would be smart enough to use server 2 if it was
invoked within 10 minutes after that.

This code, however, has the following issues:

1. The behavior of TranslateFromGerman, for example, cannot be understood by reading this method
alone. Whether this method will use server 1 or server 2 does not depend solely on the parameters
passed to this method.

Previous invocations of TranslateFromGerman might affect how the method behaves. Also,
invocations of TranslateFromSpanish can affect how TranslateFromGerman behaves. Other
relevant methods might also be designed to change the Server1IsDown and Server1DownSince
properties and thus affect the behavior of TranslateFromGerman. Also, the signature of
TranslateFromGerman does not tell us that this method modifies state. This makes this method
dishonest. See the Writing Honest Methods in C# article for more details.

2. When writing tests for TranslateFromGerman or for methods that call it directly or indirectly, we
need to initialize the state properties correctly before running the tests so that the results will be
predictable. It is easy to forget something like this. For example, the TranslateDocumentsInFolder
method from the example project calls the TranslateFromGerman method indirectly
(TranslateDocumentsInFolder > TranslateDocument > TranslateParagraph > TranslateText >
TranslateFromGerman).

In a large application that uses global variables to store state, it would be hard to figure out which state
a method like TranslateDocumentsInFolder depends on indirectly.

3. What starts as global state might be required later to become scoped state. For example, consider the
following new changes:

• Now there are two servers (1 and 2) in location A, and another two servers (1 and 2) in location B.

• Documents in folder 1 should be translated using servers in location A, and documents in folder 2
should be translated using servers in location B.

• The state of servers in location A is different from the state of servers in location B. This means that
when server 1 in location A is down, translation of documents from folder 2 should continue to use
server 1 in location B.

It doesn’t make sense to duplicate the code in TranslateFromGerman and TranslateFromSpanish (and
other methods that call them) so that each copy of these methods (a copy for documents in folder 1 and
another copy for documents in folder 2) uses a different global state object. It is much easier to reuse all of
these methods. This means that we cannot simply use global state here.

There is a solution that addresses all of these issues. Consider this updated method signature:

public static Text TranslateFromGerman(Text text, ref Server1State server1State)

public class Server1State
{
    public DateTime Server1DownSince { get; }
    public bool Server1IsDown { get; }
    //Constructor..
}

Notice the additional Server1State parameter added to the TranslateFromGerman method. This
parameter is passed by reference. This means that this parameter allows the caller to give this method a
Server1State value, but also the TranslateFromGerman method can give the caller back an updated
value of this parameter.

See the PassingStateViaRefParameters project in the sample code

The following are differences between this and the global state solution:

1. The signature of the new TranslateFromGerman method indicates to the reader of the code that this
method both reads and writes state. This makes it easier to understand how this method works.

2. When testing, we must provide a value for the server1State parameter or the test will not compile.
This will fix the issue of forgetting to initialize the state when writing tests.

3. Different components can use this method passing different state variables for the server1State
parameter. This allows the reuse of this method.

Note that the Server1State class is immutable, that is, once an instance is created, the value of its
properties cannot change. When a method like TranslateFromGerman wants to update the state, it
creates a new instance of Server1State with updated values of the properties and assigns it to the
server1State parameter.
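
A simplified sketch of what that could look like inside TranslateFromGerman (the 10-minute fallback check and the Location parameter used later in the sample project are omitted here for brevity, so the actual project code differs):

public static Text TranslateFromGerman(Text text, ref Server1State server1State)
{
    try
    {
        var result = TranslateFromGermanViaServer1(text);

        // Publish a fresh, immutable state object back to the caller.
        server1State = new Server1State(false, DateTime.MinValue);

        return result;
    }
    catch
    {
        // Again, a new instance is created instead of mutating the old one.
        server1State = new Server1State(true, DateTime.Now);
    }

    return TranslateFromGermanViaServer2(text);
}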

We could make the Server1State class mutable and make the server1State parameter a normal
parameter instead of a ref parameter. However, the issue with this approach is that the method
signature wouldn’t tell us that such state is mutable. Using the ref keyword with an immutable state class
makes it very clear to the reader of the code that this parameter represents something that the method can
change.

Now, if we can simply use this solution to fix the issues introduced by using global variables, why do so
many people use global variables?

Take a look at the PassingStateViaRefParameters project. Start from the Main method where the
TranslateDocumentsInFolder method is called. This method calls TranslateFromGerman indirectly
like this: TranslateDocumentsInFolder > TranslateDocument > TranslateParagraph > TranslateText >
TranslateFromGerman.

Notice how I had to add a server1State parameter to all of these methods. Currently, only
TranslateFromGerman and TranslateFromSpanish really require these parameters. But because
we want to get rid of global variables, many methods have to have this parameter as well. This is
important because when TranslateDocumentsInFolder ends up invoking TranslateFromGerman
and TranslateFromSpanish multiple times, we need to make sure that all of these invocations get a
reference to a single state object.
So basically, we polluted the entire call hierarchy with an extra parameter that does not make sense to all
methods. Why should TranslateDocument, for example, care about the state of server 1?

What happens when a lower-level method requires some new state parameter? We have to update all
methods that call it directly or indirectly to take and pass this new parameter. What is the solution?

The solution is dependency injection.

Dependency injection can be done using classes or using functions. There is not a conceptual difference
between doing dependency injection using classes or functions. In a previous article, Composing Honest
Methods in C#, I proved this by solving the same problem once using dependency injection with classes,
and the second time using functions.

In this article, I will concentrate only on functions.

Dependency injection allows us to specify some parameter value of a function at composition time so that
the function only requires the rest of its parameters when invoked at runtime.
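
In C# terms this is nothing more than wrapping a method or delegate in a lambda that fixes some of its arguments. A tiny, self-contained illustration (not from the sample project):

// 'translate' needs both arguments at runtime...
Func<string, string, string> translate =
    (text, language) => $"[{language}] {text}";

// ...while 'translateToGerman' fixes the language at composition time,
// so callers only supply the remaining parameter.
Func<string, string> translateToGerman = text => translate(text, "de");

Console.WriteLine(translateToGerman("Hello")); // prints: [de] Hello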

For dependency injection to work, we need to invert control. For example, the
TranslateDocumentsInFolder method would not call the TranslateDocument method directly.
Instead, it would receive a function parameter that it calls instead.

public static void TranslateDocumentsInFolder(
    string folderPath,
    string destinationFolderPath,
    Func<Document, Document> translateDocument)
{
    IEnumerable<Document> documentsEnumerable = GetDocumentsFromFolder(folderPath);

    foreach (var document in documentsEnumerable)
    {
        var translatedDocument = translateDocument(document);

        WriteDocumentToDestinationFolder(translatedDocument, destinationFolderPath);
    }
}

Notice the translateDocument function parameter. It does not take any state parameters. Also, the
TranslateDocumentsInFolder method does not take any state parameter.

See the PassingStateViaRefParametersWithIOC project. It contains the full source code. Take a look at all
the relevant methods. Many methods take function parameters instead of invoking lower-level methods
directly.

Here is an excerpt from the Composition Root (the Main method):

Server1State server1StateForLocationA = new Server1State(false, DateTime.MinValue);

FolderProcessingModule.TranslateDocumentsInFolder(
    "c:\\inputFolder1",
    "c:\\outputFolder1",
    document => DocumentTranslationModule.TranslateDocument(
        document,
        paragraph => DocumentTranslationModule.TranslateParagraph(
            paragraph,
            paragraphText => DocumentTranslationModule.TranslateText(
                paragraphText,
                text => GermanTextTranslationModule.TranslateFromGerman(
                    text,
                    Location.A,
                    ref server1StateForLocationA),
                text => SpanishTextTranslationModule.TranslateFromSpanish(
                    text,
                    Location.A,
                    ref server1StateForLocationA)))));

In this project, I use lambda expressions as a means to compose the functions together. Note that only
the TranslateFromGerman and TranslateFromSpanish methods have a ref parameter of type
Server1State. The other methods don't know about the existence of such state.

We can define state instances as variables in the Main method, and then pass them by reference only to the
functions that really need them. Note also that the TranslateFromGerman and TranslateFromSpanish
methods were updated to take a Location parameter. This parameter determines whether to use servers
in location A or B. In the excerpt above, the code uses location A. Note also, that this code uses a variable
named server1StateForLocationA to store the state.

In the Main method, there is also similar code that processes documents in the c:\inputFolder2 folder and
uses servers in location B. This code uses another variable, i.e. server1StateForLocationB, to store the
state.

The solution we have reached so far is far from perfect.

When the application becomes bigger, the lambda composition block becomes larger. Look at the lambda
composition block above. Notice how the indentation grows deeper as we compose the lower-level functions.
Also, it is very hard to understand a large single block like this. It would be better if we extracted each
composed function into its own variable, like this:

Func<Text, Text> translateFromGerman = text =>
    GermanTextTranslationModule.TranslateFromGerman(
        text,
        Location.A,
        ref server1StateForLocationA);

Func<Text, Text> translateFromSpanish = text =>
    SpanishTextTranslationModule.TranslateFromSpanish(
        text,
        Location.A,
        ref server1StateForLocationA);

Func<Text, Text> translateText = paragraphText =>
    DocumentTranslationModule.TranslateText(
        paragraphText,
        translateFromGerman,
        translateFromSpanish);

Func<Paragraph, Paragraph> translateParagraph = paragraph =>
    DocumentTranslationModule.TranslateParagraph(
        paragraph,
        translateText);

Func<Document, Document> translateDocument = document =>
    DocumentTranslationModule.TranslateDocument(
        document,
        translateParagraph);

This fixes the indentation issue and makes it easier to individually understand each composed function.
However, this brings up other issues. I will talk about these issues and propose a solution in an upcoming
article.

There is more to dealing with state than what is covered in this part. For example, are there alternatives
to ref parameters? How to deal with state in multi-threaded applications? I will talk about these topics in
another article.

Conclusion:

In this article, I talked about state.

State represents data that changes. When we have a variable in a method that changes its value as the
method executes, we say that such a variable holds state. State that is scoped to a single method is generally
not a problem, and no special care needs to be given to it. When multiple methods are required to
share state, people tend to use global variables/objects to store such state. Such usage of global variables
makes it hard to understand, test, and reuse the code that uses them.

To solve such issues, we can make methods take the state using parameters. This makes it easier to
understand, test, and reuse such methods. However, this means that all methods that directly or indirectly
use these methods might be required to take and pass such parameters themselves.

To fix that, we can invert control and use dependency injection. In this case, functions don’t call each other
directly but call function parameters passed to them. The functions that don’t need to deal with state
directly don’t have to take and pass any state parameters.

However, someone must connect these functions together. I showed an example in this article that uses
lambdas to do just that. Only the functions that directly need to deal with the state are given a reference to
the state.

There is still more to talk about regarding state. I will talk about this topic more in the next part(s).

Yacoub Massad
Author
Yacoub Massad is a software developer who works mainly with Microsoft technologies. Currently, he works
at Zeva International where he uses C#, .NET, and other technologies to create eDiscovery solutions. He
is interested in learning and writing about software design principles that aim at creating maintainable
software. You can view his blog posts at criticalsoftwareblog.com.

Thanks to Damir Arh for reviewing this article.

ANGULAR

Ravi Kiran

CONTROLLING
CHANGE DETECTION
USING ONPUSH
AND CHANGEDETECTORREF

Controlling Change Detection in


Angular
A big reason behind the popularity of Angular.js is its rich data binding system. Because of this
feature, one can easily bind a piece of data to the view and not worry about updating it on
the view again.

The data binding system in Angular is powered by its change detection system.
The change detection system lets Angular know when to check for the changes in
the data and apply them on the view. To build robust applications or to build some
components that have to be reused in multiple applications, it is essential to have a good
understanding of the change detection system.

This tutorial will explain the default behavior of change detection on a component and
then it will discuss ways to customize it.
Change Detection Strategies in Angular
Angular's data binding system relies on change detection. The expressions bound with property bindings
and interpolations are evaluated whenever change detection runs, and the elements on the UI are updated
if there is any change in the value.

A change detector is attached to every component and it takes care of any change detection on the
component. Behavior of the change detector depends on the strategy applied on the component and the
settings applied on the change detector by the developer.

Let's understand how the strategies work; we will discuss the change detector object in the next
section.

Change detection strategy can be applied on a component using the changeDetection property of
the component decorator. Values of this property are defined in the enum ChangeDetectionStrategy,
defined in the package @angular/core.

Let’s understand the values of this enum and the behavior.

Default
By default, every component’s change detection is set to ChangeDetectionStrategy.Default.

An Angular application can be seen as a tree of components. As every component has a change detector,
there would be a tree of change detectors. Each of them is triggered on every browser event.

The NgZone API, specifically the application's onTurnDone event, fires on every browser event and
in turn triggers change detection. A browser event could be an event raised because of the user's
interaction with the page by performing an action like clicking a button, or when a pending XHR call gets
completed, or when a setTimeout block has to execute its callback, or due to anything else that goes
through the browser's event loop.

Let's understand this behavior with the help of an example. Consider the following component:

import { Component, OnInit, Input, ChangeDetectionStrategy } from '@angular/core';
import { Place } from '../place';

@Component({
  selector: 'app-place',
  templateUrl: './place.component.html',
  styleUrls: ['./place.component.css']
})
export class PlaceComponent {

  @Input()
  placeDetails: Place;

  get IsVisited(): string {
    console.log('Finding if the place is visited...');
    return this.placeDetails.isVisited ? 'Yes' : 'No';
  }
}

The PlaceComponent component accepts a place and displays it as a card. The place object has a boolean
field isVisited. The component wraps it with the getter IsVisited, which returns a Yes or No string value
corresponding to the boolean value. The console.log is added to the getter to know how many times it
gets called.

The following snippet shows the template and styles used in this component:

<!-- template -->


<div class="col-md-3">
<div class="card" [ngClass]="placeDetails.isVisited ? 'visited-place': ''"
style="width: 18rem;">
<div class="card-body">
<h5 class="card-title">{{placeDetails.name}}</h5>
<div>City: {{placeDetails.city}}</div>
<div>Country: {{placeDetails.country}}</div>
<div>Visited: {{IsVisited}}</div>
<div>Rating: {{placeDetails.rating}}</div>
</div>
</div>
</div>

<!-- styles -->


.place-name {
font-weight: bold;
font-style: italic;
}

.visited-place {
background-color: azure;
}

.place-changed {
background-color: palegreen;
}

.card {
margin-top: 10px;
}

This component is used in the following PlacesComponent. The PlacesComponent has an array of places
and it passes each of those place objects to the PlaceComponent. And it has a method to toggle the
visited status of the third place.

The following snippet shows the PlacesComponent:

import { Component, OnInit } from '@angular/core';
import { Place } from '../place';

@Component({
  selector: 'app-places-component',
  template: `<button style="margin-left: 5px;" class="btn btn-primary"
    (click)="toggleVisited(place)">Toggle Visited for Red Rocks Park</button>
  <br />
  <br />
  <div class="row">
    <app-place *ngFor="let place of places" [placeDetails]="place"></app-place>
  </div>`,
  styleUrls: ['./places-component.component.css']
})
export class PlacesComponentComponent implements OnInit {

  places: Place[];

  selectedPlace: Place;

  constructor() { }

  ngOnInit() {
    this.places = [];
    this.places.push({ name: 'Charminar', city: 'Hyderabad', country: 'India', isVisited: true, rating: 4 });
    this.places.push({ name: 'Tower Bridge', city: 'London', country: 'UK', isVisited: false, rating: 4.5 });
    this.places.push({ name: 'Red Rocks Park', city: 'Denver', country: 'USA', isVisited: true, rating: 3 });
    this.places.push({ name: 'Taj Mahal', city: 'Agra', country: 'India', isVisited: false, rating: 5 });
    this.places.push({ name: 'Eiffel Tower', city: 'Paris', country: 'France', isVisited: true, rating: 4 });

    this.selectedPlace = this.places[0];
  }

  toggleVisited(place: Place) {
    this.places[2].isVisited = !this.places[2].isVisited;
  }
}

Now when you run this application, you will see the following messages logged on the console:

Figure 1 – Console with messages

The message "Finding if the place is visited..." is printed 20 times; 4 times each for the 5 instances of the
PlaceComponent. The following listing explains this behavior:

• First time when the PlaceComponent instances are initialized, the input property in these components
is undefined.

• In the development mode, Angular runs a check for every binding expression after the change detection
runs. The next set of messages are printed during the checking.

• After the input properties of the PlaceComponent instances are set, change detection runs on each of
these components to update their bindings.

• Because the application is running in the development mode, Angular does a check again to validate
the bindings.

• The second check to validate the bindings doesn't run when Angular runs in production mode, and the
page looks like the one shown in Figure 2.

Figure 2 – Page on the browser

The button on the page toggles the visited status of Red Rocks Park. On clicking this button, you would see
background color of the card displaying details of Red Rocks Park toggling between Azure and White. Every
click on this button results in 10 messages on the console.

Figure 3 shows the state of the console after clicking the button once:

Figure 3 – Messages on the console after clicking the button once

This happens because every component validates its bindings after the browser event and because it is in
development mode, Angular runs its check.

The performance degradation caused by this behavior is not visible in this small application, but it will
have a visible impact on the performance of applications with a lot of bindings on the page. The
time taken by these events can be measured by recording the performance.

Figure 4 shows the time taken by Angular when the button is clicked three times:

Figure 4 – Time taken by the event handler on three instances

As we can see, when the button was first clicked, it took 86.5 ms. The second click took 15.2 ms and the
third click took 15.3 ms.

Now let’s tweak the code a bit and see how it performs.

OnPush
As stated earlier, change detection on a component is controlled using the changeDetection property
in the component annotation. By default, its value is ChangeDetectionStrategy.Default. It can
be changed to ChangeDetectionStrategy.OnPush. When the OnPush strategy is applied, Angular doesn't
check the component for changes on every event; it checks only when the value of one of the component's
input properties changes.

The OnPush strategy works based on immutable objects. The change detector detects changes on the
component when one of its input properties is assigned a new object.

With mutable objects, changes might take place any number of times inside the object and the change
detector would have to check for them on every change. With immutable objects, all the changes get
combined into a single re-assignment of the object, and the change detector is triggered just once for the
entire object.

To understand this better, let’s modify the PlaceComponent to use the OnPush strategy. For this, the
component annotation alone needs to be modified. The following snippet shows the modified annotation:

import { Component, OnInit, Input, ChangeDetectionStrategy } from '@angular/core';
import { Place } from '../place';

@Component({
  selector: 'app-place',
  templateUrl: './place.component.html',
  styleUrls: ['./place.component.css'],
  changeDetection: ChangeDetectionStrategy.OnPush
})

Now you will see the color of the Red Rocks Park component toggling. Let’s also check performance of the
events. Click the button a couple of times and record performance of the page. The following image shows
the time taken by Angular in three instances:

Figure 5 – Time taken by event handler when OnPush is applied

On comparing the time taken by the events in Figure 5 with the time in Figure 4, it is clear that the page
performs well after applying the OnPush strategy. So, it is a good practice to use the OnPush strategy on
pages with a lot of data bound on it.

Using ChangeDetectorRef
Every component in Angular gets associated with a change detector to check for the changes. The
change detector object is available to the component through ChangeDetectorRef. It can be injected into a
component and can be used to control the change detection on the component.

While the OnPush strategy provides an efficient way to detect changes in a component with input
properties, the ChangeDetectorRef provides more control over it.

Let’s create a modified version of the PlaceComponent to play with ChangeDetectorRef. The following
snippet shows the component:

import { Component, OnInit, Input, ChangeDetectorRef } from '@angular/core';
import { Place } from '../place';

@Component({
  selector: 'app-place-cd',
  templateUrl: './place-cd.component.html',
  styleUrls: ['./place-cd.component.css']
})
export class PlaceCdComponent {

  travelCost: number;

  constructor(private cdr: ChangeDetectorRef) {
    setInterval(() => {
      this.travelCost = Math.round(Math.random() * 10000);
    }, 100);
  }

  @Input()
  placeDetails: Place;

  get IsVisited(): string {
    console.log('Finding if the place is visited...');
    return this.placeDetails.isVisited ? 'Yes' : 'No';
  }
}

Much of the component is unchanged. It has the ChangeDetectorRef injected in the constructor and the
field travelCost added. The value of travelCost is modified every 100 milliseconds using a random
number. The template is modified slightly, to show the travelCost.

The following snippet shows the template:

<div class="col-md-3">
  <div class="card" [ngClass]="placeDetails.isVisited ? 'visited-place': ''" style="width: 18rem;">
    <div class="card-body">
      <h5 class="card-title">{{placeDetails.name}}</h5>
      <div>City: {{placeDetails.city}}</div>
      <div>Country: {{placeDetails.country}}</div>
      <div>Visited: {{IsVisited}}</div>
      <div>Rating: {{placeDetails.rating}}</div>
      <div *ngIf="travelCost">Travel Cost: {{travelCost}}</div>
    </div>
    <button (click)="getCurrentCost()" class="btn btn-primary">Get Current Cost</button>
    <button (click)="reattach()" class="btn btn-primary">Re-attach</button>
    <button (click)="detach()" class="btn btn-primary">Detach</button>
  </div>
</div>

Let’s display one of the places using this component. Add the following statement to the template of
PlacesComponent:

<app-place-cd [placeDetails]="places[2]"></app-place-cd>

You will see that the travelCost gets updated on the screen 10 times every second. Though this
behavior is fine for this example, it could cause inefficiency in larger applications. In such cases, we could
disable change detection on the component and refresh the value only when a user wants to see it. Change
detection can be disabled by calling the detach method on the ChangeDetectorRef. The ngAfterViewInit
lifecycle hook is the right place to do it, as we would have the rest of the fields bound by then. This is shown here:

ngAfterViewInit() {
this.cdr.detach();
}

Now the div block containing travel cost is not shown on the page as it is not set before the change
detection is detached. Let’s add a button to show the current value of travel cost. On click of this button,
we will ask Angular to detect changes for the component. The following snippet shows the HTML of the
button:

<button (click)="getCurrentCost()" class="btn btn-primary">Get Current Cost</button>

..and the following is the event handler of this button:


getCurrentCost() {
this.cdr.detectChanges();
}

Now you will see that the travel cost value gets updated whenever this button is clicked. The value of
travelCost still changes in the component whenever setInterval's callback executes, but the view is
refreshed only when the button is clicked. In other words, the detectChanges method runs change
detection just once, while the change detector itself remains detached.

To get the change detector back into action on every event, we can re-attach the change detector. Let’s add
two more buttons to the template. Here are the buttons:
<button (click)="reattach()" class="btn btn-primary">Re-attach</button>
<button (click)="detach()" class="btn btn-primary">Detach</button>

..and following are the methods handling events on these buttons:

reattach() {
  this.cdr.reattach();
}

detach() {
  this.cdr.detach();
}

Play with the buttons after saving these changes. After clicking Re-attach, the behavior is similar to the
default behavior. On clicking the Detach button, changes are no longer detected automatically, and we can
see the latest value of travel cost by clicking the Get Current Cost button.

Conclusion

Angular’s change detection system is quite powerful and flexible.

While it provides the best experience to both developers as well as the users of the applications, at the
same time there are possibilities of losing control over it.

The techniques discussed in this article provide you a starting point on controlling change detection
depending on the need. Hope it helps in making your applications more efficient!

Download the entire source code from GitHub at


bit.ly/dncm41-changedetection

Ravi Kiran
Author

Ravi Kiran (a.k.a. Ravi Kiran) is a developer working on Microsoft Technologies at Hyderabad. These
days, he is spending his time on JavaScript frameworks like AngularJS, latest updates to JavaScript
in ES6 and ES7, Web Components, Node.js and also on several Microsoft technologies including
ASP.NET 5, SignalR and C#. He is an active blogger, an author at SitePoint and at DotNetCurry. He
is rewarded with Microsoft MVP (ASP.NET/IIS) and DZone MVB awards for his contribution to the
community.

Thanks to Keerti Kotaru for reviewing this article.

C#

Damir Arh

The article gives an overview of the new features in C# 8 which can be tried out using Visual Studio 2019 Preview.

NEW C# 8
FEATURES IN
VISUAL STUDIO
2019 PREVIEW
SETTING UP THE DEVELOPMENT
ENVIRONMENT
The long-awaited next major version of the C# language (C# 8.0) is nearing its final release.
It’s going to be released at the same time as .NET Core 3.0. This means that just like .NET Core 3.0 preview,
C# 8 is also included in the Visual Studio 2019 Preview versions.

If you want to try out all the already available new language features yourself, you also need to install the
latest preview version of .NET Core 3.0. That’s because some of the language features depend on .NET types
which will be a part of .NET Standard 2.1 (you can read more about .NET Standard 2.1 in my DNC Magazine
article What Was New for .NET Developers in 2018 & the Road Ahead).

At the time of writing, the only .NET platform with these types is .NET Core 3.0 (Preview 2 or later). It’s also
worth mentioning that there are currently no plans for a future .NET framework version to implement .NET
Standard 2.1 and include the new types required for some of the C# 8 features.

To create a suitable project for trying out all currently available C# 8.0 features, you can follow these steps:

1. Create a new .NET Core project of any type.

2. In the Application pane of the project Properties window, make sure that the Target framework is set to
.NET Core 3.0.

3. From the Build pane of the project Properties window, open the Advanced Build Settings dialog and
select C# 8.0 (beta) as the Language version.

NULLABLE REFERENCE TYPES


Nullable reference types were already considered in the early stages of C# 7.0 development but were
postponed until the next major version. The goal of this feature is to help developers avoid unhandled
NullReferenceException exceptions.

The core idea is to allow variable type definitions to specify whether they can have null value assigned to
them or not:

IWeapon? canBeNull;
IWeapon cantBeNull;

Assigning a null value or a potential null value to a non-nullable variable results in a compiler warning
(the developer can configure the build to fail in case of such warnings, to be extra safe):

canBeNull = null; // no warning
cantBeNull = null; // warning
cantBeNull = canBeNull; // warning

Similarly, warnings are generated when dereferencing a nullable variable without checking it for null
value first:

canBeNull.Repair(); // warning
cantBeNull.Repair(); // no warning
if (canBeNull != null)
{
canBeNull.Repair(); // no warning
}

The problem with such a change is that it breaks existing code: the feature assumes that all variables
from before the change are non-nullable. To cope with that, static analysis for null-safety can be enabled
selectively with a compiler switch at the project level.

Developers can opt-in for nullability checking when they are ready to deal with the resulting warnings. Still,
this should be in their own best interest, as the warnings might reveal potential bugs in their code.
The switch is persisted as a property in the project file. There’s no user interface in Visual Studio 2019 yet
for changing its value. Therefore, the following line must be added manually to the first PropertyGroup
element of the project file to enable the feature for the project:

<NullableContextOptions>enable</NullableContextOptions>

For more granularity, the #pragma warning directives can be used to disable and re-enable individual
warnings for a block of code. As an alternative, a new #nullable directive has been added. It can be used to
enable support for nullable reference types for a block of code even if it is disabled at the project level:

#nullable enable
IWeapon? canBeNull;
IWeapon cantBeNull;

canBeNull = null; // no warning
cantBeNull = null; // warning
cantBeNull = canBeNull; // warning
#nullable restore

It’s a good idea to use #nullable restore instead of #nullable disable to disable nullable reference
types for the code that follows. This will ensure that the checks remain enabled for the rest of the file if
you later decide to enable the feature for the whole project. Using #nullable disable would disable the
checks even in that case.

IMPROVEMENTS TO PATTERN MATCHING


Some pattern matching features have already been added to C# in version 7.0 (you can read more about
them in my DNC Magazine article C# 7 - What’s New).

Several new forms of pattern matching are being added to C# 8.0.

Tuple patterns
Tuple patterns allow matching of more than one value in a single pattern matching expression:

switch (state, transition)
{
    case (State.Running, Transition.Suspend):
        state = State.Suspended;
        break;
    case (State.Suspended, Transition.Resume):
        state = State.Running;
        break;
    case (State.Suspended, Transition.Terminate):
        state = State.NotRunning;
        break;
    case (State.NotRunning, Transition.Activate):
        state = State.Running;
        break;
    default:
        throw new InvalidOperationException();
}

Switch expression
The switch expression allows terser syntax than the switch statement when the only result of pattern
matching is assigning a value to a single variable:

state = (state, transition) switch
{
    (State.Running, Transition.Suspend) => State.Suspended,
    (State.Suspended, Transition.Resume) => State.Running,
    (State.Suspended, Transition.Terminate) => State.NotRunning,
    (State.NotRunning, Transition.Activate) => State.Running,
    _ => throw new InvalidOperationException()
};

There are several differences in the syntax if we compare it to the switch statement:

• The left-hand variable of the assignment is specified only once before the expression, instead of in the
body of each case.

• The switch keyword is placed after the tested value instead of placing it before it.

• The case keyword is not used anymore.

• The : character between the pattern and the body is replaced with a =>.

• Instead of break statements, the , character is used to separate the cases.

• For the body, expressions must be used instead of code blocks.

• For the catch-all case, a discard (_) is used instead of the default keyword.

A switch expression must always return a value. However, the code will still compile even if that’s not true
(i.e. the cases do not cover all possible values):

state = (state, transition) switch
{
    (State.Running, Transition.Suspend) => State.Suspended,
    (State.Suspended, Transition.Resume) => State.Running,
    (State.Suspended, Transition.Terminate) => State.NotRunning,
    (State.NotRunning, Transition.Activate) => State.Running
};

The compiler will only emit a warning for the above code. If at runtime the tested value is not matched by
any case, an InvalidOperationException will be thrown.

Positional patterns
When testing a type with a Deconstruct method, positional patterns can be used, which have syntax
very similar to tuple patterns:

if (sword is Sword(10, var durability))
{
    // code executes if Damage = 10
    // durability has value of sword.Durability
}

The code assumes that the Sword type contains the following Deconstruct method:

public void Deconstruct(out int damage, out int durability)
{
    damage = Damage;
    durability = Durability;
}

The pattern above compares the first deconstructed value and assigns the second deconstructed value to a
newly declared variable. Although I’m using this pattern in an is expression, I could also use it in a switch
expression or a switch statement.

Property patterns
Even if a type doesn’t have an appropriate Deconstruct method, property patterns can be used to achieve
the same as with positional patterns:

if (sword is Sword { Damage: 10, Durability: var durability })
{
    // code executes if Damage = 10
    // durability has value of sword.Durability
}

The syntax is a bit longer but also more expressive. It’s a better alternative to a positional pattern in cases
when the order of values in the Deconstruct method would not be obvious.

ASYNCHRONOUS STREAMS
C# already has support for iterators and asynchronous methods. In C# 8.0, the two are combined into
asynchronous streams. These are based on asynchronous versions of the IEnumerable and IEnumerator
interfaces:

public interface IAsyncEnumerable<out T>
{
  IAsyncEnumerator<T> GetAsyncEnumerator(CancellationToken cancellationToken = default);
}

public interface IAsyncEnumerator<out T> : IAsyncDisposable
{
  T Current { get; }

  ValueTask<bool> MoveNextAsync();
}

Additionally, an asynchronous version of the IDisposable interface is required for consuming the
asynchronous iterators:

public interface IAsyncDisposable
{
  ValueTask DisposeAsync();
}

This allows the following code to be used for iterating over the items:

var asyncEnumerator = GetValuesAsync().GetAsyncEnumerator();
try
{
  while (await asyncEnumerator.MoveNextAsync())
  {
    var value = asyncEnumerator.Current;
    // process value
  }
}
finally
{
  await asyncEnumerator.DisposeAsync();
}

It’s very similar to the code used for consuming regular synchronous iterators. However, it probably doesn’t look familiar, because for synchronous iterators we typically just use the foreach statement. An asynchronous version of the foreach statement is available for asynchronous iterators as well:

await foreach (var value in GetValuesAsync())
{
// process value
}

Just like with the synchronous foreach statement, the compiler will generate the required code itself.
It is also possible to implement asynchronous iterators using the yield keyword, similar to how it can be
done for synchronous iterators:

private async IAsyncEnumerable<int> GetValuesAsync()
{
for (var i = 0; i < 10; i++)
{
await Task.Delay(100);
yield return i;
}
}

You might have noticed the CancellationToken parameter of the GetAsyncEnumerator method of
the IAsyncEnumerable<T> interface. As one would expect, it can be used to support cancellation of
asynchronous streams.

However, there are currently no plans to support this parameter in compiler-generated code. This means that if you want to pass a cancellation token to the GetAsyncEnumerator method, you will need to write your own code for iterating over the items instead of using the asynchronous foreach statement.

Also, when implementing an asynchronous iterator with cancellation support, you will need to implement
the IAsyncEnumerable<T> interface manually instead of using the yield keyword and relying on the
compiler to do it for you.
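
To make this concrete, the following is a minimal sketch of my own (not part of the official samples) that iterates an asynchronous stream while passing a cancellation token to GetAsyncEnumerator; it assumes a GetValuesAsync method that implements IAsyncEnumerable<int> manually and observes the token:

// hypothetical timeout after which the stream is cancelled
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
var asyncEnumerator = GetValuesAsync().GetAsyncEnumerator(cts.Token);
try
{
  while (await asyncEnumerator.MoveNextAsync())
  {
    var value = asyncEnumerator.Current;
    // process value
  }
}
finally
{
  await asyncEnumerator.DisposeAsync();
}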

RANGES AND INDICES


C# 8.0 introduces new syntax for expressing a range of values.

Range range = 1..5;

The starting index of a range is inclusive, and the ending index is exclusive. Alternatively, the ending can be
specified as an offset from the end:

Range range = 1..^1;

The new type can be used to index arrays. Both ranges specified above will give the same result when used with the following snippet of code:

var array = new[] { 0, 1, 2, 3, 4, 5 };
var subArray = array[range]; // = { 1, 2, 3, 4 }

The new syntax can also be used to define:

- An open-ended range from the beginning to a specific index

var subArray = array[..^1]; // = { 0, 1, 2, 3, 4 }

- An open-ended range from a specific index to the end

var subArray = array[1..]; // = { 1, 2, 3, 4, 5 }

- An index of a single item specified as an offset from the end.

var item = array[^1]; // = 5
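
The ^ syntax produces a value of the new System.Index type, so the index can also be stored in a variable before it is used (a trivial sketch):

Index last = ^1;            // offset of 1 from the end
var lastItem = array[last]; // = 5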

The use of ranges and indices is not limited to arrays. They can also be used with the Span<T> type (you
can read more about Span<T> in my DNC Magazine article C# 7.1, 7.2 and 7.3 - New Features):

var array = new[] { 0, 1, 2, 3, 4, 5 };
var span = array.AsSpan(1, 4); // = { 1, 2, 3, 4 }
var subSpan = span[1..^1]; // = { 2, 3 }

Although that’s the full extent to which the Range type can be used with existing types in .NET Core 3.0
Preview 2, there are plans to provide overloads with Range-typed parameters for other methods as well,
e.g.:

var span = array.AsSpan(range);
var substring = "range".Substring(range);

So far, no clear information has been given as to whether these will be part of .NET Core 3.0 and .NET Standard 2.1. They could be added in a later version.

USING DECLARATION
The using statement is a great way to ensure that the Dispose method is called on a type implementing the IDisposable interface when an instance goes out of scope:

using (var reader = new StreamReader(filename))
{
var contents = reader.ReadToEnd();
Console.WriteLine($"Read {contents.Length} characters from file.");
}

In C# 8.0, the using declaration is available as an alternative:

using var reader = new StreamReader(filename);
var contents = reader.ReadToEnd();
Console.WriteLine($"Read {contents.Length} characters from file.");

The using keyword can now be placed in front of a variable declaration. When such a variable falls out of
scope (i.e. the containing block of code is exited) the Dispose method will automatically be called.

This can be especially useful when multiple instances of types implementing the IDisposable interface
are used in the same block of code:

using var reader1 = new StreamReader(filename1);
using var reader2 = XmlReader.Create(filename2);
// process the files

The above code is much more readable and less error-prone than the equivalent code written with the
using statement:

using (var reader1 = new StreamReader(filename1))
using (var reader2 = XmlReader.Create(filename2))
{
// process the files
}

STATIC LOCAL FUNCTIONS
Local functions were introduced in C# 7.0 (you can learn about them in my DNC Magazine article C# 7 -
What’s New). They automatically capture the context of the enclosing scope to make any variables from the
containing method available inside them:

public void MethodWithLocalFunction(int input)
{
  Console.WriteLine($"Inside MethodWithLocalFunction, input: {input}.");
  LocalFunction();

  void LocalFunction()
  {
    Console.WriteLine($"Inside LocalFunction, input: {input}.");
  }
}

In C# 8.0, you can declare a local function as static. This prevents the local function from using variables of the containing method and at the same time avoids the performance cost of making them available. A variable from the containing method can of course still be passed to the local function as a parameter:

public void MethodWithStaticLocalFunction(int input)
{
  Console.WriteLine($"Inside MethodWithStaticLocalFunction, input: {input}.");
  StaticLocalFunction(input);

  static void StaticLocalFunction(int input)
  {
    Console.WriteLine($"Inside StaticLocalFunction, input: {input}.");
  }
}

DISPOSABLE REF STRUCTS


C# 7.2 added support for structs which must be allocated on the stack (declared with the ref struct keywords). They are primarily useful in high-performance scenarios which require direct access to contiguous blocks of memory (Span<T> is an example of such a type).

Such types are subject to many restrictions. Among others, they are not allowed to implement an interface. This also includes the IDisposable interface, making it impossible to implement the disposable pattern.

Although they still can’t implement interfaces in C# 8.0, they can now implement the disposable pattern by
simply defining the Dispose method:

ref struct RefStruct
{
  // ...

  public void Dispose()
  {
    // release unmanaged resources
  }
}

This is enough to allow the type to be used with the using statement (or the using declaration):

using (var refStruct = new RefStruct())
{
// use refStruct
}

CONCLUSION
After long anticipation, C# 8.0 is finally available in preview as part of Visual Studio 2019 Preview. Its final version will be released together with .NET Core 3.0.

Unlike with every previous version of the language, not all C# 8.0 features will be available in the .NET framework. Asynchronous streams and ranges depend on types which will only be added to .NET Core 3.0 and other .NET platforms implementing .NET Standard 2.1. As per the current plans, the .NET framework will not be among them.

In my opinion, the most important new features of C# 8.0 are nullable reference types and the improvements to pattern matching, because they will help us write more reliable and readable code. Along with several smaller features, these will also be available in the .NET framework. That is a good enough reason to start using C# 8.0 even in .NET framework projects once it is released.

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his knowledge
by speaking at local user groups and conferences, blogging, and answering questions on
Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.

Thanks to Yacoub Massad for reviewing this article.

THANK YOU
FOR THE 41st EDITION

@sravi_kiran @keertikotaru @jfversluis

@yacoubmassad @dani_djg @damirarh

@mayur_tendulkar @suprotimagarwal @saffronstroke

WRITE FOR US
mailto: suprotimagarwal@dotnetcurry.com
