Ask the average Windows system administrator what they think the most prominent feature of Windows 2000 is
and they will probably say "Active Directory." Why? First, Active Directory will enable them to deploy Windows-based networks on an unprecedented scale. Second, they know that Active Directory represents a significant change
in the way they will manage network entities such as users, computers, network devices and applications. For
example, there are new user interfaces for most common management activities such as adding users and managing
printers. There are also many new system concepts, such as multi-master replication and DNS integration that
administrators must understand in order to keep their Active Directory installations healthy.
Some might ask why there are so many new concepts to learn. The answer lies in understanding both limitations of
earlier directory services and Microsoft's design goals for Active Directory. First-generation directories demonstrated
the power of standards-based repositories, but didn't support replication. By running only on a single machine,
they provided no opportunity for scale-out and became a single point of failure in the network. Second-generation
technologies added single-master replication (with read-only replicas) that scaled much better, but supported
updates only against the master. In practice, single-master models bound individual deployments to 'regions' of
continuous network connectivity. Third-generation directories added multi-master support but with important
constraints. For example, one third-generation directory service is limited, in practice, to approximately 10 updateable replicas per partition and works best only over high-speed network connections.
Microsoft decided that, in order to scale to enterprise levels, Active Directory had to support large numbers of
geographic locations (potentially in excess of 1,000) and not be limited by slow or intermittent network connectivity.
This led to fourth-generation features in Active Directory such as sites, trees, forests, bridgehead servers, and global
catalogs. These features have enabled Active Directory to scale to unprecedented levels while remaining manageable.
At the same time, there are, as author Sean Daily notes, a lot of moving parts in the average Active Directory
deployment. Most of the time, these parts work together just fine. There will be occasions, however, when issues
arise. For example, if a replica is unable to contact any other replica for long periods of time (usually due to network configuration problems), some troubleshooting will eventually be required. Then, it will be important to
understand how Active Directory's parts fit together in order to get to the root of the issue quickly.
I believe that this innovative, on-line book from NetPro and Realtimepublishers.com will prove to be a valuable
resource for any administrator who is tasked with managing an Active Directory installation. The approach to
information delivery is ideal. The book starts with important background concepts that will enable the reader to
understand the design of Active Directory and relationships between components, in a clear and concise way. Later
chapters build on this knowledge by providing step-by-step procedures for diagnosing common issues even when
the root cause of an issue may not be clear.
Most important, this book provides a methodology for proactively keeping an Active Directory installation healthy
while simultaneously enabling administrators to get to the root causes of problems quickly when they do occur.
Such a methodology can be 'worth its weight in gold' to today's systems administrators who are responsible for
keeping ever-more complex systems up and running with ever-less downtime!
Chapter 1
Chapter 2
Chapter 3
Monitoring and Tuning the Windows 2000 System and Network
    Monitoring Windows 2000 Domain Controllers
    Monitoring the Overall System
        Using Task Manager
        Using the Performance Console
        Event Viewer
            Events Tracked in Event Logs
            Types of Event Logs
            Starting Event Viewer
            Types of Events Logged by Event Viewer
            Sorting and Filtering Events
            Exporting Events
    Monitoring Memory and Cache
        Using Task Manager to View Memory on a Domain Controller
        Using the Performance Console to Monitor Memory on a Domain Controller
            Available Memory Counters
            Page-Fault Counters
            Paging File Usage
            System Cache
Chapter 4
    Summary
Chapter 5
Chapter 6
The book you now hold in your hands (or, in many cases, are reading on your screen) represents an entirely new
modality of book publishing and a major first in the publishing industry. The founding concept behind
Realtimepublishers.com was the idea of providing readers with high-quality books on today's most critical IT topics
-- at no cost to the reader. Although this may sound like a somewhat impossible feat to achieve, it is made possible
through the vision and generosity of corporate sponsors such as NetPro, who agree to bear the book's production
expenses and host the book on their website for the benefit of their website visitors.
It should be pointed out that the free nature of these books does not in any way diminish their quality. Without
reservation, I can tell you that the book you're about to read is the equivalent of any similar printed book you
might find in your local bookstore (with the notable exception that it won't cost you $30 to $80). In addition to the
free nature of the books themselves, this publishing model also provides other significant benefits. For example, the
electronic nature of this eBook makes events such as chapter updates and additions, or the release of a new edition
of the book possible to achieve in a far shorter timeframe than is possible with printed books. Because we publish
our titles in "real-time" - that is, as chapters are written or revised by the author - you benefit from receiving the
information immediately rather than having to wait months or years to receive a complete product.
Finally, I'd like to note that although it is true that the sponsor's website is the exclusive online location of the
book, this book is by no means a paid advertisement. Realtimepublishers.com is an independent publishing company
and maintains, by written agreement with the sponsor, 100% editorial control over the content of our titles.
However, by hosting this information, NetPro has also set themselves apart from their competitors by providing
real value to their customers and by transforming their site into a true technical resource library - not just a place to
learn about their company and products. It is my opinion that this system of content delivery is not only of
immeasurable value to readers, but represents the future of book publishing.
As series editor, it is my raison d'être to locate and work only with the industry's leading authors and editors, and
publish books that help IT personnel, IT managers, and users to do their everyday jobs. To that end, I encourage
and welcome your feedback on this or any other book in the Realtimepublishers.com series. If you would like to
submit a comment, question, or suggestion, please do so by sending an e-mail to feedback@realtimepublishers.com,
leaving feedback on our website at www.realtimepublishers.com, or calling us at (707)539-5280.
Sean Daily
Series Editor
Although Microsoft’s Windows NT operating system introduced a pseudo-directory in the form of the NT Directory
Service (whose heart and soul was the Security Accounts Manager – SAM – database), this "directory" had a number
of major limitations. Among these were:
Lack of extensibility
NT’s directory stored only basic user information and couldn’t be inherently extended.
Lack of scalability
The NT directory was stored inside the NT system registry database; due to this architecture, the maximum
number of users topped out at approximately 40,000 per domain.
In Windows 2000 (Win2K), NT’s successor OS, Microsoft set out to deliver a directory capable of addressing each
of these limitations. Win2K’s new directory service, dubbed Active Directory (AD), provides an industrial-strength
Chapter 1 1 www.netpro.com
directory service that can serve the needs of both small and very large organizations, and everyone in between.
Because it stores its data outside the system registry, AD has virtually unlimited storage capacity (AD databases can
contain hundreds of millions of entries, as compared to the tens of thousands NT is capable of). AD allows
administrators to define physical attributes of their network, such as individual sites and their connecting WAN
links, as well as the logical layout of network resources such as computers and users. Using this information, AD is
able to self-optimize its bandwidth usage in multi-site WAN environments. AD also introduces a new administration
model that is far more granular and less monolithic than the one found in NT 4.0. Finally, AD also
provides a central point of access control for network users, which means users can log in once and gain access to
all network resources.
Although other directories such as Banyan’s StreetTalk and Novell’s NDS have existed for some time, many
Windows NT-centric organizations have opted to wait and use Microsoft’s entry in the enterprise directory arena
as the foundation for their organization-wide directory environment. Consequently, Win2K’s Active Directory will
represent the first foray into the larger world of directories and directory management for many organizations and
network administrators.
severe, network-wide problems including slow or failed user logon authorizations, failed convergence of directory
data, the inability to access critical applications, printing problems, and similar maladies. These problems are of
particular concern for IT shops that offer service-level agreements (SLAs) to their corporate parents or clients. To
be able to properly maintain their Win2K infrastructure, IT shops will need not only Win2K-aware monitoring
and management tools, but specific knowledge about what needs to be monitored, what thresholds must be set to
maintain acceptable levels of performance, and what needs to be done in the event that problems should occur.
As we discussed earlier, directory service is a composite term that covers both the directory data store and the
services that make the information within the directory available to users and applications. Directory services are
available in a variety of types and from a variety of sources. Operating system directories, such as Microsoft’s Active
Directory and Novell’s NDS, are general purpose directories included with the network operating system and are
designed to be accessible by a wide array of users, applications, and devices. There are also some applications, such as
enterprise resource planning systems, human resource systems, and e-mail systems (e.g. Microsoft Exchange 5.x) that
provide their own directories for storing data specific to the functionality of those applications.
Microsoft Exchange 2000 is a notable exception to this and is completely integrated with Active Directory.
Exchange 2000’s installation process extends Active Directory’s structure to accommodate Exchange-specific
data and subsequently uses AD to store its own directory information.
Active Directory is Microsoft’s directory service implementation in the Win2K Server operating system. The Active
Directory is hosted by one or more Win2K domain controllers, and is replicated in a multi-master fashion between
those domain controllers to ensure greater availability of the directory and network as a whole.
The term multi-master indicates that multiple read/write copies of the database exist simultaneously, on
each Win2K domain controller computer. Thus, each Win2K domain controller is effectively an equal peer of
the other controllers, and any controller can write directory updates and propagate those updates to other
controllers. This is in notable contrast to NT 4.0’s single-master PDC/BDC replication topology wherein a
single domain controller, the PDC, houses a read/write copy of the database.
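The contrast between the two replication models can be sketched in a toy simulation. This is purely illustrative (the class names and dictionary-based "replicas" are invented for this sketch); real AD replication tracks update sequence numbers, resolves conflicts, and replicates incrementally rather than pushing whole values.

```python
# Toy sketch contrasting NT 4.0's single-master replication with
# Win2K's multi-master model. Hypothetical classes, not Microsoft code.

class SingleMasterDomain:
    """NT 4.0 style: only the PDC accepts writes; BDCs hold read-only copies."""
    def __init__(self, pdc, bdcs):
        self.pdc = pdc
        self.replicas = {dc: {} for dc in [pdc, *bdcs]}

    def write(self, dc, key, value):
        if dc != self.pdc:
            raise PermissionError(f"{dc} is a BDC; all writes must go to the PDC")
        for replica in self.replicas.values():
            replica[key] = value  # the PDC pushes the change to every BDC


class MultiMasterDomain:
    """Win2K style: every domain controller holds a read/write replica."""
    def __init__(self, dcs):
        self.replicas = {dc: {} for dc in dcs}

    def write(self, dc, key, value):
        # Any DC accepts the write and propagates it to its peers.
        for replica in self.replicas.values():
            replica[key] = value
```

In the single-master model, a write attempted against a BDC fails; in the multi-master model, the same write succeeds against any domain controller and converges everywhere.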
In addition to providing a centralized repository for network objects and a set of services for accessing those objects,
Active Directory also provides security in the form of access control lists (ACLs) on directory objects that protect
those objects from being accessed by unauthorized parties.
The AD Database
At a file system level, the Active Directory uses Microsoft’s Extensible Storage Engine (ESE) to store the directory
database. Administrators familiar with Microsoft Exchange Server may recognize this as the same database
technology used in that product. Like Exchange Server, Active Directory’s database employs transactional logging files to
help ensure database integrity in the case of power outages and similar events that interfere with the successful
completion of database transactions. In addition, Active Directory also shares Exchange’s ability to perform on-line
database maintenance and defragmentation. At the file level, AD stores its database in a single database file named
Ntds.dit, a copy of which can be found on every Win2K domain controller.
Although the building blocks that make up the Active Directory are largely masked by the directory’s high-level
management interfaces and APIs, the physical aspects of the directory are nonetheless an important consideration for
Win2K administrators. For example, it is critical that all volumes on domain controllers hosting the Active Directory
database and its transaction logs maintain adequate levels of free disk space at all times. For performance reasons, it
is also important that the Active Directory databases on these machines not become too heavily fragmented.
Because Active Directory is a database, this effectively turns Win2K domain controllers into critical database servers
on the network. These servers should therefore be treated no differently than any other important database server in
terms of fault tolerance preparation (e.g. disk redundancy, backups, and power protection) and capacity planning.
Logical Architecture of AD
To gain an appreciation for and understanding of AD and AD management concepts, it’s important to first
understand AD’s logical architecture. In this section, we’ll discuss the most important concepts associated with AD,
concepts which form the foundation of all Win2K networks.
AD uses an object-oriented approach to defining directory objects. That is to say, there exists a hierarchy of classes,
which define the kinds of objects one can create (or instantiate, as in "creating an instance of...") within the
directory. Each class has a set of attributes that define the properties associated with that class. For example, Active
Directory has a user class with attributes like First Name, Address, etc.
There are special types of objects in AD known as container objects that you should be familiar with. Put simply,
container objects are objects that may contain other objects. This design allows you to organize a tree or hierarchy
of objects. Examples of container objects include organizational unit (OU) and domain objects. Container objects
may hold both regular objects and other container objects. For example, an OU object can contain both regular objects
such as users and computers, as well as other OU container objects.
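The container model described above can be sketched in a few lines of code. This is a minimal illustration (the classes and the specific names such as WKSTN-01 are invented for the example, not part of any AD API): container objects hold leaf objects as well as other containers, forming a tree.

```python
# Minimal sketch of AD's container model: containers (domains, OUs)
# hold leaf objects (users, computers) and other containers.

class ADObject:
    """A generic directory object (leaf)."""
    def __init__(self, name):
        self.name = name


class Container(ADObject):
    """A container object: an AD object that may hold other objects."""
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    def add(self, obj):
        self.children.append(obj)
        return obj


# A domain containing a Marketing OU, which in turn holds a user,
# a computer, and a nested OU.
domain = Container("DC=realtimepublishers,DC=com")
marketing = domain.add(Container("OU=Marketing"))
user = marketing.add(ADObject("CN=David Templeton"))
workstation = marketing.add(ADObject("CN=WKSTN-01"))
emea = marketing.add(Container("OU=EMEA"))  # an OU nested inside an OU
```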
Although it’s perfectly acceptable to say "create" in lieu of "instantiate" when referring to the generation of a
new object within the directory, we’ll use the latter more frequently in this book. The reason is that ‘instantiate’
is more appropriate when you consider the underlying event that actually occurs -- that being the "creation of
an instance of" an object. And, hey, let’s face it: saying ‘instantiate’ sounds a lot cooler and is more likely to
impress people (just kidding – if we really believed that we’d start throwing around words like orthogonal!)
The Schema
As you might imagine, all of the object classes and attributes discussed thus far have some kind of underlying
reference that describes them -- a sort of "dictionary" for Active Directory. In Win2K parlance, this "dictionary" is
referred to as the schema. The Active Directory schema contains the definitions of all object types that may be
instantiated within the directory.
Even the Active Directory schema itself is stored in the directory as objects. That is, AD classes are stored
as objects of the class "classSchema" and attributes are stored as objects of class "attributeSchema". The
schema, then, is just a number of instances of the classes "classSchema" and "attributeSchema", with
properties that describe the relationship between all classes in the AD schema.
To understand the relationship between object classes, objects, and the schema, let’s go back to the object-oriented
model upon which the AD schema is based. As is the case with object-oriented development environments (e.g.
C++, Java, etc.), a class is a kind of basic definition of an object. When I instantiate an object of a certain class, I
create an instance of that particular object class. That object instance has a number of properties associated with
the class from which it was created. For example, suppose I create a class called "motorcycle", which has attributes
like "color," "year," "enginesize," etc. I can instantiate the class "motorcycle" and create a real object called
"Yamaha YZF600R6" with properties like "red" (for the color attribute), 2000 (for the year), and 600 (for the
motorcycle engine’s size in CCs).
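The motorcycle example translates directly into code. The class is the definition, akin to a schema entry; instantiating it produces a concrete object whose attributes were dictated by the class (the class and attribute names below simply mirror the chapter's example).

```python
# The class/instance relationship from the chapter's example: the
# class defines the attributes; an instance supplies concrete values.

class Motorcycle:
    def __init__(self, color, year, enginesize):
        self.color = color            # attribute defined by the class
        self.year = year
        self.enginesize = enginesize  # engine size in CCs

# Instantiate the class: create one real object from the definition.
yzf = Motorcycle(color="red", year=2000, enginesize=600)
```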
Similarly, an Active Directory implementation within your enterprise is just the instantiation of the Active
Directory schema classes and attributes into hundreds or thousands of different object classes and their associated
attributes. For example, I might create an object of the class user called Craig Daily, who has properties like
password, address, home directory location, etc.
You can view the AD Schema through the AD Schema Microsoft Management Console (MMC) snap-in, which is
shown in Figure 1.1.
Figure 1.1: Viewing the Active Directory Schema using the AD Schema MMC snap-in.
Viewing the AD Schema
Some of you may be curious at this point how to use the AD Schema MMC snap-in shown in Figure 1.1 to view the
AD schema. It’s not immediately obvious how to do this using the MMC console, since AD Schema isn’t available in
the default list of snap-ins that appears. One note of caution here: editing the schema is a potentially dangerous
activity—you need to know exactly what you’re doing and why you’re doing it. Before you make schema changes, be
sure to back up the current AD database contents and schema (e.g., by using ntbackup.exe or a third-party utility’s
System State backup option on an up-to-date domain controller).
To view the AD schema, use the Microsoft Management Console (MMC) Active Directory Schema snap-in, which
you’ll find among Win2K’s Support Tools. (You can install these tools from the Win2K CD-ROM’s \support folder.) To
use this snap-in, you need to manually register the snap-in by selecting Start, Run (or entering a command-prompt
session) and typing:
regsvr32 schmmgmt.dll
You’ll receive a message stating that the OS successfully registered the .dll file. You can now load and use the Active
Directory Schema snap-in through the MMC utility (i.e., mmc.exe). For example, you can open an MMC session and
choose Add/Remove Snap-in from the Console menu, then select Active Directory Schema from the Add Standalone
Snap-In dialog box (Figure 1.1 shows the Active Directory Schema snap-in’s view of the AD schema).
To modify the AD schema, you need to use a different utility: the MMC ADSI Edit snap-in. ADSI Edit is essentially a
low-level AD editor that lets you view, change, and delete AD objects and object attributes. In terms of usefulness and
potential danger, ADSI Edit is to AD what the regedit or regedt32 registry editors are to the system registry. To use
the ADSI Edit utility to make schema modifications, you first need to be a member of the Schema Admins group. The
Schema Admins group is a universal group in native-mode Win2K domains, and a global group in mixed-mode
Win2K domains (i.e. those that are still running NT 4.0 domain controllers, or that no longer have NT domain
controllers but haven’t yet been converted to Win2K’s native mode). To use the snap-in, first register the associated
adsiedit.dll file at the command line:
regsvr32 adsiedit.dll
The ADSI Edit snap-in will be available from the MMC’s Console/Add/Remove snap-in menu. Once you’ve added
the snap-in, you can use the ADSI Edit console to make changes to AD objects and attributes.
For a more advanced discussion of the AD schema and its underlying constructs, I recommend The Definitive
Guide to Win2K Administration by Sean Daily and Darren Mar-Elia. Chapter 1 of this book describes advanced
AD schema design and management issues. You can find a link to this free eBook by Realtimepublishers.com
at http://www.realtimepublishers.com.
LDAP
One of the early design decisions that Microsoft made regarding Active Directory was the use of an efficient
directory access protocol known as the Lightweight Directory Access Protocol (LDAP). LDAP also benefits from
its compatibility with other existing directory services. This compatibility in turn provides for the interoperability
of AD with these other directory services.
Active Directory supports LDAP versions 2 and 3.
LDAP specifies that every AD object be represented by a unique name. These names are formed by combining
information about domain components, OUs, and the name of the target object, known as a common name.
Table 1.1 lists each of these LDAP name components and their descriptions.

Table 1.1: LDAP name components.
    DC (domain component)      A component of the object's DNS domain name (e.g. realtimepublishers, com)
    OU (organizational unit)   An organizational unit that contains the object
    CN (common name)           The name of the target object itself
For example, the LDAP name for the user object for a person named "David Templeton" in the
realtimepublishers.com domain’s Marketing OU would be as follows:
CN=David Templeton,OU=Marketing,DC=realtimepublishers,DC=com
The above form of an object’s name, as it appears in the directory, is referred to as the object’s distinguished name
(DN). Alternately, an object can also be referred to using its relative distinguished name (RDN). The RDN is the
portion of the DN that refers to the target object within its container. In the above example, the RDN of the user
object would simply be ‘David Templeton’.
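The relationship between a DN, its components, and the RDN can be shown with a small helper. This is an illustrative sketch, not part of any Microsoft API; real DNs also permit escaped commas (e.g. "\,"), which this simple split deliberately ignores.

```python
# Split a distinguished name into (attribute, value) pairs and
# extract the RDN. Simplified: ignores LDAP escaping rules.

def parse_dn(dn):
    """Return the DN as a list of (attribute, value) pairs."""
    return [tuple(part.strip().split("=", 1)) for part in dn.split(",")]

def rdn(dn):
    """The RDN is the leftmost component of the DN."""
    attribute, value = parse_dn(dn)[0]
    return value

dn = "CN=David Templeton,OU=Marketing,DC=realtimepublishers,DC=com"
```

Applied to the DN above, `rdn(dn)` yields the value of the leftmost component, matching the RDN described in the text.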
Domains, Trees, and Forests
A significant advantage of AD is that it allows for a flexible, hierarchical design. To facilitate this design, the AD
structure employs several different logical components. The first of these components is the domain. A domain
serves as the core unit in AD’s logical structure, and is defined as a collection of computers that share a common
directory database. In fact, this definition is basically identical to that of NT domains. Like NT domains, Win2K
domains have unique names. However, unlike the NetBIOS-based domain names used in NT, Win2K domains use
a DNS naming structure (e.g. realtimepublishers.com or mydomain.org).
Domains also have several other important characteristics. First, they act as a boundary for network security: each
domain has its own separate and unique security policy, which defines things such as password expiration and
similar security options. Domains also act as an administrative boundary, since administrative privileges granted to
security principals within a domain do not automatically transfer to other domains within AD. Finally, domains act
as a unit of replication within Active Directory: since all servers acting as domain controllers in a Win2K domain
replicate directory changes to one another, they contain a complete set of the directory information related to their
domain.
Win2K domain names don’t have to be Internet-registered domain names ending in Internet-legal top-level
domains, such as .com, .org, .net, etc. For example, it is possible to name domains with endings such as
.priv, .msft, or some other ending of your choosing. This of course assumes that the domain’s DNS servers
aren’t participating in the Internet DNS namespace hierarchy (which is by far the most common scenario, due
to security considerations with exposing internal DNS servers to the Internet). If you do elect to use standard
Internet top-level domains in your Win2K domain names, you should register these names on the Internet
even if they don’t participate in the Internet DNS namespace. The reason for this is that most organizations
are connected to the Internet, and using unregistered internal domain names that may potentially be
registered on the Internet could cause name conflicts.
Win2K’s AD design also integrates the concepts of forests and trees. A tree is a hierarchical arrangement of Win2K
domains within AD that forms a contiguous namespace. For example, assume a domain named xcedia.com exists in
your AD structure. The two subdivisions of xcedia.com are europe and us, which are each represented by separate
domains. Within AD, the names of these domains would be us.xcedia.com and europe.xcedia.com. These domains
would form a domain tree since they share a contiguous namespace. This arrangement demonstrates the hierarchical
structure of AD and its namespace: all of these domains are part of one contiguous related namespace in the
directory; that is to say, they form a single domain tree. The name of the tree is the name of the domain at the
root of the tree, in this case, xcedia.com.
Figure 1.2: An Active Directory forest with a single tree.
A forest is a collection of one or more trees. A forest can be as simple as a single Win2K domain, or more complex
such as a collection of multi-tiered domain trees.
Let's take our single-tree example scenario from earlier a step further. Assume that within this AD, the parent
organization Xcedia also has a subsidiary company with a Win2K/DNS domain name of Realtimepublishers.com.
Although the parent company wants to have both organizations defined within the same AD forest, it wants their
domain and DNS names to be unique. To facilitate this, you would define the domains used by the two organizations
within separate trees in the same AD forest. Figure 1.3 illustrates this scenario. All domains within a forest (even
those in different trees) share a schema, configuration, and global catalog (we’ll discuss the global catalog in a later
section). In addition, all domains within a forest automatically trust one another due to the transitive, hierarchical
Kerberos trusts that are automatically established between all domains in a Win2K forest.
The Kerberos Version 5 authentication protocol is a distributed security protocol based on Internet standards,
and is the default security mechanism used for domain authentication within or across Win2K domains.
Kerberos replaces Windows NT LAN Manager (NTLM) authentication, used in Windows NT Server 4.0, as the
primary security protocol for access to resources within or across Win2K Server domains. Win2K domain
controllers still support NTLM to provide backward compatibility with Windows NT 4.0 machines.
Figure 1.3: Example of a multi-tree Active Directory forest.
In the case of a forest with multiple trees, the name of the forest is the name of the first domain created
within the forest (i.e. the root domain of the first tree created in the forest).
Although cohabitation of different organizations within the same AD forest is appropriate in some circumstances,
in others it is not. For example, unique security or schema needs may require two companies to use entirely
different AD forests. In these situations, Kerberos trusts aren’t established between the two forests, but you may
create explicit trusts between individual domains in different forests.
There are several resources you might find helpful when planning your organization’s AD structure and
namespace. One is The Definitive Guide to Active Directory Design and Planning, another free eBook from
Realtimepublishers.com (a link to which can be found at http://www.realtimepublishers.com). There are also
several Microsoft white papers that contain valuable information on AD design and architectural concepts,
including "Active Directory Architecture" and "Domain Upgrades and Active Directory." These and other
technical documents related to AD can be found on Microsoft’s Web site at
http://www.microsoft.com/windows2000/server.
Organizational Units
An organizational unit (OU) is a special container object that is used to organize other objects – such as computers,
users, and printers – within a domain. OUs can contain all of these object types, and even other OUs (this type
of configuration is referred to as nested OUs). OUs are a particularly important element of Active Directory for
several reasons. First, they provide the ability to define a logical hierarchy within the directory without creating
additional domains. OUs allow domain administrators to subdivide their domains into discrete sections and
delegate administrative duties to others. More importantly, this delegation can be accomplished without necessarily
giving the delegated individuals administrative rights to the rest of the domain. As such, OUs facilitate the
organization of resources within a domain.
There are a number of models used for the design of OU hierarchies within domains, but the two most
common are those dividing the domain organizationally (e.g. by business unit) or geographically.
Only Win2K servers acting as domain controllers may be configured as global catalog servers. By default, the first
domain controller in a Win2K forest is automatically configured as a global catalog server (the global catalog can be
moved later to a different domain controller if desired; however, every Win2K forest must contain at least one global
catalog). Like Active Directory itself, the global catalog uses replication to keep the various global catalog servers
within a Win2K domain or forest up to date. In addition to being a repository of commonly queried AD
object attributes, the global catalog plays two primary roles on a Win2K network:
Network logon authentication
In native-mode Win2K domains (networks where all domain controllers have been upgraded to Win2K and
the switch to native mode has been made manually by the administrator), the global catalog facilitates
network logons for Active Directory-enabled clients. It does so by providing universal group membership
information to the account sending the logon request to a domain controller. This applies not only to regular
users but also to every type of object that must authenticate to the Active Directory (including computers,
etc.). In multi-domain networks, at least one domain controller acting as a global catalog must be available
in order for users to log on. Another situation that requires a global catalog server occurs when a user
attempts to log on with a user principal name (UPN) other than the default. If a global catalog server is not
available in these circumstances, users will only be able to log on to the local computer (the one exception is
members of the Domain Admins group, who do not require a global catalog server in order to log on
to the network).
Note: Although mixed-mode Win2K domains do not require the global catalog for the network logon authentication
process, global catalogs are still important in facilitating directory queries and searches on these networks and
should therefore be made available at each site within the network.
Physical Structure of AD
Thus far, our discussion of AD has focused on the logical components of the directory’s architecture; that is, the
components used to structure and organize network resources within the directory. However, an AD-based network
also incorporates a physical structure, which is used to configure and manage network traffic.
Domain Controllers
The concept of a domain controller has been around since the introduction of Windows NT. As is the case with
NT, a Win2K domain controller is a server that houses a replica of the directory (in the case of Win2K, the directory
being AD rather than the NT SAM database). Domain controllers are also responsible for replicating changes
to the directory to other domain controllers in the same domain. Additionally, domain controllers are responsible
for user logons and other directory authentication, as well as directory searches.
Fortunately, Win2K does away with NT's restriction that converting a domain controller to a member server, or
vice-versa, requires reinstallation of the server OS. In Win2K, servers may be promoted to or demoted from domain
controller status dynamically (and without reinstallation) using the Dcpromo.exe domain controller promotion
wizard.
At least one domain controller must be present in a domain, and for fault tolerance reasons it’s a good idea to have
more than one domain controller at any larger site (e.g. a main office or large branch office). To prevent user logon
traffic from crossing over WAN links, you should consider putting at least one domain controller in even the
smallest of your branch offices and similar remote sites.
Directory Replication
As we’ve discussed, domain controllers are responsible for propagating directory updates they receive (e.g. a new
user object or password change) to other domain controllers. This process is known as directory replication, and
can be responsible for a significant amount of WAN traffic on many networks.
The Win2K Active Directory is replicated in a multi-master fashion between all domain controllers within a
domain to ensure greater availability of the directory and network as a whole. The term multi-master indicates that
multiple read/write copies of the database exist simultaneously, one on each Win2K domain controller. Thus,
each Win2K domain controller is effectively a peer of the other controllers, and any domain controller can write
directory updates and propagate those updates to other domain controllers. This is in notable contrast to NT 4.0’s
single-master PDC/BDC replication topology wherein a single domain controller, the PDC, houses a read/write
copy of the database.
AD's replication design means that different domain controllers within the domain may hold different data
at any given time, but usually only for short periods of time. As a result, individual domain controllers may
be temporarily out of date at any given time and unable to authenticate a login request. Active Directory's
replication process has the characteristic of bringing all domain controllers up to date with each other; this
characteristic is called "convergence".
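The multi-master write and convergence behavior can be illustrated with a toy model. This is only a sketch: real AD resolves conflicting updates using update sequence numbers (USNs), timestamps, and originating-server identifiers, whereas this model uses a simple per-attribute version counter, and the DC names are invented.

```python
# Toy model of multi-master replication: each DC accepts writes locally
# and pulls updates from its peers until all replicas agree (convergence).

class DomainController:
    def __init__(self, name):
        self.name = name
        self.attrs = {}  # attribute -> (version, value)

    def write(self, attr, value):
        """Originate a change locally, bumping the attribute's version."""
        ver = self.attrs.get(attr, (0, None))[0] + 1
        self.attrs[attr] = (ver, value)

    def replicate_from(self, other):
        """Pull any attribute the peer holds at a newer version."""
        for attr, (ver, value) in other.attrs.items():
            if ver > self.attrs.get(attr, (0, None))[0]:
                self.attrs[attr] = (ver, value)

dcs = [DomainController(n) for n in ("DC1", "DC2", "DC3")]
dcs[0].write("phone", "555-1000")   # change originates on DC1
dcs[2].write("title", "Engineer")   # a different change originates on DC3

# A few replication passes bring every DC up to date with every other.
for _ in range(2):
    for a in dcs:
        for b in dcs:
            if a is not b:
                a.replicate_from(b)

assert all(dc.attrs == dcs[0].attrs for dc in dcs)  # convergence reached
```

In between passes, the DCs legitimately disagree, which is exactly the window of temporary staleness the text describes.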
When you use the Active Directory Installation Wizard to create the first domain in a new forest, all five of the
single-master operations roles are automatically assigned to the first domain controller in that domain. In a small
Active Directory forest with only one domain and one domain controller, that domain controller continues to own
all the operations master roles. In a larger network, whether with one or multiple domains, you can reassign these
roles to one or more of the other domain controllers.
Schema master
The domain controller that serves the schema master role is responsible for all updates and modifications to
the forest-wide Active Directory schema. The schema defines every type of object and object attribute that
can be stored within the directory. Modifications to a forest's schema can be made only by members of the
Schema Admins group, and only on the domain controller that holds the schema master role.
PDC emulator
If a Win2K domain contains non-AD-enabled clients or is a mixed-mode domain containing Windows NT
backup domain controllers (BDCs), the PDC emulator acts as a Windows NT primary domain controller
(PDC) for these systems. In addition to replicating the Windows NT-compatible portion of directory
updates to all BDCs, the PDC emulator is also responsible for time synchronization on the network (which
is important for Win2K's Kerberos security mechanism), and processing account lockouts and client
password changes.
RID master
The RID (relative ID) master allocates sequences of RIDs to each domain controller in its domain.
Whenever a Win2K domain controller creates an object such as a user, group, or computer, that object must
be assigned a unique security identifier (SID). A SID consists of a domain security ID (this is identical for all
SIDs within a domain), and a RID. When a domain controller has exhausted its internal pool of RIDs, it
requests another pool from the RID Master domain controller.
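The pool-allocation scheme described above can be sketched in a few lines. This is an illustration only: the pool size, the domain SID, and the starting RID are invented values, and real SID formats carry more structure than shown here.

```python
# Sketch of RID-pool allocation: the RID master hands each DC a block of
# RIDs; a DC forms a SID by appending its next unused RID to the domain SID.

class RidMaster:
    def __init__(self, pool_size=500):
        self.next_rid = 1000        # hypothetical starting point
        self.pool_size = pool_size

    def allocate_pool(self):
        """Hand out the next contiguous block of RIDs."""
        start = self.next_rid
        self.next_rid += self.pool_size
        return list(range(start, start + self.pool_size))

class Dc:
    def __init__(self, domain_sid, rid_master):
        self.domain_sid = domain_sid  # identical for all SIDs in the domain
        self.rid_master = rid_master
        self.pool = []

    def new_sid(self):
        if not self.pool:  # pool exhausted: request another from the RID master
            self.pool = self.rid_master.allocate_pool()
        return f"{self.domain_sid}-{self.pool.pop(0)}"

master = RidMaster()
dc1 = Dc("S-1-5-21-111-222-333", master)
dc2 = Dc("S-1-5-21-111-222-333", master)
print(dc1.new_sid())   # S-1-5-21-111-222-333-1000
print(dc2.new_sid())   # S-1-5-21-111-222-333-1500  (a different pool)
```

Because each DC draws from its own disjoint block, two DCs can create objects simultaneously without ever minting a duplicate SID.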
Infrastructure master
When an object in one domain is referenced by an object in another domain, the reference is represented by
the Globally Unique Identifier (GUID), the Security Identifier (SID, for objects that reference security
principals), and the Distinguished Name (DN) of the object being referenced. The infrastructure master is
the domain controller responsible for updating an object's SID and distinguished name in a cross-domain
object reference. The infrastructure master is also responsible for updating all inter-domain references any
time an object referenced by another object moves (for example, whenever the members of groups are
renamed or changed, the infrastructure master updates the group-to-user references). The infrastructure master
distributes updates using multi-master replication. Note: except where there is only one domain controller in
a domain, never assign the infrastructure master role to a domain controller that is also acting as a global
catalog server. If you do, the infrastructure master will not function properly; specifically, cross-domain
object references in the domain will not be updated. In a
situation where all domain controllers in a domain are also acting as global catalog servers, the infrastructure
master role is unnecessary, since all domain controllers will have current data.
Because the operations masters play such critically important roles on a Win2K network, it’s essential for proper
network operation that all the servers hosting these roles are continually available.
Sites
The final, and perhaps most important, component of AD's physical structure is the site. Sites allow administrators to
define the physical topology of a Win2K network, something that wasn't possible under Windows NT. Sites can be
thought of as areas of fast connectivity (e.g. individual office LANs), but are defined within AD as a collection of
one or more IP subnets. Given how IP networks are built, this makes sense: different
physical locations on a network are typically connected by a router, which in turn necessitates the use
of different IP subnets at each location. It's also possible to group multiple, non-contiguous IP subnets together to
form a single site.
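Mapping a client address to its site is then a straightforward subnet-membership test. The site names and subnet ranges below are invented for illustration; they sketch the idea, not any particular deployment.

```python
# Sketch: a site as a collection of IP subnets, and locating a client's
# site from its IP address. Site names and subnets are hypothetical.
import ipaddress

sites = {
    "Headquarters": ["10.1.0.0/16", "10.2.0.0/16"],   # non-contiguous subnets
    "BranchOffice": ["192.168.5.0/24"],
}

def site_for(ip):
    """Return the name of the site whose subnets contain the address."""
    addr = ipaddress.ip_address(ip)
    for site, subnets in sites.items():
        if any(addr in ipaddress.ip_network(s) for s in subnets):
            return site
    return None  # address falls outside every defined site

print(site_for("10.2.33.7"))      # Headquarters
print(site_for("192.168.5.20"))   # BranchOffice
```

This is essentially the lookup that lets an AD-enabled client prefer the domain controller nearest to it.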
So, why are sites important? The primary reason is that the definition of sites makes it possible for AD to gain
some understanding of the underlying physical network topology, and tune replication frequency and bandwidth
usage accordingly (under NT, this could only be done via manual adjustments to the replication service). This
"intelligence" conferred by knowledge of the network layout has numerous other benefits. For example, it
allows AD-enabled client computers to automatically locate the closest domain controller when a user logs on
and to authenticate against that controller. This helps prevent a situation commonly seen under NT,
wherein logon authentication requests often traverse low-bandwidth WAN connections to remote domain
controllers because the local domain controller was temporarily busy and the client computer erroneously
cached the faraway controller as its default logon controller. In a similar fashion, sites also give other
components within a Win2K network new intelligence. For example, a client computer connecting to a server
running the Distributed File System (Dfs) feature in Win2K can use sites to locate the closest Dfs replica server.
It’s important to remember that sites are part of the physical structure of AD and are in no way related to the logical
constructs we’ve already discussed, such as domains and OUs. It’s possible for a single domain to span multiple
sites, or conversely for a single site to encompass multiple domains. The proper definition of sites is an essential
aspect of AD and Win2K network design planning.
For sites that house multiple domains (e.g. an organization that divides business units into domains rather
than OUs, thus hosting multiple business unit domains on a single site), it's important to remember to place
at least one, and possibly two, domain controllers for each domain that users will authenticate to from that
site. This highlights the biggest disadvantage of the business unit domain model: the potential for requiring
many domain controllers at each and every site.
AD’s Backbone: DNS
The TCP/IP network protocol plays a far larger role in Win2K than in previous versions of Windows NT.
Although other legacy protocols such as IPX and NetBEUI continue to be supported, most of the internal
mechanics of Win2K networks and Active Directory are based on TCP/IP.
In Win2K, as with all TCP/IP-based networks, the ability to resolve names to IP addresses is an essential service. A
bounded area within which a given name can be resolved is referred to as a namespace. In Windows NT-based
networks, NetBIOS is the primary namespace, and WINS the primary name-to-IP address resolution service. With
Win2K, Microsoft has abandoned the use of NetBIOS as the primary network namespace and replaced it with
DNS, which is also used on the Internet.
Like Active Directory, DNS provides a hierarchical namespace. Both systems also make use of domains, although
they define them somewhat differently. Computer systems (called "hosts") in a DNS domain are identified by their
fully qualified domain name (FQDN), which is formed by appending the host’s name to the domain name within
which the host is located. Multi-part domain names (i.e. domains that are several levels deep in the hierarchy of the
DNS namespace) are written with the most significant domain division (e.g. .com, .org, .edu, etc.) at the right and
the least significant element, the host name, at the left. In this way, a host system's FQDN indicates its position
within the DNS hierarchy. For example, the FQDN of a computer named 'mercury' located in the domain
'realtimepublishers.com' would be mercury.realtimepublishers.com.
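The FQDN rule from the text can be written as code. The names below mirror the example above; nothing else is assumed.

```python
# The FQDN rule as code: append the host name to the left of its domain name.

def fqdn(host, domain):
    return f"{host}.{domain}"

name = fqdn("mercury", "realtimepublishers.com")
print(name)   # mercury.realtimepublishers.com

# Reading the labels right to left walks down the DNS hierarchy,
# from the most significant division toward the host itself:
print(list(reversed(name.split("."))))   # ['com', 'realtimepublishers', 'mercury']
```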
Although it is possible to incorporate a DNS namespace within a Windows NT network for name-to-IP address
resolution, the use of DNS is optional and mainly of interest to enterprises running in heterogeneous environments
or Internet-based applications. However, DNS plays a far more critical role in the Win2K Active Directory. In
Win2K, DNS replaces NetBIOS as the default name resolution service (however, it is still possible to continue
using a NetBIOS namespace and WINS servers on a Win2K network for legacy systems and applications). In
addition, Win2K domains use a DNS-style naming structure (e.g. a Win2K domain might have a name such as
‘santarosa.realtimepublishers.com’ or ‘mydomain.net’), which means the namespace of Active Directory domains is
directly tied to that of the network’s DNS namespace.
This namespace duplication will normally be limited to the internal DNS namespace for companies using the
Microsoft-recommended configuration of separate DNS configurations for the internal LAN and the Internet.
Finally, Win2K and Active Directory use DNS as the default locator service; that is, the service used to convert
items such as Active Directory domain, site, and service names to IP addresses.
It’s important to remember that although the DNS and Active Directory namespaces in a Win2K network are
identical in regards to domain names, the namespaces are otherwise unique and used for different purposes. DNS
databases contain domains and the record contents (e.g. host address/A records, service locator/SRV records, mail
exchanger/MX records, etc.) of the DNS zone files for those domains, whereas the Active Directory contains a wide
variety of different objects including domains, organizational units, users, computers, and group policy objects.
Another notable connection between DNS and the Active Directory is that Win2K DNS servers can be configured
to store their DNS domain zone files directly within the Active Directory itself, rather than in external text files.
Although DNS doesn’t rely on the Active Directory for its functionality, the converse is not true: Active Directory
relies on the presence of DNS for its operation.
Win2K includes an implementation of dynamic DNS updates (defined by RFC 2136), and AD-enabled clients
locate important Win2K network resources, such as domain controllers, through special DNS resource records
called SRV records (defined by RFC 2782). The accuracy of these SRV records is therefore critical to the proper
functioning of a Win2K network (not to mention the availability of the systems and services they reference).
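As a sketch of how a client uses these records, consider ordering a set of SRV answers to pick a domain controller. The host names are invented; the ordering rule (lowest priority first, then weight) follows the SRV specification, although a real resolver performs weighted random selection among equal priorities rather than the deterministic sort used here.

```python
# Sketch: ordering DNS SRV answers to choose a domain controller.
# Hypothetical records for a name like _ldap._tcp.dc._msdcs.example.com.

records = [
    {"priority": 0,  "weight": 100, "target": "dc1.example.com",       "port": 389},
    {"priority": 0,  "weight": 50,  "target": "dc2.example.com",       "port": 389},
    {"priority": 10, "weight": 100, "target": "dc-backup.example.com", "port": 389},
]

# Lower priority wins outright; weight breaks ties (deterministically here,
# though RFC 2782 calls for weighted random selection).
ordered = sorted(records, key=lambda r: (r["priority"], -r["weight"]))
print([r["target"] for r in ordered])
# ['dc1.example.com', 'dc2.example.com', 'dc-backup.example.com']
```

If the records are stale or missing, every client following this procedure is steered to the wrong (or no) domain controller, which is why their accuracy matters so much.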
The network monitoring procedures employed by most organizations tend to fall into one of the following categories:
Existing monitoring procedures in place with full-featured network monitoring tools.
The third category is organizations with network monitoring routines built on sophisticated, full-featured
network monitoring software. In addition to many of the basic services provided by the tools that come with
Windows NT/2000, the Resource Kits, and freeware/shareware utilities, these utilities typically include
intelligent scripting to provide sophisticated testing as well as corrective actions in the event of failure. In
addition, many network-monitoring tools include a knowledge base that helps administrators understand why a
problem is happening and offers suggestions as to how to resolve it. For organizations running large or multi-site
Win2K networks, this type of tool is highly recommended.
For administrators of Windows NT (or other operating systems) networks that have existing monitoring tools and
procedures, the migration to Win2K will mainly involve an upgrade of existing tools and staff knowledge about the
vulnerabilities of the new environment. However, if your organization has employed a more reactive stance (i.e. fix
it only when it breaks) with regards to resolving network problems, you’ll quickly find that this methodology can
be especially troublesome in a Win2K environment.
Although it is true that Win2K provides a far greater level of reliability and performance than its predecessors, it
also involves a higher number of "moving parts" and dependencies that need to be accounted for. Although legacy
Windows NT networks have their own set of dependencies and vulnerabilities, they are far fewer in number due to
NT's simpler (and less capable) network architecture. Let's quickly review the primary monitoring considerations in
a Windows NT environment:
Name servers
Another aspect of NT networks requiring continual monitoring is the availability of network name servers.
For the majority of Windows NT-based networks (including those with Windows 95/98/ME/2000 clients),
NetBIOS is the predominant namespace and Windows Internet Name Service (WINS) the predominant
name-to-IP address resolution service. WINS databases and replication are also notoriously fragile elements
of NT networks, and must be regularly monitored to ensure their functionality. Even for networks using
DNS as the primary name resolution service, the availability of the DNS name servers is just as important
as that of WINS servers.
Network browser service
Windows NT, Windows Me, Windows 98, Windows 95, and other members of the Windows product family
rely on a network browsing service to build lists of available network resources (e.g. servers, shared directories,
and shared printers). The architecture of this service, which calls for each eligible network node to participate
in frequent elections to determine a browse master and backup servers for each network segment, is another
infamously unreliable aspect of Microsoft networks, and one that requires frequent attention and maintenance.
It’s no surprise that the primary monitoring consideration in Win2K is Active Directory and its related services and
components. This includes responsiveness to DNS and LDAP queries, AD inter-site and intra-site replication, and
a special Win2K service called the Knowledge Consistency Checker (KCC). In addition, the health and availability
of services such as DNS, the global catalog, and DFS are also important.
The Knowledge Consistency Checker (KCC) is a special Win2K service that automatically generates Active
Directory’s replication topology, and ensures that all domain controllers on the network participate in replication.
However, knowing what metrics to monitor is only a first step. By far, the most important and complex aspect of
monitoring network health and performance isn’t related to determining what to monitor, but rather how to digest
the raw data collected from the array of metrics and make useful determinations from that data. For example,
although it would be possible to collect data on several dozen metrics (e.g. via Performance Monitor) related to
Active Directory replication, simply having this information at hand doesn’t tell you how to interpret the data, or
what you should consider acceptable tolerance ranges for each metric. A useful monitoring system not only collects
raw data, but also understands the inter-relation of that data and how to use the information to identify problems
on the network. This kind of artificial intelligence represents the true value of network monitoring software.
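The point about interpretation over raw collection can be sketched simply: collected samples only become actionable once compared against tolerance ranges. The metric names and thresholds below are invented for illustration, not recommended values.

```python
# Sketch: raw monitoring samples become useful only when checked against
# tolerance ranges. Metric names and ranges are hypothetical.

tolerances = {
    "replication_latency_sec": (0, 900),   # e.g. expect convergence within 15 min
    "dc_cpu_percent":          (0, 85),
    "ldap_response_ms":        (0, 500),
}

def evaluate(samples):
    """Return only the metrics whose samples fall outside their range."""
    problems = {}
    for metric, value in samples.items():
        low, high = tolerances[metric]
        if not (low <= value <= high):
            problems[metric] = value
    return problems

print(evaluate({"replication_latency_sec": 2400,
                "dc_cpu_percent": 40,
                "ldap_response_ms": 310}))
# {'replication_latency_sec': 2400}
```

A real monitoring product layers correlation on top of this (e.g. recognizing that slow LDAP responses and high replication latency on the same DC point to one root cause), which is the "artificial intelligence" the text describes.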
In order to ensure the health and availability of the Active Directory as well as other critical Win2K network services,
organizations will need to regularly monitor a number of different services and components, which are listed in
Table 1.2.
Replication
• Failed replication (e.g. due to domain controller or network connectivity problems)
• Slow replication
• Replication topology invalid/incomplete (lacks transitive closure/consistency)
• Replication using excessive network bandwidth
• Too many properties being dropped during replication
• Update Sequence Number (USN) update failures
• Other miscellaneous replication-related failure events
DNS
• Missing or incorrect SRV records for domain controllers
Operations masters (FSMOs)
• Inaccessibility of one or more operations master (FSMO) servers (PDC emulator, …)
WINS
• Server query or replication failures (for legacy NetBIOS systems and applications)
• Naming context lost + found items exist
• Application or service failures or performance problems
In addition to providing problem identification and alerting features, many third-party products also provide
automatic problem resolution features. For example, it is possible to configure many products to take specific
corrective actions when a problem is detected, such as restarting a particular service when it is found to be unresponsive.
Many tools use scripting and/or the ability to call external utilities to accomplish these tasks. The most comprehensive
utilities base their decisions on rule sets derived from an internal database and/or intelligent escalation routines that
emulate what an administrator might do. For example, you might configure a system such that on the first failure
of a given service, that service is restarted; the computer is restarted in the event the service restart fails; a different
machine is promoted to replace that system in the event the computer restart attempt fails, and so on.
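The escalation ladder just described can be sketched as a loop over corrective actions. The action names are placeholders for whatever a real tool would script; the simulated outcomes are invented to show the flow.

```python
# Sketch of an escalation ladder: try each corrective action in order
# until one reports success. Action names are hypothetical placeholders.

def escalate(actions):
    """Run (name, action) pairs in order; return the first name that succeeds."""
    for name, action in actions:
        if action():
            return name
    return None  # every rung of the ladder failed

# Simulated outcomes: the service restart fails, the reboot succeeds.
ladder = [
    ("restart service",              lambda: False),
    ("restart computer",             lambda: True),
    ("promote replacement machine",  lambda: True),   # never reached here
]
print(escalate(ladder))   # restart computer
```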
Other Considerations
There are several considerations you should keep in mind when creating a Win2K network monitoring and
troubleshooting solution. One is the overall architecture of the application(s) being used in the solution. It’s
important to understand how the product collects its data and what impact this collection will have on your
network and servers. For example: Does the product employ local agents to gather metrics or does it use remote
queries? Do throttling features exist to control network bandwidth and system resource usage? Is there a
machine/site/domain hierarchy that allows data to be passed to the central collection database in an efficient
manner? Does the product provide web-based management? All of these questions are important since they can
have a significant impact on your network environment and your overall satisfaction with the product.
Another differentiating feature of network monitoring software packages is whether they provide a support
knowledgebase of common problems and solutions. This kind of knowledge is invaluable from both a technical
and financial standpoint, since it serves to reduce the learning curve of the supporting IT staff, as well as the
amount of time and money administrators must expend researching and resolving problems. Some utilities augment
this capability by allowing administrators to add their own experience to the knowledgebase or a problem tracking
and resolution database, thereby leveraging internal IT staff expertise and creating a comprehensive problem
resolution system.
A final feature provided by some applications, and one that may be of interest to IT shops engaged in Service Level
Agreements (SLAs), is the ability to generate alerts and reports that address exceptions to, or compliance with,
SLA obligations.
Summary
Although Win2K and Active Directory represent a quantum leap forward in the NT product line, they also introduce
new levels of network infrastructure complexity that must be properly managed in order to maintain an efficient and
highly available network. Real-time, proactive monitoring and management of the Active Directory and other critical
services is an essential part of managing Win2K-based networks. In this chapter, we discussed the most important
features and components of Win2K and Active Directory, their roles within the enterprise, differences between
managing Windows NT 4.0-based networks and Win2K Active Directory-based networks, and some of the basic
metrics and statistics that Win2K network administrators need to watch to help them ensure high availability on
their networks.
In the remaining chapters of this book, we’ll drill down and explore each of the vital areas of Active Directory and
Win2K networks in detail, providing the information, tools, and techniques you’ll need to employ to maintain a
healthy and highly available Win2K network.
Chapter 2
The best way to troubleshoot AD problems is to avoid AD problems in the first place. To do that, you need to start
with a good design. The design of AD includes not only the layout of the forests, trees, domains, and Organizational
Units (OUs) but also the sites and site links that represent the physical network.
In this chapter, I’ll give you a solid understanding of how to design an AD for your environment and network.
Because the information in AD can be distributed across the network, there may be unique aspects of your design
and implementation that only apply to your site. However, my goal is to give you enough information to ensure
that the design serves the needs of your users and administrators.
Logical Structures
A list of logical structures used in AD is shown in Table 2.1.
Table 2.1: The logical structures of AD, which are used to design and build the object hierarchy.
Two important logical structures that you need to understand to design an AD are the namespace and the naming context.
Although these two concepts seem to mean the same thing, they’re actually different. To help you understand how
they differ, I’ll give you a quick overview of each. These structures are also discussed throughout the chapter.
AD depends on DNS and the DNS-type namespace that names and represents the domains in the forest. It’s
important to design your domain tree in a DNS-friendly way and to provide clients with reliable DNS services.
Although AD uses DNS to create its structure, DNS and AD are totally separate namespaces.
Namespace
Another term for a directory is namespace. A namespace refers to a logical space in which you can uniquely resolve
a given name to a specific object in the directory. AD is a namespace because it resolves a name to the object name
and the set of domain servers that stores the object itself. The Domain Name System (DNS) is a namespace because it
translates easy-to-remember names (such as www.company.com) into IP addresses (for example,
124.177.212.34).
Naming Context
The naming context represents a contiguous subtree of AD in which a given name is resolved to an object. If you
look at the internal layout of AD, you see a structure that looks similar to a tree with branches. If you expand the
tree, you see the containers, the objects that reside in them, and the attributes associated with the objects.
In AD, a single domain controller always holds at least three naming contexts:
Domain
Contains the object and attribute information for the domain of which the domain controller is a member.
Configuration
Contains the rules for creating the objects that define the logical and physical structure of the AD forest.
Schema
Contains the rules for creating new objects and attributes.
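These three contexts can be expressed as the distinguished names a domain controller reports. The structure (Schema nested under Configuration) follows AD convention, but the example.com domain is hypothetical.

```python
# Sketch: the three naming contexts every domain controller holds,
# written as DNs for a hypothetical example.com domain.

naming_contexts = {
    "domain":        "DC=example,DC=com",
    "configuration": "CN=Configuration,DC=example,DC=com",
    "schema":        "CN=Schema,CN=Configuration,DC=example,DC=com",
}

for role, dn in naming_contexts.items():
    print(f"{role:>13}: {dn}")
```

Note that the configuration and schema contexts are forest-wide (identical on every DC), while the domain context is specific to the DC's own domain.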
Physical Structure – Description
Objects and attributes – An object is defined by the set of attributes or characteristics assigned to it. Objects include users, printers, servers, groups, computers, and security policies.
Domain controller – A network server that hosts AD in a domain.
Directory server role – A server that takes the role of Flexible Single Master Operation (FSMO). Directory server roles are single-master servers that perform special roles for AD, such as managing domains, managing schemas, and supporting down-level clients (Windows NT).
Site – A location on the physical network that contains AD servers. A site is defined as one or more well-connected Transmission Control Protocol/Internet Protocol (TCP/IP) subnets.
Global catalog server – Stores the global catalog information for AD.
Table 2.2: The physical structures of AD, which are used to implement the logical directory structures on the network.
Physical Structures
In addition to the logical structures in AD, several physical structures help you implement the logical structures on
your network. These physical structures are listed in Table 2.2.
From the list of logical and physical structures that you have to work with, four structures are critical to the design
of AD: forests and trees, domains, OUs, and sites. Designing and implementing each of these four structures builds
on the previous one. Implementing these structures properly is crucial to a successful design. Design your AD
structure in the following order:
Figure 2.1: Two companies or organizations named company1.com and company2.com can form a forest in AD.
The forest serves two main purposes. First, it simplifies workstation interaction with AD because it provides a global
catalog where the client can perform all searches. Second, the forest simplifies administering and managing multiple
trees and domains. A forest has the following key characteristics or components:
Global Schema
The directory schema for the forest is a global schema, meaning that it’s exactly the same for each domain controller
in the forest. The schema exists as a naming context and is replicated to every domain controller. The schema
defines the object classes and the attributes of object classes.
Complete Trust
AD automatically creates bi-directional transitive trust relationships among all domains in a forest. This allows the
security principals, such as users and groups of users, to authenticate from any computer in the forest. However,
this is only true if the users’ access rights have been set up correctly.
Global Catalog
The global catalog contains a copy of every object from every domain in the forest. However, it only stores a select
set of attributes from the objects. By default, the global catalog isn’t placed on every domain controller in the forest;
instead, you determine which domain controllers should hold a copy.
When designing the forest, you need to consider both the users and the administrators. For example, consider an
organization that has just acquired another company. If the two forests are merged into a single forest, all the users
can view the entire AD. However, the forests might not be merged because the two autonomous administrative
groups might not agree on how to manage the forests. The winner of this dispute depends on your priority: Do
your users have a higher priority than your administrators?
If the administrators win, the users inherit two forests and no longer have a single, consistent view of AD. Each
administrative group manages its own forest independently.
The answer also depends on which type of organization your company is. If it isn’t important for the users to have
a consistent view of AD, it might be appropriate to have multiple forests with separate administrators. For example,
consider an application service provider (ASP) company, which hosts AD services on behalf of other companies. The
users from those companies have no reason to view the host company’s information. In addition, each administrative
group wants its independence.
An environment with a single forest is simple to create and maintain. All users view one AD using the global catalog.
Maintaining a single forest is easy because you only need to apply configuration changes once to affect all the domains
in the forest. For example, when you add a domain to the forest, all the trust relationships are set up automatically.
In addition, the new domain receives any additional changes made to the forest.
When deciding on the number of forests you need, remember that a forest has shared elements: the schema,
configuration container, and global catalog. Thus, all the administrators need to agree on their content and
management. Each additional forest complicates the management of these elements and incurs its own
management cost. The following is a brief list of the many management issues surrounding multiple forests:
• Each additional forest must contain at least one domain, domain controller, and someone to manage it.
• Each additional forest creates a schema. Maintaining consistency among schemas is difficult and creates
overhead.
• Each additional forest creates a configuration container. Maintaining consistency among configuration
containers when the network configuration changes is difficult and creates overhead.
• If you want user access among forests, you must create and maintain explicit one-way trusts for every
relationship you establish.
• Users wanting access to resources outside their forest need to make explicit queries; this is difficult for the
ordinary user.
• Any synchronization of components among multiple forests has to be done manually or using a
metadirectory service or other synchronization solution.
• Users cannot easily access the network resources contained in other forests.
One situation in which you may consider managing multiple forests occurs when two organizations merge or
participate in a joint venture. This puts the administration of your network into the hands of two autonomous
groups. For this reason, multiple forests are typically more costly to manage. To reduce this cost, organizations
such as partnerships and conglomerates need to form a central group that can drive the administrative
process. On the other hand, in short-lived organizations like joint ventures, it might not be realistic to expect
administrators from each organization to collaborate on forest administration.
There is another good example of a situation where multiple forests may be required. Many enterprise
organizations elect to maintain separate, parallel AD forests for testing purposes. Other organizations maintain
multiple forests because they have a disjointed organizational structure with no common infrastructure among
business units. Although you’ll certainly want to keep your network and AD design as simple as possible, your
forest structure should follow the organizational, administrative, and geographical structure of your organization.
In other situations, your company might have multiple forests, but you want to have a central administrative
group. To set up central management of multiple forests, you need to add administrators to the Enterprise and
Schema Administration groups of each forest. Because there is only one Enterprise and Schema Administration
group per forest, you must agree on a central group of administrators who can be members of these groups.
As mentioned previously, it’s difficult to manage user access between two or more forests. The simplest method is
to create explicit one-way trusts among the domains that must trust one another. The one-way trust only allows
access among the domains in the direction that it’s set up. This approach of connecting the forest is shown in
Figure 2.2.
Figure 2.2: Two forests can allow user access between their domains by establishing explicit one-way trusts.
Only the domains connected by the trusts can allow access between the forests.
Figure 2.3: The namespace of the domain tree shows the hierarchical structure of the tree.
Explicit one-way trusts aren’t transitive. The one-way trusts in Win2K are the same as the one-way trusts that existed
in Windows NT. Creating one-way trusts among multiple forests or trees can be complicated, so it’s important to
keep it simple by limiting the domains that trust one another.
In a single forest in which all domains trust one another, the tree relationship is defined by the namespace that is
necessary to support the domain structure. For example, the root domain called company.com can have two
subdomains (or child domains) named seattle.company.com and chicago.company.com. The relationship between
the root domain and the two child domains is what forms the tree, or namespace.
In the previous section, I emphasized that multiple forests in an organization are generally not recommended.
However, this isn’t to say that there aren’t situations in which multiple trees are appropriate or even recommended.
For one, multiple trees allow you to have multiple namespaces that coexist in a single directory. Multiple trees give
you additional levels of separation of the namespaces, something that domains don’t provide. Although multiple
trees work well in most situations, I recommend that you start by creating one tree until the circumstances arise
that call for more.
You may be wondering if there are any extraordinary benefits to having multiple trees. For example, do multiple
trees reduce the replication or synchronization that occurs among domain servers? The answer is no because the
schema and configuration container are replicated to all domain controllers in the forest. In addition, the domain
partitions are only replicated among the domain controllers that are in the domain. Having multiple trees doesn’t
reduce replication.
Likewise, you may be wondering whether multiple trees cause any problems. For example, does having multiple
trees require you to establish and maintain explicit one-way trust relationships? Again, the answer is no because the
transitive trust relationships are automatically set up among all domains in the forest. This includes all domains
that are in separate trees but in the same forest.
A domain can also be called a partition, which is a physical piece of AD that contains the object information.
Figure 2.4 shows a domain structure with its contents of network resources.
Figure 2.4: The domain structure is a piece of AD. It contains the users, groups, computers, printers, servers, and other resources.
The purpose of domains is to logically partition the AD database. Most large implementations of AD need to
divide the database into smaller pieces. Domains enable you to partition AD into smaller, more manageable units
that you can distribute across the network servers.
Domains are the basic building blocks of AD and Windows 2000. Domains can be connected to form the trees
and forests, and what connects them are the trust relationships, which are automatically established in AD. These
trusts allow the users of one domain to access the information contained in the other domains. When multiple
domains are connected by trust relationships and share a common schema, you have a domain tree. Every AD
installation consists of at least one domain tree.
It’s your role as an administrator to decide the structure of domains and which objects, attributes, groups, and
computers are created. The design of a domain includes DNS naming, security policies, administrative rights, and
how replication is handled. When you design domains, follow these steps:
• Place the domain controllers for fault tolerance and high network availability
Determining the Number of Domains
I recommend that you start with one domain in your environment. This is true even if there are two or more physical
locations. This design keeps the layout of your domain simple and easy to maintain. You can then add other
domains as needed. The mistake some people make is to create a bunch of domains and not know what to do with
them. One domain will be adequate for most companies, especially smaller ones.
Although one domain will work in most circumstances, other circumstances necessitate having more than one
domain for an entire organization. Some of these are described below; you must decide which of these fit your
needs.
Administrative Rights
If your organization has multiple administrative groups that want guaranteed autonomy, you may need to
create additional domains and give each group its individual rights. For example, if two companies merge
together, one group may need to operate and maintain autonomous activities.
International Setting
If your organization is international, you may need to create additional domains to support other
languages. (Administrators, users, and others may need to access AD in their first language, and the schema
contains language-specific attribute display names.)
Replication Traffic
Because the AD database can be distributed, you may want to create additional domains to hold the
distributed partitions. The need to create additional domains typically arises when you have a single
domain trying to replicate across wide area network (WAN) links. If the replication is too slow, you can
alleviate the problem by splitting the domain into two. You can then place one domain on each side of
the WAN so that each is closest to its users.
Determining the number of domains for your organization is an individual effort. No one can tell you definitively
how many domains to have and how to split them without knowing more about your company’s organization and
network. However, using these simple guidelines, you can establish parameters that enable you to effectively design
domains and determine the appropriate number for your company.
Choosing a Forest Root Domain
The first domain that you create becomes the forest root domain (or root domain), which is the top of the forest.
The forest root domain is extremely important because it determines the beginning of the namespace and establishes
the forest. Because the AD forest is established with the first domain, you need to make sure that the name of the
root domain matches the top level in the namespace. For example, root domains are domains with names such as
company.com and enterprise.com. These domain names are the roots of the DNS structures and the root of AD.
Any subsequent domains you create or add to the tree form the tree hierarchy.
The first domain you create in an AD forest contains two forest-wide groups that are important to administering
the forest, the Enterprise Administrators group and the Schema Administrators group. Containing these two
groups makes the root domain special. You cannot move or re-create these groups in another domain. Likewise,
you cannot move, rename, or reinstall the root domain. In addition to these groups, the root domain contains the
configuration container, or naming context, which also includes the schema naming context.
After you install the root domain, I recommend that you back it up often and do everything you can to protect it.
For example, if all the servers holding a copy of the root domain are lost in a catastrophic event and none of them
can be restored, the root domain is permanently lost. This is because the permissions in the Enterprise
Administrator and Schema Administrator groups are also lost. There is no method for reinstalling or recovering the
root domain and its groups in the forest other than completely backing it up and restoring it.
For more information on determining the number of domains, see "Determining the Number of Domains"
earlier in this chapter.
For a larger implementation with multiple locations around the world, however, you’ll probably want to use a
dedicated root domain.
A dedicated root domain is a root domain that is kept small, with only a few user account objects. Keeping the
root domain small allows you to replicate it to other locations at low cost (that is, with little impact on network
usage and bandwidth). Figure 2.5 illustrates how you can replicate a dedicated root domain to the other locations
on your network.
Figure 2.5: A dedicated root domain is small enough to efficiently replicate copies to
the other locations on your network
A dedicated root domain focuses on the overall operations, administration, and management of AD. There are at
least two advantages to using a dedicated root domain in a larger implementation of AD.
• By keeping the user and printer objects out of the root domain, you enhance security by restricting access
to only a few administrators
• By keeping the root domain small, you can replicate it to other domain controllers on the network. This
approach helps increase the availability of the network.
Because domain administrators can access and change the contents of the Enterprise Administrators and Schema
Administrators groups, having a dedicated root domain limits normal access. Membership in these built-in groups
should only be given to the enterprise administrators, and they should only access the domain when doing official
maintenance. In addition, membership in the Domain Administrators group of the root domain should be granted
only to the enterprise administrators. Taking these steps allows you to avoid any accidental changes to the root
domain. You should also create a regular user account for each of your administrators so that they don’t carry
administrative privileges while doing regular work.
As I mentioned earlier in "Choosing a Forest Root Domain," always replicate the root domain to multiple servers
in an effort to provide fault tolerance for this domain. Because a dedicated root domain is small (no user or printer
objects), it can be replicated across the network more quickly and easily. In addition to replicating the root domain
across the local area network (LAN), you can replicate the root domain across the WAN to reduce the trust-traversal
traffic among trees.
Assigning a DNS Name to Each Domain
After you’ve determined the number of domains and installed the root domain, you need to determine the DNS
names for each domain. DNS is a globally recognized, industry-standard system for naming computers and network
services that are organized in a hierarchy. AD clients make queries to DNS in an attempt to locate and log on to
domains and domain controllers.
Network users are better at remembering name-based addresses, such as www.company.com, than they are at
remembering number-based addresses, such as 124.177.212.34. DNS translates an easy-to-remember name address
(www.company.com) into a number address (124.177.212.34).
As I’ve mentioned, the domain is identified by a DNS name. You use DNS to locate the physical domain controller
that holds the objects and attributes in the domain. DNS names are hierarchical (like AD domains). In fact, the
DNS name for a domain indicates the position of the domain in the hierarchy. For example, in the domain name
company.com, the DNS name tells us that the domain has to be at the top of the forest and is the root domain.
Another example is:
marketing.chicago.company.com
From this domain name, we know that the domain is the Marketing Department’s domain in the Chicago location
of the company. The domain is two levels from the root domain, or top of the tree. The Chicago domain is a child
domain of the root domain, or company, and the Marketing domain is a child domain under Chicago.
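The way a DNS name encodes a domain’s position in the tree can be sketched in a few lines. The helper below peels one label at a time off the name until only the root remains, recovering the chain from the Marketing domain up to the forest root; the names are the chapter’s examples.

```python
# A DNS domain name encodes the domain's position in the tree. Splitting
# marketing.chicago.company.com on dots gives the path from the domain up
# to the forest root (company.com in this example).

def domain_path(dns_name: str, root: str) -> list[str]:
    """Return the chain of domains from dns_name up to the root domain."""
    labels = dns_name.split(".")
    root_labels = root.split(".")
    chain = []
    # Peel off one leading label at a time until only the root remains.
    while len(labels) > len(root_labels):
        chain.append(".".join(labels))
        labels = labels[1:]
    chain.append(root)
    return chain

path = domain_path("marketing.chicago.company.com", "company.com")
print(path)
# ['marketing.chicago.company.com', 'chicago.company.com', 'company.com']
# len(path) - 1 == 2: the Marketing domain sits two levels below the root.
```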
When you create DNS names for the domains in AD, I recommend that you follow these guidelines:
• Full domain names ending in .com, .net, or .org cannot exceed 67 characters
• Relative domain names (that is, the components between the dots in a fully qualified domain name)
cannot exceed 22 characters (this doesn’t include any extensions)
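A small validator makes these guidelines easy to apply during planning. Note that the 67-character and 22-character limits are the chapter’s guidelines for AD naming, not limits from the DNS specification itself; the function simply reports which guideline a proposed name violates.

```python
# Validator for the naming guidelines above: a full .com/.net/.org name of
# at most 67 characters, and each relative component (label) of at most
# 22 characters. These limits are the chapter's guidelines, not DNS limits.

def check_ad_domain_name(fqdn: str) -> list[str]:
    """Return a list of guideline violations (empty list means the name passes)."""
    problems = []
    if fqdn.endswith((".com", ".net", ".org")) and len(fqdn) > 67:
        problems.append(f"full name is {len(fqdn)} chars (limit 67)")
    for label in fqdn.split("."):
        if len(label) > 22:
            problems.append(f"label '{label}' is {len(label)} chars (limit 22)")
    return problems

print(check_ad_domain_name("chicago.company.com"))                      # [] — passes
print(check_ad_domain_name("a-very-long-department-name.company.com"))  # label too long
```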
Figure 2.6: The first layer of domains directly under the root domain is named after the physical, or
WAN, locations on the network.
A domain structure named after the company’s internal organization isn’t nearly as fluid or adaptable as the
business itself. Once you create and name domains, you cannot move or rename them easily. In addition, you
cannot move or rename the root domain.
Using locations to name child domains is more flexible because physical locations on a network seldom change.
The organization at the specific site may change, but not the physical location itself. This design allows the tree to
be more flexible to everyday changes. However, if the physical location is changed or removed, the resources are
moved. This includes the physical resources, such as domain controllers, printers, and other equipment supporting
the site.
If your company is smaller and contained in one physical location, you could name domains after the company or
organization. These domains then hold all the objects and attributes for your company. This is an easy and efficient
design. However, if your company has multiple physical locations, with network resources spread across them,
you’ll want to create a second layer of domains (under the root domain) and give the domains location names. The
organizational structures of business units, divisions, and departments will then be placed under each of these
location domains.
Each domain partition contains all the information for its naming context.
In AD, the basic unit of partitioning is the domain. So when you create a new partition, you’re actually creating
a new domain, such as a child domain under the root domain. The domains in AD act as partitions in the
database. This means that each
domain represents a partition in the overall AD database. Partitioning this database increases its scalability. As you
partition AD, you break it into smaller, more manageable pieces that can be distributed across the domain
controllers, or network servers. Figure 2.7 illustrates how you can divide the AD database into smaller pieces that
can be distributed to the domain controllers.
Figure 2.7: You can partition the AD database into smaller pieces, then distribute them among network
servers or domain controllers.
Breaking AD into smaller pieces and distributing them among multiple servers places a smaller load and less
overhead on any one server. This approach also allows you to control the amount and path of traffic generated to
replicate changes among servers. Once you create a partition, replication occurs among servers that hold copies.
In AD, you can create many partitions at multiple levels in the forest. In addition, copies of the domain can be
distributed to many different servers on the network. Although AD is distributed using partitions, any user can
access the information completely transparently. Users can access the entire AD database regardless of which server
holds which data. Of course, users must have been granted the proper permissions.
Although a single domain controller may not contain the entire AD database, users can still receive whatever
information they request. AD queries the global catalog on behalf of a user to identify the requested object, then
resolves the name to a server (domain controller) address using DNS. Again, this process is entirely transparent to
the user.
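The two-step lookup described above can be sketched as a pair of table lookups. Both tables here are stand-ins for the real services (the global catalog and DNS), and the object name and addresses are invented for illustration.

```python
# Sketch of the two-step lookup: the global catalog identifies which domain
# holds the requested object, and DNS resolves that domain to a domain
# controller address. Both mappings below are illustrative stand-ins.

GLOBAL_CATALOG = {                  # object name -> owning domain
    "Jane Doe": "chicago.company.com",
}
DNS = {                             # domain -> domain controller address
    "chicago.company.com": "10.1.0.10",
}

def locate(object_name: str) -> str:
    domain = GLOBAL_CATALOG[object_name]  # step 1: which domain owns the object?
    return DNS[domain]                    # step 2: which DC serves that domain?

print(locate("Jane Doe"))  # 10.1.0.10 — the whole process is transparent to the user
```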
The availability of domain information is strictly determined by the availability of the domain controllers. It’s
obvious that the domain controllers must be available so that users can log on and access AD information. For this
purpose, never have only one domain controller for any domain. I recommend that you have at least two domain
controllers for each domain to provide redundancy and fault tolerance for every domain in your organization.
Determining Trust Relationships
Trust relationships are logical connections that combine two or more domains into one administrative unit. Trust
relationships allow permissions to be associated and passed from one domain to another. Without some sort of
trust among domains, users cannot communicate or share resources. In this section, I’ll describe the advantages of
using bi-directional trusts, one-way trusts, and cross-link trusts.
Figure 2.8: Each domain has a bi-directional transitive trust relationship between itself and each of its
child domains.
One of the advantages of these new trusts is that they’re automatically established among all domains; this allows
each domain to trust all the other domains in the forest. Another advantage is that these bi-directional trusts,
which are automatically established using Win2K’s Kerberos security mechanism, are much easier to set up and
administer than Windows NT–style trusts. Having bi-directional trusts also reduces the total number of trust
relationships needed in a tree or forest. For example, if you tried to accomplish the same thing in NT, you’d have
to create two one-way trusts between every pair of domains, so the total number of trusts would grow
quadratically with the number of domains.
If you have experience with Windows NT domains, you may know something of trust relationships. However, the
trusts in AD differ from NT trusts because they’re transitive. To help you understand what this means, I’ll provide
an example. Win2K transitive trusts work much like a transitive equation in mathematics. A basic mathematical
transitive equation reads as follows: if A equals B, and B equals C, then A equals C.
When applying this transitive concept to trust relationships, you get an understanding of how transitive trusts work
among domains. For example, if Domain A trusts Domain B, and Domain B trusts Domain C, Domain A trusts
Domain C. Figure 2.9 illustrates this idea. Transitive trust relationships have been set up between Domain A and
Domain B and between Domain B and Domain C. Thus, Domain A trusts Domain C implicitly.
Figure 2.9: A domain tree viewed in terms of its transitive trust relationships. Because transitive
trust relationships have been set up between Domain A and Domain B and between Domain B
and Domain C, Domain A trusts Domain C implicitly.
In Windows NT, trusts were non-transitive, so they didn’t allow this implicit trust to exist. For one domain to trust
another domain, an explicit trust relationship had to be created between them.
When domains are created in an AD forest, bi-directional trust relationships are automatically established. Because
the trust is transitive and bi-directional, no additional trust relationships are required. The result is that every domain
in the forest trusts every other domain. Transitive trusts greatly reduce your overhead and the need to manually
configure the trusts. Because trusts are automatically set up, users have access to all resources in the forest as long as
they have the proper permissions.
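The transitive relationship in Figure 2.9 can be sketched as a reachability computation: starting from the direct trusts AD establishes automatically, every domain reachable by following trust links is trusted. The domain names below are the figure’s A, B, and C.

```python
# Transitive trust sketch: from the direct trusts that AD creates
# automatically, compute every domain that a given domain ends up trusting.

from collections import deque

def trusted_domains(start: str, direct_trusts: dict) -> set:
    """All domains reachable from `start` by following trust links."""
    seen, queue = {start}, deque([start])
    while queue:
        d = queue.popleft()
        for peer in direct_trusts.get(d, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen - {start}

# Direct (bi-directional) trusts: A <-> B and B <-> C, as in Figure 2.9.
trusts = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B"},
}
print(trusted_domains("A", trusts))  # {'B', 'C'} — A trusts C implicitly
```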
Transitive trusts are a feature of the Kerberos authentication protocol. The protocol is used by AD and provides
distributed authentication and authorization. The parent-child relationship among domains is only a naming and
trust relationship. This means that the trust honors the authentication of the trusted domain. However, having all
administrative rights in a parent domain doesn’t automatically make you an administrator of a child domain.
Likewise, policies set in a parent domain don’t automatically apply to child domains just because a trust is in place.
Using One-Way Trusts
One-way trusts aren’t transitive and are used among domains that aren’t part of the same forest. If you’re familiar
with the one-way trusts in Windows NT, the one-way trusts in Win2K work the same way. However, they’re only
used in a handful of situations.
First, one-way trusts are often used when new trust relationships must be established among domains of different
forests. You can use them among domains to isolate permissions. For example, you can use one-way trusts to allow
access among forests and among the domains of the same tree. Figure 2.10 shows how you can create a one-way
trust between two domains in two different forests. Setting up a one-way trust allows users to access network
resources in the direction of the trust. The actual user rights depend on the access control lists (ACLs) governing
the domains.
Figure 2.10: A one-way trust is established between a domain in Forest 1 and a domain in Forest
2. The trust allows access to network resources in each domain.
The second use of one-way trusts is to create a relationship from an AD domain to backward-compatible domains,
such as Windows NT. Because Windows NT domains cannot naturally participate in AD transitive trusts, you
must establish a one-way trust to them. You have to manage one-way trusts manually, so try to limit the number
you use.
In both these situations, you can create two one-way trusts between the domains. However, two one-way trusts
don’t equal a bi-directional transitive trust in AD.
When a user needs to authenticate to a resource that doesn’t reside in the user’s own domain, the client first has
to locate that resource. If the resource isn’t in the local domain, the domain controller will
pass back a referral list of other domain controllers that might have the resource. The workstation then contacts the
appropriate servers in the referral list to find the resource. This process continues until the requested resource is
found. This process is often referred to as chasing referrals and can take time, especially on large or complex AD
networks.
Walking up and down the domain tree branches lengthens the time it takes to query each domain controller and
respond to the user. To speed this process up, you can establish a cross-link, or shortcut, trust relationship between
two domains. If you decide to use a cross-link trust, I recommend that you place it between the two domains that
are farthest from the root domain.
For example, suppose you have a domain tree that has domains 1, 2, 3, 4, and 5 in one branch and domains 1, A,
B, C, and D in another branch. Domains 5 and D are located farthest from the root domain. Let’s say that a user
in Domain 5 needs to access a resource in Domain D. To accomplish this request, the authentication process must
traverse up the first branch and down the second branch while talking to each domain controller. Continuous
authentications like this create a significant amount of network traffic. To alleviate this problem, you can establish a
cross-link between Domain 5 and Domain D. Figure 2.11 illustrates the layout of the domain tree with the cross-link
established between the two domains.
Figure 2.11: The domain tree has two branches: domains 1, 2, 3, 4, and 5 form one branch, and domains 1, A, B, C, and D
form the second branch. The cross-link trust is established between domains 5 and D.
The cross-link that has been established serves as an authentication bridge between the two domains. The result is
better authentication performance between the domains.
Designing Organizational Units for Each Domain
An OU is a container object that allows you to organize your objects and tie a Group Policy Object (GPO) to it.
Using the OU, you can group similar objects into logical structures in a domain. OUs can also be nested to build a
hierarchy in a domain. This hierarchy of containers is typically named after divisions, departments, and groups in
your company. When you’re designing and creating the hierarchical structure in each domain, it’s important to
understand the following characteristics of OUs:
Now that you understand a few of the basic characteristics for OUs, you need to consider the following guidelines
for designing an efficient and effective OU structure:
engineering.chicago.company.com
After you’ve created the new OU and placed all the objects and resources in it, you can grant explicit permissions
to the administrators of the Engineering Department so that they can control their own objects. Figure 2.12
illustrates how you can create the Engineering OU in the Chicago domain.
Figure 2.12: You can create the Engineering OU in the Chicago domain, then assign permissions
to the Engineering Department administrators to manage all the objects.
Another nice feature that I mentioned earlier (see "Designing Organizational Units for Each Domain") is that OUs
can be nested. This enables you to build a hierarchy in each domain. For example, let’s say the Testing Group in
the Engineering Department wants full administrative control over all its resources, such as users, printers, and
computers. To accommodate this request, you simply create a new OU directly under the Engineering OU in the
Chicago domain. The hierarchical structure now looks like the following:
testing.engineering.chicago.company.com
After you’ve created the new OU and placed the resources, you can give full privileges to the Testing group’s
administrator. If an OU is nested, it inherits the properties of the parent OU by default. For example, if the
Engineering OU has certain security or Group Policy objects set, they’re passed down to the Testing OU. The
Testing OU is considered nested under the Engineering OU.
Be careful to limit the number of OU layers you create. Creating too many layers can increase the administrative
overhead. Limiting the number of OU layers also increases user logon performance. When a user logs on to AD,
the security policies take effect. To find all these policies, the workstation must search all layers of the OU structure.
Having fewer OU layers allows the client to complete this search more quickly.
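The logon cost of deep nesting can be sketched as a walk down the OU path collecting policies at each layer. The container and GPO names below are invented for illustration; the point is simply that each extra layer adds a lookup.

```python
# Sketch of why deep OU nesting slows logon: policies must be collected
# from every container between the domain and the user's OU. Names are
# examples only.

def effective_policies(ou_path: list, gpo_links: dict) -> list:
    """Collect GPOs from the outermost container down to the user's OU."""
    policies = []
    for container in ou_path:          # one lookup per OU layer
        policies.extend(gpo_links.get(container, []))
    return policies

gpo_links = {
    "chicago.company.com": ["DomainBaseline"],
    "Engineering": ["EngDesktop"],
    "Testing": ["TestLab"],
}

# Deeper nesting -> longer path -> more lookups at logon.
print(effective_policies(["chicago.company.com", "Engineering", "Testing"], gpo_links))
# ['DomainBaseline', 'EngDesktop', 'TestLab']
```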
One of the many benefits of creating OUs in a domain is that they bring organization to the domain’s otherwise
flat layout. OUs allow you to
create an organization that reflects your company’s divisions, departments, and groups. In fact, you can use your
company’s organizational chart or a similar document to help you. Figure 2.13 illustrates how you can create OUs
based on an organizational chart.
For more information on Group Policy, I recommend that you check out another free eBook from
Realtime Publishers: The Definitive Guide to Windows 2000 Group Policy by Darren Mar-Elia, a link to
which can be found at http://www.realtimepublishers.com.
The ability to set Group Policy on OUs allows you to control a large set of users and computers from a central
point. If you have a special need for certain users and computers, you can create an OU and establish Group
Policy. For example, if the Accounting Department needs specific settings on its desktops, you can create an
OU=Accounting and establish the specific policy. Group Policy will then apply to every user and computer in the
new OU.
As I mentioned above, GPOs can be associated with OUs as well as the domain and site objects in AD. Because
GPOs can be associated with each of these objects, you can create implementations using GPOs to generate various
combinations. If you aren’t careful, these combinations can become very complicated and cause you headaches.
Users who don’t have the right to read an object can normally still see it in AD. This may be a problem if you have
highly secure network resources that you don’t want anyone else to see. You can restrict and hide the resources by
creating a new OU in the domain and limiting access to only the few who need it.
Designing sites and site links in AD takes advantage of the physical network layout. (For an explanation of site
links, see "Creating Sites and Site Links Based on Network Topology" below.) The basic assumption is that servers
and workstations with the same subnet address are connected to the same network segment and have LAN speeds.
Defining a site as a set of subnets allows administrators to easily configure AD access and replication topology to
take advantage of the physical network. Sites also help you locate network servers so that they’re physically close to
the users who depend on them.
It’s your role as an administrator to design the site objects and site links for your tree or forest to assure the best
network performance. It’s also your job to determine what connection speed assures this performance and reduces
server downtime resulting from network outages. Establish site objects and site links based on network and subnet speed.
While many subnets can belong to a single site, a single subnet can’t span multiple physical sites. To help you
establish a design for the sites in your forest, you need to consider the following guidelines:
About Sites
Sites are groups of computers that share high-speed connections over one or more TCP/IP subnets. Subnets are
groups of local network segments that are physically located in the same place. Multiple site objects create a site
topology. Figure 2.14 portrays a site with TCP/IP subnets that exist between the servers and workstations. A site
is always connected by a LAN.
Figure 2.14: A site is one or more TCP/IP subnets or LAN networks that exist between the servers and workstations.
One domain can span more than one site, and one site can contain multiple domains. However, for design
purposes, it’s important to remember that sites define how replication occurs among domain controllers and
which domain controller a user’s workstation contacts for initial authentication. Normally, the workstation first
tries to contact domain controllers in its site.
The site link object has four settings.
Cost
Helps the replication process determine the path of the communication among domain controllers
Replication Schedule
Determines what time of day the replication process can execute
Replication Interval
Helps the replication process determine how often to poll the domain controllers on the other side of the link
Transport
Helps the replication process determine which transport protocol to use during communications
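To make the cost setting concrete, here’s a short sketch (in Python, with hypothetical site names and link costs) of how a replication path can be chosen: among all routes between two sites, the one with the lowest total site link cost wins. The real KCC is considerably more sophisticated than this, so treat it only as an illustration of the principle.

```python
import heapq

def cheapest_path(links, src, dst):
    """Find the lowest-total-cost route between two sites.

    `links` is a list of (site_a, site_b, cost) tuples; site links
    are treated as bidirectional. Returns (total_cost, [site, ...])
    or None when no route exists.
    """
    graph = {}
    for a, b, cost in links:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))

    # Classic Dijkstra: keep popping the cheapest frontier site
    # until the destination site is reached.
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        total, site, path = heapq.heappop(queue)
        if site == dst:
            return total, path
        if site in seen:
            continue
        seen.add(site)
        for neighbor, cost in graph.get(site, []):
            if neighbor not in seen:
                heapq.heappush(queue, (total + cost, neighbor, path + [neighbor]))
    return None

# Hypothetical topology: a direct but expensive Chicago-Dallas link,
# and a cheaper two-hop route through Boston.
links = [("Chicago", "Dallas", 100), ("Chicago", "Boston", 50),
         ("Boston", "Dallas", 40)]
print(cheapest_path(links, "Chicago", "Dallas"))
# -> (90, ['Chicago', 'Boston', 'Dallas'])
```

Here the direct Chicago–Dallas link (cost 100) loses to the two-hop route through Boston (50 + 40 = 90), which is exactly the preference the cost setting is designed to express.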
Site and site link objects are stored in a special container called the configuration container. The configuration
container is stored and replicated to every AD domain controller, providing each server with complete details of
the physical network topology. A change to any of the information in the site or site link objects causes replication
to every domain controller in the forest.
Figure 2.15: The site topology is created from the site objects and the site links. The site topology helps the replication process
determine the path, cost, and protocol among domain controllers.
When you create the site topology, it’s useful to have a complete set of physical LAN and WAN maps. If your
company has campus networks at one or more locations, you’ll need to have the physical maps of those locations.
These maps should include all the physical connections, media or frame types, protocols, and speed of connections.
When defining the sites, begin by creating a site for every LAN or set of LANs that are connected by high-speed
bandwidth connections. If there are multiple physical locations, create a site for each location that has a LAN
subnet. For each site that you create, keep track of the IP subnets and addresses that comprise the site. You’ll need
this information when you add the site information to AD.
Site names are registered in DNS by the domain locator, so they must be legal DNS names. You must also
use Internet standard characters—letters, numbers, and hyphens. (For more information, see "Using
Internet Standard Characters" earlier in this chapter.)
After you’ve created the sites, you need to connect them with site links to truly reflect the physical connectivity of
your network. To do this, you need to first assign each site link a name. Site links are transitive, just like trust
relationships in Win2K. This means that if Site A is connected to Site B and Site B is connected to Site C, it’s
assumed that Site A can communicate with Site C.
The process of generating this site topology is automatic, and it’s handled by a special Win2K service called the
Knowledge Consistency Checker (KCC). If you don’t like the topology that the KCC generates for you, you can
create it manually.
The purpose of creating the site topology is to ensure rapid data communications among AD servers. The site
topology is used primarily when setting up replication of AD. However, the placement of the domain controllers
and partitions governs when and how replication takes place.
Your responsibility is to determine where to place the domain controllers on the network to best suit the needs of
the users. I recommend that they be located on or near the users’ subnet or site. When a workstation connects to
the network, it typically receives a TCP/IP address from DHCP. This TCP/IP address identifies the subnet or site
to which the workstation is attached. If the workstation has a statically assigned IP address, it’ll also have statically
configured subnet information.
In either case, when users log on to the network, their workstations can reach the closest domain controller site by
knowing the assigned address and subnet information. Because computers in the same site are physically close to
each other, communication among them is reliable and fast. Workstations can easily determine the local site at
logon because they already know what TCP/IP subnet they’re on, and subnets translate directly to AD sites.
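Because subnets translate directly to sites, the lookup a workstation performs is essentially “find the site whose subnet list contains my IP address.” The following Python sketch (the site names and subnets are made up) illustrates the idea:

```python
import ipaddress

# Hypothetical site definitions: each site is a set of IP subnets.
SITE_SUBNETS = {
    "Headquarters": ["10.1.0.0/16", "10.2.0.0/16"],
    "BranchOffice": ["192.168.5.0/24"],
}

def site_for_address(ip):
    """Return the site whose subnet contains `ip`, or None."""
    addr = ipaddress.ip_address(ip)
    for site, subnets in SITE_SUBNETS.items():
        if any(addr in ipaddress.ip_network(net) for net in subnets):
            return site
    return None

print(site_for_address("10.2.33.7"))     # falls in a Headquarters subnet
print(site_for_address("192.168.5.20"))  # falls in the BranchOffice subnet
```

Note that the mapping runs one way only, matching the rule stated above: many subnets can map to one site, but each subnet belongs to exactly one site.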
If no domain controller is available in the local site, user traffic will cross the WAN links and sites to find other
servers. To place the domain controller for best overall connectivity, select the site where the largest number of
users are located. All the users in that site will authenticate to the local domain controller. This approach guarantees
that the users will retrieve their object information from the global catalog partition. The location of the server is
important because users are required to access a global catalog server when they log on.
Using Sites to Determine the Placement of DNS Servers
I’ve already mentioned that DNS and AD are inseparably connected. AD uses DNS to locate the domain
controllers. The DNS service enables users’ workstations to find the IP addresses of the domain controllers. The
DNS server is the authoritative source for the locator records of the domains and domain controllers on the network.
To find a particular domain controller, the workstation queries DNS for the appropriate service (SRV) and address
(A) resource records. These records from DNS provide the names and IP addresses of the domain controller.
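For example, a workstation looking for a domain controller in the example.com domain queries for SRV records under the name _ldap._tcp.dc._msdcs.example.com. Each SRV record carries a priority, a weight, a port, and a target host. The sketch below (in Python, with invented records) shows the selection rule: the lowest priority wins, and weight breaks ties among equal-priority records. Real resolvers choose randomly among equal-priority records in proportion to weight; this simplified version just takes the heaviest one.

```python
# Hypothetical SRV records for _ldap._tcp.dc._msdcs.example.com,
# each expressed as (priority, weight, port, target host).
srv_records = [
    (0, 100, 389, "dc1.example.com"),
    (0, 50,  389, "dc2.example.com"),
    (10, 100, 389, "backupdc.example.com"),
]

def pick_domain_controller(records):
    """Select a target per SRV semantics: lowest priority first,
    then prefer the record with the largest weight."""
    lowest = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == lowest]
    priority, weight, port, host = max(candidates, key=lambda r: r[1])
    return host, port

print(pick_domain_controller(srv_records))
# -> ('dc1.example.com', 389); backupdc is only used if the
#    priority-0 records disappear.
```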
The availability of DNS directly affects the availability of AD and its servers. As mentioned, users rely on DNS as a
service. To guarantee DNS as a service, I recommend that you place or have available at least one DNS server for
every site on your network. This allows all users to access the DNS service locally. You don’t want users to have to
query DNS servers that are offsite to locate the domain controllers that are on their own subnet.
The AD domain controllers query DNS to find each other during replication. A new domain controller
participates in replication by registering its locator records with DNS. Likewise, each domain controller must
be able to look up these records. This is the case even if the domain controllers are on the same subnet.
If you depend on an outside DNS service, you may need to adjust the number of DNS servers and physical placement,
if possible. You’ll also need to verify that the outside DNS service supports the required SRV resource record. If it
doesn’t, you may need to install and configure your own implementation of Microsoft’s DNS to support AD.
If you don’t want to depend on an existing DNS service or a DNS service that is offsite, you may want to install
the Microsoft DNS service that is integrated into AD. The Microsoft DNS service stores the locator records for the
domain and domain controllers in AD. You can then have one or more domain controllers provide the DNS service.
Again, I recommend that you place at least one DNS server for each site object in your environment. Using the
Microsoft DNS service is an optional configuration, and storing the locator records in AD may have a negative
impact on replication traffic on large networks.
Summary
My first recommendation for troubleshooting AD was to make sure that its components are designed and
implemented correctly. In addition, the efficiency of AD depends on the design and implementation of key structures
— forests, trees, domains, and OUs. I also recommended that the sites and site links be properly established to
support the distribution and replication of the system. I also discussed the placement of other supporting servers,
such as domain controllers, global catalog servers, and DNS servers. The design and implementation of these
structures is strictly your responsibility as a network administrator. Before you can effectively troubleshoot AD, make
sure you feel confident about your design.
Chapter 3
When you consider how you’ll monitor your domain controllers, first remember that no one domain controller
contains all of the directory information. In any well-built Win2K network, each domain partition in the directory
typically has two or more domain controllers hosting the domain to provide fault tolerance for directory services.
With this kind of redundancy in place, you might initially be fooled into thinking that monitoring each domain
controller for performance and downtime isn’t all that important.
However, each domain controller plays a role in supporting your users. For example, if two domain controllers in
the same directory partition are placed in separate sites or subnets, users in each site will use the domain controller
nearest them. However, if one of the domain controllers goes down, users in that location must traverse the wide
area network (WAN) to log on and access the directory. This is usually undesirable, especially if there are too many
users and/or if the WAN link is slow.
Another reason you need to monitor domain controllers is that some domain controllers on a Win2K
network (no matter how many domain controllers it may have) are unique. For example, some Win2K domain
controllers perform special duties called Flexible Single-Master Operation (FSMO) roles. Although the replication
of AD is multimaster, the FSMO roles held by these domain controllers are single-master (much like a Windows
NT 4.0 primary domain controller, or PDC). This essentially means that these domain controllers don’t have
additional copies or replicas to provide fault tolerance if the domain controller hosting a particular role is down.
As I discussed in Chapter 1, these FSMO domain controllers perform special roles for AD, such as managing the
domain, managing the schema, and supporting down-level clients. If any of these critical domain controllers go
down, the directory loses functionality and can no longer update or extend the schema, or add or remove a domain
from the directory.
Failing to monitor domain controllers can adversely affect a network’s performance and availability. For example, if
an entire department is unable to access the domain controller or directory, users lose time, and the company loses
money. To help you ensure that your domain controllers are available, you can, and should, monitor and analyze
Win2K in the five following areas:
• Overall system
• Memory and cache
• Processor and thread
• Disk
• Network
I’ll discuss each of these areas, and the reason for their importance, in the following sections. I’ll discuss monitoring
AD itself in Chapter 4.
Monitoring domain controllers means watching for problems or bottlenecks in the OS and its subsystems. A simple
example of a bottleneck occurs when a domain controller’s processor is running at 100 percent usage because one
application has tied up the central processing unit (CPU). Almost every NT/Win2K administrator has seen this
occur at some point or another.
Win2K provides several utilities that can assist you in monitoring your domain controllers and their subsystems.
These tools provide features that will help you search for bottlenecks and other problems. They’re described below.
Task Manager
Gives you a quick view of which applications and processes are running on the domain controllers. This
utility allows you to view a summary of the overall CPU and memory usage for each of these processes and
threads.
Performance console
Allows you to view the current activity on the domain controller and select the performance information
that you want collected and logged. You can customize Win2K’s performance-counter features and architecture to allow applications to add their own metrics in the form of objects and counters, which you can then
monitor using the Performance console. By default, the Performance console has two applications, System
Monitor and Performance Logs and Alerts.
System Monitor enables you to monitor nearly every aspect of a domain controller’s performance and establish a baseline for the performance of your domain controllers. Using System Monitor, you can see the performance counters graphically logged and set alerts against them. The alerts will appear in Event Viewer.
Performance Logs and Alerts enables you to collect information for those times when you can’t detect a
problem in real time. It allows you to collect domain controller performance data for as long as you
want—days, weeks, or even months.
Event Viewer
Allows you to view the event logs that gather information about a domain controller and its subsystems.
There are three types of logs: the application log, the system log, and the security log. Although the event
logs start automatically when you start the domain controller, you must start Event Viewer manually.
When monitoring domain controllers using the Performance console’s logging feature, make sure you don’t
actually create a problem by filling the computer’s disk with large log files.
First, be sure to only include those statistics in the logging process that you absolutely need. Keep the
sampling period to the minimum required to evaluate domain controller performance and usage. To select
an appropriate interval for your computer, establish a baseline of performance and usage. Also, take into
account the amount of free disk space on your domain controller when you begin the logging process.
Finally, make sure that you have some application in place (such as the Performance console) that continually monitors the domain controller to ensure that it has plenty of free disk space.
In addition to monitoring the local domain controller, you can use the Performance console to monitor
domain controllers remotely and store the log files on a shared network drive. This enables you to monitor
all the domain controllers in a directory from one console or utility.
At the heart of the Performance console and Task Manager are the performance counters that are built into the
Win2K OS. I’ll introduce each of these monitoring utilities briefly in the upcoming sections and demonstrate how
they can help you monitor specific subsystems. Keep in mind that this chapter isn’t intended to be an in-depth
study of all the capabilities of these utilities. Instead, my intention is to provide a general introduction to them and
show you how you can use them to assist you in monitoring your domain controllers.
Figure 3.1: Win2K’s Task Manager allows you to view and manage the applications
and processes running on a domain controller and manage their performance.
Task Manager supplies three pages of information: Applications, Processes, and Performance. Each of these pages
will help you understand more about a domain controller’s processes and memory. I’ll discuss each of these screens
in greater detail later in this chapter.
In Windows NT, the Performance console was known as Performance Monitor, and like most NT administration
utilities, it was a standalone utility rather than an MMC snap-in.
The Performance console helps you accurately pinpoint many of the performance problems or bottlenecks in your
system. It monitors your Win2K domain controller by capturing the selected performance counters that relate to
the system hardware and software. The performance counters are programmed by the developer of the related system. The hardware-related counters typically monitor the number of times a device has been accessed. For example,
the physical disk counters indicate the number of physical disk reads or writes and how fast they were completed.
Software counters monitor activity related to application software running on the domain controller. To launch the
Performance console, choose Start>Programs>Administrative Tools>Performance.
The first application that starts in the Performance console is System Monitor. Using System Monitor, you can
view the current activity on the domain controller and select information to be collected and logged for analysis.
You can also measure the performance of your own domain controller as well as that of other domain controllers
on your network. System Monitor is shown in Figure 3.2.
Figure 3.2: The Performance console includes both System Monitor and Performance Logs and Alerts.
When it starts up, System Monitor isn’t monitoring any counters or performance indicators for the system. You
determine which counters System Monitor tracks and displays. To add a counter, click the Plus (+) tool on the
toolbar or right-click anywhere in the System Monitor display area and choose Add Counters from the shortcut
menu. Using either approach, the Add Counters dialog box appears, as shown in Figure 3.3, where you can choose
which counters to monitor.
Figure 3.3: In System Monitor, you can choose which counters you want to
track and monitor on the display.
Once you choose the counter that you want to view, System Monitor tracks its performance in real time. When you
first start using System Monitor, the number of counters that are available seems overwhelming because there are
counters for almost every aspect of the computer. However, in the spirit of the age-old 80/20 rule, you’ll probably
find that you tend to use about 20 percent of the available counters 80 percent of the time (or more), using the
other counters only when you need specific monitoring or troubleshooting information.
If you don’t understand the meaning of a particular Performance console counter, highlight it and click Explain.
The informational dialog box that appears provides a description of the selected counter (and, in some cases,
what the various values or ranges might indicate).
In later sections of this chapter, I’ll discuss how you can use System Monitor to monitor memory, view processes,
and monitor network components on a domain controller as well as monitor the disk subsystem.
Event Viewer
As with its NT predecessor, Win2K uses an event logging system to track the activity of each computer and its
subsystems. The events that are logged by the system are predetermined and tracked by the OS. In addition,
Win2K provides Event Viewer, which allows you to view the events that have been logged.
In addition to stop errors, domain controller restarts are also recorded in the system log section of the Event Log.
The reasons for a restart could include OS crashes, OS upgrades, and hardware maintenance.
Another type of event that a domain controller tracks in the Event Log is application crashes. Win2K uses the Dr.
Watson utility (Drwtsn32.exe) to record problems and failures in applications running on the domain controller.
Failures are recorded in the application log section of the Event Log. Again, you can use Event Viewer and the
information in the Event Log to analyze problems with an application.
Application Log
Contains events logged by applications or programs such as Exchange or Microsoft Internet Information
Server (IIS) that are running on the computer. The developer of an application decides which events to
record.
System Log
Contains events logged by the subsystems and components of the domain controller. For example, if a disk
driver has problems or fails, it records the events in the system log. You can use this log to determine the
general availability and uptime of the domain controller.
Security Log
Records security events, such as when a user successfully logs on or attempts to log on. This log also records
events that relate to file access. For example, an event is recorded when a file is created, opened, or deleted.
By default, the security log can only be seen by system administrators.
Figure 3.4: The startup screen or display for Event Viewer.
Only a user with administrative privileges can view the security log. Regular users can only view the application
and system logs.
Error
Signifies that a severe problem has occurred, meaning that data or functionality was lost. For
example, if a service fails to load during startup or stops abruptly, an error is logged.
Warning
Less significant than an error and indicates that a problem could occur in the future. For example, a warning is logged if disk space becomes too low.
Information
Describes important situations that need noting. This event is typically used to notify when an operation is
successful—for example, a disk driver loaded successfully and without errors.
Success audit
Logs successful access to a secured system resource such as a file or directory object. A success audit event is a successful security-access attempt. For example, if a user attempts to log on to the system and is successful, a success audit event is logged.
Failure audit
Is the opposite of the success audit event. For example, if a user attempts to log on to the system or access a
secured resource and fails, a failure audit is logged.
In addition to selecting the sort order for events, you can filter them. Filtering events allows you to select and view
only the events that you want to analyze. To set a filter for events in Event Viewer, choose View>Filter Events.
Figure 3.5 illustrates the dialog box that appears to help you specify the filter characteristics.
Figure 3.5: Events can be filtered in Event Viewer to restrict the list of events that are displayed.
Exporting Events
In addition to sorting and filtering events in Event Viewer, you can export events in a variety of formats to use with
applications such as Microsoft Excel. To export events, choose Action>Export List. When the Save As dialog box
appears (shown in Figure 3.6), you can type a file name with the .xls extension, or choose a file type, such as Text
(Comma Delimited) (*.csv).
Figure 3.6: The events in Event Viewer can be exported for use with various applications.
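Once exported, the event list is just a text file, so you can process it with any scripting tool. Here’s a small Python sketch that filters an exported comma-delimited list down to its Error events. The sample rows and the column order (Type, Date, Time, Source, Event ID) are assumptions—check the header row of your own export before relying on them.

```python
import csv
import io

# A sample export in Text (Comma Delimited) form. In practice you
# would open the .csv file Event Viewer saved instead of this buffer.
export = io.StringIO(
    "Error,07/14/2000,09:12:44,Disk,51\n"
    "Information,07/14/2000,09:13:02,eventlog,6005\n"
    "Error,07/14/2000,10:41:19,W32Time,62\n"
)

# Keep only the rows whose Type column reads "Error".
errors = [row for row in csv.reader(export) if row[0] == "Error"]
for row in errors:
    print(row[3], row[4])   # print the source and event ID of each error
```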
Before I give you the details of monitoring your domain controller’s memory, I’ll first briefly introduce the memory
model for Win2K. Win2K provides a page-based virtual memory management scheme (implemented by the Virtual
Memory Manager, or VMM) that allows applications to address 4 gigabytes (GB) of memory. Win2K accomplishes
this by implementing virtual addresses. Each application is able to reference a physical chunk
of memory, at a specific virtual address, throughout its life. VMM takes care of whether the memory should be
moved to a new location or swapped to disk, completely independently of the application.
Because everything in the system is realized using pages of physical memory, it’s easy to see that pages of memory
become scarce rather quickly. VMM uses the hard disk to store unneeded pages of memory in one or more files called
paging files. Paging files hold pages of data that aren’t currently being used but may be needed again
at any time. By swapping pages to and from paging files, VMM is able to make pages of memory available to
applications on demand and provide much more virtual memory than the available physical memory.
One of the first monitoring or troubleshooting tasks you’ll carry out is to verify that your domain controller has
enough physical memory. Table 3.1 shows the minimum memory requirements for a Win2K domain controller.
Minimum installation: 64 MB
Server running a basic set of services: 128 MB
Server running an expanded set of services: 512 MB
Table 3.1: Minimum memory requirements for a Win2K domain controller.
These recommendations are minimum physical memory requirements. Your physical memory requirements for
actual production servers will typically be much higher. Because your Win2K domain controllers will at least be
running AD, I recommend that you always start with at least 512 megabytes (MB) RAM. If you want to load other
applications that come with their own memory requirements, you’ll need to add memory to support them. If there
isn’t enough memory on your domain controller, it will start running slower as it pages information to and from its
hard drive. When physical memory becomes full and an application needs access to information not currently in
memory, VMM moves some pages from physical memory to a storage area on the hard drive called a paging file.
As the domain controller pages information to and from the paging file, the application must wait. The wait occurs
because the hard drive is significantly slower than physical RAM. This paging also slows down other system activities
such as CPU and disk operations. As I mentioned earlier, problems caused by lack of memory often appear to be
problems in other parts of the system. To maximize the performance and availability of your domain controller servers,
it’s important for you to understand and, wherever possible, reduce or eliminate the performance overhead
associated with paging operations.
Fortunately, there are a couple of utilities that you can use to track memory usage. Two of the most common are
utilities I’ve already introduced: Task Manager and the Performance console.
Figure 3.7: The Performance page of Win2K’s Task Manager allows you to view a
domain controller’s memory usage.
The Performance page in Task Manager contains eight informational panes. The first two are CPU Usage and
CPU Usage History. These two panes and the Totals pane all deal with usage on the CPU, or processor. The
remaining panes can be used to analyze the memory usage for the domain controller and include the following:
MEM Usage
A bar graph that shows the amount of virtual memory your domain controller is currently using. This pane
is one of the most useful because it can indicate when VMM is paging memory too often and thrashing.
Thrashing occurs when the OS spends more time managing virtual memory than it does executing application
code. If this situation arises, you need to increase the amount of memory on the system to improve
performance.
Physical Memory
Tells you the total amount of RAM in kilobytes (K) that has been installed on your domain controller. This
pane also shows the amount of memory that is available for processes and the amount of memory used for
system cache. The amount of available memory will never go to zero because the OS will swap data to the
hard drive as the memory fills up. The system cache is the amount of memory used for file cache on the
domain controller.
Commit Charge
Shows three numbers, which all deal with virtual memory on the domain controller: Total, Limit, and
Peak. The numbers are shown in kilobytes (K). Total shows the current amount of virtual memory in use.
(You’ll notice that this number matches the number shown in MEM Usage.) Limit is the maximum possible
size of virtual memory. (This is also referred to as the paging limit.) Peak is the highest amount of
memory that has been used since the domain controller was started.
Kernel Memory
Shows you the total amount of paged and non-paged memory, in kilobytes, used by the kernel of the OS.
The kernel provides core OS services such as memory management and task scheduling.
I mentioned that you can easily and quickly check the memory usage on your domain controller using Task
Manager. Task Manager allows you to see the amount of virtual memory in use.
You can use any of these three counters to understand your domain controller’s memory commitment. I recommend
that you reserve at least 20 percent of available memory for peak use.
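The 20 percent rule is easy to turn into a quick arithmetic check against the Commit Charge Total and Limit figures. The sketch below (Python, with hypothetical numbers in K) flags a server whose commit charge leaves less than 20 percent of the limit free:

```python
def commit_headroom_ok(total_kb, limit_kb, reserve=0.20):
    """Check the rule of thumb that at least `reserve` (20 percent by
    default) of the commit limit remains free for peak demand."""
    free_fraction = (limit_kb - total_kb) / limit_kb
    return free_fraction >= reserve

# Hypothetical Commit Charge figures from Task Manager, in K.
print(commit_headroom_ok(total_kb=310_000, limit_kb=633_000))  # ample headroom
print(commit_headroom_ok(total_kb=580_000, limit_kb=633_000))  # under 20% free
```

In the second case the server is running within about 8 percent of its paging limit, so a burst of activity could exhaust virtual memory.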
To view one or all of the available memory counters, either click the Plus (+) tool on the toolbar or right-click anywhere in the display area and choose Add Counters from the shortcut menu. Once the Add Counters dialog box
appears, choose Performance Object>Memory, then choose one of the available memory counters. Figure 3.8 shows
the Available Bytes counter of the memory Performance Object.
Figure 3.8: Using the Available Bytes memory counter to monitor or track how much memory is left for
users or applications.
The Available Bytes counter shows the amount of physical memory available to processes running on the domain
controller. This counter displays the last observed value only; it isn’t an average. It’s calculated by summing space
on three memory lists.
Free
Memory that is ready or available for use.
Zeroed
Pages of memory filled with zeros to prevent later processes from seeing data used by a previous process.
Standby
Memory removed from the working set of a process and en route to disk, but still available to be recalled.
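In other words, Available Bytes is simply the space on these three lists added together. A quick worked example, with invented page counts and the 4,096-byte x86 page size:

```python
# Hypothetical page counts on each of the three memory lists.
PAGE_SIZE = 4096        # x86 page size in bytes
free_pages, zeroed_pages, standby_pages = 1200, 300, 2500

# Available Bytes = (free + zeroed + standby) pages, in bytes.
available_bytes = (free_pages + zeroed_pages + standby_pages) * PAGE_SIZE
print(available_bytes)  # 4000 pages -> 16,384,000 bytes
```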
If Available Bytes is constantly decreasing over a period of time and no new applications are loaded, it indicates
that the amount of working memory is growing, or it could signal a memory leak in one or more of the running
applications. A memory leak is a situation where applications or processes consume memory but don’t release it
properly. To determine the culprit, monitor each application or process individually to see if the amount of memory
it uses constantly increases. Whichever application or process constantly increases memory without decreasing it is
probably the culprit.
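You can automate the “constantly increases without decreasing” test by sampling a process’s memory use over time and checking that every sample is higher than the last. A minimal Python sketch, with invented hourly working-set samples in KB:

```python
def looks_like_leak(samples):
    """Flag a process whose memory use rises in every consecutive
    sample -- the 'constantly increases without decreasing' pattern."""
    return all(later > earlier for earlier, later in zip(samples, samples[1:]))

# Hypothetical working-set samples (KB) taken once per hour.
print(looks_like_leak([10_240, 11_050, 11_900, 12_760]))  # steady growth: suspect
print(looks_like_leak([10_240, 11_050, 10_300, 10_900]))  # normal churn: fine
```

A real investigation would sample over days rather than hours, since well-behaved applications also grow temporarily under load.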
Page-Fault Counters
When a process or thread requests data on a page in memory that is no longer there, a domain controller issues a
page fault. Here, the page has typically been moved out of memory to provide memory for other processes. If the
requested page is in another part of memory, the page fault is a soft page fault. However, if the page has to be
retrieved from disk, a hard page fault has occurred. Most domain controllers can handle large numbers of soft page
faults, but hard page faults can cause significant delays.
Page-fault counters help you determine the impact of virtual memory and page faults on a domain controller.
These counters can be important performance indicators because they measure how VMM handles memory.
Figure 3.9 illustrates how you can use System Monitor to track page-fault counters.
Figure 3.9: The Page Faults/sec, Page Reads/sec, and Pages Input/sec counters determine the impact of
virtual memory and paging.
If the numbers recorded by these counters are low, your domain controller is responding quickly to memory requests.
However, if the numbers are high and remain consistently high, it’s time to add more RAM to the domain controller.
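“Consistently high” is worth pinning down. One simple approach is to sample a hard-fault counter such as Page Reads/sec over an interval and ask what fraction of the samples exceed a threshold. Both the threshold and the fraction in this sketch are assumptions that you should tune against your own baseline:

```python
def needs_more_ram(hard_fault_samples, threshold=20, fraction=0.8):
    """Return True when at least `fraction` of the sampled hard-fault
    rates exceed `threshold` faults per second -- i.e., the counter
    isn't just spiking but staying high."""
    high = sum(1 for sample in hard_fault_samples if sample > threshold)
    return high / len(hard_fault_samples) >= fraction

print(needs_more_ram([35, 42, 28, 31, 5]))   # high in 4 of 5 samples
print(needs_more_ram([3, 2, 55, 4, 1]))      # one brief spike only
```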
Paging File Usage
Another important set of counters helps you determine the size of virtual memory. These counters are related to
paging file usage. Before I discuss how you can effectively use these counters, it’s important that you better understand
the paging file and its function. The paging file is an area on the domain controller’s hard drive that the OS
uses to swap memory out of RAM. As the domain controller loads more applications than it can hold in actual
memory, it pages some memory to the hard drive to create room for the new applications.
You can see how much the paging file is being used by watching two counters under the Paging File object.
% Usage
Indicates the current usage value that was last recorded.
% Usage Peak
Indicates the high-water mark for the paging file.
If a domain controller were perfect, the OS would have enough memory for every application that was loaded and
would never page memory out. Both the % Usage counter and the % Usage Peak counter would be at zero. At the
opposite extreme, the domain controller is paging memory as fast as possible, and the usage counters are high. An
example of a bad situation is one in which your domain controller has 128MB of memory, the % Usage Peak
counter is at 80 percent, and the % Usage counter is above 70 percent. In this situation, it’s fairly certain that your
domain controller will be performing poorly.
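The thresholds from the example above can be captured in a quick sketch; the 70 and 80 percent cutoffs come from the scenario in the text and should be treated as rough rules of thumb, not hard limits:

```python
def paging_pressure(usage_pct, peak_pct):
    """Classify paging-file pressure from the % Usage and % Usage Peak counters.

    The 70 and 80 percent cutoffs mirror the bad-situation example in the
    text; treat them as rules of thumb, not hard limits.
    """
    if usage_pct >= 70 and peak_pct >= 80:
        return "poor: sustained heavy paging, consider adding RAM"
    if peak_pct >= 80:
        return "watch: paging spikes under load"
    return "ok"

print(paging_pressure(72, 80))  # the bad situation from the text
print(paging_pressure(10, 35))  # a lightly loaded machine
```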
By default, Win2K automatically creates a paging file on the system drive during installation. Win2K bases the size
of the paging file on the amount of physical memory present on the domain controller (in most cases, it’s between
768MB and 1536MB). In addition to this paging file, I recommend that you create a paging file on each logical
drive in the domain controller. In fact, I recommend that you stripe the paging file across multiple physical hard
drives, if possible. Striping the paging file improves the performance of both the file and virtual memory because
disk access can occur on multiple drives simultaneously.
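As a rough sketch of the default sizing rule (about 1.5 times physical RAM, which matches the 768MB-1536MB range quoted above for machines with 512MB to 1GB of RAM; the exact formula Win2K applies can vary, so treat this as an approximation):

```python
def default_pagefile_mb(ram_mb):
    """Approximate Win2K's default paging-file size: about 1.5x physical RAM."""
    return int(ram_mb * 1.5)

print(default_pagefile_mb(512))   # a 512MB machine gets roughly 768MB
print(default_pagefile_mb(1024))  # a 1GB machine gets roughly 1536MB
```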
The recommendation for using disk striping on the paging file works best with Small Computer System Interface
(SCSI) drives rather than those based on Integrated Device Electronics (IDE) interfaces. This is because SCSI
handles multiple device contention more efficiently than IDE and tends to use less CPU power in the process.
Also, I don’t recommend that you spread the paging file across multiple logical drive volumes (partitions) located
on the same physical drive. This won’t generally aid paging file performance—and it may actually hinder it.
To change or set the virtual memory setting on your domain controller, right-click My Computer, then choose
Properties from the shortcut menu. In the System Properties dialog box, click the Advanced tab, then click
Performance Options. Notice that the Performance Options dialog box allows you to see the current setting for
Virtual Memory. Next, click Change to display more information and to change the paging file settings.
Changing the paging file size or location is unfortunately one of those rare setting changes in Win2K that
requires you to restart the domain controller before the change takes effect. So, if you decide to change any
settings for the paging file, do it during a scheduled maintenance time when it’s safe to take the domain
controller down and doing so won’t affect your users.
System Cache
In addition to tracking the amount of memory and virtual memory in the domain controller, you need to keep an
eye on the computer’s system cache settings. The system cache is an area of memory dedicated to files and
applications that have recently been accessed on the domain controller. The system cache is also used to speed both file system
and network input/output (I/O). For example, when a user program requests a page of a file or application, the
domain controller first looks to see if it’s in memory (system cache). That’s because a page in cache responds more
quickly to user requests. If the requested information isn’t in cache, the OS fulfills the user request by reading the
file page from disk.
If the system cache isn’t large enough, bottlenecks will occur on your domain controller. The Cache object in
System Monitor and its counters help you understand caching in Win2K. In addition, several counters under the
Memory object help you determine the amount of file cache. Two of the counters that best illustrate how the file
cache is responding to requests are described below.
When you’re considering using the Copy Read Hits % counter to assess file-cache performance, you might
also consider tracking the Copy Reads/sec counter, which measures the total number of Copy Read operations
per second. By assessing these numbers together, you’ll have a better sense of the significance of the data
provided by the Copy Read Hits % counter. For example, if Copy Read Hits % were to spike momentarily without a
corresponding jump (or perhaps even with a decrease) in the overall Copy Reads/sec figure, the data might not mean much.
Ideally, you can identify a cache bottleneck when there is a steady decrease in the Copy Read Hits % counter
with a relatively flat Copy Reads/sec figure. A steady increase in both counters, or an increase in Copy Read
Hits % and a relatively flat Copy Reads/sec, indicates good file cache performance.
Thus, the Copy Read Hits % counter records the percentage of successful file-system cache hits, and the Cache
Faults/sec counter tracks the number of file-system cache misses. Figure 3.10 shows these counters in System
Monitor. Remember that one of the counters is a percentage and the other is a raw number, so they won’t exactly
mirror each other.
Figure 3.10: The Copy Read Hits % and the Cache Faults/sec counters show how the domain
controller’s cache is responding.
Generally speaking, I recommend that a domain controller have at least an 80 percent cache hit rate over time. If
these two counters show that your domain controller has a low percentage of cache hits and a high number of
cache faults (misses), you may want to increase the total amount of RAM. Increasing the RAM allows the domain
controller to allocate more memory for system cache and should increase the cache hit rate.
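The 80 percent rule can be checked with simple arithmetic on the raw hit and miss counts; the counter values below are hypothetical:

```python
def cache_hit_rate(hits, misses):
    """Percentage of requests satisfied from the file-system cache."""
    total = hits + misses
    return 100.0 * hits / total if total else 100.0

# Hypothetical samples: 9,200 cache hits against 1,400 cache faults.
rate = cache_hit_rate(hits=9200, misses=1400)
print(round(rate, 1), "percent")          # -> 86.8 percent
print("ok" if rate >= 80 else "add RAM")  # -> ok
```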
A process is an executing instance of a program that follows a sequence of steps. Each process consumes cycles on
the domain controller’s processor as it runs. A thread is the part of a process that is being executed at any given
time. Thus, a process must contain at least one thread before it can perform an operation. A single process executing
more than one thread is referred to as multithreaded. Win2K is a multithreaded OS that is capable of running
multiple threads simultaneously—even across multiple CPUs, when they’re present.
When an application is developed, the developer determines the number of threads each process will use. In a
single-threaded process, only one thread is executed at one time. In a multithreaded process, more than one thread
can be executed concurrently. Being multithreaded allows a process to accomplish many tasks at the same time and
avoid unnecessary delay caused by thread wait time. To change threads, the OS uses a process called context
switching, which interrupts one thread, saves its information, then loads and runs another thread.
In addition to the multithreaded and multitasking approach to handling processes and threads, Win2K allows pri-
orities to be assigned to each process and thread. The kernel of the Win2K OS controls access to the processor
using priority levels.
Figure 3.11: The Process Viewer utility allows you to view the processes and threads
running on your domain controller.
Using this utility, you can view the name of each process, the amount of time it’s been running, the memory allocated
to it, and its priority. You can also view each thread that makes up a selected process. For each thread, you can see
how long it’s been running, its priority, context switches, and starting memory address.
In addition to the information you see on the main screen, you can display the memory details for a process. Figure
3.12 illustrates the Memory Details dialog box that is shown when you select a process, then click Memory Details.
Figure 3.12: Memory details for each process are displayed by clicking Memory Details in
Process Viewer’s main window.
When using Process Viewer, you can stop or kill a process that is running on a domain controller by selecting
it and clicking Kill Process. However, be sure you understand the function and impact of killing a process
before doing so—it may be vital to your domain controller’s functionality. Worse yet, by killing a process, you
can irrecoverably lose or corrupt the data.
Figure 3.13: Win2K’s Task Manager allows you to view and manage processes that are
currently running on the system.
This view provides a list of the processes that are running—their names, their process identifiers (PIDs), the per-
centage of CPU processing they’re taking up, the amount of CPU time they’re using, and the amount of memory
they’re using. Notice the System Idle Process, which always seems to be toward the top of the process list. This is a
special process that runs when the domain controller isn’t doing anything else. You can use the System Idle Process
to tell how lightly the CPU is loaded because it’s the exact complement of the CPU Usage value on the Performance tab.
For example, if the CPU Usage value is 5, the System Idle Process value will be 95. A high value for the System
Idle Process means that the domain controller isn’t heavily loaded, at least at the moment you checked.
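The complement relationship between the two values is trivial to express:

```python
def system_idle(cpu_usage_pct):
    """System Idle Process time is the complement of the CPU Usage value."""
    return 100 - cpu_usage_pct

print(system_idle(5))   # -> 95 (lightly loaded, matching the text's example)
print(system_idle(85))  # -> 15 (heavily loaded)
```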
Task Manager also allows you to customize the columns so that you can display additional information about the
processes—as many as 23 parameters. To customize the columns, on the Processes page,
choose View>Select Columns. As shown in Figure 3.14, notice that many additional columns of information can
be displayed. These additional columns will help you monitor and tune each process more completely. For more
information about each of the additional columns, refer to the Help menu in Task Manager.
Figure 3.14: The Select Columns dialog box in Task Manager allows you to monitor additional
important statistics about the processes that are running on your domain controller.
In addition, you can see which of these processes belong to an application. To do so, click the Applications tab,
right-click one of the applications on the Applications page, then click Go To Process. This will take you to the
associated application’s process on the Process tab. This feature helps you associate applications with their processes.
As you may know, highlighting a process in Task Manager, then clicking End Process, stops that process from
running. This is a useful feature because it allows you to stop processes that don’t provide any other means
of being stopped. However, I recommend that you only use this method as a last resort because the process
stops immediately and doesn’t have a chance to clean up its resources. Using this method to stop processes
may leave domain controller resources unusable until you restart. It may also cause data to be lost or corrupted.
Figure 3.15: The Computer Management utility allows you to view the processes that are running on your domain
controller as well as the path and file name information associated with each process.
The % Processor Time counter is a primary indicator of processor activity. This counter is calculated by measuring
the time that the processor spends executing the idle process, then subtracting that value from 100 percent. It can
be viewed as the percentage of useful work that the processor executes. To view the % Processor Time counter, you
use System Monitor. Figure 3.16 shows the % Processor Time counter in System Monitor.
Figure 3.16: The % Processor Time counter gives you the ability to view the amount of time that the
processor is doing real work.
If the % Processor Time counter is consistently high, there may be a bottleneck on the CPU. I recommend that
this counter consistently stay below 85 percent. If it pushes above that, you need to find the process that is using a
high percentage of the processor. If there is no obvious CPU "hog," you may want to consider adding another
processor to the domain controller or reducing that domain controller’s workload. Reducing the workload might
involve stopping services, moving databases, removing directory services, and so on.
The Interrupts/sec count doesn’t include deferred procedure calls; they’re counted separately. Instead, this counter
tracks the activity of hardware devices that generate interrupts, such as the system clock, mouse, keyboard, disk
drivers, network interface cards, and other peripheral devices. (For example, the system clock interrupts the CPU
every 10 milliseconds.) When an interrupt occurs, it suspends the normal thread execution until the CPU has
serviced the interrupt.
During normal operation of the domain controller, there will be hundreds or thousands of interrupts per second.
System Monitor scales this counter down by a factor of 100 for display. This means that if the domain controller
handles 560 interrupts in one second, the value is shown as 5.6 on the graph. Figure 3.17 displays the Interrupts/sec
counter using System Monitor.
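The display scaling works out as follows (a sketch of the arithmetic only, not of System Monitor's actual code):

```python
def graph_value(interrupts_per_sec):
    """System Monitor divides Interrupts/sec by 100 for the graph display."""
    return interrupts_per_sec / 100

print(graph_value(560))  # -> 5.6, matching the example in the text
```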
Figure 3.17: The Interrupts/sec counter allows you to view the impact the hardware I/O devices have on
the performance of the domain controller.
In System Monitor, you can make changes to the graph display. To do this, right-click anywhere on the graph, then
choose Properties from the shortcut menu. The System Monitor Properties dialog box appears, containing several
tabs to change the display and effect of the graph and data. For example, if you want to change the graph scale,
click the Graph tab and change the Vertical Scale parameters. To confirm the change, click Apply, then OK.
Unfortunately, it’s difficult to suggest a definite threshold for this counter because this number depends on the
particular processor type in use and the exact role and use of the domain controller. I therefore recommend that
you establish your own baseline for this counter and use it as a comparison over time. This will help you know
when a hardware problem occurs. For example, a network interface card installed in the domain controller could
go bad and cause an excessive number of hardware interrupts. By having an established baseline, you can quickly
identify that there is a problem.
Figure 3.18: The Processor Queue Length counter indicates how congested the processor is.
I recommend that your domain controller not have a sustained Processor Queue Length of greater than two
threads. If the number of threads goes above two, performance slows down, as does responsiveness to the users. The
domain controller shown in the figure could be in trouble, especially if this type of activity is sustained. There are
several ways to alleviate the slowdown. You can replace the CPU with a faster processor, add more processors, or
reduce the workload. In some situations, the Processor Queue Length counter will increase because the system is
paging heavily, so adding RAM may be what’s actually needed. To determine if you need more RAM,
monitor the paging counters.
By nature, a hard drive offers massive capacity at low cost. The disk subsystem can contain hundreds of GB, storing
millions or billions of files. Memory, in contrast, is relatively small and expensive. Therefore, the architects of Win2K designed the
virtual memory system to store pieces of memory on the hard drive, thereby allowing more room for users and
applications. However, as I discussed earlier (see "Paging File Usage"), you pay a performance price for paging.
Using the Performance Console to Monitor the Disk Subsystem
Because system performance depends so heavily on the disk subsystem, it’s important that you understand how to
monitor it. To properly monitor the disk subsystem, you need to monitor disk usage and response time, which
includes the number of actual reads and writes plus the speed with which the disk accomplishes each request. The
primary utility you use to monitor these attributes is System Monitor in the Performance console. Using System
Monitor, you can view key counters that apply to physical device usage and the logical volumes on the drives.
Using these counters, you can monitor the physical activity of the hard drives in each computer. Figure 3.19
illustrates the % Disk Time and % Idle Time counters in System Monitor.
Figure 3.19: The % Disk Time counter allows you to view how busy a physical disk drive is, and the % Idle
Time counter tracks the percentage of time a drive is idle.
The figure shows that as you might expect, % Disk Time and % Idle Time basically mirror each other. I recommend
that if the value for % Disk Time is consistently above 70 percent, you consider reorganizing the domain controller
to reduce the load. However, if the domain controller is a database server, the threshold can go as high as 90 percent.
The threshold value depends on the type of server that has been implemented and what has caused the disk I/O.
For example, if VMM is paging heavily, it can drive up the % Disk Time counter. The simplest solution here is to
add memory.
Disk Reads/sec and Disk Writes/sec Counters
In addition to the percentage of time the disk is busy, you can also see what the disk is doing. You can monitor this
using two counters under the PhysicalDisk object in System Monitor.
Normally, a domain controller will perform at least twice as many read operations as write operations; it can also
service a read request at least twice as fast. This is because a write request has to write the data, then verify
that it was written. You can see the Disk Reads/sec and Disk Writes/sec counters in System Monitor, as shown in
Figure 3.20.
Figure 3.20: The Disk Reads/sec and the Disk Writes/sec counters show how the domain controller is
handling the disk requests that come to it.
Using these counters, watch for spikes in the number of disk reads when your domain controller is busy. If you
have the appropriate amount of memory on your domain controller, most read requests will be serviced from the
system cache instead of hitting the disk drive and causing disk reads. You want at least an 80 percent cache hit rate;
this means that only 20 percent of read requests are forced to the disk. This is valid unless you have an application
that reads a lot of varying data at the same time—for example, a database server is by nature disk-intensive and
reads varying data. Obtaining a high number of cache hits with a database server may not be possible.
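A quick sketch of the arithmetic: with the recommended 80 percent hit rate, only one read request in five falls through to the disk. The request rate below is hypothetical:

```python
def disk_reads_expected(read_requests_per_sec, cache_hit_pct):
    """Estimate how many read requests per second fall through to the disk."""
    return read_requests_per_sec * (100 - cache_hit_pct) / 100

# At 500 read requests/sec with an 80 percent cache hit rate,
# only 20 percent of requests should generate physical disk reads.
print(disk_reads_expected(500, 80))  # -> 100.0
```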
Current Disk Queue Length Counter
The Current Disk Queue Length counter represents the number of requests outstanding on the disk at any one
time. The disk has a queue, or list, that can hold the read and write requests in order until they can be serviced by
the physical device. This counter shows the number of requests in service at the time the sample is taken. Most
disk devices installed on your domain controller are single-spindle disk drives. However, disk devices with multiple
spindles, such as some Redundant Array of Independent Disks (RAID) disk systems, can have multiple reads and
writes active at one time. Thus, a multiple-spindle disk drive can handle twice the rate of requests of a normal device.
Figure 3.21 displays System Monitor tracking the length of the disk queue.
Figure 3.21: The Current Disk Queue Length counter represents the number of outstanding read and write
requests. Using this counter, you can monitor the performance of the queue for the disk drives.
If the disk drive is under a sustained load, this counter will likely be consistently high. In this case, the read and
write requests will experience delays proportional to the length of this queue, divided by the number of spindles on
the disks. For decent performance, I recommend that the value of the counter average less than 2.
Because gathering disk counters can cause a modest increase in disk-access time, Win2K doesn’t
automatically activate all the disk counters when it starts up. By default, the physical disk counters are on,
and the logical disk counters are off. The physical disk counters monitor the disk driver and how it relates to
the physical device. The logical disk counters monitor the information for the partitions and volumes that have
been established on the physical disk drives.
To start the domain controller with the logical disk counters on, you use the DISKPERF utility. At the command
prompt, type DISKPERF -YV. This sets the domain controller to gather counters for both the logical disk
devices and the physical devices the next time the system is started. For more information about using the
DISKPERF utility, type DISKPERF /? at the command prompt.
This counter allows you to monitor the performance of the disk drives as they start to fill up. This task is important
because as a disk drive starts to run out of space, each write request becomes harder to perform, which slows
overall disk performance. As the drive fills up, each write takes longer because the disk must search for free
space. The longer each write takes, the less work the disk completes, so performance drops. Thus, as the drive
fills, it works harder to service requests; this is often called thrashing. I recommend that you always leave at least 10 percent
of the disk free to minimize thrashing.
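The 10 percent rule is easy to automate as a check; the drive sizes below are hypothetical:

```python
def free_space_ok(free_bytes, total_bytes, min_free_pct=10):
    """Check the keep-at-least-10-percent-free rule from the text."""
    return 100.0 * free_bytes / total_bytes >= min_free_pct

GB = 1024 ** 3
print(free_space_ok(12 * GB, 100 * GB))  # 12% free: fine
print(free_space_ok(4 * GB, 100 * GB))   # 4% free: risk of thrashing
```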
In real-time mode, Network Monitor allows you to monitor and test network traffic for a specific set of conditions.
If the conditions are detected, it displays the events and prompts you for the appropriate action. In post-capture
analysis, network traffic is saved in a proprietary capture file and can be parsed by protocol to pick out specific
network frame types.
Network Monitor is a complex tool that allows you to monitor all kinds of network traffic and troubleshoot a
variety of network problems. Thus, a detailed explanation of it is beyond the scope of this chapter and book.
The advantage of the last two counters is that they break out the values for traffic sent and received.
I recommend that once you’ve monitored these counters, you compare the results to your domain controller’s total
network throughput. To do this, I suggest that you establish a baseline of data rates and averages. Establishing a
baseline allows you to know what to expect from the domain controller. If a potential problem or bottleneck in
network throughput occurs, you can recognize it immediately because you can compare it against the baseline
you’ve established.
You can also make some estimates as to where a bottleneck exists if you know the network and bus speeds of the
domain controller. If the data rate through the card is approaching the network limit, segmenting the network and
adding a card may help. If the aggregate data rate is approaching the bus speed, it may be time to split the load by
adding another domain controller or moving to clustering.
previous section have similar names, but they display the amount of traffic for the entire domain controller, regardless
of the actual number of interface cards installed. Using the counters assigned to each network adapter allows you to
drill down and see how each performs individually.
Figure 3.22 illustrates how you can use the Bytes Total/sec, Bytes Sent/sec, and Bytes Received/sec counters in
System Monitor to monitor the domain controller’s network adapter.
Figure 3.22: The Bytes Total/sec, Bytes Sent/sec, and Bytes Received/sec counters allow you to monitor
the domain controller’s network adapter.
Summary
As a network administrator, a critical part of your job is making sure that each and every domain controller hosting
AD is functioning properly. To accomplish this task, you need to properly monitor each of these Win2K domain
controllers; this in turn means watching over the critical OS components and hardware subsystems. To help you
monitor a domain controller and its subsystems, Win2K provides several utilities, and this chapter discussed the
most important ones: Task Manager, the Performance console, and Event Viewer. Using these utilities, you can
watch server resources and subsystems in real time while they work to support the requests by users, applications,
and other servers.
Chapter 4
AD also has a complex infrastructure containing many different components. To ensure the health of the directory
as a system, you must monitor all of these components. You also need to understand AD’s internal processes, such
as replication.
In this chapter, I’ll describe which infrastructure components you need to continually monitor to ensure AD
availability as well as some of the built-in and third-party utilities that are available to help you do so. It’s always a
good idea to have a sound understanding of one’s tools before using them, so I’ll start by introducing the tools in
our monitoring tool set.
Third-Party Tools
In this section, I’ll discuss NetPro’s DirectoryAnalyzer and NetIQ’s AppManager.
DirectoryAnalyzer monitors the individual structures and components of AD—replication, domains, sites, Global
Catalogs (GCs), operations master roles, and DNS (inasmuch as it relates to AD). Each of these components is
vital to the operation of AD. DirectoryAnalyzer can monitor and alert on specific conditions and problems in each
of the individual structures. The alerts are then recorded at the DirectoryAnalyzer client or console for viewing.
Alerts have two levels of severity—warning and critical. Warning alerts indicate that a predetermined threshold has
been met in one of the directory structures. Warning alerts help you identify when and where problems may occur.
Critical alerts indicate that a predetermined error condition has been met. Critical alerts are problems that need
your immediate attention; if you ignore them, you could lose AD functionality or the directory altogether.
By clicking Current Alerts under View Status in the sidebar, you can display all of the alerts with their associated
type, time, and description. Figure 4.1 shows the Current Alerts screen in DirectoryAnalyzer. The alerts have been
recorded for the AD domain controllers, directory structures, and directory processes.
Figure 4.1: DirectoryAnalyzer allows you to monitor the entire directory for problems.
You can also send alerts to enterprise management systems using Simple Network Management Protocol (SNMP).
This allows you to integrate DirectoryAnalyzer alerts with management consoles such as HP OpenView and Tivoli.
Alerts can also be recorded in the Event Log of the Win2K system and viewed using the Event Viewer utility. (See
"Event Viewer" later in this chapter.)
DirectoryAnalyzer logs all alert activity to a history database. You can export the database and analyze alert activity
over time using a variety of formats, such as Microsoft Excel, Hypertext Markup Language (HTML), Dynamic
HTML (DHTML), and Rich Text Format (RTF). You can also identify trends in the data, finding cycles or periods
of high and low alert activity.
AppManager from NetIQ
AppManager from NetIQ Corporation is a suite of management products that manages and monitors the
performance and availability of Win2K. One of these management products allows you to monitor the performance
of AD. For example, AppManager verifies that replication is occurring and up-to-date for the directory by monitoring
the highest Update Sequence Number (USN) value for each domain controller. The USN is discussed in more
detail later in this chapter (see "Monitoring Replication"). In addition, inbound and outbound replication statistics
are tracked, as are failed synchronization requests for the directory.
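The USN comparison that this kind of monitoring performs can be illustrated with a simplified sketch; the server names and USN values are hypothetical, and real replication metadata is considerably more involved:

```python
def replication_lag(partner_usns, committed_usns):
    """Compare the highest USN each partner has seen for a source DC with
    the source's own highest committed USN; the gap approximates how far
    behind each partner is."""
    return {dc: committed_usns[src] - seen
            for (dc, src), seen in partner_usns.items()}

committed = {"DC1": 41020}  # DC1's own highest committed USN
seen = {
    ("DC2", "DC1"): 41020,  # DC2 is fully up to date with DC1
    ("DC3", "DC1"): 40870,  # DC3 is 150 updates behind
}
print(replication_lag(seen, committed))  # -> {'DC2': 0, 'DC3': 150}
```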
AppManager also allows you to monitor the number of directory authentications per second and monitor the cache
hit rate of name resolution. Using this tool, you can monitor and track errors and events for trust relationships.
You can also log errors and events to enterprise management systems using SNMP. This means that SNMP traps
are generated and routed to a configured network manager.
In addition, you can use or run a set of prepackaged management reports that allow you to further analyze current
errors and events. You can also set up this utility to send e-mail and pager alerts when an event is detected.
Built-In Tools
In this section, I’ll discuss System Monitor, Event Viewer, and REPADMIN.
System Monitor
For the domain controller in AD, one of the main monitoring utilities is System Monitor. This utility allows you
to watch the internal performance counters that relate to the directory on the domain controller. The directory
performance counters are software counters that the developers of AD have programmed into the system.
Using System Monitor, you can monitor current directory activity for the domain controller. Once you’ve installed
AD on a server, several performance counters—for replication activity, DNS, address book, LDAP, authentication,
and the database itself—measure the performance of the directory on that computer.
I discussed how to launch and use System Monitor in Chapter 3, so I won’t repeat that information here. Instead,
I’ll focus on how to use some of the more important performance counters that are available for AD. Remember,
System Monitor tracks all of its counters in real time. For this reason, I recommend that you always establish a
baseline or normal operation that you can compare the real-time values against. When adding AD counters to
System Monitor, if you don’t understand the meaning of any counter, highlight it, then click Explain. The Explain
Text dialog box appears and provides a description of the counter.
You can also graph the performance counters and set alerts against them. The alerts will appear in the Event Viewer.
Event Viewer
To view and analyze the events that have been generated by a Win2K domain controller, you can use the Event
Viewer. This utility allows you to monitor the event logs generated by Win2K. By default, there are three event
logs: the application log, the system log, and the security log. (These three logs were described in the "Event
Viewer" section of Chapter 3.) In addition, after you install AD, three more logs are created.
Depending on how you configure your AD installation, you may have one or all of these logs on your domain
controller. Figure 4.2 shows the Event Viewer startup screen on a domain controller after you’ve installed AD with DNS.
Figure 4.2: The Event Viewer startup screen lists additional event logs that have been created for AD.
During normal replication, the Knowledge Consistency Checker (KCC) builds and manages the replication topology
for each naming context on the domain controller. The replication topology is the set of connections among the
domain controllers that share replication responsibility for the domain. REPADMIN allows you to view the replication topology as seen
by the domain controller. If needed, you can use REPADMIN to manually create the replication topology, although
this isn’t usually beneficial or necessary because it’s generated automatically by the KCC.
You can also view the domain controller’s replication partners, both inbound and outbound, and some of the internal
structures used during replication, such as the metadata and up-to-dateness vectors.
You can install the REPADMIN.EXE utility from the Support\Tools folder on the Microsoft Windows 2000 CD.
Running the SETUP program launches the Win2K Support Tools Setup wizard, which installs this tool along with
many other useful support tools to the Program Files\Support Tools folder. Figure 4.3 shows the interface for
REPADMIN.
Figure 4.3: The REPADMIN utility allows you to view the replication process and topology.
The first task in troubleshooting AD is to constantly monitor critical areas of the directory deployment. I
recommend that you continuously monitor at least the following directory structures and components:
Domain controllers
These are critical to the proper operation of AD. If one domain controller isn’t functioning properly, directory
performance degrades and some users may lose functionality. If the domain controller that is
having problems has also been assigned additional vital roles (such as being a DNS or GC server), the
directory may become unavailable to all users. Thus, it’s critical to monitor and track the performance of all
domain controllers on the network at all times.
Domain partition
Stores AD objects and attributes that represent users, computers, printers, and applications. The domain
partition is also used to accomplish a number of management roles, which include administration and
replication. You must monitor the performance and availability of the domain partition so that the services
it supports are constantly available.
GC partition
Stored on selected domain controllers throughout the network. Only those domain controllers designated as
GC servers hold a copy of the GC partition, and they need to be monitored accordingly. GCs are specialized
domain controllers whose availability is necessary for clients to be able to log on to the network. The GC
streamlines directory searches because it contains all of the objects in the forest but only a few of their
key attributes.
Operations masters
These (or FSMO role holders) are single-master domain controllers that perform special roles for AD. It’s
important that you monitor and track the performance of each operations master so that the service it
performs is maintained. If any operations master stops functioning, its functionality is lost to the directory.
Replication process and topology
Replication keeps the directory replicas synchronized. For example, if an administrator has changed a Group
Policy but the change hasn’t been synchronized to all copies, users using the older copies may access the
wrong information. In addition, once the synchronization among directory replicas is lost, it’s very difficult
and time-consuming to get back. Thus, it’s critical to constantly monitor the replication process and
topology for problems.
Using DirectoryAnalyzer
Many third-party tools (such as those I discussed earlier) provide you with an easy way to monitor all of the
domain controllers in your forest from one management console. For example, in DirectoryAnalyzer, click Browse
Directory By Naming Context; the directory hierarchy is displayed. If you expand the naming contexts, you see all
of the associated domain controllers. To see the alerts for just one domain controller, select a domain controller
object, then click Current Alerts. The alerts that are displayed have exceeded a warning or critical threshold and
show the severity, subject, associated type, time, and description. Figure 4.4 shows an example of using
DirectoryAnalyzer to view all alerts for each domain controller.
Figure 4.4: DirectoryAnalyzer allows you to monitor all the domain controllers in your forest for problems and see the
alerts that have been recorded for each domain controller.
To see the alerts and other information for each domain controller, you can also use the Browse Directory
By Site option. It allows you to browse the directory layout according to sites and their associated domain
controllers. In addition, it permits you to view the status of each site and the site links.
DirectoryAnalyzer is an extremely useful utility because it monitors all of the domain controllers in the AD forest
as a background process and allows you to periodically view the results. It also monitors the most critical directory
structures and processes—for example, the configuration and activity for the domain partitions, GC partitions,
operations master roles, sites, DNS, the replication process, and the replication topology.
In addition to viewing the alerts from the domain controllers, you can click any alert and see a more detailed
description of the problem. If you don’t understand the alert, you can double-click it; the Alert Details dialog box
will appear and provide more description, as shown in Figure 4.5.
Figure 4.5: DirectoryAnalyzer provides more information about an alert in the Alert Details dialog box.
Once you’ve been notified of the alert and viewed more information about it in the Alert Details dialog box, you
can use the integrated knowledge base to help resolve the problem. The knowledge base provides you with a
detailed explanation of the problem, helps you identify possible causes, then helps you remedy or repair the problem.
To access the knowledge base, click More Info in the Alert Details dialog box or choose Help>Contents in the console.
Figure 4.6 shows an example of the information available in the knowledge base.
Figure 4.6: DirectoryAnalyzer’s in-depth knowledge base helps you find solutions to problems in AD.
As you know by now, domain controllers are the workhorses of AD. They manage and store the domain information
and take on special functions and roles. For example, a domain controller can store a domain partition, store a GC
partition, and be assigned as a FSMO role owner. Domain controllers, in turn, allow the directory to manage user
interaction and authentication and oversee replication to the other domain controllers in the forest.
In addition to displaying alerts for each domain controller, DirectoryAnalyzer displays detailed configurations. For
example, when you choose Browse Directory By Naming Context, you see several icons for each domain controller.
An icon that includes a globe indicates that the domain controller stores a GC partition. An icon with
small triangles indicates that the domain controller is also providing the DNS service. An icon that displays both
a globe and small triangles indicates that the domain controller hosts both a GC partition and the DNS service.
If you select a domain controller, then click the DC Information tab, you can view detailed information about how
the domain controller is operating and handling the directory load. Figure 4.7 shows the DC Information pane in
DirectoryAnalyzer.
Figure 4.7: You can view detailed information about a domain controller using the DC Information pane in DirectoryAnalyzer.
DirectoryAnalyzer provides a high-level summary of how each domain and its associated domain controllers are
functioning. Click Browse Directory By Naming Context to see a high-level status of all the domain controllers in a
domain. To view the status for a particular domain, select it, then click the DC Summary tab. Figure 4.8 illustrates
the DC Summary pane, which uses green, yellow, and red icons to indicate the status of each domain controller in
a domain.
Figure 4.8: The DC Summary pane in DirectoryAnalyzer provides a high-level status of all domain controllers in a domain.
You can also quickly view where the domain controller resides, if it’s a GC, and who manages the computer. If any
of the domain controllers aren’t showing a green (clear) status icon, there is a problem that you need to investigate
and fix.
DRA Inbound Bytes Total/sec
Tracks the total number of bytes per second received on the server during replication with other domain
controllers.
How to use it: Indicates the total amount of inbound replication traffic over time. If a small number of bytes
are being received, either the network or the server is slow. Other issues that might limit the number of bytes
being received include few changes being made to the naming contexts hosted by the domain controller,
replication topology problems, and connectivity failures. Of course, you need to check this value against a
baseline of activity.

DRA Inbound Object Updates Remaining in Packets
Tracks the number of object updates received in the AD replication update packet but not applied to the local
domain controller.
How to use it: Indicates that the server is receiving changes but is taking a long time to apply them to the AD
database. The value of this counter should be as low as possible. A high value indicates that the network is
slow during replication or that the domain controller is receiving updates faster than it can apply them. Other
issues that can affect the speed of updates are high domain controller load, insufficient hardware (memory,
disk, or CPU), the disk becoming full or fragmented, other applications using too many resources, and so on.

DRA Outbound Bytes Total/sec
Tracks the number of bytes that are sent from the server during replication to other domain controllers.
How to use it: Indicates the total amount of outbound replication traffic over time. If this value remains low,
it can indicate a slow server or network or few updates on this domain controller. In the latter case, it can
mean that clients are connecting to other domain controllers because this one is slow or that there are
topology problems. For best results, test the current value against an established baseline value.

DRA Pending Replication Synchronizations
Tracks the number of pending requests from replication partners for this domain controller to synchronize
with them. Synchronizations are queued, ready for processing by the domain controller.
How to use it: Indicates the backlog of directory synchronizations for the selected server. This value should
be as low as possible. A high value could indicate a slow server or a problem with the server’s hardware.

DS Threads in Use
Tracks the current number of threads that are being used by the directory service running on the domain
controller.
How to use it: Indicates how the directory service on the server is responding to client requests. When a
client requests information, AD spawns a thread to handle the request. If the number of threads remains
constant, Win2K clients may experience a slow response from the domain controller.

Kerberos Authentications/sec
Tracks the current number of authentications per second for the domain controller.
How to use it: Indicates how the domain controller is responding to client requests for authentications. If this
counter doesn’t show activity over time, clients could be having a problem contacting the domain controller.

LDAP Bind Time
Tracks the amount of time (in milliseconds) required to process the last LDAP bind request from the client.
A bind is described as authenticating the LDAP client.
How to use it: This counter tracks only the last successful bind for an LDAP client. The value of this counter
should be as low as possible, indicating that the domain controller was quick to authenticate the LDAP client.
If the value is high, the domain controller was slow to authenticate LDAP clients. A high value can indicate a
server problem, a domain controller that is too busy, insufficient hardware (memory or CPU), or other
applications using too many resources.

LDAP Client Sessions
Tracks the current number of LDAP sessions on the selected domain controller.
How to use it: If your domain controller has LDAP clients trying to connect, the value of this counter should
show activity over time. If the value remains constant, the server or client may have problems, the domain
controller may be too busy running other applications, or there is insufficient hardware (memory or CPU).

LDAP Searches/sec
Tracks the number of LDAP search operations that were performed on the selected domain controller per
second. LDAP clients connecting to the server perform the LDAP search operations.
How to use it: Indicates how many LDAP search requests the domain controller is servicing per second. You
typically see different search rates depending on the domain controller’s hardware, the number of clients
connected to the domain controller, and what sorts of things the clients are doing.

LDAP Successful Binds/sec
Tracks the number of LDAP binds per second that occur successfully.
How to use it: Indicates how the domain controller responds to authentications from the clients. This value
allows you to view the number of successful binds per second for LDAP clients. Again, if this value remains
constant over time, there can be a network, client, or server problem; for example, a bad network component,
a client that is too busy, or a server that is too busy.

NTLM Authentications
Tracks the total number of Windows NT LAN Manager (NTLM) authentications per second serviced by the
domain controller.
How to use it: Allows you to see whether there are authentications from Windows 98 and NT clients for this
domain controller. If you’re supporting Windows 98 and NT clients and the value remains constant over time,
there is a network problem; for example, the network could have a bad or poorly configured component, or
the client could be too busy.
Table 4.1: A few of the NTDS performance counters that allow you to track how a domain controller is responding to replication
traffic, LDAP traffic, and authentication traffic.
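Several of the counters above are suspect precisely when they "remain constant over time". That heuristic can be sketched as a simple check over a series of samples; the sample values and the near-zero spread threshold are illustrative:

```python
def looks_stuck(samples, min_spread=1e-6):
    """Return True if a series of counter samples shows no activity,
    i.e. every sample is (nearly) identical."""
    if len(samples) < 2:
        return False  # not enough data to judge
    return max(samples) - min(samples) <= min_spread

# A Kerberos Authentications/sec series that never moves is suspicious...
print(looks_stuck([12.0, 12.0, 12.0, 12.0]))  # True
# ...while normal client activity varies from sample to sample.
print(looks_stuck([12.0, 9.5, 14.2, 11.1]))   # False
```

A flat series doesn’t prove a fault by itself, but as the table notes, it is a cue to check the network, the client, and the server.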
NTDS counters enable you to monitor the performance of AD for the selected domain controller. You can view
these counters under the NTDS object in System Monitor (see Figure 4.9). By default, System Monitor is started
when you choose Start>Administrative Tools>Performance Console.
Using DirectoryAnalyzer
DirectoryAnalyzer allows you to monitor the alerts for each domain in AD and the associated domain controllers.
These alerts monitor the domain controllers, replicas, group policies, trust relationships, DNS, and other activity
for a domain. If you see any critical alerts, you need to investigate and fix the problems.
To view the alerts for a domain, click Browse Directory By Naming Context. Select a domain, then click the
Current Alerts tab. The display shows the current alerts for that domain (see Figure 4.10).
In addition to displaying alerts for each domain, DirectoryAnalyzer allows you to view configuration information.
Using the Naming Context Information tab, you can view the current number of alerts that are active for the
following areas: Naming Context (or Domain), Replica, DNS Server, and DC Server.
The Naming Context Information tab also displays the number of domain controllers for the domain and whether
the domain supports mixed mode. When a domain supports mixed mode, it allows replication and communication
with down-level domain controllers and clients to occur. In addition, you can see which domain controllers in the
domain are performing the operations master roles and an operations master consistency check. And finally, you
can view all the trust relationships that exist for the domain. Figure 4.11 shows the Naming Context Information
pane in DirectoryAnalyzer.
To further monitor the domain, DirectoryAnalyzer provides a high-level summary of each domain controller. Click
Browse Directory By Naming Context, then click the DC Summary tab. (The DC Summary pane is shown in
Figure 4.8 earlier in this chapter.)
AD stores its data in the NTDS.DIT database, which is managed by the Extensible Storage Engine (ESE). (If
necessary, you can relocate the NTDS.DIT database on a domain controller using the NTDSUTIL utility.)
Through this database engine, AD provides a set of database performance counters that allow you to monitor the
domain in depth. These counters provide information about the performance of the database cache, database files,
and database tables, and they help you monitor and determine the health of the database for the domain controller.
By default, database performance counters aren’t installed on the domain controllers. (For instructions on installing
them, see "Installing the Counters" below.)
You can view and monitor database counters using the System Monitor utility. Table 4.2 gives you a general
description of the more useful database performance counters and how to use them to track the activity of the
low-level database for each domain.
Cache % Hits
Tracks the percentage of database page requests that were successfully serviced from memory. A cache hit is
a request that is serviced from memory without causing a file-read operation.
How to use it: Indicates how database requests are performing. The value for this counter should be at least
90%. If it’s lower than 90%, the database requests are slow for the domain controller, and you should
consider adding physical memory to create a larger cache.

Cache Page Faults/sec
Tracks the number of requests (per second) that cannot be serviced because no pages are available in cache.
If there are no pages, the database cache manager allocates new pages for the database cache.
How to use it: Indicates how the database cache is performing. I recommend that the computer have enough
memory to always cache the entire database. This means that the value of this counter should be as low as
possible. If the value is high, you need to add more physical memory to the domain controller.

File Operations Pending
Tracks the number of pending requests issued by the database cache manager to the database file. The value
is the number of read and write requests that are waiting to be serviced by the OS.
How to use it: Indicates how the OS handles the read/write requests to the AD database. I recommend that
the value for this counter be as low as possible. If the value is high, you need to add more memory or
processing power to the domain controller. This condition can also occur if the disk subsystem is bottlenecked.

File Operations/sec
Tracks the number of requests (per second) issued by the database cache manager to the database file. The
value is the read and write requests per second that are serviced by the OS.
How to use it: Indicates how many file operations have occurred for the AD database. I recommend that this
value be appropriate for the purpose of the domain controller. If you think that the number of read and write
operations is too high, you need to add memory or processing power to the computer. Adding memory for
the file system cache on the computer reduces file operations.

Table Open Cache Hits/sec
Tracks the number of database tables opened per second using cached schema information.
How to use it: Indicates how the AD database is performing. The value for this counter should be as high as
possible for good performance. If the value is low, you may need to add more memory.
Table 4.2: Some of the more useful database performance counters, which allow you to monitor the database for the domain
partition that stores all of the AD objects and attributes.
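The 90% guidance for Cache % Hits from the table can be expressed as a simple check. The function name and advice string here are illustrative, not part of any Microsoft tool:

```python
def check_cache_hits(hit_percent):
    """Apply the Table 4.2 guidance: Cache % Hits should be at least 90%."""
    if hit_percent >= 90.0:
        return "ok"
    # Below 90%, database requests are slow for this domain controller.
    return "add physical memory to create a larger database cache"

print(check_cache_hits(96.5))  # ok
print(check_cache_hits(72.0))
```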
Installing the Counters
1. Copy the %SystemRoot%\System32\ESENTPRF.DLL file to a different directory. For example, you can
create a directory named C:\Perfmon, then copy the file to it.
2. Run the REGEDT32.EXE or REGEDIT.EXE Registry Editor and create the following Registry subkeys
if they don’t already exist:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ESENT
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ESENT\
Performance
3. Under the Performance subkey that you added in Step 2, add and initialize the data of the following
Registry values:
Open: REG_SZ: OpenPerformanceData
Collect: REG_SZ: CollectPerformanceData
Close: REG_SZ: ClosePerformanceData
Library: REG_SZ: C:\Perfmon\esentprf.dll (the directory you created in Step 1)
4. Load the counter information into the Registry by executing the following statement:
LODCTR.EXE ESENTPERF.INI
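The registry portion of these steps can also be captured in a .reg file that you import with the Registry Editor. This is a sketch of the equivalent file, assuming you copied ESENTPRF.DLL to C:\Perfmon in Step 1 (note that backslashes are doubled inside .reg string values):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ESENT]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ESENT\Performance]
"Open"="OpenPerformanceData"
"Collect"="CollectPerformanceData"
"Close"="ClosePerformanceData"
"Library"="C:\\Perfmon\\esentprf.dll"
```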
Once you’ve installed the database performance counters, you can use them to track and monitor the database on
the domain controller. As mentioned earlier in "Using NTDS Performance Counters," you can view and track each
counter using the System Monitor utility in the Performance Console.
The GC has been designed to support two crucial functions in an AD forest: user logons and forest-wide queries or
searches. It does this by storing all of the objects in the forest and the key attributes for each. It doesn’t store all the
attributes for each object; instead, it stores only the attributes it needs to perform queries and support the logon
process. One of these attributes is the distinguished name of the object.
You can use DirectoryAnalyzer to monitor the GC partition and how it’s performing. It monitors and tracks several
conditions relating to GC availability and performance. Figure 4.12 shows how DirectoryAnalyzer monitors and
tracks alerts for the GC.
Figure 4.12: DirectoryAnalyzer allows you to monitor the GC partition that exists on various domain controllers throughout the forest.
There are currently five types of operations masters in AD. The directory automatically elects the operations master
servers during the creation of each AD forest and domain. (For more detail on these FSMOs, see Chapter 1.)
Two operations masters manage forest-wide operations and so have forest-specific FSMO roles.
Schema master
Responsible for schema extensions and modifications in the forest
Domain naming master
Responsible for the addition and removal of domains in the forest
Three operations masters manage domain operations and so have domain-specific FSMO roles.
Infrastructure master
Updates group-to-user references in a domain
RID master
Assigns unique security IDs in a domain
PDC emulator
Provides primary domain controller support for down-level clients in a domain.
The three domain-specific FSMO roles exist in every domain. Thus, an AD forest with a total of 3 domains
would have 11 FSMO roles in all: 9 domain-specific roles and 2 forest-wide roles.
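The role arithmetic generalizes: a forest with d domains has 3d domain-specific roles plus the 2 forest-wide roles. A quick check of the example above:

```python
def total_fsmo_roles(domains):
    """3 domain-specific FSMO roles per domain + 2 forest-wide roles."""
    return 3 * domains + 2

print(total_fsmo_roles(3))  # 11, matching the three-domain example
print(total_fsmo_roles(1))  # 5 in a single-domain forest
```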
Because there is only one of each of the forest-specific FSMO roles, it’s extremely important that you constantly
monitor and track the activity and health of the operations masters. If any of them fail, the directory loses functionality
until the computer is restarted or another appropriate domain controller is assigned the role.
To monitor operations masters, you can use DirectoryAnalyzer. It monitors, checks the status of, and alerts on several
types of conditions and situations relating to operations masters, such as which domain controllers are holding the
operations masters. Click Browse Directory By Naming Context, and click the Naming Context Information tab.
Under Operations Master Status, you see which domain controller is holding which FSMO. Figure 4.13 shows the
status of operations masters in the Naming Context Information pane.
You can also use the Naming Context Information pane (shown in Figure 4.13 above) to check the consistency of
the operations masters across all of the domain controllers on the network. DirectoryAnalyzer monitors what each
domain controller reports for the FSMO assignments. If not all of the domain controllers report the same values
for all of the operations masters, the word No appears beside Operations Master Consistent.
To investigate the problem, click Details. The Operations Master Consistency dialog box appears, indicating that
operations master information is inconsistent. It displays the names of the domain controllers and which domain
controller holds each operations master. In Figure 4.14 below, the domain controller COMP-DC-04 has inconsistent
information about the true owner of the PDC operations master because it shows domain controller COMP-DC-01
as the owner when it should be COMP-DC-03. Thus, the owner of the PDC operations master is inconsistent.
In addition to showing the status and consistency checks, DirectoryAnalyzer monitors and displays alerts for each
operations master. The alerts that are monitored and tracked provide information about the availability of the FSMOs.
To monitor the availability of the operations masters, you can click Current Alerts in the sidebar on the main screen.
To display the alerts for a domain or each domain controller, click Browse Directory By Naming Context.
The alerts indicate that the domain controller that holds the operations master isn’t responding. This could mean that
the domain controller and AD are down and not responding. It could also mean that the domain controller no longer
has network connectivity, and this could indicate DNS or Internet Protocol (IP) addressing problems. Finally, this
alert could simply mean that the domain controller or the directory that is installed is overloaded and responding too
slowly. Figure 4.15 shows how DirectoryAnalyzer monitors and tracks alerts for each operations master.
Monitoring Replication
AD is a distributed directory made up of one or more naming contexts, or partitions. Partitions are used to distribute
the directory data on the domain controllers across the network. The process that keeps partition information up
to date is called replication. Monitoring replication is critical to the proper operation of the directory. Before I
discuss how to monitor replication, however, I need to describe what it is and how it works.
In AD, replication is a background process that propagates directory data among domain controllers. For example,
if an update is made to one domain controller, the replication process is used to notify all of the other domain
controllers that hold copies of that data. In addition, the directory uses multimaster replication; this means that
there is no single source (or master) that holds all of the directory information. Using multimaster replication,
changes to the directory can occur at any domain controller; the domain controller then notifies the other servers.
Because AD is partitioned, not every domain controller needs to communicate or replicate with every other one.
Instead, the directory uses a set of connections that determines which domain controllers need to replicate to ensure that the
appropriate domain controllers receive the updates. This approach reduces network traffic and replication latency
(the time to replicate a change to all replicas). The set of connections used by the replication process is the
replication topology.
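The idea that a set of connections, rather than a full mesh, still carries every update to every replica can be sketched as a reachability check over one-way connections. The server names and ring layout here are hypothetical:

```python
from collections import deque

def replicas_reached(connections, origin):
    """Given one-way replication connections (source -> destinations),
    return every domain controller an update from `origin` reaches."""
    reached = {origin}
    queue = deque([origin])
    while queue:
        dc = queue.popleft()
        for partner in connections.get(dc, []):
            if partner not in reached:
                reached.add(partner)
                queue.append(partner)
    return reached

# Four DCs connected in a ring instead of a full mesh: far fewer
# connections, yet an update still propagates to every replica.
ring = {"DC1": ["DC2"], "DC2": ["DC3"], "DC3": ["DC4"], "DC4": ["DC1"]}
print(sorted(replicas_reached(ring, "DC1")))  # all four DCs
```

Fewer connections mean less network traffic, at the cost of more hops and therefore higher replication latency, which is exactly the trade-off the topology balances.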
Schema Partition
The schema partition contains the set of rules that defines the objects and attributes in AD. This set of rules is used
during creation and modification of the objects and attributes in the directory. The schema also defines how the
objects and attributes can be manipulated and used in the directory.
The schema partition is global; this means that every domain controller in the forest has a copy, and these copies
need to be kept consistent. To provide this consistency, the replication process in the directory passes updated
schema information among the domain controllers to the copies of the schema. For example, if an update is made
to the schema on one domain controller, replication propagates the information to the other domain controllers, or
copies of the schema.
Configuration Partition
The configuration partition contains the objects that define the logical and physical structure of the AD forest.
These objects include sites, site links, trust relationships, and domains. Like the schema partition, the configuration
partition exists on every domain controller in the forest and must be exactly the same on each one.
Because the configuration partition exists on every domain controller, each computer has some knowledge of the
physical and logical configuration of the directory. This knowledge allows each domain controller to efficiently
support replication. In addition, if a change or update is made to a domain controller and its configuration partition,
replication is started, which propagates the change to the other domain controllers in the forest.
Domain Partition
The domain partition contains the objects and attributes of the domain itself. This information includes users,
groups, printers, servers, organizational units (OUs), and other network resources. The domain partition is copied,
or replicated, to all of the domain controllers in the domain. If one domain controller receives an update, it needs
to be able to pass the update to other domain controllers holding copies of the domain.
A read-only subset of the domain partition is replicated to GC servers in other domains so that other users can
access its resources. This allows the GC to know what other objects are available in the forest.
A write request from a directory client is called an originating write. When an update that originates on one
domain controller is replicated to another domain controller, the update is called a replicated write. Using this
approach, AD can distinguish update information during replication.
In addition to maintaining USNs, AD maintains an up-to-dateness vector, which helps the domain controllers
involved in replication track updates. The up-to-dateness vector is a table, kept per naming context, with one
entry for each domain controller that originates updates; each entry records the highest originating-write USN
received from that domain controller. During replication, the requesting domain controller sends the
up-to-dateness vector with its replication request so that the source domain controller sends only those updates
that the requesting domain controller doesn’t already have.
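The filtering the up-to-dateness vector enables can be sketched as follows. The data shapes are illustrative only, not AD’s wire format; server names and USN values are hypothetical:

```python
def updates_to_send(source_updates, dest_vector):
    """Return only the updates the destination doesn't already have.
    Each update is (originating_dc, originating_usn, change); the
    destination's up-to-dateness vector maps each originating DC to the
    highest originating USN it has already received from that DC."""
    return [u for u in source_updates
            if u[1] > dest_vector.get(u[0], 0)]

source_updates = [("DC1", 1053, "user added"),
                  ("DC2", 877, "password changed"),
                  ("DC2", 901, "group renamed")]
dest_vector = {"DC1": 1100, "DC2": 880}  # destination's current state

print(updates_to_send(source_updates, dest_vector))
# Only DC2's USN 901 change needs to be replicated.
```

Filtering on the originating USN is also what makes propagation dampening work: an update that arrives over a second replication path is recognized as already applied and is not sent again.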
The up-to-dateness vector also helps with the problems of multiple replication paths among domain controllers.
AD allows multiple replication paths to exist so that domain controllers can use more than one path to send and
receive replication traffic. When multiple replication paths exist, you might expect redundant traffic and endless
looping during replication, but the directory allows domain controllers to detect when replication data has already
been replicated. This method is called propagation dampening.
AD prevents these potential problems by using the up-to-dateness vector and the high-watermark vector. The
up-to-dateness vector contains server–USN pairs and represents the latest originating update. The high-watermark
vector holds the USNs for attributes that have been added or modified in the directory and that are stored in the
replication metadata for that attribute. Using both vectors, propagation dampening can occur and unnecessary
directory updates can be avoided.
As I’ve mentioned, the values in the up-to-dateness vector can determine which updates need to be sent to the
destination domain controller. For example, if the destination domain controller already has an up-to-date value
for an object or attribute, the source domain controller doesn’t have to send the update for it. To view the contents
of the up-to-dateness vector for any domain controller, use the REPADMIN utility’s /showvector option at a command prompt.
To help resolve conflicts during replication, AD attaches a unique stamp to each replicated value. Each stamp is
replicated along with its corresponding value. To ensure that all conflicts can be resolved during replication, the
stamp is compared with the current value on the destination domain controller. If the stamp of the value that was
replicated is larger than the stamp of the current value, the current value (including the stamp) is replaced. If the
stamp is smaller, the current value is left alone.
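The stamp comparison can be sketched in a few lines. Real AD stamps are richer than a single number (they combine a version, a timestamp, and the originating server, compared in order); the tuple stamps and values below are stand-ins to illustrate the rule:

```python
def resolve(current, incoming):
    """Keep the value with the larger stamp; a tie keeps the current value.
    Each entry is (stamp, value); stamps here are (version, timestamp)
    tuples standing in for AD's richer replication stamp."""
    return incoming if incoming[0] > current[0] else current

current  = ((2, 1005), "Building 40")
incoming = ((3, 990), "Building 7")   # higher version wins despite an older time

print(resolve(current, incoming))  # ((3, 990), 'Building 7')
```

Because every domain controller applies the same comparison to the same stamps, all replicas converge on the same winning value no matter the order in which the conflicting writes arrive.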
The KCC automatically generates replication connections among domain controllers in the same site. This local
replication topology is called an intra-site topology. If you have multiple wide area network (WAN) locations, you
can configure site links among the sites, then the KCC can automatically create the respective replication connection
objects. The replication topology that is created among remote locations is called an inter-site topology. Domain
controllers that replicate directly with each other are called replication partners. Each time the KCC runs, these
partner connections are automatically added, removed, or modified as needed.
Although you can disable the KCC and create connection objects by hand, I strongly recommend that you
use the KCC to automatically generate the replication topology. The reason is that the KCC simplifies a
complex task and has a flexible architecture, which reacts to changes you make and any failures that
occur. However, if your organization has more than 100 sites, you may need to manually create the
replication topology; above this number, the KCC doesn’t scale well.
The KCC uses the following components to manage the replication topology:
Connections
The KCC creates connection objects in AD that enable the domain controllers to replicate with each other.
A connection is defined as a one-way inbound route from one domain controller to another. The KCC
manages the connection objects and reuses them where it can, deletes unused connections, and creates new
connections if none exist.
Servers
Each domain controller in AD is represented by a server object. The server object has a child object called NTDS
Settings, which stores the server's inbound connection objects; each connection object identifies a source domain controller.
Connection objects are created in two ways—automatically by the KCC or manually by an administrator.
Sites
The KCC uses sites to define the replication topology. Sites define the sets of domain controllers that are
well connected in terms of speed and cost. When changes occur, the domain controllers in a site replicate
with each other to keep AD synchronized. If the domain controllers are in the same site (intra-site topology),
replication starts as needed, with no concern for speed or cost, within five minutes of an update occurring. If the
two domain controllers are separated by a low-speed network connection (inter-site topology), replication is instead
scheduled: inter-site replication occurs only on a configured schedule, regardless of when updates occur.
Subnets
Subnets help the KCC identify groups of computers and domain controllers that are physically close or
on the same network.
Bridgehead servers
The KCC automatically designates a single server for each naming context, called the bridgehead server, to
communicate across site links. You can also manually designate bridgehead servers when you establish each
site link. Bridgehead servers perform site-to-site replication; in turn, they replicate to the other domain
controllers in each site. Using this method, you can ensure that inter-site replication occurs only among
designated bridgehead servers. This means that bridgehead servers are the only servers that replicate across
site links, and the rest of the domain controllers are updated within the local sites.
Using DirectoryAnalyzer
DirectoryAnalyzer allows you to monitor replication among domain controllers and report any errors or problems.
It allows you to track the following problems and issues:
Replication Cycle
The time during which the requesting domain controller receives updates from one of its replication
neighbors. You can view the successful replication cycle as well as any errors that occurred during that time.
Replication Latency
The elapsed time between an object or attribute being updated and the change being replicated to all the
domain controllers that hold copies. If replication latency is too high, DirectoryAnalyzer issues an alert.
Replication Topology
The paths among domain controllers used for replication. DirectoryAnalyzer evaluates whether the topology
is transitively closed, meaning that no matter which domain controller an update occurs on, the topology
provides a path for that update to be replicated to all other domain controllers.
Replication Failures
Occur when a domain controller involved in replication doesn’t respond. Each time there are consecutive
failures from the same domain controller, an alert is issued. Many things can cause failures—for example, a
domain controller may be too busy updating its own directory information from a bulk load.
Replication Partners
Sets of domain controllers that replicate directly with each other. DirectoryAnalyzer monitors domain
controllers and pings them to make sure that each is still alive and working. If a replication partner doesn’t
respond, an alert is issued.
Replication Conflict
Occurs when two objects or attributes are created or modified at exactly the same time on two domain
controllers on the network. AD resolves this conflict automatically, and DirectoryAnalyzer issues an alert so
that you’ll know that one of the updates was ignored by replication.
Figure 4.16: DirectoryAnalyzer allows you to view the replication cycle and replication partners for each domain controller.
Using DirectoryAnalyzer, you can monitor and track the replication process for errors. If a problem occurs, the utility
will issue an alert to indicate what type of problem has occurred. You can double-click the alert to see more detailed
information, then use the knowledge base to find troubleshooting methods to help you solve the problem. The
Current Alerts screen displays the most recent alerts that have been logged for replication (see Figure 4.17 below).
You can also view the replication-related alerts that have been stored in the Alert History file in DirectoryAnalyzer.
To display these alerts, on the Current Alerts screen, choose Reports>Alert History. On the Report page, select one
of the report options to specify what alerts you want to include. Then select Preview to display the report on the
screen. You can print the report or export it to a file. Figure 4.18 illustrates an Alert History report.
Summary
Before you can accurately troubleshoot AD, you must be able to effectively monitor it for problems. This means
that you must be able to monitor the directory that has been distributed across domain controllers on the network.
You can do this by using the monitoring tools described in this chapter. These tools allow you to watch the directory
components individually and as they interact with each other. For example, you can monitor the domain controllers,
the domain partition, the GC partition, the operations masters, and the replication process and topology. Monitoring
these components ensures the health of the directory as a system.
The troubleshooting process primarily involves isolating and identifying a problem. Few problems are difficult to
solve once you know exactly what is going wrong and where. Troubleshooting, in general, is more an art born out
of experience than an exact science. Your approach to solving a problem can depend largely on the specifics of your
directory, system, and network. This chapter outlines some common techniques and approaches that you can use to
help troubleshoot and maintain your implementation of AD.
When a problem doesn’t exhibit the characteristics of a typical failure, and when monitoring tools fail to provide
enough information to isolate the problem, the next step is to try to eliminate pieces of the system until you end
up with a small, predictable failure. As I mentioned earlier, use the process of elimination to rule out as many
technologies and dependencies as possible. Even if the problem seems overly complex at first, you can simplify it
by eliminating all of the possibilities—one by one.
Figure 5.1: A red X on the Local Area Connection icon indicates that the network cable is disconnected from your domain controller.
To view a computer’s TCP/IP configuration, type the following command in a Command Prompt window on the
domain controller or workstation:
IPCONFIG /ALL
Without any switches, IPCONFIG's default display shows only the IP address, subnet mask, and default gateway for
each adapter bound to TCP/IP; the /ALL switch adds the full configuration. Figure 5.2 shows an unsuccessful
TCP/IP configuration and network connection.
Listing 5.1 shows a well-connected LAN. Notice that the IP addresses are displayed with appropriate values.
If you want to save the results of running IPCONFIG for further analysis, you can capture the results in a text file.
At the command line, enter the following command:
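The usual approach is simply to redirect IPCONFIG's output into a file; the filename here is arbitrary:

```
IPCONFIG /ALL > TCPINFO.TXT
```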
There are many advanced features and switches available with IPCONFIG. To view the available switches,
enter:
IPCONFIG /?
If you’re wondering why Windows 2000 (Win2K) doesn’t contain a graphical TCP/IP configuration utility similar
to the WINIPCFG.EXE file provided with Windows 95/98/Me, you’re not alone. A graphical user interface
(GUI)-based TCP/IP configuration utility (WNTIPCFG.EXE) is included in the Windows 2000 Professional and
Windows 2000 Server resource kits, but by default, the ResKit Setup utility doesn’t install it. To install it,
you need to manually extract the WNTIPCFG.EXE file from the NETMGMT.CAB file, included in the Resource
Kit’s main installation folder, onto your hard disk. (A similar situation existed with NT 4.0. The Windows NT
Server Resource Kit included WNTIPCFG.EXE but didn’t install it by default.) What’s more, the Windows
2000 Resource Kit Supplement 1 doesn’t contain the WNTIPCFG.EXE file, so to obtain it, you need the
original Resource Kit.
When a domain controller fails to connect to the targeted computer, the PING utility returns a "Request timed
out" or "Destination host unreachable" message. This message is repeated at least four times as PING retries the
connection. In addition, the utility shows statistics gathered during the test.
You can use the PING utility to perform a series of steps to troubleshoot connectivity problems among domain
controllers. The first test is called the loop-back address test, which verifies whether TCP/IP is working on the local
computer. To perform this test on the local computer, type the following command in the Command Prompt window.
(Instead of using 127.0.0.1, you can use the keyword localhost.)
PING 127.0.0.1
If the PING command fails on the loop-back address test, check the TCP/IP configuration settings and restart the
local domain controller.
After you verify that TCP/IP is configured properly and the PING loop-back address test succeeds, you need to test
the local TCP/IP address of the local domain controller. To do this, type the following command:
PING <local_TCP/IP_address>
If the PING test for the local address fails, restart the domain controller and check the routing tables using the
ROUTE PRINT command at a command prompt on the computer. The ROUTE PRINT command displays the
current IP address assigned to the local computer plus all of the active and persistent network routes. This command
allows you to view and troubleshoot the network configurations that exist at the time that the command is executed.
After you've verified that the local address is working properly, use the PING command to check communication
with each of the other domain controllers on the local subnet:
PING <domain_controller1_address>
PING <domain_controller2_address>
PING <domain_controller3_address>
In the PING statements, the domain controller address is represented as the domain name (for example, COMPANY.COM)
or the IP address of the domain controller (for example, 10.0.0.10). If communication among the domain controllers on
the local subnet fails, you need to check that each computer is operational and that the network hubs and switches
are working properly. If the domain controllers are separated by a wide area network (WAN) connection, you need
to ping the default gateways that route the TCP/IP traffic among WAN locations.
Start by pinging the IP address of your default gateway. If the PING command fails for the gateway, you need to
verify that the address for the default gateway is correct and that the gateway (router) is operational.
Next, ping the IP address of the remote domain controllers on the remote subnet as follows:
PING <remote_domain_controller1_address>
PING <remote_domain_controller2_address>
PING <remote_domain_controller3_address>
In the PING statements, the remote domain controller address is represented as the domain name (for example,
REMOTE.COMPANY.COM) or the IP address of the domain controller (for example, 20.0.0.20). If the PING
command fails, verify the address of each remote domain controller and check whether each remote domain
controller is operational. In addition, check the availability of all of the gateways or routers between your
domain controller and the remote one.
In addition to pinging the domain controllers, you need to ping the IP address of the DNS server. If this command
fails, verify that the DNS server is operational and the address is correct.
Each of these tests performs a different type of connectivity check among the domain controllers in specific AD domains and sites.
After the test is completed, the results are displayed at the bottom of the dialog box.
Figure 5.3: Running the Domain Controller Connectivity test to troubleshoot the communication path among domain
controllers in the forest.
Destination
Shows the name of each destination domain controller you selected.
Test
Shows the type of test that was performed. The type of test varies according to the services that have
been assigned to the domain controller.
Time
Shows the amount of time (in milliseconds) it took to perform each test. If a test is performed in
less than 10 milliseconds, it’s displayed as < 10 ms; otherwise, the actual time is displayed.
Result
Shows whether a test was successful. If the test failed, this column displays a brief description of why.
First, select the source domain/domain controller from the Source list. Next, select the destination domain(s) that
the source domain/domain controller will communicate with during the test by clicking the check box to the left
of each domain in the Destination list. Then click Start Test. Figure 5.4 shows the results of running the Domain
Connectivity test.
Figure 5.4: Running the Domain Connectivity test to troubleshoot the communication between the domain controller in
the source domain and the domain controllers in the destination domain.
After the test is completed, the results are displayed at the bottom of the dialog box.
Destination
Shows the name of each destination domain/domain controller you selected.
Test
Shows the type of test that was performed. The type of test varies according to the services that have
been assigned to the domain controller.
Time
Shows the amount of time (in milliseconds) it took to perform each test. If a test is performed in less than
10 milliseconds, it’s displayed as < 10 ms; otherwise, the actual time is displayed.
Result
Shows whether a test was successful. If the test failed, this column displays a brief description of why.
First, select the source site/domain controller from the Source list. Next, select the destination site(s) that the source
domain/domain controller will communicate with during the test by clicking the check box to the left of each name
in the Destination list. Then click Start Test. Figure 5.5 shows the results of running the Site Connectivity test.
Figure 5.5: Running the Site Connectivity test to troubleshoot the communication between a site/domain controller and
the domain controllers in the destination site.
After the test is completed, the results are displayed at the bottom of the dialog box.
Destination
Shows the name of each destination site/domain controller you selected.
Test
Shows the type of test that was performed. The type of test varies according to the services that have been
assigned to the domain controller.
Time
Shows the amount of time (in milliseconds) it took to perform each test. If a test is performed in less than
10 milliseconds, it’s displayed as < 10 ms; otherwise, the actual time is displayed.
Result
Shows whether a test was successful. If the test failed, this column displays a brief description of why.
The DNS resource records (RRs) registered by the domain controllers in AD include multiple service (SRV)
records, address (A) records, and CNAME (canonical name) records, all of which identify the domain controllers’
location in a domain and forest. When the domain controller starts, the Netlogon service registers these records.
It also sends DNS dynamic updates for the SRV, A, and CNAME records every hour to ensure that the DNS
server always has the proper records.
When you use AD-integrated zones, the DNS server stores all of the records in the zone in AD. To run AD-integrated
zones, the DNS service must be running on the domain controller. It’s possible that a record is updated in AD but
hasn't yet replicated to all of the DNS servers loading the zone, which can cause temporary inconsistencies. By
default, all DNS servers that load zones from AD poll the directory at set intervals (every five minutes, although
you can change this) to update their copies of the zones.
Figure 5.6: Using Event Viewer to track DNS errors that occur on the selected domain controller.
If the domain controller is a DNS server, an additional log tracks all of the DNS basic events and errors for the
DNS service on the server. For example, the DNS Server log monitors and tracks the starts and stops for the
DNS server. It also logs critical events, such as when the server starts but cannot locate initializing data—for example,
zones or boot information stored in the Win2K Registry or (in some cases) AD. Figure 5.7 shows how you can
access the DNS Server log in Event Viewer.
Using PING
Another simple method for checking whether DNS records have been registered is to determine whether you can
look up the names and addresses of network resources using the PING utility. For example, you can check the
names using PING as follows:
PING COMPANY.COM
If this command works, the DNS server can be contacted using this basic network test.
Using NSLOOKUP
Next, you need to verify that the DNS server is able to listen to and respond to basic client requests. You can do
this using NSLOOKUP, a standard command-line utility provided in most DNS-service implementations, including
Win2K. NSLOOKUP allows you to perform query testing of DNS servers and provides detailed responses as its
output. This information is useful when you troubleshoot name-resolution problems, verify that RRs are added or
updated correctly in a zone, and debug other server-related problems.
To test whether the DNS server can respond to DNS clients, use NSLOOKUP as follows:
NSLOOKUP
Once the NSLOOKUP utility loads, you can perform a test at its command prompt to check whether the host
name appears in DNS. Listing 5.2 shows entering a host name and the output you can receive:
Server: ns1.company.com
Address: 250.45.87.13
Name: company.com
Address: 250.65.123.65
The output of this command means that DNS contains the A record and the server is responding with an answer:
250.65.123.65. Next, verify whether this address is the actual IP address for your computer.
You can also use NSLOOKUP to perform DNS queries and to examine the contents of zone files on local and remote
DNS servers. If the record for the requested server isn't found in DNS, you receive the following message:
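A typical not-found response from NSLOOKUP looks something like the following; the server and host names are illustrative:

```
*** ns1.company.com can't find badhost.company.com: Non-existent domain
```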
You can use DNSCMD for most tasks that you can perform from the DNS console, such as:
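For example, the following DNSCMD invocations (run against the local server by default) display the server configuration, list the zones hosted on the server, and display the settings for a single zone; treat the exact switch names as illustrative of the utility's style:

```
DNSCMD /Info
DNSCMD /EnumZones
DNSCMD /ZoneInfo company.com
```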
You can install DNSCMD by copying it from the \Support\Tools folder located on the Windows 2000 CD-ROM.
For help in using the command, enter the following at a command prompt:
DNSCMD /?
1. The resolver sends the query to the first server on the preferred adapter’s list of DNS servers and waits
one second for a response.
2. If the resolver doesn't receive a response from the first server within one second, it sends the query to the
first DNS server on all adapters that are still under consideration and waits two seconds for a response.
3. If the resolver doesn’t receive a response from any server within two seconds, it sends the query to all
DNS servers on all adapters that are still under consideration and waits another two seconds for a response.
4. If the resolver still doesn’t receive a response from any server, it sends the query to all DNS servers on all
adapters that are still under consideration and waits four seconds for a response.
5. If it still doesn’t receive a response from any server, the resolver sends the query to all DNS servers on all
adapters that are still under consideration and waits eight seconds for a response.
6. If the resolver receives a positive response, it stops querying for the name, adds the response to the cache,
and returns the response to the client. If it doesn’t receive a response from any server by the end of the
eight seconds, it responds with a time-out. Also, if it doesn’t receive a response from any server on a
specified adapter, it responds for the next 30 seconds to all queries destined for servers on that adapter
with a time-out and doesn’t query those servers.
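The escalating time-outs in the steps above can be modeled with a small table-driven sketch. This is a simplified illustration of the described behavior, not the actual Win2K resolver code:

```python
# Simplified model of the Win2K resolver retry schedule described above.
# Each round widens the set of servers queried and lengthens the wait.
ROUNDS = [
    ("first server on preferred adapter", 1),
    ("first server on every adapter", 2),
    ("all servers on every adapter", 2),
    ("all servers on every adapter", 4),
    ("all servers on every adapter", 8),
]

def resolve(answer_at_round=None):
    """Return (result, total seconds waited). answer_at_round is the
    1-based round in which a server finally responds, or None."""
    waited = 0
    for i, (targets, timeout) in enumerate(ROUNDS, start=1):
        waited += timeout              # wait out this round's time-out
        if answer_at_round == i:       # positive response: cache and return
            return ("answer cached and returned", waited)
    return ("time-out", waited)        # no server ever answered

print(resolve())                   # ('time-out', 17)
print(resolve(answer_at_round=2))  # ('answer cached and returned', 3)
```

Note that the cumulative worst case in this model is 1 + 2 + 2 + 4 + 8 = 17 seconds before the resolver gives up.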
The resolver also keeps track of which servers answer queries more quickly, and it might move servers up or down
on the search list based on how quickly they respond.
When working with Windows 2000 Server and DNS entry changes, you may notice that the DNS server returns stale
RRs. If there has been previous lookup or name-resolution activity, the server doesn't see changes to those RRs,
because it caches DNS information from previous lookups so that subsequent lookups are fast. The typical method
of fixing this problem is to restart the server.
You can also fix this problem using the IPCONFIG command. Entering the following command allows you to
view the current list of DNS entries that the server has cached:
IPCONFIG /displayDNS
Entering IPCONFIG /registerDNS refreshes all DHCP leases and re-registers the computer's DNS names, and
IPCONFIG /flushDNS purges the local resolver cache. (Wait five minutes for the DNS entries in the cache to be
reset and updated with the RRs in the server's database.)
IPCONFIG /registerDNS
IPCONFIG /flushDNS
It’s worth noting that the DNS server should eventually refresh the cache because each entry has a Time-To-Live
(TTL) associated with it. TTL indicates a length of time used by other DNS servers to determine how long to
cache information for a record before discarding it. For example, most RRs created by the DNS Server service
inherit the minimum (default) TTL of 1 hour from the start-of-authority (SOA) RR; this prevents overly long
caching by other DNS servers. The TTL is automatically decremented; when it reaches zero, the record expires
and is flushed from the cache.
For an individual RR, you can specify a record-specific TTL that overrides the minimum (default) TTL inherited
from the SOA RR. You can also use TTL values of zero (0) for RRs that contain volatile data not to be cached for
later use after the current DNS query is completed.
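TTL-driven caching and expiry can be sketched generically; this illustration is not the DNS Server service's actual implementation:

```python
import time

class TtlCache:
    """Minimal TTL cache: entries expire ttl seconds after insertion."""
    def __init__(self):
        self._store = {}

    def put(self, name, value, ttl, now=None):
        now = time.time() if now is None else now
        if ttl == 0:
            return  # TTL 0: volatile data is never cached for later use
        self._store[name] = (value, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expires = entry
        if now >= expires:
            del self._store[name]  # TTL expired: flush the record
            return None
        return value

cache = TtlCache()
cache.put("company.com", "250.65.123.65", ttl=3600, now=0)
print(cache.get("company.com", now=1800))  # still cached
print(cache.get("company.com", now=4000))  # expired and flushed: None
```

The `now` parameter makes the behavior easy to follow without real waiting; a caching resolver does the same arithmetic against the wall clock.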
Another problem that may occur is that the DNS server doesn’t resolve names for computers or services outside
your immediate network. For example, the DNS server may not resolve names for computers located on an external
network or the Internet. If a DNS server fails to resolve a name for which it’s not authoritative, the cause is usually
a failed recursive query. Recursion is used in most DNS configurations to resolve names that aren’t located in the
configured DNS domain.
For recursion to work correctly, all DNS servers used in the path of the recursive query must be able to respond to
and forward correct data. If the DNS server fails a recursive query, you need to review the server’s configuration.
By default, all Win2K DNS servers have recursion enabled. You can disable recursion using the DNS console to modify
advanced server options. In addition, recursion might be disabled if the DNS server is configured to use forwarders.
The AD database is implemented on an indexed sequential access method (ISAM) table manager that has been
referred to as "Jet." The table manager is called the Extensible Storage Engine (ESE). The ESE database is managed
on each domain controller by the ESE.DLL file. The database is a discrete transaction system that uses log files to
ensure integrity; it supports rollback to ensure that transactions are committed to the database in full or not at all.
The following files are associated with AD:
NTDS.DIT
The main database file; it grows as the database fills with objects and attributes. On the other hand, the log
files have a fixed size of 10 megabytes (MB). Any changes made to the database are also made to the current
log file and to the DIT file in the cache. Eventually the cache is flushed. If a computer failure occurs before
the cache is flushed, ESE uses the log file to complete the update to the DIT file.
EDB.CHK
Stores the database checkpoint, which identifies the point where the database engine needs to
replay the logs. This file is typically used during recovery and initialization.
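The recovery behavior described above for NTDS.DIT, the log files, and EDB.CHK can be sketched generically. This is an illustration of write-ahead logging under simplified assumptions, not ESE's actual on-disk format:

```python
# Generic write-ahead-log recovery sketch: every change is appended to the
# log before the database file is updated; after a crash, entries recorded
# past the checkpoint are replayed into the database.
log = []          # durable log records: (sequence, key, value)
database = {}     # stands in for the DIT file
checkpoint = 0    # highest sequence number already flushed to the database

def update(key, value):
    log.append((len(log) + 1, key, value))  # log first (write-ahead)
    # The in-cache database page would be updated here and flushed lazily;
    # a crash before the flush loses only the cache, never the log.

def recover():
    global checkpoint
    for seq, key, value in log:
        if seq > checkpoint:        # replay only un-flushed changes
            database[key] = value
            checkpoint = seq

update("cn=jsmith", "v1")
update("cn=jsmith", "v2")
# Simulate a crash before the cache reached the DIT file, then recover:
recover()
print(database["cn=jsmith"])  # v2
```

The checkpoint plays the role of EDB.CHK: it tells recovery where in the log to begin replaying.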
To manage the database, Win2K provides a garbage-collection process designed to free space in the AD database.
This process runs on every domain controller in the enterprise with a default lifetime interval of 12 hours. The
garbage-collection process first removes "tombstones" from the database. Tombstones are remains of objects that
have been deleted. (When an object is deleted, it’s not actually removed from the AD database. Instead, it’s marked
for deletion at a later date. This information is then replicated to other domain controllers. When the time expires
for the object, the object is deleted.) Next, the garbage-collection process deletes any unnecessary log files. Finally, it
launches a defragmentation thread to claim additional free space.
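The delete-then-garbage-collect behavior can be sketched as follows. This is a simplified model using a 60-day tombstone lifetime (the Win2K default); the 12-hour collection interval isn't modeled:

```python
# Simplified sketch of AD's delete-then-garbage-collect behavior.
TOMBSTONE_LIFETIME = 60  # days (Win2K default)

class Directory:
    def __init__(self):
        self.objects = {}     # name -> live object data
        self.tombstones = {}  # name -> day the object was deleted

    def delete(self, name, today):
        # A delete only marks the object; the tombstone still replicates
        # so other domain controllers learn about the deletion.
        if name in self.objects:
            del self.objects[name]
            self.tombstones[name] = today

    def garbage_collect(self, today):
        # Physically remove tombstones whose lifetime has expired.
        expired = [n for n, d in self.tombstones.items()
                   if today - d > TOMBSTONE_LIFETIME]
        for n in expired:
            del self.tombstones[n]
        return expired

d = Directory()
d.objects["cn=jsmith"] = {}
d.delete("cn=jsmith", today=0)
print(d.garbage_collect(today=30))  # too soon: []
print(d.garbage_collect(today=61))  # ['cn=jsmith']
```

Keeping the tombstone around until every replica has seen the deletion is what prevents a deleted object from being resurrected by replication.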
Above the directory database is a database layer that provides an object view of the database information by applying
the schema to the database records. The database layer isolates the upper logical layers of the directory from the
underlying database system. All access to the database occurs through this layer instead of allowing direct access to
the database files. The database layer is responsible for creating, retrieving, and deleting the individual database
records or objects and associated attributes and values.
In addition to the database layer, AD provides a directory service agent (DSA), an internal process in Win2K that
manages the interaction with the database layer for the directory. AD provides access using the following protocols:
• Lightweight Directory Access Protocol (LDAP) clients connect to the DSA using the LDAP protocol
• Messaging Application Programming Interface (MAPI) clients connect to the directory through the DSA
using the MAPI remote procedure call (RPC) interface
• Windows clients that use Windows NT 4.0 or earlier connect to the DSA using the Security Account
Manager (SAM) interface
• AD domain controllers connect to each other during replication using the DSA and a proprietary RPC
implementation.
In this example, you can determine whether both domain controllers agree on the contents of the
OU=SALES,DC=COMPANY,DC=COM subtree. DSASTAT detects objects in one domain and not the other
(for example, if a creation or deletion hasn’t replicated) as well as differences in the values of objects that exist in
both. This example specifies a base search path at a subtree of the domain. In this case, the OU name is SALES.
The filter specifies that the comparison is concerned only with user objects, not computer objects. Because computer
objects are derived from user objects in the class hierarchy, a search filter specifying OBJECTCLASS = USER
returns both user and computer objects.
DSASTAT also allows you to specify the target domain controllers and additional operational parameters using the
command line or an initialization file. DSASTAT determines whether domain controllers in a domain have a
consistent and accurate image of their own domain.
DSASTAT also compares the attributes of replicated objects. You can use it to compare two directory trees across
replicas in the same domain or, in the case of a Global Catalog (GC), across different domains. You can also use
it to monitor replication status at a much higher level than monitoring detailed transactions. In the case of GCs,
DSASTAT checks whether the GC server has an image that is consistent with the domain controllers in other
domains. DSASTAT complements the other replication-monitoring tools, REPADMIN and REPLMON, by
ensuring that domain controllers are up to date with one another.
DCDIAG is intended to perform a fully automatic analysis with little user intervention, so you usually don't need
to provide many parameters on the command line. DCDIAG works only against domain controllers; it doesn't work
when run against a Win2K workstation or member server.
DCDIAG allows you to run the following tests to diagnose the status of a domain controller:
Connectivity Test
Verifies that DNS names for the domain controller are registered. It also verifies that the domain controller
can be reached using TCP/IP and the domain controller’s IP address. DCDIAG checks the connectivity to
the domain controller using LDAP and checks that communications can occur using an RPC.
Replication Test
Checks the replication consistency for each of the target domain controllers. For example, this test checks
whether replication is disabled and whether replication is taking too long. If so, the utility reports these
replication errors and generates errors when there are problems with incoming replica links.
For more information on the Windows 2000 Resource Kit’s NETDOM utility, refer to the Resource Kit
documentation or The Definitive Guide to Windows 2000 and Exchange 2000 Migration
(Realtimepublishers), a link to which can be found at http://www.realtimepublishers.com.
Using NTDSUTIL
The Directory Services Management utility (NTDSUTIL.EXE) is a command-line utility included in Win2K that
you can use to troubleshoot and repair AD. Although Microsoft designed NTDSUTIL to be used interactively via
a command-prompt session (launched simply by typing NTDSUTIL at any command prompt), you can also run
it using scripting and automation. NTDSUTIL allows you to troubleshoot and maintain various internal components
of AD. For example, you can manage the directory store or database and clean up orphaned data objects that were
improperly removed.
You can also maintain the directory service database, prepare for new domain creations, manage the control of the
Flexible Single Master Operations (FSMOs), purge metadata left behind by abandoned domain controllers (those
removed from the forest without being uninstalled), and clean up objects and attributes of decommissioned or
demoted servers. At each NTDSUTIL menu, you can type help for more information about the available options.
(See Figure 5.8.)
• Reports the free space for all disks installed on the domain controller
• Reads the Registry keys and associated location of the AD database files
• Reports the size of each of the database files, log files, and other associated files.
Before you perform this check, you must either run NTDSUTIL after booting the domain controller with the
Directory Services Restore Mode safe-boot option or, under a normal boot of Windows 2000, set the environment
variable SAFEBOOT_OPTION to DSREPAIR (via the command SET SAFEBOOT_OPTION=DSREPAIR).
To execute the Info command:
1. At a command prompt on the domain controller, enter SET SAFEBOOT_OPTION=DSREPAIR (unless you've booted into Directory Services Restore mode).
2. Enter NTDSUTIL to start the utility.
3. At the ntdsutil: prompt, enter the word files. The utility responds by displaying a 'file maintenance' prompt.
The following commands have been entered and displayed to this point:
C:\>SET SAFEBOOT_OPTION=DSREPAIR
C:\>NTDSUTIL
ntdsutil: files
file maintenance:
4. At the File Maintenance prompt, enter the word info to display the location and sizes of AD database
files, log files, and other associated files.
Figure 5.9: Using the Info command in NTDSUTIL to display the location and size of AD database files.
Using NTDSUTIL, you can relocate or move AD database files from one location to another on the disk or
move the database files from one disk drive to another in the same domain controller. You can also move
just the log files from one disk to another to free up space for the data files. (See "Moving the Active
Directory Database or Log Files" later in this chapter.)
Before you perform a low-level database-integrity check, you need to start the domain controller in Directory
Service Restore mode. To do this:
1. Restart the domain controller. When you're prompted, press F8 to display the Windows 2000
Advanced Options menu.
2. Select Directory Services Restore Mode from the menu, then log on using the Administrator account
and password that you assigned during the DCPROMO process.
3. At the ntdsutil: prompt, enter the word files. The utility responds by showing you the File Maintenance
category.
The commands to this point appear in the Command Prompt window as follows:
I:>NTDSUTIL
ntdsutil: files
file maintenance:
4. At the File Maintenance prompt, enter the word integrity to start the low-level database check on the
domain controller. (The Integrity command reads every byte of the directory data file and displays the
percentage of completion as a graph. Depending on the size of your database and the type of hardware
you’re using for the domain controller, this process can take a considerable amount of time.)
Figure 5.10 shows the results of examining the low-level database structures in AD.
Figure 5.10: Using the Integrity option in NTDSUTIL to examine the AD database on a domain controller.
When you run the Semantic Checker, it performs the following checks:
Ancestor Check
Ensures that each object’s ancestor list matches its parent’s ancestor list plus the object’s own DN tag. This
could also be stated as a check that the distinguished name of the object minus its RDN is equal to its parent’s DN.
Replication Check
Verifies that there is an up-to-dateness vector in the directory partition and checks to see that every object
has metadata.
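The Ancestor Check invariant described above can be sketched in a few lines of Python. This is purely illustrative: Active Directory stores ancestry internally as lists of DN tags rather than strings, the naive comma split ignores escaped commas, and the function names here are made up for the example.

```python
def parent_dn(dn: str) -> str:
    """Return the DN minus its leading RDN (naive split on the first comma)."""
    return dn.split(",", 1)[1]

def ancestor_check(obj_dn: str, parent_obj_dn: str) -> bool:
    # The object's DN minus its RDN must equal its parent's DN.
    return parent_dn(obj_dn) == parent_obj_dn

dn = "CN=NTDS Settings,CN=DC1,CN=Servers,CN=Site1,CN=Sites,CN=Configuration,DC=company,DC=com"
parent = "CN=DC1,CN=Servers,CN=Site1,CN=Sites,CN=Configuration,DC=company,DC=com"
print(ancestor_check(dn, parent))  # True
```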
Like the Integrity option described above, the Semantic Checker option can be run only when the domain
controller is in Directory Service Restore mode. To run in this mode:
1. Restart the domain controller. When you’re prompted, press F8 to display the Windows 2000
Advanced Option menu.
2. Select Directory Services Restore Mode, then press Enter.
3. Log on using the administrator account and password that you assigned during the DCPROMO process.
4. Open a Command Prompt window, start NTDSUTIL, and at the ntdsutil: prompt, type semantic database analysis, then press Enter.
5. At the semantic checker: prompt, you can type verbose on to enable verbose output.
6. To start the Semantic Checker without having it repair any errors, type go. To start it and have it repair
any errors that it encounters in the database, enter go fixup.
The commands to this point appear in the Command Prompt window as follows:
I:\>NTDSUTIL
ntdsutil: semantic database analysis
semantic checker: verbose on
Verbose mode enabled.
semantic checker: go
Figure 5.11 shows the results of using the NTDSUTIL Semantic Checker.
Figure 5.11: Using the NTDSUTIL Semantic Checker option to check the consistency of the contents of the directory database.
CN=NTDS Settings,CN=<server_name>,CN=Servers,CN=<site_name>,CN=Sites,CN=Configuration,DC=<domain>...
Before you manually remove the NTDS Settings object for any server, check that replication has occurred
after the domain controller has been demoted. Using the NTDSUTIL utility improperly can result in partial
or complete loss of AD functionality. (For a description of how to check whether replication has occurred,
see Chapter 4.)
1. Click Start, click Run, type cmd in the Open box, and then press Enter.
2. At the command prompt, type ntdsutil, then press Enter.
3. At the ntdsutil: prompt, type metadata cleanup, then press Enter. Based on the options returned to the
screen, you can use additional configuration parameters to ensure that the removal occurs correctly.
4. Before you clean up the metadata, you must select the server on which you want to make the changes.
To connect to a target server, type connections, then press Enter.
5. If the user who is currently logged on to the computer running NTDSUTIL doesn’t have administrative
permissions on the target server, alternate credentials need to be supplied before making the connection.
To supply alternate credentials, type set creds <domain_name> <user_name> <password>, then press Enter.
6. Type connect to server <server_name>, then press Enter. You should receive confirmation that the
connection has been successfully established. If an error occurs, verify that the domain controller you
specified is available and that the credentials you supplied have administrative permissions on the server.
7. When a connection has been established and you’ve provided the right credentials, type quit, then press
Enter, to exit the Connections menu in NTDSUTIL.
8. When the Metadata Cleanup menu is displayed, type select operation target and press Enter.
9. Type list domains, then press Enter. A list of domains in the forest is displayed, each with an associated
number. To select the appropriate domain, type select domain <number> and press Enter (where <number> is
the number associated with the domain of which the domain controller you’re removing is a member).
The domain you select determines whether the server being removed is the last domain controller of
that domain.
10. Type list sites, then press Enter. A list of sites, each with an associated number, is displayed.
11. Type select site <number> and press Enter (where <number> is the number associated with the site of
which the domain controller you’re removing is a member).
12. Type list servers in site and press Enter. A list of servers in the site, each with an associated number, is
displayed.
13. Type select server <number> and press Enter (where <number> is the number associated with the server
you want to remove). You receive a confirmation, listing the selected server, its DNS host name, and
the location of the server’s computer account that you want to remove.
14. After you’ve selected the proper domain and server, type quit to exit the current NTDSUTIL submenu.
When the Metadata Cleanup menu is displayed, type remove selected server and press Enter.
You should receive confirmation that the server was removed successfully. If the NTDS Settings object
has already been removed, you may receive an error message indicating that the object cannot be found.
15. Type quit at each menu to quit the NTDSUTIL utility. You should receive confirmation that the
connection disconnected successfully.
• The following error message may occur when you start AD on a domain controller:
Directory Services could not start because of the following error: There is
not enough space on the disk. Error Status: 0xc000007f. Please click OK to
shutdown this system and reboot into Directory Service Restore Mode, check the
event logs for more detailed information.
When this error occurs, the following events are recorded in the Event Log for the directory service
on the domain controller and can be viewed using Event Viewer:
Event ID 2013:
The D: disk is nearing Capacity. You may need to delete some files.
If the disk drive runs out of disk space, AD won’t start up. Win2K attempts to avoid this situation, but it can occur
if you ignore warnings about low disk space in the System Log or if you run large scripts against AD for mass
directory imports.
To resolve the problem of having no disk space, you can either make space available on the same disk drive or
move AD to a separate drive. The first method requires you to simply reduce the number of files or folders on the
same disk drive as the directory database.
If you want to move the AD database to another drive on the domain controller, you can use the NTDSUTIL utility
to move either the database file or the database log files. This method is ideal when you cannot move data to
another drive to free up space. If all drives are at capacity, you may need to install an additional hard disk in the
domain controller.
Before you move the directory database file or log files, you need to start the domain controller in Directory
Service Restore mode. To do this:
1. Restart the domain controller. When you’re prompted, press F8 to display the Windows 2000 Advanced
Option menu.
2. Select Directory Services Restore Mode, then press Enter.
3. Log on using the administrator account and password that you assigned during the DCPROMO process.
Then, to move the files:
1. Locate the drive containing the directory and log files. The directory database (NTDS.DIT) and log files
are located in the NTDS folder on the root drive by default. (However, the administrator may have
changed their locations during the DCPROMO process.)
2. Open a Command Prompt window and start NTDSUTIL.
3. At the ntdsutil: prompt, enter the word files. The utility displays the File Maintenance category.
The commands to this point should appear as follows:
I:\>NTDSUTIL
ntdsutil: files
file maintenance:
4. At the File Maintenance prompt, enter the word info to display the location of the AD database files, log
files, and other associated files. Note the location of the database and log files.
5. To move the log files to a target disk drive, type move logs to %s at the File Maintenance prompt; to
move the database file, type move DB to %s. (The target directory where you move the database file or
log files is specified by the %s parameter. The Move command moves the files and updates the Registry
keys on the domain controller so that AD restarts using the new location.)
6. To quit NTDSUTIL, type quit twice to return to the Win2K command prompt, then restart the
domain controller normally.
I highly recommend that you completely back up AD on the domain controller before you execute the Move
command. In addition, I recommend that you back up AD after you move the directory database file and
log files; restoring the directory database will then retain the new file location.
3. At the ntdsutil: prompt, enter the word files. The utility displays the File Maintenance category.
4. At the File Maintenance prompt, enter the word repair to start the repair operation. The commands to
this point should appear as follows:
I:\>NTDSUTIL
ntdsutil: files
file maintenance: repair
5. As soon as the repair operation has completed, run the NTDSUTIL Semantic Checker on the database.
(For instructions, see the Semantic Checker discussion earlier in this chapter.)
Figure 5.12: Using NTDSUTIL as a last resort to repair the directory database files.
Although Win2K and the Windows 2000 Resource Kit provide some basic tools for performing troubleshooting
tasks, they aren’t especially easy to use. NetIQ Corporation provides an excellent—and free—utility for
performing a host of AD diagnostic and troubleshooting tasks. ADcheck provides five essential categories
of Win2K diagnostics:
4. Test Replication
Checks domain replication topology and displays diagnostic information about replication partners
ADcheck is also capable of generating some very detailed reports, each of which shows potential causes
for problems as well as the problems themselves. For some reports, it also compares the current configuration
to an internal best practices guideline and may recommend changes. Given that it’s completely free, this
tool is something that no Win2K network administrator should be without. You can download ADcheck
from http://www.netiq.com/adcheck/download.asp.
There are many reasons why a domain controller cannot communicate on a secure channel. For example, the user
or domain controller may not have the appropriate access permissions or trust relationships. You can test the status
of secure channels and trust-relationship links using the Windows 2000 Resource Kit’s NLTEST command-line utility.
The mechanism for establishing a secure channel is very similar to the normal user-logon process. That is, the trusting
domain controllers send out logon requests to all known domain controllers in the trusted domain. The trusting
domain controllers then set up a secure channel with the first trusted domain controller that responds to this request.
Normally, this method is preferred because the first domain controller to respond to a logon request is typically the
controller that is located across the fastest communication link. However, if that link is down or the "fast" domain
controller is unavailable, a domain controller over a slower link may respond first, and all pass-through authentications
occur over the slow link.
There is a built-in mechanism in Windows 2000 that tracks how long authentication takes over the existing secure
channel. If pass-through authentication takes longer than 45 seconds, that fact is noted. If two such authentications
exceed that limit, a rediscovery process begins, the current secure channel is broken, and the trusting domain’s PDC
once again sends out logon requests to all known trusted domain controllers. However, because this mechanism
tracks only those communications that take longer than 45 seconds, users may see, for example, a 40-second delay
every time they attempt to use a resource without a secure-channel reset ever taking place.
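The threshold behavior described above can be modeled with a short sketch. The 45-second limit and the two-strike trigger come from the text; the class, counter names, and the idea of feeding in individual timings are illustrative assumptions, not Win2K internals.

```python
THRESHOLD_SECONDS = 45  # pass-through authentications slower than this are noted

class SecureChannelMonitor:
    """Illustrative model: two over-threshold authentications trigger rediscovery."""
    def __init__(self):
        self.slow_auths = 0
        self.rediscovery_triggered = False

    def record_auth(self, seconds: float) -> None:
        if seconds > THRESHOLD_SECONDS:
            self.slow_auths += 1
            if self.slow_auths >= 2:
                # Break the current secure channel and rediscover a trusted DC.
                self.rediscovery_triggered = True

mon = SecureChannelMonitor()
for duration in (40, 50, 40, 52):   # 40-second delays are never counted
    mon.record_auth(duration)
print(mon.rediscovery_triggered)    # True: two authentications exceeded 45 seconds
```

Note that repeated 40-second delays never increment the counter, which is exactly why users can see chronic slowness without a reset.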
You can run the NLTEST utility on the trusting domain controller to break and re-initialize a secure channel and
to obtain information about an existing trust relationship (for example, when the secure-channel password was
last changed). You can also use NLTEST to restart the discovery process for a new trusted domain controller. The
syntax of NLTEST is:
NLTEST /sc_query:<account_domain>
where <account_domain> is the name of the trusted domain. This command returns the name of the trusted
domain controller with which the trusting domain controller has a secure channel. If that domain controller is
unacceptable, use the following syntax:
NLTEST /sc_reset:<account_domain>
Because the operations masters are assigned to specific domain controllers in the forest and domains and are critical
to the operation of AD, your first step to troubleshoot each operations master is to use the domain-controller
troubleshooting techniques described in "Troubleshooting the Domain Controllers" earlier in this chapter. Once
you’re assured that the domain controller itself is operating properly, you can turn your attention to the operations masters.
Schema Master
If the domain controller holding the forest-wide schema master role fails, you or your directory administrators won’t
be able to modify or extend the AD schema. Schema modifications typically occur when you install directory-enabled
applications such as management utilities that rely on the directory for information. These applications try to modify
or extend the current schema with new object classes, objects, and attributes. If the applications being installed cannot
communicate with the domain controller that has been designated as the schema master, installation will fail.
The schema master solely controls the management of the directory schema and propagates updates to the schema
to the other domain controllers as modifications occur. Because only directory administrators are allowed to make
changes, the schema operations master isn’t visible to directory users and doesn’t affect them.
Domain Naming Master
The domain naming master is the only domain controller that controls the creation and deletion of domains, and
it propagates the changes to the other domain controllers as necessary. Because only directory administrators are
allowed to make structural domain changes to the forest, the domain naming operations master isn’t visible to
directory users and doesn’t affect them.
If a domain controller has remaining (unassigned) RIDs in its allocated block, the RID master role doesn’t
need to be available when new object accounts are created.
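The note above can be illustrated with a toy model of RID-block allocation. The block sizes, class, and method names here are made up for the example; the point is only that account creation keeps working from the locally cached block, and the RID master matters only when that block runs out.

```python
class DomainController:
    """Toy model: a DC creates accounts from its local RID block and only
    contacts the RID master when the block is exhausted."""
    def __init__(self, rid_master_available: bool, block: range):
        self.rid_master_available = rid_master_available
        self.free_rids = list(block)

    def create_account(self) -> int:
        if not self.free_rids:                        # local block exhausted
            if not self.rid_master_available:
                raise RuntimeError("RID master unavailable: no new RID block")
            self.free_rids = list(range(1000, 1500))  # illustrative new block
        return self.free_rids.pop(0)

# Even with the RID master down, creation succeeds while local RIDs remain.
dc = DomainController(rid_master_available=False, block=range(500, 503))
print([dc.create_account() for _ in range(3)])  # [500, 501, 502]
```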
Infrastructure Master
If the domain controller that stores the infrastructure master role fails, a portion of AD won’t function properly. The
infrastructure master role controls and manages the updates to all cross-domain references, such as group references
and security identifier (SID) entries in access control lists (ACLs). For example, when you add, delete, or rename a
user who is a member of a group, the infrastructure master controls the reference updates. There is always only one
infrastructure master role in each domain in a forest.
Because only one domain controller is assigned to perform this role, it’s important that it doesn’t fail. If it does
fail, however, the failure isn’t visible to network users; it’s visible only to directory administrators, and then only
when they’ve recently moved or renamed a large number of object accounts. In addition, having only a single
domain controller assigned to this role makes it a potential single point of failure.
If you force a transfer of the infrastructure master role from its original domain controller to another domain
controller in the same domain, you can transfer the role back to the original domain controller after you’ve
returned it to production.
It is strongly recommended that you not put the infrastructure master role on any domain controller that is
also acting as a global catalog server. For more information about FSMO placement rules and best practices,
see Microsoft Product Support Services article Q223346, at http://support.microsoft.com.
PDC Emulator
If the PDC emulator fails or no longer communicates, users who depend on its service are affected. These are
down-level users from Windows 95, Windows 98, and Windows NT 4.0 (or earlier). The PDC emulator is
responsible for changes to the SAM database, changing passwords, account lockout for down-level workstations,
and communications with the down-level backup domain controllers (BDCs).
If you force a transfer of the PDC emulator role from its original domain controller to another domain
controller in the same domain, you can transfer the role back to the original domain controller after
you’ve returned it to production.
1. Click Start, Run, type dsa.msc, and press Enter or click OK.
2. Right-click the selected Domain Object in the top left pane, and then click Operations Masters.
3. Click the PDC tab to view the server holding the PDC master role.
4. Click the Infrastructure tab to view the server holding the Infrastructure master role.
5. Click the RID tab to view the server holding the RID master role.
Determining the forest Schema Master role holder is a bit trickier, and involves the following:
1. Click Start, click Run, type mmc, and then click OK.
2. On the Console menu, click Add/Remove Snap-in, click Add, double-click Active Directory Schema,
click Close, and then click OK.
3. Right-click Active Directory Schema in the top left pane, and then click Operations Masters to view the
server holding the schema master role.
For the Active Directory Schema snap-in to be listed as an available choice in Step 2, you must first register
the Schmmgmt.dll file. If it doesn’t appear as an option, follow these steps to register it: click Start, click
Run, type regsvr32 schmmgmt.dll in the Open box, and click OK. A message will be displayed confirming
that the registration was successful.
Determining the forest’s Domain Naming Master role holder requires the following steps:
1. Click Start, click Run, type mmc, and then click OK.
2. On the Console menu, click Add/Remove Snap-in, click Add, double-click Active Directory Domains
and Trusts, click Close, and then click OK.
3. Right-click Active Directory Domains and Trusts, and click Operations Master to view the server holding
the domain naming master role in the forest.
Although the above methods certainly work, they aren’t necessarily the easiest. The following sections describe
some additional methods for determining FSMO role holders on your network.
1. Click Start, click Run, type cmd in the Open box, and then press Enter.
5. Type connect to server <server_name>, where <server_name> is the name of the Win2K domain
controller you want to view, and then press Enter.
8. Type list roles for connected server, and then press Enter.
Dumpfsmos <server_name>
Using DCDIAG
Another method involves use of the DCDIAG command. On a Windows 2000 domain controller, run the
following command: dcdiag /test:knowsofroleholders /v. Note that the /v switch is required here. This
operation lists the owners of all FSMO roles in the enterprise that are known to that domain controller.
4. The domain controllers that hold the operations master roles are displayed under the "Owner" column.
5. To test the connectivity to each of the operations master role holders, click Query to the right of each role.
An example of viewing the schema master using DirectoryAnalyzer is shown in Figure 5.13.
Figure 5.13: Using a third-party utility to determine which domain controller in your forest holds a particular FSMO role.
Once you’ve determined where the current operations master role holders in a domain or forest are (using the
information in the previous section), you can use the NTDSUTIL program to transfer or seize an operations
master role from one domain controller to another in the same forest or domain. To seize an operations
master role on a selected domain controller, follow these steps:
1. On the target domain controller (the domain controller that will be taking over the forest- or domain-wide
operation master role), choose Start>Run. In the Open dialog box, type NTDSUTIL, then click OK.
2. If you’re not running NTDSUTIL on the target domain controller, you need to select and connect to it.
At the ntdsutil: prompt, type connections, then press Enter. Type connect to server <server_name>,
where <server_name> is the name of the server you want to use, then press Enter. To supply alternate
credentials, type set creds <domain_name> <user_name> <password> and press Enter. At the Server
Connections prompt, type quit, then press Enter.
3. At the ntdsutil: prompt, type roles, then press Enter. The utility displays the FSMO Maintenance prompt.
4. To seize the role on the currently connected domain controller, enter seize <role_type>, where
<role_type> is one of the following: schema master, domain naming master, rid master, infrastructure
master, or pdc. (For a list of roles that you can seize, enter ? at the FSMO Maintenance prompt or see
the list of roles at the beginning of this section.)
5. After you seize the roles, type quit, then press Enter, to return to the previous menu in the NTDSUTIL
interface. Repeat this step until you’ve exited the utility.
6. Reboot the domain controller that seized the operations master role to complete the role change operation.
If the current operations master role holder domain controller is online and accessible, or can be
repaired and brought back online, it’s recommended that you transfer the role using NTDSUTIL’s
transfer command rather than the seize command. For more information on seizing and transferring
flexible single master operation roles, see Microsoft Product Support Services articles Q255504 and
Q223787, at http://support.microsoft.com.
Figure 5.14: Checking for consistency of the operations masters on domain controllers.
Using DirectoryAnalyzer
When you use DirectoryAnalyzer to see replication partners, you’re viewing the replication topology for the selected
domain controller in a forest, and you can check replication consistency among replication partners. In addition,
DirectoryAnalyzer constantly checks the replication topology to ensure that it’s transitively closed. If it isn’t,
DirectoryAnalyzer generates an alert.
Figure 5.15 shows the Replication Information tab in the Browse Directory By Site view. This tab allows you to
view the replication topology and the last successful replication cycle for each replication partner. It also shows the
replication partners and any errors that occurred during replication.
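The "transitively closed" check mentioned above amounts to a reachability test: every domain controller must be able to reach every other through some chain of replication connections. A minimal sketch, assuming a made-up representation of the topology as a dict mapping each DC to the partners it replicates to:

```python
from collections import deque

def transitively_closed(partners: dict[str, set[str]]) -> bool:
    """True if every DC can reach every other DC via replication connections."""
    dcs = list(partners)
    for start in dcs:
        seen, queue = {start}, deque([start])
        while queue:                       # breadth-first search from each DC
            for nxt in partners[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if len(seen) != len(dcs):
            return False
    return True

topology = {"DC1": {"DC2"}, "DC2": {"DC3"}, "DC3": {"DC1"}}
print(transitively_closed(topology))  # True: a ring reaches every DC
```

If any DC cannot reach the rest (for example, after a connection object is deleted), the check fails, which is the condition that would raise an alert.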
Using REPADMIN
You can also use the Replication Administration utility (REPADMIN) to monitor the current links to other replication
partners for a specific domain controller, including the domain controllers that are replicating to and from the
selected domain controller. Viewing these links shows you the replication topology as it exists for the current domain
controller. By viewing the replication topology, you can check replication consistency among replication partners,
monitor replication status, and display replication metadata.
To use REPADMIN to view the replication partners for a domain controller, enter the command:
REPADMIN /SHOWREPS
To force the Knowledge Consistency Checker (KCC) to recalculate the replication topology for a domain
controller, enter the command:
REPADMIN /KCC
REPADMIN can also display the replication metadata for a specific directory object. In such a command,
<domain_controller> is the host name of the target domain controller for which you’re tracking replicated
changes; the example here tracks changes for CJOHNSON in the ENGINEERING OU in the COMPANY.COM
domain. The output from this command shows the Update Sequence Number (USN), originating DSA, date
and time, version number, and replicated attribute.
In addition to tracking replicated changes, many third-party utilities also constantly evaluate replication
latency across all domain controllers. If the latency exceeds the specified threshold, the
utility will generate an administrative alert and/or generate a log entry reporting the condition.
To force replication among replication partners, you can use REPADMIN to issue a command to synchronize the
source domain controller with the destination domain controller by using the object GUID of the source domain
controller. To accomplish the task of forcing replication, you need to find the GUID of the source server. Enter the
following command to determine the GUID of the source domain controller (where
<destination_domain_controller> is the host name of the destination domain controller):
REPADMIN /SHOWREPS <destination_domain_controller>
You can find the GUID for the source domain controller under the Inbound Neighbors section of the output.
First, find the directory partition that needs synchronization and locate the source server with which the destination is
to be synchronized. Then note the GUID value of the source domain controller. Once you know the GUID, you
can initiate or force replication by entering the following command:
REPADMIN /SYNC <naming_context> <destination_domain_controller> <source_server_GUID>
If the command is successful, the REPADMIN utility displays a message reporting that the synchronization
completed. Optionally, you can use additional switches at the command prompt, such as /force, which
overrides the normal replication schedule.
You’ll typically force replication only when you know that the destination domain controller has been down or
offline for a long time. It also makes sense to force replication to a destination domain controller if network
connections haven’t been working for a while.
Using REPLMON
REPLMON provides a view only from the domain controller perspective. Like REPADMIN, you can install it
from the \Support\Tools folder on the Windows 2000 CD-ROM. REPLMON has two options that you’ll find
helpful when monitoring AD:
One domain controller in a specific site owns the role of creating inbound replication connection objects among
bridgehead servers from other sites. This domain controller is known as the Inter-Site Topology Generator. While
analyzing the site link and site link bridge structure to determine the most cost-effective route to synchronize a
naming context between two points, it might determine that a site doesn’t have membership in any site link and
therefore has no means of creating a replication object to a bridgehead server in that site.
The first site in AD (named Default-First-Site-Name) is created automatically. It’s a member of the default site link
(DEFAULTIPSITELINK) that is also created automatically and used for RPC communication over TCP/IP among
sites. If you create two additional sites—for instance, Site1 and Site2—you need to define a site link that each site
is going to be a member of before these sites can be written to AD. However, you can also edit the properties of a
site link and modify which sites reside in it. If you remove a site from all site links, the KCC displays the error
message listed above to indicate that a correction needs to be made to the configuration.
When the KCC generates this error message, it’s in a mode where it doesn’t remove any connections.
Normally, the KCC cleans up old connections from previous configurations or redundant connections.
Thus, you might find that there are extra connections during this time. The solution is to correct the
topology problem so that the spanning tree can form.
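The KCC’s complaint boils down to a simple set check: a site that belongs to no site link has no route for inter-site replication. A sketch, with illustrative site and link names:

```python
def orphan_sites(sites: set[str], site_links: dict[str, set[str]]) -> set[str]:
    """Return the sites that are not a member of any site link."""
    linked = set().union(*site_links.values()) if site_links else set()
    return sites - linked

sites = {"Default-First-Site-Name", "Site1", "Site2"}
links = {"DEFAULTIPSITELINK": {"Default-First-Site-Name", "Site1"}}
print(orphan_sites(sites, links))  # {'Site2'} has no site-link membership
```

Correcting the configuration means adding each orphan site back to some site link, after which the spanning tree can form.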
Obtaining this type of change-management data for AD has been difficult or impossible. Win2K doesn’t record
such changes automatically in its event logs, and logging AD infrastructure changes manually is cumbersome and
prone to error. Even third-party AD-management tools have been challenged in this area: Although a number of
tools are available to diagnose problems with and monitor real-time events related to AD infrastructure components,
the ability to analyze the progression of changes to these components over time has remained elusive.
This situation changed recently when NetPro Computing released a new tool, DirectoryInsight. DirectoryInsight is
unique in that it gives administrators a wide array of information about changes to an AD network that occur over
time.
For example, when users at a particular site report suddenly slow network logins that began that morning, you
might analyze your change log and determine that the only domain controller serving the domain on that site was
removed by a site administrator the previous evening. DirectoryInsight consolidates all of the collected data into a
single, centralized database and provides convenient access to it using a Web-based administration tool (an ActiveX
control running in an Internet Explorer browser window).
It’s a good idea to use change-management information proactively in addition to using it reactively. For
example, you might use the object-population information to analyze and plan network capacity and to
predict future trends and infrastructure needs. This information is invaluable for management reports
and Information Technology (IT) budget planning.
As a systems administrator, you must protect your network against data loss, computer failures, and loss of
directory services. To accomplish this task, you must back up and restore each Windows 2000 (Win2K) server and the
services each computer provides. You also need to back up and recover critical system data such as the Active
Directory (AD) database, the Registry, and other service-specific configuration databases and configuration files.
Microsoft refers to this set of critical data as System State data.
It’s an unfortunate truth that, regardless of what precautions you take or what quality of hardware you’ve
purchased, the odds are that someday, somewhere, you’ll experience a system failure. If your organization depends on
its Win2K systems, you need to know how to troubleshoot and repair them when they fail. Although Microsoft
has greatly improved system reliability and recoverability in Win2K, things still can and do go wrong. Fortunately,
Microsoft provides tools to help you maintain and recover your system.
In this chapter, I’ll cover these new recovery and troubleshooting tools, including the Safe mode startup options,
Win2K Setup’s Emergency Repair Process, and the Recovery Console. I’ll also discuss some advanced recovery
tricks and techniques that you can use to assist in maintaining a high level of OS and network availability. I’ll also
mention some third-party tools that complement Win2K’s built-in recovery features.
The first level of defense against system disaster is ensuring that you’ve implemented at least minimal levels of fault
tolerance on your network servers. Fault tolerance is generally defined as the resilience of a particular computer,
subsystem, or component to various kinds of failures.
At the network level, implementing system fault tolerance might involve installing power-backup equipment (such
as an Uninterruptible Power Supply, or UPS, with line conditioning) on critical network devices and creating a
fault-tolerant routing topology using dynamic routing protocols. In the case of network servers, implementing
system fault tolerance usually involves creating backup or redundant resources for critical subsystems. The array of
fault-tolerant features for any important network system should always include, at least, the following:
• Disk redundancy—Using one or more levels of Redundant Array of Independent Disks (RAID)
technology, in which the disk subsystem can withstand the failure of any one drive. You can implement
RAID either using software, such as Win2K’s built-in RAID features (either RAID 1/disk mirroring or
RAID 5/disk striping with parity), or using a hardware RAID device or disk controller.
• Power redundancy—In the form of a UPS or power-conditioned backup generator capable of keeping the
system up and running if the power fails. Backup power devices should ideally be connected to the server(s)
they protect using hardware and software (for example, serial or network cabling and either Win2K’s UPS
software or the UPS vendor’s software). This software can inform the server, network administrator, and/or
network users about various power events that affect system availability. The software can also automatically
initiate a proper system shutdown after a specified amount of time or when batteries are running low.
• UPS monitoring and management—For example, Win2K includes a UPS monitoring and management
utility written by American Power Conversion (a leading UPS vendor). In addition, most UPS devices also
include their own customized UPS management software.
• Regular system backups—Maintain offline data redundancy for all critical system data. For example,
you should develop and thoroughly test for each of your production servers and domain controllers a
backup and recovery plan that includes the operating system (OS) and directory services.
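The disk-redundancy bullet above relies on parity: in RAID 5, the parity block is the XOR of the data blocks on the other drives, so any single failed drive can be rebuilt from the survivors. A bare-bones illustration (real controllers rotate parity across drives and work on much larger stripes):

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR the blocks together byte by byte to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

def rebuild(surviving: list[bytes]) -> bytes:
    """Any one missing block is the XOR of all the others (parity included)."""
    return parity(surviving)

data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data drives
p = parity(data)                      # stored on the parity drive
# Drive holding b"BBBB" fails; rebuild it from the remaining drives plus parity:
print(rebuild([data[0], data[2], p]))  # b'BBBB'
```

Because XOR is its own inverse, the same routine both computes parity and reconstructs a lost stripe, which is why exactly one drive failure is survivable.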
Here are some goals and objectives for developing an effective backup and recovery strategy. You should adapt
and expand these strategies to your environment and situation.
•Develop backup and restore methods that ensure that you can quickly recover lost data.
•Make sure that you have a backup for each volume that contains the data. In case of total failure, this
strategy allows you to restore the entire volume.
•Create a backup of AD that includes the entire domain on the domain controller. The backup should
include user accounts and security information.
•Use the log created during backup to determine which files were opened by other applications during
the backup. The names of any files that aren’t backed up appear in the log, so you can use the log to
keep track of which files aren’t being backed up and why. In addition, arrange a time to perform a
separate backup of these files during times when the application isn’t running.
•Keep at least three copies of the backup media and rotate them. In addition, keep at least one copy off
site in case of a catastrophic failure.
•Perform a trial restore periodically to verify that your files are being properly backed up. A trial restore
can uncover problems that don’t show up using backup verifications.
•Secure both the storage device and the backup media to prevent an administrator from another server
from restoring stolen data onto your server. Ideally, this should include keeping an offsite copy as well
as on-site copies of backups.
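The three-copy rotation goal above can be sketched as a minimal Python model. The set labels and the rule for which set is kept offsite are illustrative assumptions, not a prescribed scheme:

```python
class MediaRotation:
    """Minimal three-set media rotation: each backup run reuses the
    oldest set, and the second-newest set is flagged for offsite storage."""

    def __init__(self, labels=("Set A", "Set B", "Set C")):
        self.order = list(labels)   # oldest-used first, newest last
        self.offsite = None         # set currently held offsite

    def run_backup(self):
        target = self.order.pop(0)  # overwrite the oldest media set
        self.order.append(target)   # it is now the newest
        self.offsite = self.order[-2]  # previous run's set goes offsite
        return target
```

With three sets, every backup always has two older generations behind it, and one full set is always outside the building.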
The acronym RAID originally stood for Redundant Array of Inexpensive rather than Independent Disks
because the idea was to use a greater number of cheaper disks rather than a Single Large Expensive Disk
(SLED). However, the definition was changed because many modern RAID systems are built strictly for
redundancy rather than cost savings, and they often use disks that aren’t necessarily inexpensive.
Wherever possible, choose a hardware-based RAID solution over Win2K’s software-based RAID. Most hard-
ware RAID solutions offer better performance (many RAID controllers include on-board cache random
access memory (RAM) and dedicated processors), advanced RAID levels that offer improved flexibility and
performance, better management features, and easier recovery if a primary disk in a mirror set (RAID-1
volume) fails.
By far the most important step you can take to prepare yourself for a critical disaster is to back up the system
regularly. Every network should have in place a backup process that regularly copies the system data to secondary
or offline storage devices.
The process should back up the user data files and applications that would need to be recovered if they were lost
or damaged as a result of a disk failing or data becoming corrupted. Backups should also include critical system
data such as the AD database, the Registry, and other service-specific configuration databases and configuration
files.
Another important consideration in any backup routine is the timing of backups in relation to the availability of
the data files being backed up. For example, many backup utilities can’t back up files that are being used (by users
or applications), so it’s important to schedule backups to occur during off-hours, when network usage is at its low-
est and the maximum number of files are available to be backed up.
Backing up data during off-peak hours is also important from a performance standpoint. For example, backing up
a computer’s data taxes the computer’s resources and makes it less accessible. Also, performing a backup from a
remote computer over the system’s primary network connection can significantly reduce network performance.
The Win2K backup utility is integrated with the core Windows 2000 Server distributed services, which
include AD, File Replication service (FRS), and Certificate Services. Backup lets you back up and restore these
services by selecting the System State check box in the utility. Backup also supports Remote Storage, Removable
Storage, disk-to-disk operations, and other new Win2K services and features. You can back up data to a tape drive,
a logical drive, a removable disk, or an entire library of disks or tapes organized into a media pool and controlled by
a robotic changer.
To reduce the risk of losing data, and to improve data security, store backup sets in a safe that is both
heatproof and fireproof. In addition, as part of your backup cycle, regularly rotate one or more full backup
sets to an offsite location. Thus, if all of the equipment at the primary site is destroyed, you can still
recover your data.
•Back up selected user files and folders located on the domain controller’s hard drive
•Back up the domain controller’s System State, which comprises the files central to system operation (the Registry,
AD and SYSVOL, the COM+ Class Registration database, and boot files)
•Restore backed-up files and folders to your server’s hard drive
•Schedule regular backups
•Create an Emergency Repair Disk (ERD), which helps you repair system files that become corrupted.
By default, backup and restore operations can only be performed by backup operators and local administrators. Members of
the Backup Operators group can back up and restore data on the domain controller. (This group is one of the
built-in groups provided by Win2K.) Any domain user can be granted the user rights to back up and restore files
and directories. Members of the Local Administrators group can back up and restore data files, directories, and the
System State of the domain controller.
To start the backup utility, choose Start>Programs>Accessories>System Tools>Backup. Figure 6.1 shows the
Welcome page.
You can also start the backup utility from a command-line interface or prompt. At the command prompt on
the local computer, simply type Ntbackup.
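For unattended runs, you can build the Ntbackup command line in a script. The sketch below assumes the /j (job name) and /f (destination file) switches from ntbackup's documented command-line syntax; verify the exact switches with ntbackup /? on your own system:

```python
def ntbackup_cmd(job_name, bkf_path, system_state=True):
    """Build an ntbackup command line for an unattended backup to a
    .bkf file. 'systemstate' includes the System State; /j names the
    job and /f gives the destination file. (Switch names assumed from
    ntbackup's documented syntax; confirm with 'ntbackup /?'.)"""
    cmd = ["ntbackup", "backup"]
    if system_state:
        cmd.append("systemstate")
    cmd += ["/j", '"%s"' % job_name, "/f", '"%s"' % bkf_path]
    return " ".join(cmd)
```

A command built this way can then be dropped into a scheduled task for off-hours execution.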
The Welcome page provides three main options to assist you in backing up and restoring your data.
•Backup Wizard—Helps you back up your programs and files to help prevent data loss and damage caused
by disk failures, power outages, virus infections, and other potentially damaging events.
•Restore Wizard—Helps you restore your previously backed-up data in the event of a hardware failure,
accidental erasure, or other data loss or damage.
•Emergency Repair Disk—Helps you create an ERD that you can use to repair and restart Win2K if it’s
damaged. This option doesn’t back up your files or programs, and it’s not a replacement for regularly
backing up your system.
In addition to these options, the tabs on the Welcome page allow you to bypass the wizards and select backup and
restore options manually.
Figure 6.2 shows the What to Back Up dialog box, where you select what you want to back up.
Figure 6.2: The Backup Wizard prompts you to select what you want to back up on the local
computer.
In the next dialog box, you can select the type of backup media and the location and name of the destination back-
up file. (By default, the file is called Backup.bkf.)
You can use the backup utility to back up and restore data on either file allocation table (FAT) or NT file system
(NTFS) volumes on your system or domain controller. If you back up data from an NTFS 5 (Win2K NTFS) vol-
ume, you should in most cases restore it to an NTFS 5 volume. If you restore the data to a FAT or Windows NT
4.0 or earlier NTFS volume, you’ll lose certain file and folder features, and you’ll lose data as well—for example,
file permissions, Encrypting File System (EFS) settings, disk quota information, mounted drive information, and
remote storage information.
The Completing the Backup Wizard dialog box, shown in Figure 6.3, presents a summary of your current selec-
tions and allows you to specify additional options, if needed. (These are discussed in "Specifying Advanced
Options" below.) Clicking Finish starts the backup, and the Backup Progress dialog box tracks the progress of the
backup. (See Figure 6.4.)
Figure 6.4: The Backup Progress dialog box tracks the progress
of the backup.
Figure 6.5: The Type of Backup dialog box allows you to choose the mode, or type, of backup to
perform and whether to back up the data files that have been migrated to Remote Storage.
The backup utility allows you to perform the following types of backup:
•Normal—Copies all selected files and marks each one as having been backed up. (This clears the archive
attribute.) Performing a normal backup of all files restores the state of your computer to the time of the
backup using only the one normal backup. If you then create incremental or differential backups, you have
to restore all the incremental backups or the latest differential after restoring the normal backup.
A normal backup creates a backup of the files, folders, and drives selected with the entire System State of
the server or domain controller while it’s online. (See "Maintaining System State Backups" later in this
chapter.) If the server is a domain controller, AD is backed up as part of the System State. When you back
up the System State for the local computer, you cannot choose to back up individual components of the
System State data because of dependencies among them.
•Copy—Copies all selected files but doesn’t mark each one as having been backed up. (This means
that the archive attribute isn’t cleared.) If you want to back up files between normal and incremental backups,
copying is useful because it doesn’t affect these other backup operations.
•Incremental—Backs up only those files that have been created or changed since the last normal
or incremental backup. It marks files as having been backed up. (The archive attribute is cleared.)
If you use a combination of normal and incremental backups, restoring files and folders requires that
you have the last normal backup set as well as all subsequent incremental backup sets.
•Differential—Backs up only those files that have been created or changed since the last normal or
incremental backup, but doesn’t mark files as having been backed up. (The archive attribute isn’t
cleared.) Restoring from normal and differential backups requires only the last normal backup set and
the last differential backup set.
•Daily—Backs up all selected files that have been modified on the day the daily backup is performed,
without marking them as having been backed up.
You can also perform a combination of backups. For example, backing up your domain controller data using nor-
mal and incremental backups requires the least amount of storage space and is the quickest backup method.
However, recovering files can be time-consuming and difficult because the backup sets can be stored in several dif-
ferent places, disks, and/or tapes. On the other hand, backing up your domain controller’s data using a combina-
tion of normal and differential backups is more time-consuming, especially if your data changes frequently, but it’s
easier to restore the data because the backup set is typically stored on only a few disks or tapes.
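The selection and restore rules above can be made concrete in a short Python sketch. It models the archive attribute (set when a file changes, cleared only by backup types that "mark" files as backed up) and computes which backup sets a restore needs under each strategy; the function names and data shapes are illustrative:

```python
def select_files(files, backup_type):
    """'files' maps name -> archive bit (True = changed since the bit
    was last cleared). Returns (files backed up, files whose bit is
    cleared by this backup type)."""
    if backup_type == "normal":
        chosen = set(files)                               # everything
        cleared = chosen                                  # marks as backed up
    elif backup_type == "copy":
        chosen = set(files)
        cleared = set()                                   # bit left untouched
    elif backup_type == "incremental":
        chosen = {f for f, bit in files.items() if bit}   # changed files only
        cleared = chosen
    elif backup_type == "differential":
        chosen = {f for f, bit in files.items() if bit}
        cleared = set()                                   # bit left set
    else:
        raise ValueError(backup_type)
    return chosen, cleared

def restore_sets(strategy, runs):
    """Indices of the backup sets a full restore needs: the last normal
    plus every later incremental, or the last normal plus only the
    latest differential."""
    last_normal = max(i for i, t in enumerate(runs) if t == "normal")
    if strategy == "incremental":
        return [i for i, t in enumerate(runs)
                if i == last_normal or (i > last_normal and t == "incremental")]
    return [last_normal, max(i for i, t in enumerate(runs) if t == "differential")]
```

The trade-off in the text falls out directly: the incremental strategy restores from many sets (slow to recover, small to store), while the differential strategy always restores from exactly two.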
After you’ve selected the type of backup and whether to back up data files migrated to Remote Storage, you can use
the How to Back Up dialog box (shown in Figure 6.6) to specify whether you want the backup to be verified.
Verification reads the backed-up data to ensure its integrity. Verification takes extra time but helps ensure that the
backup is successful.
You can also specify whether you want the backup to use hardware compression, if available. Hardware compres-
sion lets the backup device reduce the size of the data files and System State being backed up. Data that has
been backed up using hardware compression must be restored using a drive that supports compression.
Figure 6.6: The How to Back Up dialog box provides options to verify the backup and support
hardware compression, if available on the local server.
Figure 6.7: The Backup Label dialog box allows you to customize the backup and media labels.
In the next dialog box, you can decide when to perform the backup. You can choose Now to perform it immediate-
ly or Later to schedule a time (preferably during off-peak hours) to have the backup run. (Scheduled backups are
discussed in "Backing Up Using Manual Selection" below.) After you’ve selected all the advanced options, the
Completing the Backup dialog box appears again and summarizes your backup configuration. Click Finish to run
the backup. When it completes, a backup status report is displayed, as shown in Figure 6.8.
Here you can perform the following tasks and set up the backup configuration:
•Select the data files, folders, and drives to be backed up—The Backup dialog box provides a tree view of
the local files and folders that you can back up. You can use this tree view the same way you use Windows
Explorer to open devices and select files and folders.
•Select storage media and backup destination—Backups provide two options for selecting storage media:
removable and non-removable storage devices and tape devices. Storage devices can be hard drives, floppy
disks, and Zip and Jaz drives. The tape device option is only available when the local computer has a tape
drive installed.
One handy new feature in the Win2K backup utility is its ability to back up to alternative backup media
such as hard disks, Jaz and Zip drives, optical drives, compact disc-recordable/rewritable (CD-R/RW)
drives, and digital video disc-ROM (DVD-ROM) drives. As a rule, if it has a logical drive letter and Win2K
can write files to it, Win2K Backup can use it as a destination backup device.
After you’ve made your selections, click Start Backup. The Backup Job Information dialog box appears (shown in
Figure 6.10). Here you can customize the backup by specifying a name and a label for the backup and whether the
data should be appended to the backup media.
When you’ve selected these options, click Start Backup to start the backup. The backup utility displays the Backup
Progress dialog box (see Figure 6.4), where you can track the backup procedure.
If you want to schedule the backup to run at a later time (and unattended), click Schedule; the backup utility will
wait and run the backup at the scheduled date and time you specify. To specify additional backup options, click
Advanced to select the backup type, whether to verify the backup, and whether to back up remote media. You can
specify the type of backup (Normal, Copy, Incremental, Differential, or Daily), indicate whether you want a log file
to record the actions, exclude certain file types from the backup, and verify whether the backup was successful. (For
information on the types of backup, see "Specifying Additional Options" above.)
•The Registry
•System boot files (including all system files)
•AD service
•COM+ Class Registration database
•Certificate Services database
•SYSVOL folder
•Cluster service information
•Windows 2000 Professional—A copy of the system Registry hive files (as with an ERD), the COM+ Class
Registration database, the system boot files, including Ntldr, Ntdetect.com, and other system data, such as
performance-counter configuration files and all files protected by Windows File Protection (WFP).
•Windows 2000 Server—The Registry, COM+ Class Registration database, system boot files, and, if the
server is a certificate server, Certificate Services information.
•Win2K server that is also a domain controller—In addition to the components for Windows 2000 Server,
a copy of the AD database (Ntds.dit), the log and checkpoint files, and the contents of the SYSVOL folder.
•Win2K server that is also running Domain Name System (DNS)—In addition to the components for
Windows 2000 Server, Directory Services (DS)-integrated as well as non-DS-integrated DNS zone
information.
•Windows 2000 Advanced Servers acting as cluster members—In addition to the components for Windows
2000 Server, a copy of the quorum recovery log file.
When you back up the System State, a copy of the Registry is placed on the local system partition in the
%Systemroot%\Repair\Regback folder (such as C:\Winnt\Repair\Regback). If the system Registry files are deleted or
damaged, you can use these backup copies of the Registry hive files to repair your system (for example, using the
Recovery Console) without performing a full restore of the System State data using the Win2K backup utility.
To restore the System State on a domain controller, you must first start your computer in Directory Services
Restore mode. This allows you to restore AD and the SYSVOL folder. When you back up or restore the System
State data, you get all the System State data that is relevant to your server or domain controller. You cannot back
up or restore individual components of the data because of dependencies among the components of the System
State. However, you can restore the System State data to an alternate location; if you do, only the Registry files,
SYSVOL folder files, cluster service information, and system boot files are restored.
Figure 6.11 shows the Welcome page for the Restore Wizard.
I don’t recommend using the restore procedure for copying a system from one computer to another. The
backup of the System State is server-specific.
When you click Next on the Welcome page, the What to Restore dialog box appears, where you can select the spe-
cific data set that you previously backed up (shown in Figure 6.12). The Restore Wizard provides you with a tree
view of the files and folders the data was in when it was backed up. You can navigate the tree to select the files and
folders in the same way that you use Windows Explorer. The label that you selected for the backup identifies the
data set.
When you click Next, the Completing the Restore Wizard dialog box appears. Like its equivalent in the Backup
Wizard, this dialog box shows your current selections and allows you to select advanced options, if needed. You
start the restore by clicking Finish. As in the Backup Wizard, a progress dialog box appears, where you can track
the restore.
•Original location—The folder(s) the data was in when it was backed up. This option is useful if you’re
restoring files and folders that have been damaged or lost.
•Alternate location—Retains the structure of the backed-up folders and files. This option is useful if you
know you’ll need some old files, but you don’t want to overwrite or change any of the current files or fold
ers on your disk.
•Single folder—Doesn’t retain the structure of the backed-up folders and files. Only the backed-up files are
placed here. This option is useful if you’re searching for a file and you don’t know its location.
After you select the location for the restored files, the How to Restore dialog box appears, where you select
how you want your files and folders restored. Again, you have three options.
•Don’t replace the file on my computer—Prevents files from being overwritten on your hard disk. This is the
safest method of restoring files.
•Replace the file on the disk only if the file is older—Replaces a file on your disk only if the backed-up
copy is newer. This option ensures that you don’t lose any changes you’ve made to the files since the last backup.
•Always replace the file on my disk—Replaces all of the files on your hard drive with files in your backup
data set. If you’ve made any changes to the files since you last backed up, your data will be replaced with
the backed-up data.
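The three replace policies can be summarized in a small Python sketch; the policy names are illustrative shorthand for the dialog box options:

```python
def should_replace(policy, on_disk_mtime, backup_mtime, file_exists):
    """Decide whether a restored file overwrites the copy on disk,
    mirroring the three How to Restore options. Times are comparable
    timestamps (e.g., seconds since the epoch)."""
    if not file_exists:
        return True                       # nothing on disk to protect
    if policy == "never":                 # Don't replace the file
        return False
    if policy == "if_older":              # Replace only if on-disk copy is older
        return on_disk_mtime < backup_mtime
    if policy == "always":                # Always replace
        return True
    raise ValueError(policy)
```

Note that all three policies still restore files that are missing from the disk; they differ only in how they treat files that already exist.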
Once you’ve made your selection, the Advanced Restore Options dialog box allows you to specify whether you
want to restore security, the Removable Storage database, and/or junction point data. (See Figure 6.14.)
The last dialog box in the Restore Wizard provides a summary of the selections that you’ve made and lets you start
the restore process. Like the Backup Wizard, once you start the restore process, a progress dialog box appears,
which provides details during the restore and completion information once it’s done.
When you restore the System State, your recovery plan should take into account the fact that the age of
the backup tape mustn’t exceed the AD tombstone lifetime (the length of time that deleted objects are
maintained in AD before the system permanently removes them). The default for this value is 60 days. If a
tape older than the tombstone is restored, the restore application programming interfaces (APIs) will reject
all of the data as being out of date. Microsoft discusses this issue in Product Support Services article
Q216993, "Backup of the Active Directory Has 60-Day Useful Life." This underscores the importance of
performing regular backups of AD.
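A recovery script can check this constraint before attempting a restore. The sketch below is a minimal illustration; the 60-day default applies only if the tombstone lifetime hasn't been changed for your enterprise:

```python
from datetime import datetime, timedelta

DEFAULT_TOMBSTONE_LIFETIME = timedelta(days=60)  # Win2K default

def ad_backup_usable(backup_date, now, tombstone=DEFAULT_TOMBSTONE_LIFETIME):
    """An AD backup older than the tombstone lifetime is rejected by
    the restore APIs as out of date; check before restoring."""
    return now - backup_date < tombstone
```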
•WINS—The WINS database is restored to the state it was in at the time of the backup. This overwrites
the current state of the server.
•DHCP—DHCP leases are restored to their state at the time of the backup. You must perform several
steps to reconcile the state of this database with the current state of the network.
•Remote Storage—During a restore operation, the Remote Storage database is recalled from tape media
when you restart the service—but only if the tapes are available.
•Certificate Services server—After a restore operation, this server may have outstanding certificates that
are now unknown. You can revoke and re-issue these certificates or leave the old certificates orphaned.
•Windows Media Services server—After a restore operation, you may have to re-install this server
because the database containing setup information may be lost.
•Internet Information Services (IIS) server—If you perform a complete restore, no problems with IIS
should arise. If you perform a partial restore, you must follow the backup/restore procedures specific to
the IIS service.
•AD—In a network with more than one domain controller, the default restore method (non-authoritative)
is generally the preferred method for restoring a failed server. Use the authoritative restore process
outlined later in this chapter (see "Authoritative Restores") only if you want to get the system back to
the state at the time the backup was made. (You’d want to do this, for example, if you erroneously
deleted AD objects from the database and it would be difficult to re-create them. For more information,
see "Developing a Backup and Restore Strategy for Active Directory" later in this chapter.)
•SYSVOL—If the computer being restored is the only domain controller on the network, you must select
a restore option in the Advanced Restore Options dialog box in the backup utility. Otherwise, you need
to use the default (non-authoritative) restore process.
Table 6.2: How to handle required server services when performing a restore.
The DHCP server allocates IP addresses and other network-configuration information to DHCP-aware network
clients. Using DHCP is the most common way to distribute IP addresses in a modern network. The restore process
restores the DHCP database. However, it’s restored to the date the backup was performed, and this can result in
duplicate IP addresses being issued. Having duplicate addresses causes those computers to cease all network opera-
tions.
To avoid this, DHCP has a Safe mode of operation. In Safe mode, DHCP broadcasts on the network to verify that
the IP address it’s about to issue isn’t already in use. After a restore, you need to reconcile the database and enter
Safe mode for a period of one-half of the IP lease duration. Because Safe mode significantly reduces network and
server performance, and because being in Safe mode for this period of time is enough to ensure that DHCP func-
tions properly, Microsoft recommends quitting Safe mode as soon as the one-half lease duration is met.
To reconcile the DHCP database, run the DHCP snap-in, then choose Action>Reconcile while the scope is high-
lighted. In the scope properties dialog box, choose Conflict Detection under Advanced and set the number of
attempts to 1.
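The timing rule above (conflict detection for half the lease duration, then off) is easy to compute. The helper below is an illustrative sketch, not part of the DHCP tooling:

```python
from datetime import datetime, timedelta

def conflict_detection_window(restore_time, lease_duration):
    """After restoring the DHCP database, leave conflict detection
    (Safe mode) on for half the IP lease duration, then disable it,
    since it reduces server and network performance. Returns the time
    at which it can safely be turned off."""
    return restore_time + lease_duration / 2
```

For example, with the common 8-day lease, conflict detection can be disabled 4 days after the restore.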
The Remote Storage service cannot recall its database from the Remote Storage tape during the restore operation
unless the Remote Storage tape is in the correct drive—that is, the drive configured to be the Remote Storage
device—or in the robotic library. If any issues with the service exist, the service restores its database by using
the copy that it stores on the tape. This is an automatic process that requires no user intervention.
Certificate Server
Certificate Server is the Win2K service that issues X.509 security certificates for a particular Certificate Authority
(CA). It provides customizable services for issuing and managing certificates for the enterprise.
After performing a restore operation, you don’t have to take any special steps for Certificate Server. However, cer-
tificates may exist on the network that were issued before the restore operation. Although Certificate Server is now
unaware of these certificates, they’re valid and will continue to function.
Active Directory
For a detailed discussion of how to back up and restore AD as a service, see "Developing a Backup and Restore
Strategy for Active Directory" later in this chapter.
SYSVOL
SYSVOL is a replicated data set that contains the policies and scripts that are used by some Windows security sys-
tems. SYSVOL uses Win2K’s FRS for distribution throughout the network. The three restore options for SYSVOL
are identical to the options for FRS: non-authoritative (the default), authoritative, and primary.
You need to restore SYSVOL and AD together; however, to clarify the issues involved, this chapter
explains them separately. For a detailed discussion of how to back up and restore AD as a service, see
"Developing a Backup and Restore Strategy for Active Directory" later in this chapter.
To create an ERD, make sure you have a blank 1.44-megabyte (MB) floppy disk. Then on the Welcome page of
the backup utility, click Emergency Repair Disk to start the ERD Wizard. From there, you have a single option—
whether you also want to update the hard disk–based copy of the Registry files (which are stored in the
%Systemroot%\Repair folder). By default, this folder contains versions of the Registry hive files that were created
right after the Win2K setup process was completed. Therefore, if you want to preserve the original set of Registry
files created during Setup but still update the files when the ERD is created, you need to first copy the files in this
folder to another location.
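Preserving the Setup-era Repair files is a simple copy. The Python sketch below illustrates the step; the default paths are examples only, not required locations:

```python
import os
import shutil

def preserve_repair_folder(systemroot=r"C:\Winnt", dest=r"C:\RepairOrig"):
    r"""Copy the original Setup-era Registry files out of
    %Systemroot%\Repair before letting the ERD Wizard update them.
    Returns the destination folder. (Paths are illustrative.)"""
    src = os.path.join(systemroot, "Repair")
    shutil.copytree(src, dest)  # fails if 'dest' already exists
    return dest
```

Run this once before creating the ERD with the "update Registry files" option, and the post-Setup hive copies remain available alongside the updated set.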
•Safe mode booting—The first option to try is Safe mode and related startup options. This option is easy
to use and understand. A related option is Last Known Good Configuration, which may provide a
solution in situations where a recently installed driver or service is causing problems.
•Recovery Console—If Safe mode fails, the next option to try is the Recovery Console. Only advanced or
administrative users should use this option. This method of starting a failed system uses the OS Setup CD-
ROM or the Setup floppy disks you created from the Setup CD-ROM.
•Windows 2000 Setup Emergency Repair Process—If the Safe mode and Recovery Console options don’t
work, try rerunning the Setup program from the Windows 2000 CD-ROM. It may (but isn’t guaranteed
to) repair the system and make it startable.
•Third-party recovery tools—There are a number of excellent third-party recovery tools on the market that
you can use to assist you in troubleshooting and recovering unstartable Win2K systems.
Safe Mode
Safe mode is a boot option that lets you disable certain OS features so that you can successfully start the system.
For example, you can remove offending configurations, such as a newly installed driver, which might be causing a
problem.
Safe mode has a number of startup options that allow you to start using varying system configurations. For exam-
ple, the regular Safe mode starts Win2K in a bare-bones environment (set of drivers and services). It doesn’t provide
access to high-resolution video drivers, nor does it provide networking or other optional services.
To access most of these choices, you press F8 on the Windows 2000 Boot Loader menu when you start up. This
selection displays a menu of the following alternative safe-start options:
•Safe Mode—Starts Win2K with the minimal set of drivers and services necessary. These basic files and
drivers support the mouse, keyboard, monitor, mass storage, and other common services.
•Safe Mode with Networking—Similar to Safe mode but adds drivers and services necessary to enable net
working.
•Safe Mode with Command Prompt—Similar to Safe mode but starts Win2K with a Command Prompt
window instead of Windows Explorer.
•Enable VGA Mode—Starts Win2K in VGA mode by using the Vga.sys driver instead of the regular video
driver.
Depending on the type of problem you’re experiencing with a system, one or more of these options might be
appropriate in a given set of circumstances.
The Recovery Console is particularly useful if you need to repair your system by copying a file from a floppy disk
or CD-ROM to your hard drive, or if you need to reconfigure a service that is preventing your computer from
starting properly. For example, you can use the Recovery Console to replace an overwritten or corrupted driver file
with a good copy from a floppy disk.
The Recovery Console is extremely powerful, so I recommend it only for advanced users and administrators.
1. Insert the Windows 2000 Setup CD-ROM into the CD-ROM drive.
2. When the text-based part of the Setup begins, follow the prompts. Choose the REPAIR option
by typing R.
3. When you’re prompted, choose the Recovery Console by typing C.
4. If you have a system that has more than one OS installed, choose the Win2K installation that you need
to access.
5. When you’re prompted, type the administrator password.
6. At the system prompt, type Recovery Console commands; type Help for a list of commands
or Help <command name> for help on a specific command.
7. Quit the Recovery Console and restart the computer by typing Exit.
The commands for the Recovery Console allow you to accomplish simple operations, such as changing to a different
directory or viewing a directory’s contents, and more powerful operations, such as fixing the boot sector on the hard drive. You
can display help for the commands by simply typing Help at the Recovery Console command prompt.
For a more extensive discussion of the capabilities, commands, and use of the Recovery Console, check
out Chapter 12 of the free eBook The Definitive Guide to Windows 2000 Administration by Sean Daily and
Darren Mar-Elia. You’ll find a link to it at www.realtimepublishers.com.
Assuming you have an ERD for the non-booting system (for more information on this, see "Creating an
Emergency Repair Disk" earlier in this chapter), make sure you have it ready during this procedure. (Even if you
don’t have an ERD, it may still be possible to use Setup Repair mode to recover the system; see the sidebar below,
"Using Windows 2000 Setup Repair without an ERD.")
The following steps provide a general overview of how to use the emergency repair process from the Setup CD-
ROM:
1. Start your computer from the Windows 2000 Setup disks or CD-ROM —You can only start your computer
from the CD-ROM if your computer hardware and basic input/output system (BIOS) have been set up to
support this option.
2. Choose the repair option during setup—After your computer starts up, the Setup program starts, and
you’re asked whether you want to continue installing the Win2K OS. Press Enter to continue. The
installation process starts and allows you to repair your system. During installation, you can choose whether to
install a fresh version of Windows 2000 Server or repair an existing installation of Win2K. To repair a
damaged or corrupt system, type R. You’re then asked whether you want to repair your system using the
Recovery Console or the emergency repair process. Type R to use the emergency repair process.
3. Choose the type of repair—You can choose either the Fast Repair option (type F) or the Manual Repair
option (type M). The Manual Repair option requires that you make all the repair selections and determine
whether you want to repair system files, the partition boot sector, or startup environment problems. I
recommend that only advanced users and administrators use the Manual Repair option and that you use it
only to repair one item at a time when you know the others are intact. For example, if you’re confident that
the partition boot sector and startup environment are both intact, attempt to repair only the system boot
files. (The Manual Repair option doesn’t let you try to repair problems with the Registry. To manually
repair individual Registry settings and files or replace the entire Registry, use the Recovery Console.
However, administrators should be the only ones who use it.)
•A corrupted or invalid database schema (which defines the structure of the database—what type of data it
contains and how the data is arranged)
•Missing DNS records
•Damaged or corrupted information
•Accidental misconfiguration by an administrator.
In case one of these events occurs, it’s imperative that your disaster-preparation routine and disaster-recovery plan
include provisions for backing up and restoring AD.
When you back up System State data on a domain controller, you’re backing up all AD data that exists on the server (along with other system components such as the SYSVOL folder and the Registry). You cannot choose to back
up or restore individual components of the System State data or of AD; it’s an all-or-nothing process. This is
because of the dependencies among the components of the System State.
You can back up AD in its entirety only using a full backup; you can’t back up AD incrementally (that is, by backing up only object data that has changed since the last backup).
Although the Win2K backup utility can certainly do the job of backing up AD, most information technology (IT)
shops prefer to use more robust third-party applications for their backup solutions. If your organization is using a
third-party backup application, you need to make sure that you’ve purchased a Win2K-compatible version of the
product. It must be capable of backing up AD or provide a separate agent component that can do so. Windows
NT 4.0–era versions of backup utilities are incapable of understanding AD’s format or backing it up, so for most
companies, migrating to Win2K necessarily means upgrading their backup software.
As discussed in "Specifying Advanced Options" earlier in this chapter, the useful life of a backup of AD is
usually only 60 days, so you may experience problems if you attempt to restore AD backup images older
than this into a replicated Win2K network. The reason for the 60-day figure is that the useful life of a backup is identical to the tombstone lifetime setting for the enterprise, and in Win2K, the default value for the tombstone lifetime entry is 60 days. However, you can change this value using the Directory Service (NTDS) config object. The value is stored in the tombstoneLifetime attribute of the CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=COMPANY,DC=COM object (for a domain called "company.com"). You can set the value using the LDP or ADSI Edit utility.
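As a sketch of how such a change might be made, an LDIF fragment imported with the ldifde tool could also modify the attribute. The domain name (company.com) and the 90-day value here are hypothetical examples, not recommendations:

```
dn: CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=company,DC=com
changetype: modify
replace: tombstoneLifetime
tombstoneLifetime: 90
-
```

You would save this as a file (for example, tombstone.ldf) and import it with "ldifde -i -f tombstone.ldf" from a command prompt on a domain controller. Keep in mind that however you set the value, backups older than the tombstone lifetime can’t be safely restored.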
The answers to these questions will help you determine which of the three AD restore modes to choose: non-
authoritative, authoritative, or primary.
•Non-Authoritative Restore—A normal restore; most restore operations are run using this restore mode. You usually perform a non-authoritative restore when the problem is limited to the local Win2K domain controller and the AD replicas housed on other Win2K domain controllers are believed to contain valid replicas. During a non-authoritative restore, any data that you restore (including AD objects) retains its original update sequence number (USN). In turn, AD replication uses the USN to detect and propagate any changes to the other domain controllers in the same domain.
•Authoritative Restore—Used when the other Win2K domain controllers contain invalid replicas or undesirable data. In this case, you manually designate the copy of the AD database that you restore. You designate only the local domain controller to be authoritative (that is, the "master" copy from which all other domain controllers seed their own AD replicas). An authoritative restore modifies the USN of each AD object being restored to the domain controller so that it is higher than any USN currently on the domain controller. After all of the objects have been restored, they’re replicated to the other replicas.
You can use backup data from one domain controller to restore only to the same domain controller. You
cannot use the backup of one domain controller to restore another computer. However, if the domain controller’s system fails and never comes back, you can restore the backup data to another computer that will
take the place of the original domain controller. In addition, to completely back up your environment, you
need to have a backup of every domain controller on the network. Keep this in mind when developing your
backup strategy. Also, because of the importance to the entire system of the domain controller that was
created first in the root domain, you need to frequently back it up.
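As an illustrative sketch of how such regular backups can be scripted, Ntbackup.exe accepts command-line arguments for System State backups (the job name and file path below are hypothetical):

```
ntbackup backup systemstate /J "DC1 System State" /F "D:\Backups\dc1-systemstate.bkf"
```

A command like this must be run from the console of the domain controller itself, since System State data can’t be backed up remotely; it can then be scheduled (for example, with the Task Scheduler or the backup utility’s own scheduling interface) to produce the regular, per-domain-controller backups your strategy calls for.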
If you’re using Win2K’s backup utility (Ntbackup.exe) to perform a restore, be aware of the following additional conditions, which must be met for the System State (including AD) to be successfully restored. If any
of these conditions isn’t met, the restore will fail.
- The server name must be identical to the backed-up server name.
- The drive on which the %Systemroot% folder is located must be the same as when it was backed up.
- The %Systemroot% folder must be the same folder as when it was backed up.
- If SYSVOL or other AD databases are located on another volume, the databases must exist and be on the
same drive as when they were backed up.
While it’s possible to back up AD either online (while the directory services are running) or offline (when
the services are stopped), you can restore AD only when the directory services are offline.
To restore AD, you must start the domain controller in Directory Services Restore Mode (see "Directory Services Restore Mode" earlier in this chapter). When the system starts up, press F8 at the Windows 2000 Boot Loader menu, then select Directory Services Restore Mode from the alternate boot menu. Win2K starts in a safe mode, and you can follow these steps to restore AD information on a domain controller:
1. Log on as a member of the Administrators or Backup Operators group. Note that in Directory Services Restore Mode, these are local computer users and groups, not domain users and groups. Also, users who are backup operators (but not regular administrators) can’t run Ntdsutil to perform operations such as authoritative restores; this requires a user with administrative privileges.
2. Run the Win2K backup program. On the Welcome page, click Restore Wizard, then select the System
State check box.
If you restore the System State data and you don’t designate an alternate location for the restored data, the
backup utility erases the System State data that is currently on your computer and replaces it with the
System State data you’re restoring. If you restore the System State files to an alternate location, the AD
database, the Certificate Services database, and COM+ Class Registration database aren’t restored.
3. Once the restore is complete, restart the domain controller.
After the non-authoritative restore is complete, the restored data (which may be out of date) becomes synchronized. Once you restart the domain controller, it should begin participating in AD replication and receive directory updates from the other domain controllers.
You can use a non-authoritative restore if the domain controller fails or the entire AD database is corrupt because
you can simply restore the entire system data non-authoritatively. In more technical terms, this means that a non-
authoritative restore retains the original USN.
A non-authoritative restore provides a start point (the time at which the data was backed up) for data replication.
This minimizes the replication traffic on the network because only changed data is replicated rather than the entire
directory. In the absence of this start point, all data would be replicated from other servers.
Another option for restoring a Win2K domain controller is to simply re-install Win2K and reconfigure the
system as a domain controller on its domain. You can give the domain controller a different name, or if
you rebuild it with the same name, you need to first remove the old domain controller from the domain.
After you’ve done this, the normal AD replication process repopulates the domain controller with current
directory information.
By definition, an authoritative restore replicates any changes made to the current data set to its outbound replication partners. To be the authoritative source for the restore, the authoritative restore modifies the USN of the AD objects that are being restored to the domain controller so that each object has a higher value than any of those that are currently on the domain controller. This forces the restored objects to be replicated to the other domain controllers in the same domain.
Such a restore is unusual, but it has the effect of rolling back all of the AD objects on the domain controller to the time of the original backup. You can use it to restore erroneously deleted information to a replicated set of data.
For example, if you inadvertently delete or modify objects stored in AD, you may want to authoritatively restore
them so that they can be replicated again to the other domain controllers. If you don’t authoritatively restore these
objects, they’re never replicated to the other domain controllers in the same domain because they appear to be older
than the objects currently on your domain controller.
To help you accomplish an authoritative restore, you can use the Ntdsutil utility to mark the target objects for
authoritative restore. This ensures that the data you want to be restored is replicated to the appropriate domain
controllers after the restore occurs. Table 6.3 describes the commands in Ntdsutil available to perform an authoritative restore.
•Restore database verinc %d (submenu option)—Marks the entire AD database (Ntds.dit) as authoritative and increments the version number by %d. Use this option only to authoritatively restore over a previous sequential authoritative restore.
•Restore subtree %s (submenu option)—Marks a subtree (all objects in the subtree) as being authoritative. The subtree is defined by using the distinguished name (DN) of the OU object.
•Restore subtree %s verinc %d (submenu option)—Marks a subtree (all objects in the subtree) as being authoritative and increments the version number by %d. The subtree is defined by using the DN of the OU object. Use this option only to authoritatively restore from a backup that contains the objects you want to restore over.
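To illustrate, a typical Ntdsutil session run in Directory Services Restore Mode might look like the following sketch (the Sales OU and company.com domain names are hypothetical):

```
C:\> ntdsutil
ntdsutil: authoritative restore
authoritative restore: restore subtree OU=Sales,DC=company,DC=com
authoritative restore: quit
ntdsutil: quit
```

Marking only the affected subtree, as shown here, is usually preferable to marking the entire database as authoritative; the rest of the restored data then synchronizes normally from the domain controller’s replication partners.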
Once AD is restored, be certain to answer No to the option to restart the server. This step is critical because the
restore will otherwise be non-authoritative when the server restarts, and you’ll risk re-inheriting unwanted data
from other AD replicas.
Always authoritatively restore SYSVOL whenever you authoritatively restore AD and vice-versa. This
ensures that SYSVOL and AD stay synchronized.
There are a few potentially negative consequences of authoritative restores that you should be aware of. One such effect relates to trust relationships and computer account passwords. Both of these are automatically renegotiated at a specified interval (every seven days by default, although administrators can disable automatic password changes for computer accounts). During an authoritative restore, a previously used password can be restored for the objects in AD that maintain trust relationships and computer accounts. In the case of trust relationships, this could break communication with domain controllers in other domains. With computer account passwords, this could break communications between a member workstation or server and a domain controller of its domain. In this case, you may have to manually reestablish trusts to resume proper communications.
If you don’t select this option, the FRS data that you’re restoring may not be replicated to other servers because the
restored data will appear to be older than the existing data. The other servers will overwrite the restored data, and
this will effectively prevent you from restoring the FRS data. Third-party backup applications also provide similar
options for performing primary restores.
Advanced
Advanced verification is optional; it isn’t usually required for normal recovery operations. You can run it regardless of whether you performed an authoritative restore, but if you do run it, you must do so before you run basic verification. Incorrectly using the Ntdsutil utility may corrupt the AD database, forcing you to restore the database from backup again.
Basic
Basic verification consists of rebooting and logging on normally, then confirming that the restored services are in a state consistent with a successful restore. It also includes verifying that FRS and Certificate Services were successfully restored.
1. Restart the computer. (AD will automatically detect that it’s been recovered from a backup, perform an integrity check, and re-index its database. Both AD and FRS will be brought up to date from their replication partners using the standard replication protocols for each of those services.)
2. Confirm that distributed services were successfully restored. (You should be able to browse AD and confirm that all of the user and group objects that were present in the directory before the backup were restored.)
3. Confirm that files that were members of an FRS replica set and certificates that were issued by the
Certificate Service are present.
Affected backups are corrupted in such a way that when they’re restored, they prevent the domain controller from
starting and cause it to display a "Directory Service cannot start" error message. Also, if you run Ntdsutil using the
Semantic Database Analysis option to run the database semantic checker, you receive error message 550: "Database
is inconsistent." With the backup utility, the problem happens even when the Verify option is turned on.
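The semantic checker is reached through Ntdsutil’s Semantic database analysis menu. A session run in Directory Services Restore Mode might look like this sketch:

```
C:\> ntdsutil
ntdsutil: semantic database analysis
semantic checker: go
semantic checker: quit
ntdsutil: quit
```

On a domain controller affected by this bug, the go command is where you would see error message 550 reported.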
You can avoid these problems by installing SP2, then making new backups of the System State. The fix is preventative only; it doesn’t resolve errors that occur if you restore System State backups that contain incorrect header information.
How It Occurs
As mentioned earlier in this chapter, you prepare for AD disaster recovery by making System State backups from
the console of Win2K-based domain controllers at regular intervals. The elements of AD that are captured in a
System State backup include the AD database (Ntds.dit), transaction logs (Edb*.log), and a patch file (Edb.pat).
You restore by starting Win2K domain controllers in Directory Services Restore Mode and restoring the System State using the Win2K backup utility, Ntbackup.exe, or a third-party equivalent. After performing the restore operation, you can optionally use Ntdsutil.exe to mark specified domain name paths or subtrees as authoritative when the domain controller next starts in AD mode.
•An initial backup of AD is performed on a Win2K domain controller. During the backup, a number of objects change as a result of either local changes or replication. The changes to these objects generate additional transaction logs, which in turn advance the Joint Engine Technology (Jet) database checkpoint. The Jet checkpoint maintains a list of unflushed data in the database. Two copies of the checkpoint data are stored: one in the database header of the Ntds.dit file and a second in-memory copy, which is written to the backup media.
•A second backup is performed on another, relatively inactive domain controller. During the backup, the
log files aren’t generated, and the Jet checkpoint isn’t advanced. This second backup completes before the
log files are generated and the Jet checkpoint advances in the first backup.
•The second backup is then restored.
In order for the problem to manifest itself, the new transaction logs and Jet checkpoint advancement that occur
during the first backup must happen after the second backup completes. As a result, a relatively large first backup is
more likely to produce the problem because there is a commensurately larger window of time for the second back-
up to complete (and for the Jet checkpoint to advance). Domain controllers in busy production environments are
less likely to experience this problem during typical activity (creating, deleting, and modifying objects) because
these activities in AD result in a steady advance of the Jet checkpoint.
The result of all this, and the core of the problem, is that an outdated record of required transaction log files and
checkpoint data is written to the backup media, then later restored as the second backup. The header in the
restored database references logs that aren’t actually required for the AD recovery and that aren’t all included in the
backup. This explains why log entries appear stating, "Log files are missing from System State." However, such
entries are misleading because it isn’t a case of the log files being missing; instead, the number of log files referenced
in the restored database header is incorrect.
Working Around It
If you use backups as a method of recovery for Win2K-based domain controllers, you may want to consider doing
the following to work around the bug:
•Inventory, then clearly label backups that were made before installing SP2. Place pre-SP2 backup media in locked storage. Don’t forget backup media that are stored on the local drives of computers in your organization.
•Consistent with good change-management practices, install SP2 on domain controllers in a lab environment that is representative of your production configuration. Make multiple backups, then initiate restore tests.
•Install SP2 on production domain controllers. Create new System State backups and clearly label them as post-SP2 backups.
•Destroy pre-SP2 backups.
•Use the ERD to either prepare a set of disaster-recovery disks, or use a set of previously established disks to
repair specific components of a domain controller so that it can successfully start up. Log on using an
account that has Administrator or Backup Operator privileges.
•Re-install the Windows 2000 Server OS and run the Active Directory Installation Wizard. If a major hardware failure or malfunction requires the computer to be completely rebuilt, you may need to re-install the OS. After the computer is running, reconfigure the original network connections and DNS settings.
•If you need to remove a domain, run the Active Directory Installation Wizard to remove AD from all
domain controllers that you’re removing. Then use the NETDOM utility to remove the domain itself
(including cross-reference and trusted domain objects). To do this, type the following at the command
prompt:
netdom trust /remove /force
•To clean up metadata that has been left behind by decommissioned or failed domain controllers, you can use Ntdsutil’s Metadata cleanup commands. This operation removes the defunct domain controller’s identification and information from AD.
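A sketch of such a cleanup session follows; the server name (dc1.company.com) and the domain, site, and server index numbers are hypothetical and depend on what the list commands show in your environment. The general pattern is to connect to a working domain controller, select the defunct server as the operation target, then remove it:

```
C:\> ntdsutil
ntdsutil: metadata cleanup
metadata cleanup: connections
server connections: connect to server dc1.company.com
server connections: quit
metadata cleanup: select operation target
select operation target: list domains
select operation target: select domain 0
select operation target: list sites
select operation target: select site 0
select operation target: list servers in site
select operation target: select server 1
select operation target: quit
metadata cleanup: remove selected server
```

Double-check the output of each list command before selecting a target; removing the wrong server’s metadata can orphan a healthy domain controller.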
You can back up and restore the domain controller’s System State—the files central to system operation; they
include the Registry, AD and SYSVOL, the COM+ Class Registration database, and boot files. You can also sched-
ule regular backups and create an ERD that helps you repair system files should your system become corrupted.
Without Sean Daily, The Definitive Guide to Active Directory Troubleshooting wouldn't be a definitive guide at all.
Sean has been running Active Directory since early beta, and he knows it inside and out. Check out his beefy bio
below! And, don't miss the backgrounder on the company he writes for. NetPro's ebook is the second comprehensive
guide published by Realtimepublishers.com that's free and available on the web - and it won't be the last!
Sean Daily is a world-renowned expert on Windows NT/2000 and a Senior Contributing Editor at Windows 2000
Magazine. In addition to being the author of numerous books, including The Definitive Guide to Windows 2000
Administration (Realtimepublishers.com) and Optimizing Windows NT (IDG/Hungry Minds), Sean speaks and
consults internationally on Windows NT/2000 and related technologies. Sean also serves as Series Editor for
Realtimepublishers.com's Definitive Guide and Tips and Tricks Guide series of ebooks.