
Microsoft.Braindump.70-452.v2010-11-12.by.vlak.90q.vce

Number: 70-452
Passing Score: 800
Time Limit: 120 min
File Version: 1.1

Exam: 70-452
PRO: Designing a Business Intelligence Infrastructure Using Microsoft SQL Server 2008

Don't throw your books away!


Do not rely on dumps alone; try to gain a good understanding of the exam topics and concepts!

Good luck on the exam.

Created by vlak
07-Nov-2010
Exam A

QUESTION 1
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to create a SQL Server
2008 Reporting Services (SSRS) solution that contains five sales dashboard reports.
Users must be able to manipulate the reports' parameters to analyze data. You need to ensure that the following requirements are met:
- Users can manipulate the parameters for data analysis in a single trip to the data source.
- Reports are automatically rendered as soon as they are accessed for the first time.
Which two tasks should you perform? (Each correct answer presents part of the solution. Choose two.)

A. Filter data by using expressions.


B. Specify the default values for each parameter.
C. Create an available values list for each parameter.
D. Create report parameters by using query parameters to filter data at the data source.

Answer: AB
Section: (none)

Explanation/Reference:
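The logic behind the two correct options: specifying a default value for every parameter (B) lets the report render automatically the first time it is accessed, and filtering with expressions (A) applies parameter values inside the report, so users can re-analyze the data after a single trip to the data source. A minimal sketch of such a report filter, using a hypothetical SalesRegion field and matching parameter:

Filter expression: =Fields!SalesRegion.Value
Operator:          =
Filter value:      =Parameters!SalesRegion.Value

Because the report processor evaluates this filter, changing the parameter value re-filters the already-retrieved dataset rather than issuing a new query.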

QUESTION 2
You design a SQL Server 2008 Reporting Services (SSRS) solution. You create a report by using Microsoft
Visual Studio .NET 2008.
The report contains the following components:
- A dataset named Customer that lists all active customers and their details. The dataset accepts no parameters.
- A dataset named SalesHistory that lists all sales transactions for a specified time period and accepts year and month as parameters.
You need to ensure that a summary of sales transactions is displayed for each customer after the customer details.
Which component should you add to the report?

A. List
B. Table
C. Matrix
D. Subreport

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms160348(SQL.100).aspx
How to: Add a Subreport and Parameters (Reporting Services)
Add subreports to a report when you want to create a main report that is a container for multiple related reports.
A subreport is a reference to another report. To relate the reports through data values (for example, to have
multiple reports show data for the same customer), you must design a parameterized report (for example, a
report that shows the details for a specific customer) as the subreport. When you add a subreport to the main
report, you can specify parameters to pass to the subreport.
You can also add subreports to dynamic rows or columns in a table or matrix. When the main report is
processed, the subreport is processed for each row. In this case, consider whether you can achieve the desired
effect by using data regions or nested data regions.
QUESTION 3
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server
2008 Analysis Services (SSAS) database. The database contains a data mining structure that uses a SQL
Server 2008 table as a data source. A table named OrderDetails contains detailed information on product sales.
The OrderDetails table includes a column named Markup.
You build a data mining model by using the Microsoft Decision Trees algorithm. You classify Markup as
discretized content. The algorithm produces a large number of branches for Markup and results in low
confidence ratings on predictable columns. You need to verify whether the Markup values include inaccurate
data. What should you do?

A. Modify the content type of Markup as Continuous.


B. Create a data mining dimension in the SSAS database from OrderDetails.
C. Create a data profile by using SQL Server 2008 Integration Services (SSIS).
D. Create a cube in SSAS. Use OrderDetails as a measure group. Recreate the data mining structure and
mining model from the cube data.

Answer: C
Section: (none)

Explanation/Reference:
Discretized The column has continuous values that are grouped into buckets. Each bucket is considered to
have a specific order and to contain discrete values. Possible values for discretization method are automatic,
equal areas, or clusters. Automatic means that SSAS determines which method to use. Equal areas results in
the input data being divided into partitions of equal size. This method works best with data with regularly
distributed values. Clusters means that SSAS samples the data to produce a result that accounts for “clumps”
of data values. Because of this sampling, Clusters can be used only with numeric input columns. You can use
the date, double, long, or text data type with the Discretized content type.
Microsoft Decision Trees Algorithm
Microsoft Decision Trees is probably the most commonly used algorithm, in part because of its flexibility—
decision trees work with both discrete and continuous attributes—and also
because of the richness of its included viewers. It’s quite easy to understand the output via these viewers. This
algorithm is used to both view and to predict. It is also used (usually in
conjunction with the Microsoft Clustering algorithm) to find deviant values. The Microsoft Decision Trees
algorithm processes input data by splitting it into recursive (related) subsets.
In the default viewer, the output is shown as a recursive tree structure.
If you are using discrete data, the algorithm identifies the particular inputs that are most closely correlated with
particular predictable values, producing a result that shows which columns
are most strongly predictive of a selected attribute. If you are using continuous data, the algorithm uses
standard linear regression to determine where the splits in the decision tree occur.
Clicking a node displays detailed information in the Mining Legend window. You can configure the view using
the various drop-down lists at the top of the viewer, such as Tree,
Default Expansion, and so on. Finally, if you’ve enabled drillthrough on your model, you can display the
drillthrough information—either columns from the model or (new to SQL Server 2008) columns from the mining
structure, whether or not they are included in this model.
Data Profiling
The control flow Data Profiling task relates to business problems that are particularly prominent in BI projects:
how to deal with huge quantities of data and what to do when this data originates from disparate sources.
Understanding source data quality in BI projects—when scoping, early in prototyping, and during package
development—is critical when estimating the work involved in building the ETL processes to populate the OLAP
cubes and data mining structures. It’s common to underestimate the amount of work involved in cleaning the
source data before it is loaded into the SSAS destination structures. The Data Profiling task helps you to
understand the scope of the source-data cleanup involved in your projects. Specifically, this cleanup involves
deciding which methods to use to clean up your data. Methods can include the use of advanced package
transformations (such as fuzzy logic) or more staging areas (relational tables) so that fewer in-memory
transformations are necessary during the transformation processes. Other considerations include total number
of tasks in a single package, or overall package size.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)
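Alongside the Data Profiling task, a quick T-SQL check can surface suspect Markup values before the mining structure is reprocessed. A minimal sketch, assuming dbo.OrderDetails is the relational source table and that the plausible markup range is known from the business:

-- Overall shape of the Markup column
SELECT  COUNT(*)               AS TotalRows,
        COUNT(DISTINCT Markup) AS DistinctValues,
        MIN(Markup)            AS MinMarkup,
        MAX(Markup)            AS MaxMarkup,
        AVG(Markup)            AS AvgMarkup,
        STDEV(Markup)          AS StdevMarkup
FROM    dbo.OrderDetails;

-- Candidate bad rows: values outside the plausible business range
SELECT  *
FROM    dbo.OrderDetails
WHERE   Markup NOT BETWEEN 0.0 AND 5.0;  -- substitute bounds that make business sense
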
QUESTION 4
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution contains a SQL Server
2008 Analysis Services (SSAS) database. A measure group in the database contains log entries of
manufacturing events. These events include accidents, machine failures, production capacity metrics, and other
activities. You need to implement a data mining model that meets the following requirements:
Predict the frequency of different event types.
Identify short-term and long-term patterns.
Which algorithm should the data mining model use?

A. the Microsoft Time Series algorithm


B. the Microsoft Decision Trees algorithm
C. the Microsoft Linear Regression algorithm
D. the Microsoft Logistic Regression algorithm

Answer: A
Section: (none)

Explanation/Reference:
Microsoft Time Series Algorithm
Microsoft Time Series is used to address a common business problem: accurate forecasting. This algorithm is often used to predict future values, such as rates of sale for a particular product. Most often the inputs are continuous values. To use this algorithm, your source data must contain at least one column marked as Key Time.
Any predictable columns must be of type Continuous. You can select one or more inputs as predictable columns
when using this algorithm.
Time series source data can also contain an optional Key Sequence column.
Function
The ARTxp algorithm has proved to be very good at short-term prediction. The ARIMA algorithm is much better
at longer-term prediction. By default, the Microsoft Time Series algorithm blends the results of the two
algorithms to produce the best prediction for both the short and long term.
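As an illustration of how such a model could be declared in DMX (the column names here are hypothetical; FORECAST_METHOD accepts ARTXP, ARIMA, or the default MIXED):

CREATE MINING MODEL EventFrequencyForecast
(
    EventMonth DATE KEY TIME,            -- the required Key Time column
    EventType  TEXT KEY,                 -- one series per manufacturing event type
    EventCount LONG CONTINUOUS PREDICT   -- predictable columns must be Continuous
)
USING Microsoft_Time_Series (FORECAST_METHOD = 'MIXED')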
Microsoft Decision Trees Algorithm
Microsoft Decision Trees is probably the most commonly used algorithm, in part because of its flexibility—
decision trees work with both discrete and continuous attributes—and also
because of the richness of its included viewers. It’s quite easy to understand the output via these viewers. This
algorithm is used to both view and to predict. It is also used (usually in
conjunction with the Microsoft Clustering algorithm) to find deviant values. The Microsoft Decision Trees
algorithm processes input data by splitting it into recursive (related) subsets.
In the default viewer, the output is shown as a recursive tree structure.
If you are using discrete data, the algorithm identifies the particular inputs that are most closely correlated with
particular predictable values, producing a result that shows which columns
are most strongly predictive of a selected attribute. If you are using continuous data, the algorithm uses
standard linear regression to determine where the splits in the decision tree occur.
Clicking a node displays detailed information in the Mining Legend window. You can configure the view using
the various drop-down lists at the top of the viewer, such as Tree,
Default Expansion, and so on. Finally, if you’ve enabled drillthrough on your model, you can display the
drillthrough information—either columns from the model or (new to SQL Server 2008) columns from the mining
structure, whether or not they are included in this model.
Microsoft Linear Regression Algorithm
Microsoft Linear Regression is a variation of the Microsoft Decision Trees algorithm, and works like classic
linear regression—it fits the best possible straight line through a series of points (the sources being at least two
columns of continuous data). This algorithm calculates all possible relationships between the attribute values
and produces more complete results than other (non–data mining) methods of applying linear regression. In
addition to a key column, you can use only columns of the continuous numeric data type. Another way to
understand this is that it disables splits. You use this algorithm to be able to visualize the relationship between
two continuous attributes. For example, in a retail scenario, you might want to create a trend line between
physical placement locations in a retail store and rate of sale for items. The algorithm result is similar to that
produced by any other linear regression method in that it produces a trend line. Unlike most other methods of
calculating linear regression, the Microsoft Linear Regression algorithm in SSAS calculates all possible
relationships between all input dataset values to produce its results. This differs from other methods of
calculating linear regression, which generally use progressive splitting techniques between the source inputs.

(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

QUESTION 5
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server
2008 Analysis Services (SSAS) database. A cube in the database contains a large dimension named
Customers. The database uses a data source that is located on a remote server.
Each day, an application adds millions of fact rows and thousands of new customers. Currently, a full process of
the cube takes several hours. You need to ensure that queries return the most recent customer data with the
minimum amount of latency.
Which cube storage model should you use?

A. hybrid online analytical processing (HOLAP)


B. relational online analytical processing (ROLAP)
C. multidimensional online analytical processing (MOLAP)
D. automatic multidimensional online analytical processing (automatic MOLAP)

Answer: A
Section: (none)

Explanation/Reference:
Relational OLAP
Relational OLAP (ROLAP) stores the cube structure in a multidimensional database. The leaf-level measures
are left in the relational data mart that serves as the source of the cube. The preprocessed aggregates are also
stored in a relational database table. When a decision maker requests the value of a measure for a certain set
of dimension members, the ROLAP system first checks to determine whether the dimension members specify
an aggregate or a leaf-level value. If an aggregate is specified, the value is selected from the relational table. If
a leaf-level value is specified, the value is selected from the data mart.
Also, because the ROLAP architecture retrieves leaf-level values directly from the data mart, the leaf-level
values returned by the ROLAP system are always as up-to-date as the data mart itself. In other words, the
ROLAP system does not add latency to leaf-level data. The disadvantage of a ROLAP system is that the
retrieval of the aggregate and leaf-level values is slower than the other OLAP architectures.
Multidimensional OLAP
Multidimensional OLAP (MOLAP) also stores the cube structure in a multidimensional database. However, both
the preprocessed aggregate values and a copy of the leaf-level values are placed in the multidimensional
database as well. Because of this, all data requests are answered from the multidimensional database, making
MOLAP systems extremely responsive. Additional time is required when loading a MOLAP system because all
the leaf-level data is copied into the multidimensional database. Because of this, times occur when the leaf-level
data returned by the MOLAP system is not in sync with the leaf-level data in the data mart itself. A MOLAP
system, therefore, does add latency to the leaf-level data. The MOLAP architecture also requires more disk
space to store the copy of the leaf-level values in the multidimensional database. However, because MOLAP is
extremely efficient at storing values, the additional space required is usually not significant.
Hybrid OLAP
Hybrid OLAP (HOLAP) combines ROLAP and MOLAP storage. This is why we end up with the word “hybrid” in
the name. HOLAP tries to take advantage of the strengths of each of the other two architectures while
minimizing their weaknesses. HOLAP stores the cube structure and the preprocessed aggregates in a
multidimensional database. This provides the fast retrieval of aggregates present in MOLAP structures. HOLAP
leaves the leaf-level data in the relational data mart that serves as the source of the cube. This leads to longer
retrieval times when accessing the leaf-level values. However, HOLAP does not need to take time to copy the
leaf-level data from the data mart. As soon as the data is updated in the data mart, it is available to the decision
maker. Therefore, HOLAP does not add latency to the leaf-level data. In essence, HOLAP sacrifices retrieval
speed on leaf-level data to prevent adding latency to leaf-level data and to speed the data load.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

QUESTION 6
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server
2008 Analysis Services (SSAS) database. The database contains a cube named Financials. The cube contains
objects as shown in the exhibit.

A calculated member named Gross Margin references both Sales Details and Product Costs. You need to
ensure that the solution meets the following requirements:
Managers must be able to view only their cost center's percentage of the company's gross margin.
The impact on query performance is minimal.
What should you do?

A. Add dimension-level security and enable the Visual Totals option.


B. Add cell-level security that has read permissions on the Gross Margin measure.
C. Add cell-level security that has read contingent permissions on the Gross Margin measure.
D. Change the permissions on the Managers dimension level from Read to Read/Write.

Answer: A
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms174927.aspx
User Access Security Architecture
Microsoft SQL Server Analysis Services relies on Microsoft Windows to authenticate users. By default, only
authenticated users who have rights within Analysis Services can establish a connection to Analysis Services.
After a user connects to Analysis Services, the permissions that user has within Analysis Services are
determined by the rights that are assigned to the Analysis Services roles to which that user belongs, either
directly or through membership in a Windows group.
Dimension-Level Security
A database role can specify whether its members have permission to view or update dimension members in
specified database dimensions. Moreover, within each dimension to which a database role has been granted
rights, the role can be granted permission to view or update specific dimension members only instead of all
dimension members. If a database role is not granted permissions to view or update a particular dimension and
some or all the dimension's members, members of the database role have no permission to view the dimension
or any of its members.
Note
Dimension permissions that are granted to a database role apply to the cube dimensions based on the
database dimension, unless different permissions are explicitly granted within the cube that uses the database
dimension.
Cube-Level Security
A database role can specify whether its members have read or read/write permission to one or more cubes in a
database. If a database role is not granted permissions to read or read/write at least one cube, members of the
database role have no permission to view any cubes in the database, despite any rights those members may
have through the role to view dimension members.
Cell-Level Security
A database role can specify whether its members have read, read contingent, or read/write permissions on
some or all cells within a cube. If a database role is not granted permissions on cells within a cube, members of
the database role have no permission to view any cube data. If a database role is denied permission to view
certain dimensions based on dimension security, cell-level security cannot expand the rights of the database
role members to include cell members from that dimension. On the other hand, if a database role is granted
permission to view members of a dimension, cell-level security can be used to limit the cell members from the
dimension that the database role members can view.
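Tying the reference back to the answer: dimension-level security on the cost-center attribute restricts each manager to his or her own cost center, and enabling Visual Totals makes aggregate cells, such as the company's gross margin, reflect only the members that manager can see, so the percentage is correct. Cell-level security (read or read contingent) would also work, but it is evaluated cell by cell at query time, which is why it carries a larger performance cost. A sketch of the role's dimension security, with a hypothetical cost-center hierarchy and member key:

Allowed member set:   {[Cost Center].[Cost Centers].&[42]}
Enable Visual Totals: True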

QUESTION 7
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server
2008 Reporting Services (SSRS) infrastructure in a scale-out deployment. All reports use a SQL Server 2008
relational database as the data source. You implement row-level security.
You need to ensure that all reports display only the expected data based on the user who is viewing the report.
What should you do?

A. Store the credential of a user in the data source.


B. Configure the infrastructure to support Kerberos authentication.
C. Configure the infrastructure to support anonymous authentication by using a custom authentication
extension.
D. Ensure that all report queries add a filter that uses the User!UserID value as a hidden parameter.

Answer: B
Section: (none)

Explanation/Reference:

QUESTION 8
You design a Business Intelligence (BI) solution by using SQL Server 2008. You need to load data into your
online transaction processing (OLTP) database once a week by using data from a flat file. The file contains all
the details about new employees who joined your company last week. The data must be loaded into the tables
shown in the exhibit. (Click the Exhibit button.) Employee.EmployeeID is an identity.
A SQL Server 2008 Integration Services (SSIS) package contains one data flow for each of the destination
tables. In the Employee Data Flow, an OLE DB Command transformation executes a stored procedure that
loads the Employee record and returns the EmployeeID value.
You need to accomplish the following tasks:
Ensure that the EmployeeID is used as a foreign key (FK) in all child tables for the correct Employee record.
Minimize the number of round trips to the database.
Ensure that the package performs in the most efficient manner possible.
What should you do?

A. Use a Lookup Transformation in each of the child table data flows to find the EmployeeID based on first
name and last name.
B. Store the EmployeeID values in SSIS variables and use the variables to populate the FK columns in each of
the child tables.
C. After the Employee table is loaded, write the data to a Raw File Destination and use the raw file as a source
for each of the subsequent Data Flows.
D. After the Employee table is loaded, write the data to a Flat File Destination and use the flat file as a source
for each of the subsequent Data Flows.

Answer: C
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/ms141661.aspx
Raw File Destination
The Raw File destination writes raw data to a file. Because the format of the data is native to the destination, the
data requires no translation and little parsing. This means that the Raw File destination can write data more
quickly than other destinations such as the Flat File and the OLE DB destinations.
You can configure the Raw File destination in the following ways:
Specify an access mode which is either the name of the file or a variable that contains the name of the file to
which the Raw File destination writes.
Indicate whether the Raw File destination appends data to an existing file that has the same name or creates a
new file.
The Raw File destination is frequently used to write intermediary results of partly processed data between
package executions. Storing raw data means that the data can be read quickly by a Raw File source and then
further transformed before it is loaded into its final destination. For example, a package might run several times,
and each time write raw data to files. Later, a different package can use the Raw File source to read from each
file, use a Union All transformation to merge the data into one data set, and then apply additional
transformations that summarize the data before loading the data into its final destination such as a SQL Server
table.

Raw File Source The Raw File source lets us utilize data that was previously written to a raw data file by a
Raw File destination. The raw file format is the native format for Integration Services. Because of this, raw files
can be written to disk and read from disk rapidly. One of the goals of Integration Services is to improve
processing efficiency by moving data from the original source to the ultimate destination without making any
stops in between. However, on some occasions, the data must be staged to disk as part of an Extract,
Transform, and Load process. When this is necessary, the raw file format provides the most efficient means of
accomplishing this task.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

QUESTION 9
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Integration Services (SSIS) package to perform an extract, transform, and load (ETL) process to load data to a
DimCustomer dimension table that contains 1 million rows.
Your data flow uses the following components:
- A SQL Destination data flow task to insert new customers
- An OLE DB Command transform that updates existing customers
On average, 25 percent of existing customer records are updated each night. You need to reduce the amount of time required to update customer records.
What should you do?

A. Modify the UPDATE statement in the OLE DB Command transform to use the PAGLOCK table hint.
B. Modify the UPDATE statement in the OLE DB Command transform to use the TABLOCK table hint.
C. Stage the data in the data flow. Replace the OLE DB Command transform in the data flow with an Execute
SQL task in the control flow.
D. Stage the data in the data flow. Replace the UPDATE statement in the OLE DB Command transform with a
DELETE statement followed by an INSERT statement.

Answer: C
Section: (none)

Explanation/Reference:
Data Flow
Once we set the precedence constraints for the control flow tasks in the package, we can define each of the
data flows. This is done on the Data Flow Designer tab. Each data flow task that was added to the control flow
has its own layout on the Data Flow Designer tab. We can switch between different data flows using the Data
Flow Task drop-down list located at the top of the Data Flow tab. The Data Flow Toolbox contains three types of
items: data flow sources, data flow transformations, and data flow destinations.
However, on some occasions, the data must be staged to disk as part of an Extract, Transform, and Load
process. When this is necessary, the Raw File format provides the most efficient means of accomplishing this
task.
Execute SQL Task The Execute SQL task enables us to execute SQL statements or stored procedures. The
contents of variables can be used for input, output, or input/output parameters and the return value. We can
also save the result set from the SQL statements or stored procedure in a package variable. This result set
could be a single value, a multirow/multicolumn result set, or an XML document.

http://www.sqlservercentral.com/blogs/dknight/archive/2008/12/29/ssis-avoid-ole-db-command.aspx
SSIS – Avoid OLE DB Command
The OLE DB Command runs insert, update, or delete statements for each row, while the Execute SQL Task does
a Bulk Insert in this instance. That means every single row that goes through your package would have an
insert statement run when it gets to an OLE DB Command.
So if you know you are dealing with more than just a couple hundred rows per run then I would highly suggest
using a staging table vs. the OLE DB Command.
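
To make the staging pattern concrete, a minimal sketch: the data flow bulk-loads the changed customer rows into a staging table, and an Execute SQL task in the control flow then applies a single set-based UPDATE (table and column names are illustrative):

-- Run once by the Execute SQL task after the data flow has
-- loaded the changed rows into dbo.CustomerStaging.
UPDATE  d
SET     d.FirstName = s.FirstName,
        d.LastName  = s.LastName,
        d.Email     = s.Email
FROM    dbo.DimCustomer AS d
        INNER JOIN dbo.CustomerStaging AS s
            ON s.CustomerKey = d.CustomerKey;

One statement updates all of the roughly 250,000 changed rows in a single round trip, instead of the OLE DB Command issuing one UPDATE per row.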

QUESTION 10
You design a SQL Server 2008 Analysis Services (SSAS) solution. The data source view has tables as shown
in the exhibit. (Click the Exhibit button.)

The FactInternetSales measure will be queried frequently based on the city and country of the customer.
You need to design a cube that will provide optimal performance for queries.
Which design should you choose?

A. Create two dimensions named Customer and Geography from the DimCustomer table and the
DimGeography table, respectively.
Create a materialized reference relationship between the Geography dimension and the FactInternetSales
measure by using the Customer dimension as an intermediate dimension.
B. Create two dimensions named Customer and Geography from the DimCustomer table and the
DimGeography table, respectively.
Create an unmaterialized reference relationship between the Geography dimension and the
FactInternetSales measure by using the Customer dimension as an intermediate dimension.
C. Create a dimension named Customer by joining the DimGeography and DimCustomer tables.
Add an attribute relationship from CustomerKey to City and from City to Country.
Create a regular relationship in the cube between the Customer dimension and the FactInternetSales
measure.
D. Create a dimension named Customer by joining the DimGeography and DimCustomer tables.
Add an attribute relationship from CustomerKey to City and from CustomerKey to Country.
Create a regular relationship in the cube between the Customer dimension and the FactInternetSales
measure.

Answer: C
Section: (none)

Explanation/Reference:
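The logic behind option C: folding the snowflaked DimGeography attributes into a single Customer dimension and declaring the natural attribute-relationship chain CustomerKey -> City -> Country lets SSAS build and reuse aggregations at the City and Country levels. Option D's relationships (CustomerKey -> City and CustomerKey -> Country) would leave City and Country unrelated to each other, so queries on those attributes could not share aggregations. A sketch of the joining named query for the data source view, assuming (as in an AdventureWorks-style schema) that DimCustomer carries a GeographyKey; column names are illustrative:

SELECT  c.CustomerKey,
        c.FullName,
        g.City,
        g.Country
FROM    dbo.DimCustomer AS c
        INNER JOIN dbo.DimGeography AS g
            ON g.GeographyKey = c.GeographyKey;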

QUESTION 11
You design a Business Intelligence (BI) solution by using SQL Server 2008. Employees use a Windows Forms
application based on Microsoft .NET Framework 3.5. SQL Server is not installed on the employees' computers.
You write a report by using Report Definition Language (RDL). You need to ensure that if the employees are
disconnected from the corporate network, the application renders the report.
What should you do?

A. Configure the application to use an SSRS Web service by using the Render method.
B. Configure the application to use an SSRS Web service by using the RenderStream method.
C. Embed ReportViewer in the application and configure ReportViewer to render reports by using the local
processing mode.
D. Embed ReportViewer in the application and configure ReportViewer to render reports by using the remote
processing mode.

Answer: C
Section: (none)

Explanation/Reference:
Embedding Custom ReportViewer Controls
Microsoft provides two controls in Visual Studio 2008 that allow you to embed SSRS reports (or link to an
existing SSRS report hosted on an SSRS instance) in your custom Windows Forms or Web Forms applications.
Alternatively, you can also design some types of reports from within Visual Studio and then host them in your
custom applications.
The two report processing modes that this control supports are remote processing mode and local processing
mode.
Remote processing mode allows you to include a reference to a report that has already been deployed to a
report server instance. In remote processing mode, the ReportViewer control encapsulates the URL access
method we covered in the previous section. It uses the SSRS Web service to communicate with the report
server.Referencing deployed reports is preferred for BI solutions because the overhead of rendering and
processing the often large BI reports is handled by the SSRS server instance or instances. Also, you can
choose to scale report hosting to multiple SSRS servers if scaling is needed for your solution. Another
advantage to this mode is that all installed rendering and data extensions are available to be used by the
referenced report. Local processing mode allows you to run a report from a computer that does not have SSRS
installed on it.
Local reports are defined differently within Visual Studio itself, using a visual design interface that looks much
like the one in BIDS for SSRS. The output file is in a slightly different format for these reports if they’re created
locally in Visual Studio. It’s an *.rdlc file rather than an *.rdl file, which is created when using a Report Server
Project template in BIDS. The *.rdlc file is defined as an embedded resource in the Visual Studio project. When
displaying *.rdlc files to a user, data retrieval and processing is handled by the hosting application, and the
report rendering (translating it to an output format such as HTML or PDF) is handled by the ReportViewer
control. No server-based instance of SSRS is involved, which makes it very useful when you need to deploy
reports to users that are only occasionally connected to the network and thus wouldn’t have regular access to
the SSRS server. Only PDF, Excel, and image-rendering extensions are supported in local processing mode. If
you use local processing mode with some relational data as your data source, a new report design area opens
up. As mentioned, the metadata file generated has the *.rdlc extension. When working in local processing mode
in Visual Studio 2008, you’re limited to working with the old-style data containers—that is, table, matrix, or list.
The new combined-style Tablix container is not available in this report design mode in Visual Studio 2008. Both
versions of this control include a smart tag that helps you to configure the associated required properties for
each of the usage modes. Also, the ReportViewer control is freely redistributable, which is useful if you’re
considering using either version as part of a commercial application.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

QUESTION 12
You design a SQL Server 2008 Reporting Services (SSRS) solution. The solution contains a report. The report
includes information that is grouped into hierarchical levels. You need to ensure that the solution meets the
following requirements:
- When you click each level, the next level of information must be displayed.
- When you click the last level, detailed information must be displayed.
- When the report is exported to a Microsoft Excel spreadsheet, all the levels and all detailed information must be available in the spreadsheet.
Which feature should the report use?

A. filter
B. drilldown
C. drillthrough
D. a document map

Answer: B
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/dd207141.aspx
Drillthrough, Drilldown, Subreports, and Nested Data Regions (Report Builder 3.0 and
SSRS)
You can organize data in a variety of ways to show the relationship of the general to the detailed.
You can put all the data in the report, but set it to be hidden until a user clicks to reveal details; this is a
drilldown action. You can display the data in a data region, such as a table or chart, which is nested inside
another data region, such as a table or matrix. You can display the data in a subreport that is completely
contained within a main report.
Or, you can put the detail data in drillthrough reports, separate reports that are displayed when a user clicks a
link.
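
Tying this to the requirements: a drilldown is configured through the Visibility properties of the detail groups, along these lines (property wording as it appears in Report Designer; the toggle item is whichever textbox shows the parent level):

Initial visibility: Hidden
Display can be toggled by this report item: <textbox of the parent group>

The hidden rows remain part of the rendered report, so exporting to Excel includes every level and all detail, unlike a drillthrough report, whose detail data is fetched only when the link is clicked.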

QUESTION 13
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to develop SQL Server
2008 Reporting Services (SSRS) reports. Several reports will contain identical data regions.
You need to minimize the amount of maintenance when changes are made to the data regions.
What should you do?

A. Grant the Create Linked Reports role to all users.


B. Create each data region as a report. Embed the reports by using the subreport control.
C. Create a report template for each data region. Use the report template to create each report.
D. Create a shared data source in the SSRS project. Use the new shared data source for all reports.

Answer: B
Section: (none)

Explanation/Reference:
QUESTION 14
You are designing a SQL Server 2008 Reporting Services (SSRS) solution. You have a report that has several
parameters that are populated when users execute the report. You need to ensure that the solution meets the
following requirements:
Users can define their own default parameter values for the report.
Users can schedule snapshots at any time.
Which feature should you use?

A. My Reports
B. Linked Reports
C. Standard Subscription
D. Data-Driven Subscription

Answer: A
Section: (none)

Explanation/Reference:
With a linked report, our report is deployed to one folder. It is then pointed to by links placed elsewhere within
the Report Catalog. To the user, the links look just like a report. Because of these links, the report appears to be
in many places. The sales department sees it in their folder. The personnel department sees it in their folder.
The fact of the matter is the report is only deployed to one location, so it is easy to administer and maintain.
An execution snapshot is another way to create a cached report instance. Up to this point, we have discussed
situations where cached report instances are created as the result of a user action. A user requests a report,
and a copy of that report’s intermediate format is placed in the report cache. With execution snapshots, a
cached report instance is created automatically.
Not all users can change execution snapshots. To change the execution snapshot properties for a report, you
must have rights to the Manage Reports task. Of the four predefined security roles, the Content Manager, My
Reports, and Publisher roles have rights to this task.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://msdn.microsoft.com/en-us/library/bb630404.aspx
A linked report is a report server item that provides an access point to an existing report. Conceptually, it is
similar to a program shortcut that you use to run a program or open a file.
A linked report is derived from an existing report and retains the original's report definition. A linked report
always inherits report layout and data source properties of the original report. All other properties and settings
can be different from those of the original report, including security, parameters, location, subscriptions, and
schedules.
You can create a linked report on the report server when you want to create additional versions of an existing
report. For example, you could use a single regional sales report to create region-specific reports for all of your
sales territories.
Although linked reports are typically based on parameterized reports, a parameterized report is not required.
You can create linked reports whenever you want to deploy an existing report with different settings.

QUESTION 15
You design a Business Intelligence (BI) solution by using SQL Server 2008. You have developed SQL Server
2008 Reporting Services (SSRS) reports that are deployed on an SSRS instance.
You plan to develop a new application to view the reports. The application will be developed by using Microsoft
ASP.NET 3.5.
You need to ensure that the application can perform the following tasks:
Display available reports in a tree view control.
Create and manage subscriptions on reports.
What should you do?
A. Configure the ASP.NET application to use the SSRS Web service.
B. Configure the ASP.NET application to use URL access along with the Command parameter.
C. Embed a ReportViewer control in the ASP.NET application. Configure the control to use the local processing
mode.
D. Embed a ReportViewer control in the ASP.NET application. Configure the control to use the remote
processing mode.

Answer: A
Section: (none)

Explanation/Reference:
Report Server Web Service
The Report Server Web service is the core engine for all on-demand report and model processing requests that
are initiated by a user or application in real time, including most requests that are directed to and from Report
Manager. It includes more than 70 public methods for you to access SSRS functionality programmatically. The
Report Manager Web site accesses these Web services to provide report rendering and other functionality.
Also,other integrated applications, such as the Report Center in Office SharePoint Server 2007, call SSRS Web
services to serve up deployed reports to authorized end users. The Report Server Web service performs end-
to-end processing for reports that run on demand. To support interactive processing, the Web service
authenticates the user and checks the authorization rules prior to handling a request. The Web service supports
the default Windows security extension and custom authentication extensions. The Web service
is also the primary programmatic interface for custom applications that integrate with Report Server, although its
use is not required. If you plan to develop a custom interface for your reports, rather than using the provided
Web site or some other integrated application (such as Office SharePoint Server 2007), you’ll want to explore
the SQL Server Books Online topic “Reporting Services Web Services Class Library.” There you can examine
specific Web methods.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

QUESTION 16
You design a Business Intelligence (BI) solution by using SQL Server 2008. You design a SQL Server 2008
Reporting Services (SSRS) report that meets the following requirements:
- Displays sales data for the last 12 months.
- Enables users to view the sales information summarized by month.
- Enables users to view individual sales orders for any given month.
You need to design the report to minimize the impact on bandwidth.
What should you do?

A. Create a standard report that contains all sales orders. Implement report filtering based on the month.
B. Create a standard report that contains all sales orders. Implement grouping for the monthly summaries.
C. Create a standard report that contains the monthly summaries. Create a subreport for the sales orders for
any given month.
D. Create a standard report that contains the monthly summaries. Create a drillthrough report for the sales
orders for any given month.

Answer: D
Section: (none)

Explanation/Reference:
Drillthrough Action Defines a dataset to be returned as a drillthrough to a more detailed level.
Creating Drillthrough Actions
For the most part, Drillthrough Actions have the same properties as Actions. Drillthrough Actions do not have
Target Type or Target Object properties. In their place, the Drillthrough Action has the following:
- Drillthrough Columns Defines the objects to be included in the drillthrough dataset.
- Default A flag showing whether this is the default Drillthrough Action.
- Maximum Rows The maximum number of rows to be included in the drillthrough dataset.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://technet.microsoft.com/en-us/library/ff519554.aspx
Drillthrough Reports (Report Builder 3.0 and SSRS)
A drillthrough report is a report that a user opens by clicking a link within another report. Drillthrough reports
commonly contain details about an item that is contained in an original summary report. The data in the
drillthrough report is not retrieved until the user clicks the link in the main report that opens the drillthrough
report. If the data for the main report and the drillthrough report must be retrieved at the same time, consider
using a subreport.

QUESTION 17
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a sales report by using
SQL Server 2008 Reporting Services (SSRS). The report is used by managers in a specific country.
Each manager prints multiple copies of the report that contains the previous day's sales for each of their sales
executives.
You need to ensure that the report uses the minimum number of round trips to the database server.
What should you do?

A. Query the database for both Country and Sales Executive.


B. Implement report filtering for both Country and Sales Executive.
C. Implement report filtering for Country and query the data source for Sales Executive.
D. Implement report filtering for Sales Executive and query the data source for Country.

Answer: D
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/dd239395.aspx
Choosing When to Set a Filter
Specify filters for report items when you cannot filter data at the source. For example, use report filters when the
data source does not support query parameters, or you must run stored procedures and cannot modify the
query, or a parameterized report snapshot displays customized data for different users.

You can filter report data before or after it is retrieved for a report dataset. To filter data before it is retrieved,
change the query for each dataset. When you filter data in the query, you filter data at the data source, which
reduces the amount of data that must be retrieved and processed in a report. To filter data after it is retrieved,
create filter expressions in the report. You can set filter expressions for a dataset, a data region, or a group,
including detail groups. You can also include parameters in filter expressions, providing a way to filter data for
specific values or for specific users, for example, filtering on a value that identifies the user viewing the report.
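
Mapped onto this question, the split looks like this (object names are illustrative): the dataset query filters on Country at the source, so each manager's report makes one trip to the database, while Sales Executive is a report filter evaluated against the retrieved data, so printing a copy per executive causes no further round trips.

-- Dataset query: filtered at the source by the manager's country
SELECT  SalesExecutive, OrderDate, SalesAmount
FROM    dbo.FactDailySales
WHERE   Country = @Country
        AND OrderDate = DATEADD(DAY, -1, CAST(GETDATE() AS DATE));

-- Report-side filter on the data region (no extra round trip):
-- Expression: =Fields!SalesExecutive.Value   Operator: =   Value: =Parameters!SalesExecutive.Value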

QUESTION 18
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to create a SQL Server
2008 Reporting Services (SSRS) report. The report must display the list of orders placed through the Internet.
You need to ensure that the following requirements are met:
- The amount of time required for processing the report is minimal.
- The report contains the most recent data at the end of each business day.
- A report is available for any of the last seven days.
Which type of report should you create?

A. Linked
B. Ad Hoc
C. Cached
D. Snapshot

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/bb630404.aspx#Snapshot

A report snapshot is a report that contains layout information and query results that were retrieved at a specific
point in time. Unlike on-demand reports, which get up-to-date query results when you select the report, report
snapshots are processed on a schedule and then saved to a report server. When you select a report snapshot
for viewing, the report server retrieves the stored report from the report server database and shows the data and
layout that were current for the report at the time the snapshot was created.

Report snapshots are not saved in a particular rendering format. Instead, report snapshots are rendered in a
final viewing format (such as HTML) only when a user or an application requests it. Deferred rendering makes a
snapshot portable. The report can be rendered in the correct format for the requesting device or Web browser.

Report snapshots serve three purposes:


- Report history. By creating a series of report snapshots, you can build a history of a report that shows how
data changes over time.
- Consistency. Use report snapshots when you want to provide consistent results for multiple users who must
work with identical sets of data. With volatile data, an on-demand report can produce different results from one
minute to the next. A report snapshot, by contrast, allows you to make valid comparisons against other reports
or analytical tools that contain data from the same point in time.
- Performance. By scheduling large reports to run during off-peak hours, you can reduce processing impact on
the report server during core business hours.

QUESTION 19
You are creating a SQL Server 2008 Reporting Services (SSRS) solution for a company that has offices in
different countries. The company has a data server for each country. Sales data for each country is persisted in
the respective data server for the country. Report developers have only Read access to all data servers. All data
servers have the same schema for the database.
You design an SSRS solution to view sales data.
You need to ensure that users are able to easily switch between sales data for different countries.
What should you do?

A. Implement a single shared data source.


B. Implement multiple shared data sources.
C. Implement an embedded data source that has a static connection string.
D. Implement an embedded data source that has an expression-based connection string.

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms156450.aspx
Expression-based connection strings are evaluated at run time. For example, you can specify the
data source as a parameter, include the parameter reference in the connection string, and allow the user to
choose a data source for the report. For example, suppose a multinational firm has data servers in several
countries. With an expression-based connection string, a user who is running a sales report can select a data
source for a particular country before running the report.

Design the report using a static connection string. A static connection string refers to a connection string that is
not set through an expression (for example, when you follow the steps for creating a report-specific or shared
data source, you are defining a static connection string). Using a static connection string allows you to connect
to the data source in Report Designer so that you can get the query results you need to create the report.
When defining the data source connection, do not use a shared data source. You cannot use a data source
expression in a shared data source. You must define an embedded data source for the report.
Specify credentials separately from the connection string. You can use stored credentials, prompted credentials,
or integrated security.

Add a report parameter to specify a data source. For parameter values, you can either provide a static list of
available values (in this case, the available values should be data sources you can use with the report) or define
a query that retrieves a list of data sources at run time.

Be sure that the list of data sources shares the same database schema. All report design begins with schema
information. If there is a mismatch between the schema used to define the report and the actual schema used
by the report at run time, the report might not run.

Before publishing the report, replace the static connection string with an expression. Wait until you are finished
designing the report before you replace the static connection string with an expression. Once you use an
expression, you cannot execute the query in Report Designer. Furthermore, the field list in the Report Data pane
and the Parameters list will not update automatically.
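
Condensing the MSDN pattern above into a sketch: add a report parameter (here called CountryServer, a hypothetical name) whose available values list the per-country servers, then replace the embedded data source's static connection string with an expression such as:

="Data Source=" & Parameters!CountryServer.Value & ";Initial Catalog=Sales"

Credentials are still specified separately from the connection string, and all the per-country databases must share the same schema, as noted above.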

QUESTION 20
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution will contain a total of
100 different reports created by using Report Definition Language (RDL).
Each report must meet the following requirements:
- Display a common set of calculations based on a parameter value.
- Provide a method for setting values on a report URL or in a subscription definition that are not exposed to a user.
The business rules that determine the calculations change frequently for all reports. You need to design a solution that meets these requirements by using the minimum amount of development and maintenance effort.
What should you do?

A. Create hidden parameters in each report.


B. Create internal parameters in each report.
C. Implement the function in the <Code> element of each report.
D. Implement the function in a custom assembly. Reference the assembly in each report.

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms159238.aspx
Including References to Code from Custom Assemblies
To use custom assemblies in a report, you must first create the assembly, make it available to Report Designer,
add a reference to the assembly in the report, and then use an expression in the report to refer to the methods
contained in that assembly. When the report is deployed to the report server, you must also deploy the custom
assembly to the report server.
To refer to custom code in an expression, you must call the member of a class within the assembly. How you do
this depends on whether the method is static or instance-based. Static methods within a custom assembly are
available globally within the report. You can access static methods in expressions by specifying the namespace,
class, and method name. The following example calls the method ToGBP, which converts the value of the
StandardCost value from dollar to pounds sterling:
=CurrencyConversion.DollarCurrencyConversion.ToGBP(Fields!StandardCost.Value)
Instance-based methods are available through a globally defined Code member. You access these by referring
to the Code member, followed by the instance and method name. The following example calls the instance
method ToEUR, which converts the value of StandardCost from dollar to euro:
=Code.m_myDollarCoversion.ToEUR(Fields!StandardCost.Value)
Note
In Report Designer, a custom assembly is loaded once and is not unloaded until you close Visual Studio. If you
preview a report, make changes to a custom assembly used in the report, and then preview the report again,
the changes will not appear in the second preview. To reload the assembly, close and reopen Visual Studio and
then preview the report.

QUESTION 21
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Reporting Services (SSRS) solution. The solution contains a report named Sales Details that displays sales
information of all the employees. You create an SSRS report named Sales Summary that displays the total
monthly sales of each employee.
Users who view the Sales Summary report occasionally require the monthly sales details for a particular
employee.
You need to ensure that the users can click a value in the month column of the Sales Summary report to open
and render the Sales Details report.
What should you do?

A. Use a subreport.
B. Use a bookmark link.
C. Use the drilldown functionality.
D. Use a drillthrough report link.

Answer: D
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/ff519554.aspx
Drillthrough Reports (Report Builder 3.0 and SSRS)
A drillthrough report is a report that a user opens by clicking a link within another report. Drillthrough reports
commonly contain details about an item that is contained in an original summary report. The data in the
drillthrough report is not retrieved until the user clicks the link in the main report that opens the drillthrough
report. If the data for the main report and the drillthrough report must be retrieved at the same time, consider
using a subreport.

QUESTION 22
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Reporting Services (SSRS) report.
The report contains summary information in two sections named Agencies and States. The Agency summary
section contains two matrices and the State summary section contains a table. The information about each
section is grouped together.
You need to design the report to meet the following requirements:
- When the report is exported to a Microsoft Excel spreadsheet, each summary section is rendered to a separate tab.
- The structure in each section is retained.
What should you do?
A. Select the Keep together on one page option on all report items.
B. Select a line component between the report items for the Agency and State summary sections.
C. Select all the report items for each section in a list report item and enable the Add A Page Break Before
option on the list report item.
D. Select all the report items for each section in a rectangle report item and enable the Add A Page Break
Before option on the rectangle report item.

Answer: D
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/ms155915(SQL.100).aspx
Add a rectangle to your report when you want a graphical element to separate areas of the report, emphasize
areas of a report, or provide a background for one or more report items. Rectangles are also used as containers
to help control the way data regions render in a report. You can customize the appearance of a rectangle by
editing rectangle properties such as the background and border colors.

QUESTION 23
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to create a SQL Server
2008 Reporting Services (SSRS) solution. Developers generate ad hoc reports against a data source that contains 200 tables. Power users generate ad hoc reports against four of the 200 tables. You need to design a
strategy for the SSRS solution to meet the following requirements:
Uses minimum amount of development effort.
Provides two sets of tables in SSRS to the developers group and the power users group.
Which strategy should you use?

A. Create two Report Builder models.
Include the four frequently used tables in the first model and all the tables in the second model.
B. Create a Report Builder model by using all the tables.
Create a perspective within the model to use only the four frequently used tables.
C. Create a Report Builder model by using all the tables.
Create two folders.
Place the four frequently used tables in the first folder and the remaining tables in the second folder.
D. Create two Data Source Views.
Include all the tables in one Data Source View and the four frequently used tables in the other Data Source
View.
Create two Report Builder models so that each model uses one of the Data Source Views.

Answer: B
Section: (none)

Explanation/Reference:
Creating a Report Model
Like reports, Report Models are created in the Business Intelligence Development Studio and then deployed to
a report server. Unlike reports, Report Models can have
security rights assigned to different pieces of their structure to provide the fine-grained security that is often
required in ad hoc reporting situations. We use the Report Model Wizard to create the Report Model, and then
do some manual tweaking to make it more usable. We then deploy the Report Model to the report server.
Finally, we set security within the model itself.
A perspective is a subset of the information in the model. Usually, a perspective coincides with a particular
job or work area within an organization. If a plus sign is to the left of the model, the model contains one or more
perspectives. Click the plus sign to view the perspectives. If you select one of these perspectives as the data
source for your report, only the entities in that perspective will be available to your report. Because perspectives
reduce the number of entities you have to look through to find the data you need on your report, it is usually a
good idea to choose a perspective, rather than using the entire Report Model.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://msdn.microsoft.com/en-us/library/ms345316.aspx
For models that contain many subject areas, for example, Sales, Manufacturing, and Supply data, it might be
helpful to Report Builder users if you create perspectives of the model.
A perspective is a sub-set of a model. Creating perspectives can make navigating through the contents of the
model easier for your model users.

QUESTION 24
You design a Business Intelligence (BI) solution by using SQL Server 2008. Users have no specialized
knowledge of the Transact-SQL (T-SQL) language.
You plan to create a reporting solution by using SQL Server 2008 Reporting Services (SSRS). You have data
stored in a SQL Server 2008 relational data warehouse. You need to ensure that users are able to create the
necessary reports by using data from the data warehouse.
What should you do?

A. Create a shared data source that points to the data warehouse.
Instruct the users to use Report Designer in Business Intelligence Development Studio (BIDS) to create the
reports by using the shared data source.
B. Create a Report Model from the data warehouse.
Instruct the users to use Report Builder 2.0 to create the reports by using the Report Model.
C. Create a shared data source that points to the data warehouse.
Instruct the users to use Report Builder 2.0 to create the reports by using the shared data source.
D. Create a Report Model from the data warehouse.
Instruct the users to use Report Designer in Business Intelligence Development Studio (BIDS) to create the
reports by using the Report Model.

Answer: B
Section: (none)

Explanation/Reference:
If you are creating a new report, select the Report Model or the perspective that should serve as the data
source, along with the report layout, and click OK. If you want to edit an existing report, click the Open button on
the toolbar. You can then navigate the report server folder structure to find the Report Builder report you want to
edit. You cannot use the Report Builder to edit reports that were created or edited using the Report Designer in
Visual Studio 2008 or the Business Intelligence Development Studio.
Once a Report Model has been selected for your data source, or an existing report has been chosen for editing,
the main Report Builder opens. When creating a new report, the Report Builder appears similar to Figure.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://msdn.microsoft.com/en-us/library/ms159750(SQL.100).aspx
Report Builder allows business users to create their own reports based on a user-friendly report model
created in Model Designer. Fully integrated with Microsoft SQL Server Reporting Services, Report Builder
leverages the full reporting platform to bring ad hoc reporting to all users.
Report Builder
Users create reports with the Report Builder tool. The Report Builder interface is built on top of familiar Microsoft
Office paradigms such as Excel and PowerPoint. Users start with report layout templates containing pre-defined
data regions to build combinations of tables, matrices and charts. They navigate the reporting model to select
report items and set constraints to filter the report data. The reporting model contains all of the necessary
information for the Report Builder to automatically generate the source query and retrieve the requested data.
The Report Builder also allows users to:

QUESTION 25
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Reporting Services (SSRS) solution. Twenty users edit the content of the reports. The users belong to an Active
Directory group named Developers. At times, the reports are published with incomplete information.
You need to design an approval process.
What should you do?

A. Restrict the Developers group to the Browser role in SSRS.
B. Add the Developers group to the Content Manager role in SSRS.
C. Deploy the reports to a Microsoft Office SharePoint Server 2007 environment.
D. Create a shared schedule for the reports. Set the snapshot execution option for all the reports by using the
shared schedule.
Answer: C
Section: (none)

Explanation/Reference:
Deploying the reports to a Microsoft Office SharePoint Server 2007 document library lets the team use the
library's built-in content approval and versioning features, so a report edited by the Developers group must be
approved before it becomes visible to other users.

QUESTION 26
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Reporting Services (SSRS) solution. The solution has a report named SalesDetails that contains a parameter
named EmployeeID.
You have the following constraints:
Ten thousand employees require the report in different file formats. The employees can view only their sales
data by specifying their identity number as the EmployeeID parameter.
You need to ensure that the constraints are met before you deliver the report to the employees.
What should you do?

A. Create a data-driven subscription.
B. Create a SharePoint Report Center site.
C. Create a subscription for each employee.
D. Create a report model for each employee.

Answer: A
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms159150.aspx
A data-driven subscription provides a way to use dynamic subscription data that is retrieved from an
external data source at run time. A data-driven subscription can also use static text and default values that you
specify when the subscription is defined.

You can use data-driven subscriptions to do the following:
- Distribute a report to a fluctuating list of subscribers. For example, you can use data-driven subscriptions to
distribute a report throughout a large organization where subscribers vary from one month to the next, or use
other criteria that determines group membership from an existing set of users.
- Filter the report output using report parameter values that are retrieved at run time.
- Vary report output formats and delivery options for each report delivery.

A data-driven subscription is composed of multiple parts. The fixed aspects of a data-driven subscription are
defined when you create the subscription, and these include the following:
- The report for which the subscription is defined (a subscription is always associated with a single report).
- The delivery extension used to distribute the report. You can specify report server e-mail delivery, file share
delivery, the null delivery provider used for preloading the cache, or a custom delivery extension. You cannot
specify multiple delivery extensions within a single subscription.
- The subscriber data source. You must specify a connection string to the data source that contains subscriber
data when you define the subscription. The subscriber data source cannot be specified dynamically at run time.
- The query that you use to select subscriber data must be specified when you define the subscription. You
cannot change the query at run time.

Dynamic values used in a data-driven subscription are obtained when the subscription is processed. Examples
of variable data that you might use in a subscription include the subscriber name, e-mail address, preferred
report output format, or any value that is valid for a report parameter. To use dynamic values in a data-driven
subscription, you define a mapping between the fields that are returned in the query to specific delivery options
and to report parameters. Variable data is retrieved from a subscriber data source each time the subscription is
processed.
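
As an illustration of that field-to-parameter mapping, a subscriber query for this scenario could look roughly like
the following T-SQL; the dbo.EmployeeSubscribers table and its column names are hypothetical:

SELECT EmployeeID,       -- mapped to the EmployeeID report parameter
       EmailAddress,     -- mapped to the TO field of the delivery extension
       PreferredFormat   -- mapped to the render format (for example, 'PDF' or 'EXCEL')
FROM dbo.EmployeeSubscribers;

Because the query runs each time the subscription is processed, new or removed employees are picked up
automatically, which is why a single data-driven subscription is sufficient for all ten thousand recipients.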

QUESTION 27
You are a SQL Server 2008 Analysis Services (SSAS) data mining architect. The customer table contains the
following column names:
Customer_key
Name
Age
Education Level
IsBuyer
You plan to build a data mining model by using the Microsoft Decision Trees algorithm for the customer table.
You need to identify the data column-model parameter pairs to predict possible buyers.
Which model should you select?

A. Data Column Model Parameter Type
Customer_key Input
Name Ignore
Education Level Input, Predict
Age Input, Predict
IsBuyer Key
B. Data Column Model Parameter Type
Customer_key Key
Name Ignore
Education Level Input
Age Input
IsBuyer Predict
C. Data Column Model Parameter Type
Customer_key Input
Name Ignore
Education Level Input
Age Input
IsBuyer Key
D. Data Column Model Parameter Type
Customer_key Predict
Name Key
Education Level Input
Age Input
IsBuyer Input

Answer: B
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/ms175423.aspx
Changing Mining Column Usage
You can change which columns are included in a mining model and how each column is used, such as input,
key, or predictable, by using the cells for that model column in the grid on the Mining Models tab. Each cell
corresponds to a column in the mining structure. For key columns, you can set the cell to Key or Ignore. For
input and output columns, you can set the cell to the following values:
- Ignore
- Input
- Predict
- PredictOnly
If you set a cell to Ignore, the column is removed from the mining model, but that column can still be used by
other mining models in the structure.
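
To make the usage settings concrete, answer B corresponds to a DMX definition along these lines; the model
name is illustrative, and the Name column is simply omitted, which is the effect of marking it Ignore:

CREATE MINING MODEL [CustomerBuyerModel]
(
    Customer_key      LONG KEY,                  -- Key
    Age               LONG CONTINUOUS,           -- Input
    [Education Level] TEXT DISCRETE,             -- Input
    IsBuyer           BOOLEAN DISCRETE PREDICT   -- Predict
)
USING Microsoft_Decision_Trees
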
QUESTION 28
You design a SQL Server 2008 Analysis Services (SSAS) solution. You have the following requirements for a
single data mining model:
Group all customers by two different age ranges.
Group all customers by ten different age ranges.
You need to design the model to meet the requirements.
What should you include in the design?

A. one column of the Long data type and the Discrete content type
B. one column of the Long data type and the Continuous content type
C. two columns, each of the Long data type and the Discrete content type
D. two columns, each of the Long data type and the Discretized content type

Answer: D
Section: (none)

Explanation/Reference:
We do this by choosing Discretized as the Content Type, and then selecting a method that groups continuous
values into a discrete number of buckets (that is, ages 11–15, ages 16–20, and so on). Often, it is much easier
to do analysis on discretized values than on continuous values. For the purposes of a given analysis, the buying
habits of 16–20-year-olds may be similar enough so that we can study them as a group in one discretized
bucket.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

Discretized The column has continuous values that are grouped into buckets. Each bucket is considered to
have a specific order and to contain discrete values. You saw an example of this in Figure 12-2 using the Age
column in the Targeted Mining sample. Note that you’ll also set the DiscretizationMethod and (optionally) the
DiscretizationBucketCount properties if you mark your column as Discretized. In our sample, we’ve set the
bucket size to 10 and DiscretizationMethod to Automatic. Possible values for discretization method are
automatic, equal areas, or clusters. Automatic means that SSAS determines which method to use. Equal areas
results in the input data being divided into partitions of equal size. This method works best with data with
regularly
distributed values. Clusters means that SSAS samples the data to produce a result that accounts for “clumps”
of data values. Because of this sampling, Clusters can be used only with numeric input columns. You can use
the date, double, long, or text data type with the Discretized content type.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)
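
A minimal sketch of answer D in DMX, assuming both columns are bound to the same source Age column when
the structure is trained; the names and the EQUAL_AREAS method are illustrative:

CREATE MINING STRUCTURE [CustomerAgeBands]
(
    [Customer Key]  LONG KEY,
    [Age Two Bands] LONG DISCRETIZED(EQUAL_AREAS, 2),   -- two age ranges
    [Age Ten Bands] LONG DISCRETIZED(EQUAL_AREAS, 10)   -- ten age ranges
)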

QUESTION 29
You design a SQL Server 2008 Analysis Services (SSAS) solution. The solution includes a mining structure that
is created by using the default options and a mining model that uses the Microsoft Clustering algorithm. You
need to ensure that users can access source data by querying the mining model. What should you do?

A. Modify the mining structure to include a filter.
B. Modify the mining structure to enable drillthrough.
C. Include a task in the solution to process the mining model.
D. Include a task in the solution to delete all cached data from the mining model.

Answer: B
Section: (none)
Explanation/Reference:
Microsoft Clustering algorithm
Function
The Microsoft Clustering algorithm builds clusters of entities as it processes the training data set. It looks at
the values of each attribute for the entities in the cluster. By entering the attribute value we want, we can have
the clusters color-coded according to the concentration of our desired value.
Tasks
The main purpose of the Microsoft Clustering algorithm is
- Segmentation
It can also be used for
- Regression
- Classification
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

Microsoft Clustering Algorithm
As its name indicates, the Microsoft Clustering algorithm focuses on showing you meaningful groupings in your
source data. Unlike Naïve Bayes, which requires discrete content inputs and
considers all input attributes of equal weight, Microsoft Clustering allows for more flexibility in input types and
grouping methodologies. You can use more content types as input and you
can configure the method used to create the groups or clusters. Microsoft Clustering separates your data into
intelligent groupings. As we mentioned in the previous paragraph, you can use Continuous, Discrete, and most
other content types. You can optionally supply a predictive value, by marking it as predict only. Be aware that
Microsoft Clustering is generally not used for prediction—you use it to find natural groupings in your data.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

Drillthrough Action Defines a dataset to be returned as a drillthrough to a more detailed level.


Creating Drillthrough Actions
For the most part, Drillthrough Actions have the same properties as Actions. Drillthrough Actions do not have
Target Type or Target Object properties. In their place, the Drillthrough Action has the following:
- Drillthrough Columns Defines the objects to be included in the drillthrough dataset.
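
Once drillthrough is enabled on the mining structure, users can reach the underlying source cases with a DMX
query such as the following; the model name is illustrative:

SELECT * FROM [Customer Clusters].CASES   -- returns the source cases the model was trained from

If drillthrough is not enabled, queries against .CASES are rejected, which is why answer B is required.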

QUESTION 30
You design a Business Intelligence (BI) solution by using SQL Server 2008. A data warehouse named
CustomerDW contains a Fact table named FactCustomer. The FactCustomer table contains two columns
named CustomerKey and CustomerSales. You create a data mining model named CustomerModel by using
SQL Server 2008 Analysis Services (SSAS).
A report that is developed by using SQL Server 2008 Reporting Services (SSRS) lists the top 50 customers
based on the sales amount. The report extracts data from a SQL Server relational database.
You add a column named UpSell to the report.
You need to ensure that the UpSell column displays the probability values of the expensive products that
customers are likely to purchase.
Which Data Mining Extensions (DMX) query should you use?

A. SELECT PredictProbability(t.[UpSell]) as [UpSell],
[CustomerKey], m.[CustomerSales]
From [CustomerModel] m
PREDICTION JOIN OPENQUERY([CustomerDW],
'SELECT
[CustomerKey], [CustomerSales] From FactCustomer
ORDER BY [CustomerSales]
') AS t
ON m.[CustomerKey] = t.[CustomerKey]
B. SELECT PredictProbability(m.[UpSell]) as [UpSell],
[CustomerKey], t.[CustomerSales]
From [CustomerModel] m
PREDICTION JOIN
OPENQUERY([CustomerDW],
'SELECT TOP 50
[CustomerKey], [CustomerSales]
FROM FactCustomer
ORDER BY [CustomerSales]
') AS t
C. SELECT PredictProbability(m.[UpSell]) as [UpSell],
[CustomerKey], t.[CustomerSales]
From [CustomerModel] m
PREDICTION JOIN OPENQUERY([CustomerDW],
'SELECT TOP 50
[CustomerKey],[CustomerSales]
From FactCustomer
ORDER BY [CustomerSales]
') AS t
ON m.[CustomerKey] = t.[CustomerKey]
D. SELECT Probability(m.[UpSell]) as [UpSell],
[CustomerKey], t.[CustomerSales]
From [CustomerModel] m
PREDICTION JOIN OPENQUERY([CustomerDW],
'SELECT
[CustomerKey], [CustomerSales]
From FactCustomer
ORDER BY [CustomerSales]
') AS t
ON m.[CustomerKey] = t.[CustomerKey]

Answer: C
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms132031.aspx
SELECT FROM <model> PREDICTION JOIN (DMX)
Uses a mining model to predict the states of columns in an external data source. The PREDICTION JOIN
statement matches each case from the source query to the model.

SELECT [FLATTENED] [TOP <n>] <select expression list>
FROM <model> | <sub select> [NATURAL] PREDICTION JOIN
<source data query> [ON <join mapping list>]
[WHERE <condition expression>]
[ORDER BY <expression> [DESC|ASC]]

DMX Prediction Queries
The Data Mining Extensions language is modeled after the SQL query language. You probably recognized the
SQL-like syntax of SELECT…FROM…JOIN…ON from the code generated in
the preceding example. Note the use of the DMX Predict function and the PREDICTION JOIN keyword.

(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

Answer C is correct because it combines TOP 50 (to return the top 50 customers based on the sales amount)
with the join condition ON m.[CustomerKey] = t.[CustomerKey] that relates each source row to the model.
QUESTION 31
You design a Business Intelligence solution by using SQL Server 2008. The solution includes a SQL Server
2008 Analysis Services (SSAS) database. You design a data mining structure. The structure is used by a data
mining model that predicts the expected growth for a particular market location. The model also populates a
data mining dimension named Market Location.
The database includes a cube that contains a calculated member named Predicted Revenue. The calculated
member uses predictions from the data mining model.
You have the following business requirements:
The view shown in the following exhibit must be displayed to consultants. (Click the Exhibit button.)

The view shown in the following exhibit must be displayed to managers. (Click the Exhibit button.)

You need to design a solution that meets the business requirements. What should you do?

A. Implement cell-level security on the cube.
B. Implement drillthrough security on the cube.
C. Implement dimension-level security on the Market Location dimension.
D. Create a new reference dimension that joins Windows user names and their allowed market locations.
Implement dimension-level security on the new reference dimension.

Answer: A
Section: (none)

Explanation/Reference:
Cell-level security allows or denies read access to individual cell values, such as those of the Predicted
Revenue calculated member, while leaving the rest of the cube browsable. This fits a scenario in which
consultants and managers must see different measure values for the same Market Location members.

QUESTION 32
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server
2008 Analysis Services (SSAS) database.
A measure group in the database contains transaction details. The transaction details include the price, volume
of shares, trade type, and several other attributes of each transaction. You need to implement a data mining
model that will estimate the future prices based on the existing transaction data.
Which algorithm should the data mining model use?

A. the Microsoft Clustering algorithm
B. the Microsoft Association algorithm
C. the Microsoft Naive Bayes algorithm
D. the Microsoft Neural Network algorithm

Answer: D
Section: (none)

Explanation/Reference:
Microsoft Neural Network Algorithm
Microsoft Neural Network is by far the most powerful and complex algorithm. This algorithm creates
classification and regression mining models by constructing a Multilayer Perceptron network of neurons. Similar
to the Microsoft Decision Trees algorithm, the Microsoft Neural Network algorithm calculates probabilities for
each possible state of the input attribute when given each state of the predictable attribute. You can later use
these probabilities to predict an outcome of the predicted attribute, based on the input attributes. It is
recommended for use when other algorithms fail to
produce meaningful results, such as those measured by a lift chart output. We often use Microsoft Neural
Network as a kind of a last resort, when dealing with large and complex datasets that fail to produce meaningful
results when processed using other algorithms. This algorithm can accept a data type of Discrete or Continuous
as input
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

Microsoft Neural Network
Neural networks were developed in the 1960s to model the way human neurons function. Microsoft has created
the Microsoft Neural Network algorithm so we can use neural networks for such mundane activities as
predicting product sales. Of course, predicting product sales might not seem so mundane if your future employment is
dependent on being correct.
Function
The Microsoft Neural Network algorithm creates a web of nodes that connect inputs derived from attribute
values to a final output. The combination function determines how to combine the inputs coming into the node.
Certain inputs might get more weight than others when it comes to affecting the output from this node.
The second function in each node is the activation function. The activation function takes input from the
combination function and comes up with the output from this node to be sent to the next node in the network.
Tasks
The main purposes of the Microsoft Neural Network algorithm are
- Classification
- Regression
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://msdn.microsoft.com/en-us/library/ms174941.aspx
Microsoft Neural Network Algorithm
In SQL Server Analysis Services, the Microsoft Neural Network algorithm combines each possible state of the
input attribute with each possible state of the predictable attribute, and uses the training data to calculate
probabilities. You can later use these probabilities for classification or regression, and to predict an outcome of
the predicted attribute, based on the input attributes.
A mining model that is constructed with the Microsoft Neural Network algorithm can contain multiple networks,
depending on the number of columns that are used for both input and prediction, or that are used only for
prediction. The number of networks that a single mining model contains depends on the number of states that
are contained by the input columns and predictable columns that the mining model uses.
Creating Predictions
After the model has been processed, you can use the network and the weights stored within each node to make
predictions. A neural network model supports regression, association, and classification analysis. Therefore, the
meaning of each prediction might be different. You can also query the model itself, to review the correlations
that were found and retrieve related statistics. For examples of how to create queries against a neural network
model, see Querying a Neural Network Model (Analysis Services- Data Mining).
Remarks
Does not support drillthrough or data mining dimensions. This is because the structure of the nodes in the
mining model does not necessarily correspond directly to the underlying data.
Does not support the creation of models in Predictive Model Markup Language (PMML) format.
Supports the use of OLAP mining models.
Does not support the creation of data mining dimensions.

QUESTION 33
You design a Business Intelligence (BI) solution by using SQL Server 2008. Your company processes all
transaction data in a Point of Sale (POS) application. Based on the transaction data, you design a solution to
predict the type of products a customer tends to purchase on a single visit.
You need to identify the appropriate algorithm to design the solution.
Which algorithm should you use?

A. Clustering
B. Naïve Bayes
C. Association Rules
D. Time Series

Answer: C
Section: (none)

Explanation/Reference:
Microsoft Association Rules
The Microsoft Association Rules algorithm deals with items that have been formed into sets within the data. In
the example here, we look at sets of products that were purchased by the same customer. The Business
Intelligence Development Studio provides us with three viewers for examining these sets.

QUESTION 34
You design a Business Intelligence (BI) solution by using SQL Server 2008. Your solution includes a data
mining structure that uses SQL Server 2008 Analysis Services (SSAS) as its data source. The measure groups
use 100 percent multidimensional online analytical processing (MOLAP) storage.
You need to provide detailed information on the training and test data to ensure the accuracy of the mining
model. You also need to minimize the time required to create the training and test data. Which two tasks should
you perform? (Each correct answer presents part of the solution. Choose two.)

A. Perform cross-validation queries to the test and training data.
B. Create a new mining structure that has a holdout value.
C. Create a SQL Server 2008 Integration Services (SSIS) package that partitions test and training datasets and
merges case and nested tables.
D. Use a Sort Data Flow transformation.
E. Use an ORDER BY clause in the Data Flow source query. Define a SortKeyPosition ordinal key for the
appropriate output column.

Answer: AB
Section: (none)

Explanation/Reference:
Cross Validation
The cross validation tool was added specifically to address requests from enterprise customers. Keep in mind
that cross validation does not require separate training and testing datasets. You can use testing data, but you
won’t always need to. This elimination of the need for holdout (testing) data can make cross validation more
convenient to use for data mining model validation. Cross validation works by automatically separating the
source data into partitions of equal size. It then performs iterative testing against each of the partitions and
shows the results in a detailed output grid. Cross validation works according to the value specified in the Fold
Count parameter on the Cross Validation tab of the Mining Accuracy Chart tab in BIDS. The default value for
this parameter is 10, which equates to 10 sets. If you’re using temporary mining models to cross validate in
Excel 2007, 10 is the maximum number of allowable folds. If you’re using BIDS, the maximum number is 256.
Of course, a greater number of folds equates to more processing overhead.
You can also implement cross validation using newly introduced stored procedures. A reason to use the new
cross-validation capability is that it’s a quick way to perform validation using multiple mining models as source
inputs.
Note Cross validation cannot be used to validate models built using the Time Series or Sequence Clustering
algorithms. This is logical if you think about it because both of these algorithms depend on sequences and if the
data was partitioned for testing, the validity of the sequence would be violated
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)
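
For reference, the stored-procedure route mentioned above can be called from DMX roughly as follows; the
structure, model, and target names are illustrative, and the parameter order reflects our reading of the SQL
Server 2008 SystemGetCrossValidationResults documentation:

CALL SystemGetCrossValidationResults(
    [Customer Mining Structure],   -- mining structure to validate
    [Customer Decision Tree],      -- mining model(s) built on that structure
    10,                            -- fold count (the BIDS default)
    1000,                          -- maximum cases to use (0 = all)
    'IsBuyer'                      -- target (predictable) attribute
)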

QUESTION 35
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution has been deployed by
using default settings on a SQL Server 2008 Analysis Services (SSAS) instance. The solution has a large cube
that processes 10 million fact rows. You frequently encounter out-of-memory exceptions when the cube is
processed. You need to recommend a solution to resolve the out-of-memory exceptions when the cube is
processed. You want to achieve this task by using the minimum amount of development effort.
What should you do?

A. Reduce the number of aggregations.
B. Partition the cube. Process the cube based on each partition.
C. Increase the physical memory available to the SSAS instance by modifying the Memory\TotalMemoryLimit
server property.
D. Increase the physical memory available to the SSAS instance by modifying the OLAP\Process
\BufferMemoryLimit server property.

Answer: D
Section: (none)

Explanation/Reference:
The OLAP\Process\BufferMemoryLimit server property governs how much memory the processing engine can
use for its buffers. Tuning this property is a configuration-only change, so it addresses the processing
out-of-memory exceptions with far less effort than repartitioning the cube or redesigning aggregations.

QUESTION 36
You design a Business Intelligence (BI) solution by using SQL Server 2008. You deploy a SQL Server 2008
Analysis Services (SSAS) cube. The cube contains a measure group that uses table binding. The measure
group contains 200 million rows. A job that processes the measure group fails. The log shows an out-of-memory
error. The job uses the Process Update option.
You need to resolve the error. You need to perform this action without increasing the available physical memory
for the SSAS instance. What should you do?
A. Change the job to process the cube.
B. Change the job to process the measure group with the Process Full option.
C. Increase the number of partitions in the measure group.
D. Increase the number of aggregations in the measure group.

Answer: C
Section: (none)

Explanation/Reference:
Adding partitions to the measure group divides the 200 million rows into smaller units that Analysis Services
can process independently, so each processing job needs far less memory and the Process Update operation
can complete without additional physical memory.

QUESTION 37
You design a Business Intelligence (BI) solution by using SQL Server 2008. Your solution includes relational
and analysis services.
The solution has a cube that is queried by more than 650 users. During peak hours, more than 100 active
connections are open on the cube at any given time. Users connect to and query the cube by using custom-built
applications. You need to view the connection details and the application name that is used to connect to the
cube of all users. What should you do?

A. Use the Resource Governor.
B. Use the Database Tuning Advisor.
C. Use the Analysis Services performance counters.
D. Prepare a report by using a dynamic management view.

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms188754.aspx
Dynamic Management Views and Functions (Transact-SQL)
Dynamic management views and functions return server state information that can be used to monitor the
health of a server instance, diagnose problems, and tune performance.
Important
Dynamic management views and functions return internal, implementation-specific state data. Their schemas
and the data they return may change in future releases of SQL Server. Therefore, dynamic management views
and functions in future releases may not be compatible with the dynamic management views and functions in
this release. For example, in future releases of SQL Server, Microsoft may augment the definition of any
dynamic management view by adding columns to the end of the column list. We recommend against using the
syntax SELECT * FROM dynamic_management_view_name in production code because the number of
columns returned might change and break your application.

There are two types of dynamic management views and functions:
- Server-scoped dynamic management views and functions. These require VIEW SERVER STATE permission
on the server.
- Database-scoped dynamic management views and functions. These require VIEW DATABASE STATE
permission on the database.

Querying Dynamic Management Views
Dynamic management views can be referenced in Transact-SQL statements by using two-part, three-part, or
four-part names. Dynamic management functions on the other hand can be referenced in Transact-SQL
statements by using either two-part or three-part names. Dynamic management views and functions cannot be
referenced in Transact-SQL statements by using one-part names.
All dynamic management views and functions exist in the sys schema and follow this naming convention dm_*.
When you use a dynamic management view or function, you must prefix the name of the view or function by
using the sys schema. For example, to query the dm_os_wait_stats dynamic management view, run the
following query:

SELECT wait_type, wait_time_ms
FROM sys.dm_os_wait_stats;
GO
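
For an SSAS cube specifically, the same dynamic-management approach works through the $SYSTEM schema
rowsets, which SSAS 2008 exposes to SELECT queries from an MDX query window; a sketch, with column
names taken from our reading of the DISCOVER_CONNECTIONS rowset:

SELECT CONNECTION_ID,
       CONNECTION_USER_NAME,
       CONNECTION_HOST_NAME,
       CONNECTION_HOST_APPLICATION   -- the application name used to connect
FROM $SYSTEM.DISCOVER_CONNECTIONS;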

QUESTION 38
You design a Business Intelligence (BI) solution by using SQL Server 2008. A SQL Server 2008 Reporting
Services (SSRS) instance contains a report. Employees execute the report by using Report Manager.
Each employee has his own specific set of parameters to execute the report. Data for the report is updated
once daily. For each employee, the report takes more than five minutes to execute. You discover that data
retrieval takes most of the time during report execution. You need to reduce the execution time of the report.
What should you do?

A. Create a report execution snapshot.
B. Create a data-driven subscription that uses the NULL delivery method.
C. Create a data-driven subscription that uses the file share delivery method.
D. Create a standard subscription that uses the file share delivery method.

Answer: B
Section: (none)

Explanation/Reference:
Data-Driven Subscriptions
A better name for a data-driven subscription might be “mass mailing.” The data-driven subscription enables us
to take a report and e-mail it to a number of people on a mailing list. The mailing list can be queried from any
valid Reporting Services data source.The mailing list can contain fields in addition to the recipient’s e-mail
address, which are used to control the content of the e-mail sent to each recipient.

http://msdn.microsoft.com/en-us/library/ms159762.aspx
Standard and Data-Driven Subscriptions
Reporting Services supports two kinds of subscriptions: standard and data-driven.
Standard subscriptions are created and managed by individual users. A standard subscription consists of
static values that cannot be varied during subscription processing. For each standard subscription, there is
exactly one set of report presentation options, delivery options, and report parameters.
Data-driven subscriptions get subscription information at run time by querying an external data source that
provides values used to specify a recipient, report parameters, or application format. You might use data-driven
subscriptions if you have a very large recipient list or if you want to vary report output for each recipient. To use
data-driven subscriptions, you must have expertise in building queries and an understanding of how parameters
are used. Report server administrators typically create and manage these subscriptions.
Delivery Extensions
Subscriptions use delivery extensions to determine how to distribute a report and in what format. When a user
creates a subscription, he or she can choose one of the available delivery extensions to determine how the
report is delivered. Reporting Services includes the following delivery extensions. Developers can create
additional delivery extensions to route reports to other locations.
Windows File Share - Delivers a report as a static application file to a shared folder that is accessible on the
network.
E-mail - Delivers a notification or a report as an e-mail attachment or URL link.
SharePoint library - Delivers a report as a static application file to a SharePoint library that is accessible from
a SharePoint site. The site must be integrated with a report server that runs in SharePoint integrated mode.
Null - The null delivery provider is a highly specialized delivery extension that is used to preload a cache with
ready-to-view parameterized reports. This method is not available to users in individual subscriptions. Null
delivery is used by administrators in data-driven subscriptions to improve report server performance by
preloading the cache.

QUESTION 39
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes several SQL
Server 2008 Integration Services (SSIS) packages. The SSIS packages import data from files located on other
servers.
The packages will be deployed to a SQL Server 2008 instance and scheduled to run through the SQL Server
Agent service. The SQL Server Agent service runs under a local user account. The SSIS packages fail to run
when the SQL Server Agent jobs are executed. You need to ensure that the packages run successfully in the
production environment.
What should you do?

A. Configure the SQL Server Agent job step to run as a proxy account.
B. Configure the SQL Server Agent job to use the sa account as the job owner.
C. Configure the SQL Server Agent service to use the Local Service account.
D. Configure the SQL Server Agent service to use a local administrator account.

Answer: A
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms189064.aspx
Creating SQL Server Agent Proxies
A SQL Server Agent proxy defines the security context for a job step. A proxy provides SQL Server Agent with
access to the security credentials for a Microsoft Windows user. Each proxy can be associated with one or more
subsystems. A job step that uses the proxy can access the specified subsystems by using the security context
of the Windows user. Before SQL Server Agent runs a job step that uses a proxy, SQL Server Agent
impersonates the credentials defined in the proxy, and then runs the job step by using that security context.
Job steps that execute Transact-SQL do not use SQL Server Agent proxies. Transact-SQL job steps run in the
security context of the owner of the job. To set the security context for a Transact-SQL job step, use the
database_user_name parameter in the sp_add_jobstep stored procedure.
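
A minimal sketch of the setup in T-SQL, assuming a hypothetical Windows account DOMAIN\SsisRunner that
has permissions on the remote file shares; the credential and proxy names are illustrative:

-- Store the Windows account the job step should run under.
CREATE CREDENTIAL SsisFileShareCredential
    WITH IDENTITY = N'DOMAIN\SsisRunner', SECRET = N'<password>';

-- Create the Agent proxy from the credential.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SsisPackageProxy',
    @credential_name = N'SsisFileShareCredential';

-- Allow the proxy to be used by the SSIS subsystem.
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SsisPackageProxy',
    @subsystem_name = N'SSIS';

Each job step that runs a package is then configured to run as SsisPackageProxy instead of the SQL Server
Agent service account.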

QUESTION 40
You design a Business Intelligence (BI) solution by using SQL Server 2008. The SQL Server 2008 Integration
Services (SSIS) developers use a SQL Server 2008 instance as the primary development environment.
All the SSIS packages contain data connection managers that use SQL Server authentication to extract data.
The packages are saved by using the EncryptAllWithUserKey package protection level. You plan a package
migration strategy from the development environment to a production environment. Migration will be performed
by using an automated script.
You need to ensure that the packages execute without error in the production environment.
What should you do?

A. Create a package configuration for every package that uses a SQL Server table.
B. Create a package configuration for every package that uses an XML configuration file.
C. Export each package and change the package protection level to DontSaveSensitive.
D. Export each package and change the package protection level to EncryptSensitiveWithPassword.

Answer: D
Section: (none)
Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms141747.aspx
Setting the Protection Level of Packages
To protect the data in an Integration Services package, you can set a protection level that helps protect just
sensitive data or all the data in the package. Furthermore, you can encrypt this data with a password or a user
key, or rely on the database to encrypt the data. Also, the protection level that you use for a package is not
necessarily static, but changes throughout the life cycle of the package. You often set one protection level
during development and another as soon as you deploy the package.
Do not save sensitive (DontSaveSensitive) - Suppresses the values of sensitive properties in the package
when the package is saved. This protection level does not encrypt, but instead it prevents properties that are
marked sensitive from being saved with the package and therefore makes the sensitive data unavailable to
other users. If a different user opens the package, the sensitive information is replaced with blanks and the user
must provide the sensitive information.
Encrypt all with password (EncryptAllWithPassword) - Uses a password to encrypt the whole package. The
package is encrypted by using a password that the user supplies when the package is created or exported. To
open the package in SSIS Designer or run the package by using the dtexec command prompt utility, the user
must provide the package password. Without the password the user cannot access or run the package.
Encrypt all with user key (EncryptAllWithUserKey) - Uses a key that is based on the current user profile to
encrypt the whole package. Only the user who created or exported the package can open the package in SSIS
Designer or run the package by using the dtexec command prompt utility.
Encrypt sensitive with password (EncryptSensitiveWithPassword) - Uses a password to encrypt only the
values of sensitive properties in the package. DPAPI is used for this encryption. Sensitive data is saved as a
part of the package, but that data is encrypted by using a password that the current user supplies when the
package is created or exported. To open the package in SSIS Designer, the user must provide the package
password. If the password is not provided, the package opens without the sensitive data and the current user
must provide new values for sensitive data. If the user tries to execute the package without providing the
password, package execution fails.
Encrypt sensitive with user key (EncryptSensitiveWithUserKey) - Uses a key that is based on the current
user profile to encrypt only the values of sensitive properties in the package. Only the same user who uses the
same profile can load the package. If a different user opens the package, the sensitive information is replaced
with blanks and the current user must provide new values for the sensitive data. If the user attempts to execute
the package, package execution fails. DPAPI is used for this encryption.
Rely on server storage for encryption (ServerStorage) - Protects the whole package using SQL Server
database roles. This option is supported only when a package is saved to the SQL Server msdb database. It is
not supported when a package is saved to the file system from Business Intelligence Development Studio.
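
As an illustrative migration step, the protection level of a package file can be changed from the command line
with the dtutil utility, where the numeric code 2 stands for EncryptSensitiveWithPassword; the file name and
password are placeholders:

dtutil /FILE LoadSales.dtsx /ENCRYPT FILE;LoadSales.dtsx;2;Str0ngPassword

Because the same password works for every user and machine, the automated migration script can supply it
when the packages are deployed and executed in production, which a user-key-based protection level cannot do.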

QUESTION 41
You design a Business Intelligence (BI) solution by using SQL Server 2008. The SQL Server 2008 instance
hosts a database. The database is currently scheduled for a full backup on a monthly basis.
The 4-terabyte database contains 3.5 terabytes of data in a read-only filegroup. The database uses the bulk-
logged recovery model.
You need to back up the database changes to a tape drive every night by using minimum storage space and
time.
Which backup strategy should you use?

A. File backup
B. Partial backup
C. Differential backup
D. Differential Partial backup

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms191539.aspx
Partial Backups
Partial backups were introduced in SQL Server 2005. Partial backups are designed for use under the simple
recovery model to improve flexibility for backing up very large databases that contain one or more read-only
filegroups. However, partial backups work on all databases, regardless of recovery model.
Note
For an overview of the various types of backups, see either Backup Under the Simple Recovery Model or
Backup Under the Full Recovery Model.
A partial backup resembles a full database backup, but a partial backup does not contain all the filegroups.
Instead, a partial backup contains all the data in the primary filegroup, every read/write filegroup, and any
optionally-specified read-only files. Partial backups are useful whenever you want to exclude read-only
filegroups. A partial backup of a read-only database contains only the primary filegroup.
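
A differential partial backup then records only the extents changed in the read/write filegroups since the last
partial backup, which keeps the nightly job small and fast. A sketch in T-SQL; the database name and tape
device are placeholders:

BACKUP DATABASE SalesDW
    READ_WRITE_FILEGROUPS        -- skips the 3.5-terabyte read-only filegroup
TO TAPE = '\\.\Tape0'
WITH DIFFERENTIAL;               -- only changes since the last partial backup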

QUESTION 42
You administer a server that runs SQL Server 2008 Integration Services (SSIS) packages.
The packages are stored on the file system.
You need to integrate the backup and restore processes for the SSIS packages along with the regular SQL
Server backup process.
What should you do?

A. Deploy the packages to SQL Server 2008 and back up the msdb system database.
B. Deploy the packages to SQL Server 2008 and back up the master system database.
C. In the ProtectionLevel package property, select the ServerStorage option.
D. In the ProtectionLevel package property, select the DontSaveSensitive option.

Answer: A
Section: (none)

Explanation/Reference:
Packages deployed to SQL Server 2008 are stored in the msdb system database, so including msdb in the
regular SQL Server backup process backs up and restores the packages along with everything else in that
database.

QUESTION 43
You administer a SQL Server 2008 Reporting Services (SSRS) environment. You restore the ReportServer and
ReportServerTempDB databases to a new server. When you browse to the Report Manager Web page, you
receive an error message. You are unable to view the folder structure and the reports.
You need to view the folder structure and the reports.
What should you do?

A. Restore the symmetric key.
B. Restore the msdb database.
C. Restore the master database.
D. Configure the IIS virtual directory.

Answer: A
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/ms156016.aspx
Report Server Database
A report server is a stateless server that uses the SQL Server Database Engine to store metadata and object
definitions. A Reporting Services installation uses two databases to separate persistent data storage from
temporary storage requirements. The databases are created together and bound by name. By default, the
database names are reportserver and reportservertempdb, respectively.
The databases can run on a local or remote Database Engine instance. Choosing a local instance is useful if
you have sufficient system resources or want to conserve software licenses, but running the databases on a
remote computer can improve performance.
The report server database is a SQL Server database that stores the following content:
- Items managed by a report server (reports and linked reports, shared data sources, report models, folders,
resources) and all of the properties and security settings that are associated with those items.
- Subscription and schedule definitions.
- Report snapshots (which include query results) and report history.
- System properties and system-level security settings.
- Report execution log data.
- Symmetric keys and encrypted connection and credentials for report data sources.
Administering a Report Server Database
A Reporting Services deployment uses two SQL Server relational databases for internal storage. By default, the
databases are named ReportServer and ReportServerTempdb. ReportServerTempdb is created with the
primary report server database and is used to store temporary data, session information, and cached reports.
In Reporting Services, database administration tasks include backing up and restoring the report server
databases and managing the encryption keys that are used to encrypt and decrypt sensitive data.
Moving the Report Server Databases to Another Computer
Both the reportserver and reportservertempdb databases must be moved or copied together. A Reporting
Services installation requires both databases; the reportservertempdb database must be related by name to the
primary reportserver database you are moving.
Moving a database does not affect scheduled operations that are currently defined for report server items.
- Schedules will be recreated the first time that you restart the Report Server service.
- SQL Server Agent jobs that are used to trigger a schedule will be recreated on the new database instance.
You do not have to move the jobs to the new computer, but you might want to delete jobs on the computer that
will no longer be used.
- Subscriptions, cached reports, and snapshots are preserved in the moved database. If a snapshot is not
picking up refreshed data after the database is moved, clear the snapshot options in Report Manager, click
Apply to save your changes, re-create the schedule, and click Apply again to save your changes.
- Temporary report and user session data that is stored in reportservertempdb are persisted when you move
that database.
Use the following steps to move the databases:
1. Back up the encryption keys for the report server database you want to move. You can use the
Reporting Services Configuration tool to back up the keys.
2. Stop the Report Server service. You can use the Reporting Services Configuration tool to stop the service.
3. Start SQL Server Management Studio and open a connection to the SQL Server instance that hosts the
report server databases.
4. Right-click the report server database, point to Tasks, and click Detach. Repeat this step for the report server
temporary database.
5. Copy or move the .mdf and .ldf files to the Data folder of the SQL Server instance you want to use. Because
you are moving two databases, make sure that you move or copy all four files.
6. In Management Studio, open a connection to the new SQL Server instance that will host the report server
databases.
7. Right-click the Databases node, and then click Attach.
8. Click Add to select the report server database .mdf and .ldf files that you want to attach. Repeat this step for
the report server temporary database.
9. After the databases are attached, verify that the RSExecRole is a database role in the report server database
and temporary database. RSExecRole must have select, insert, update, delete, and reference permissions on
the report server database tables, and execute permissions on the stored procedures. For more information,
see How to: Create the RSExecRole.
10. Start the Reporting Services Configuration tool and open a connection to the report server.
11. On the Database page, select the new SQL Server instance, and then click Connect.
12. Select the report server database that you just moved, and then click Apply.
13. On the Encryption Keys page, click Restore. Specify the file that contains the backup copy of the
keys and the password to unlock the file.
14. Restart the Report Server service.
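
The key backup and restore in steps 1 and 13 can also be scripted with the rskeymgmt utility, where -e extracts
(backs up) the symmetric key to a file and -a applies (restores) it; the file path and password are placeholders:

rskeymgmt -e -f C:\Backup\rsdbkey.snk -p Str0ngPassword
rskeymgmt -a -f C:\Backup\rsdbkey.snk -p Str0ngPassword
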
QUESTION 44
You administer a SQL Server 2005 Reporting Services server. Your company publishes reports to a public Web
site. Customers can view the reports without providing user credentials. You plan to upgrade the server to use
SQL Server 2008 Reporting Services. You need to ensure that customers can continue to view the reports
without providing user credentials.
What should you do?

A. Enable Basic authentication.
B. Enable Anonymous access on the IIS virtual directory.
C. Use a custom authentication extension.
D. Select Windows Authentication and add the Guest user account.

Answer: C
Section: (none)

Explanation/Reference:
Authentication
All users or automated processes that request access to Report Server must be authenticated before access is
allowed. Reporting Services provides default authentication based on Windows integrated security and
assumes trusted relationships where client and network resources are in the same domain or a trusted domain.
You can change the authentication settings to narrow the range of accepted requests to specific security
packages for Windows integrated security, use Basic authentication, or use a custom forms-based
authentication extension that you provide. To change the authentication type to a method other than the default,
you must deploy a custom authentication extension. Previous versions of SSRS relied on IIS to perform all
types of authentication. Because SSRS 2008 no longer depends on IIS, there is a new authentication
subsystem that supports this. The Windows authentication extension supports multiple authentication types so
that you can precisely control which HTTP requests a report server will accept
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)
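
For orientation, the accepted authentication types are declared in the report server's RSReportServer.config
file; selecting a custom forms-based extension looks roughly like the following fragment (the extension
assembly itself must also be developed, registered, and deployed):

<Authentication>
  <AuthenticationTypes>
    <Custom />
  </AuthenticationTypes>
</Authentication>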

QUESTION 45
You administer a Microsoft SQL Server 2005 Reporting Services (SSRS) instance.
The instance has the following features:
Deployed as a single server
Configured to use Native mode
A custom data extension developed by using Microsoft .NET Framework 2.0. You plan to upgrade the instance
to SQL Server 2008 Reporting Services. You need to upgrade the instance without loss of functionality.
What should you do?

A. Uninstall Internet Information Services (IIS).
B. Upgrade the data extension to .NET Framework 3.5.
C. Edit the RSWebapplication.config file to refer to the upgraded SSRS endpoint location.
D. Install a new instance of SSRS. Migrate the existing configuration files and database to the new instance.

Answer: D
Section: (none)

Explanation/Reference:
SQL Server 2008 Reporting Services changes the report server architecture (it no longer relies on IIS), and an
instance that uses custom extensions is typically migrated rather than upgraded in place. Installing a new
SSRS 2008 instance and then migrating the existing configuration files and database preserves the custom
data extension functionality.

QUESTION 46
You administer a SQL Server 2000 server.
The SQL Server 2000 server hosts a SQL Server 2000 relational data warehouse and a SQL Server 2000
Analysis Services database (OLAP database).
You plan to migrate to a new SQL Server 2008 server in a new untrusted domain. You need to ensure that both
the relational data warehouse and the OLAP database are migrated in the minimum possible time.
What should you do?

A. Use the Copy Database Wizard to migrate the relational data warehouse.
Use the Migration Wizard to migrate the OLAP database and process the OLAP database.
B. Use the Copy Database Wizard to migrate the relational data warehouse.
Use the Migration Wizard to migrate the OLAP database and do not process the OLAP database.
C. Perform a detach and attach of the relational data warehouse files from SQL Server 2000 to SQL Server
2008.
Use the Migration Wizard to migrate the OLAP database and process the OLAP database.
D. Perform a detach and attach of the relational data warehouse files from SQL Server 2000 to SQL Server
2008.
Use the Migration Wizard to migrate the OLAP database and do not process the OLAP database.

Answer: C
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms190794.aspx
Detaching and Attaching Databases
The data and transaction log files of a database can be detached and then reattached to the same or another
instance of SQL Server. Detaching and attaching a database is useful if you want to change the database to a
different instance of SQL Server on the same computer or to move the database.
Note: The SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments. Therefore,
attach works across 32-bit and 64-bit environments. A database detached from a server instance running in one
environment can be attached on a server instance that runs in another environment.
In SQL Server 2008, you can use detach and attach operations to upgrade a user database from SQL Server
2000 or SQL Server 2005. However, the following restrictions apply:
Copies of the master, model or msdb database created using SQL Server 2000 or SQL Server 2005 cannot be
attached.

http://technet.microsoft.com/en-us/library/ms143409.aspx
http://technet.microsoft.com/en-us/library/ms174860.aspx
Migrating Existing Analysis Services Databases
You can use the Analysis Services Migration Wizard to upgrade Microsoft SQL Server 2000 Analysis Services
databases to Microsoft SQL Server 2008 Analysis Services. During migration, the wizard copies SQL Server
2000 Analysis Services database objects and then re-creates them on an instance of SQL Server 2008 Analysis
Services. The source databases are left intact and are not modified. After you verify that the new databases are
fully operational, you can manually delete the old databases.
As a best practice, you should migrate your databases one at a time, or in small batches. This will allow you to
verify that each database object appears as expected on the destination server, before you migrate additional
objects. When you use the Migration Wizard, the MSSQLServerOLAPService service must be running on both
the source and the destination server.
Using the Migration Wizard
You can start the Migration Wizard from an Analysis Services server node in the Object Browser in SQL Server
Management Studio. You can also start the wizard at the command prompt, by running the program
MigrationWizard.exe
After you migrate a database, you must process the database from the original data source before you can
query the database.
As an administrator, you keep the Microsoft SQL Server Analysis Services objects in the production databases
current by processing them. Processing is the step, or series of steps, that populate Analysis Services objects
with data from relational data sources. Processing is different depending on the type of object and the selection
of processing options.
While the processing job is working, the affected Analysis Services objects can be accessed for querying. The
processing job works inside a transaction and the transaction can be committed or rolled back. If the processing
job fails, the transaction is rolled back. If the processing job succeeds, an exclusive lock is put on the object
when changes are being committed, which means the object is temporarily unavailable for query or processing.
During the commit phase of the transaction, queries can still be sent to the object, but they will be queued until
the commit is completed. For more information about locking and unlocking during processing, see Locking and
Unlocking Databases (XMLA).

QUESTION 47
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution will support a Microsoft
ASP.NET application that is deployed to a Web farm. Reports will be deployed to a SQL Server 2008 Reporting
Services (SSRS) instance. The databases for the SSRS instance will be deployed to a two-node failover cluster
that hosts a single instance of SQL Server 2008.
You need to ensure that the SSRS instance remains available even when one of the servers fails.
What should you do?

A. Configure SSRS for native server mode.
B. Configure SSRS for integrated server mode.
C. Deploy SSRS on the primary node of the cluster.
D. Deploy SSRS in a scale-out deployment on the Web farm.

Answer: D
Section: (none)

Explanation/Reference:
Scaling Out
The Enterprise edition of SSRS supports scaling out—that is, using more than one physical machine to support
the particular SSRS solution that runs from a common database. To implement a scaled-out solution, you use
the Reporting Services Configuration Manager tool (Scale-Out Deployment section). This is also called a Web
farm. SSRS is not a cluster-aware application; this means that you can use network load balancing (NLB) as
part of your scale-out deployment.
A typical scaled-out SSRS implementation includes multiple physical servers. Some of these servers distribute
the front-end report rendering via a network load balancing type of scenario.
You can also add more physical servers to perform snapshots or caching in enterprise-sized implementations.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

http://technet.microsoft.com/en-us/library/ms157293.aspx
Planning a Deployment Topology
Standard Scale-Out Server Deployment
In a standard scale-out server deployment, multiple report servers share a single report server database. The
report server database should be installed on a remote SQL Server instance.
Deploy Reporting Services in a scale-out deployment to provide a highly available and scalable report server
installation. In a scale-out deployment, each report server in the deployment is referred to as a node. Nodes
participate in the scale-out if the report server is configured to use the same report server database as another
report server. Report server nodes can be load balanced to support high-volume interactive reporting.
A scale-out server deployment configuration is recommended in the following circumstances:
- For high-volume reporting, where activity is measured in concurrent users or in the complexity of reports that
take a long time to process or render.
- For high-availability scenarios, where it is important that the reporting environment does not encounter
unplanned downtime or become unavailable.
- When you want to improve the performance of scheduled operations and subscription delivery.
Scale-out deployment is not supported in all editions of SQL Server. All report server nodes in a deployment
must run the same version and service pack level of SQL Server.

QUESTION 48
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to deploy a new SQL
Server 2008 Reporting Services (SSRS) solution for an accounting department. The department currently uses
Microsoft Excel 2007-based reports hosted on Windows SharePoint Services (WSS) 3.0.
You need to replace the existing Excel 2007 reports with SSRS-based reports.
Your solution must meet the following requirements:
Users must be able to access the reports by using WSS.
Reports must be version-controlled.
Developers must be able to deploy the reports to a WSS document library.
Which two tasks should you perform? (Each correct answer presents part of the solution. Choose two.)

A. Configure the SSRS instance by using native mode.
B. Configure the SSRS instance by using SharePoint integration mode.
C. Install the Reporting Services Add-in for SharePoint Technologies in the WSS server.
D. Install SharePoint Web Part in the WSS server. Configure the Web Part to point to the reports in Report
Manager.

Answer: BC
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/bb283190(SQL.100).aspx
Requirements for Running Reporting Services in SharePoint Integrated Mode
You can integrate Microsoft SQL Server Reporting Services with Windows SharePoint Services or Office
SharePoint Server by configuring a report server to run in SharePoint integrated mode and by installing a
Reporting Services Add-in that adds infrastructure and application pages to a SharePoint Web application.
Report Server Requirements
A report server that runs in SharePoint integrated mode has edition and software requirements:
- The report server computer must satisfy the hardware and software requirements for SQL Server installations.
- Edition requirements for Reporting Services in SharePoint integrated mode include Developer, Evaluation,
Standard, or Enterprise editions. There is no support for this feature in the Workgroup edition or in SQL Server
Express with Advanced Services.
- The report server database must be created for SharePoint integrated mode.
- To join a report server to a SharePoint farm, the report server must be installed on a computer that has an
instance of a SharePoint product or technology. You can install the report server before or after installing the
SharePoint product or technology instance.
- The version of the SharePoint product or technology that you install on the report server computer must be the
same version that is used throughout the farm.
SharePoint Product and Technology Requirements
The SharePoint server farm that you integrate with Reporting Services has the following edition and software
requirements:
- Edition requirements for the SharePoint product or technology are Windows SharePoint Services 3.0 or
Microsoft Office SharePoint Server 2007. If you use Microsoft Office SharePoint Server, you must run either
the Standard Edition or the Enterprise Edition of Microsoft Office SharePoint Server 2007.
- Reporting Services Add-in for SharePoint Technologies must be installed on the Web front-end. The Reporting
Services Add-in provides server integration features and Web application pages for accessing report server
items from a SharePoint site. The add-in must be installed on each Web front-end in the server farm through
which users will access reports and other items.
- You must have at least 2 gigabytes of RAM on the Web front-end computer.
- Anonymous access cannot be enabled on the SharePoint Web application. If Anonymous access is enabled,
you will be able to configure integration settings but users will get an error when they run a report. All other
authentication providers and options are supported. If you are configuring integration between a report server
and a SharePoint farm, each SharePoint Web application in the farm can be configured to use different
authentication providers.
Database Server Requirements
Both Reporting Services and SharePoint products and technologies use SQL Server relational databases for
internal storage. Windows SharePoint Services installs the Embedded Edition for its database. Reporting
Services cannot use this edition for its database; it requires that you install the Evaluation, Developer, Standard,
or Enterprise edition of SQL Server Database Engine. The SQL Server 2008 Reporting Services Add-in for
SharePoint Technologies requires a SQL Server 2008 Reporting Services (SSRS) report server instance
because this add-in is not supported with earlier versions of SQL Server. However, the report server may
connect to a report server database that is hosted in either SQL Server 2005 or SQL Server 2008.
If you want to install Reporting Services and a SharePoint technology instance on the same computer, you can
run SQL Server Express and another edition of SQL Server side-by-side on the same computer or you can use
the same instance of the Database Engine for the SharePoint configuration and content databases if you
choose the Advanced installation option when installing a SharePoint product or technology. If you choose the
Basic installation option instead, the SharePoint Setup program will install SQL Server Embedded Edition as an
internal component and use that instance to host the SharePoint databases.

QUESTION 49
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes reports
hosted on a single SQL Server 2008 Reporting Services (SSRS) server. You plan to modify the report server
infrastructure to support a scale-out deployment. You need to ensure that the scale-out deployment meets the
following requirements:
Allows users to access any of the Report Server servers by using the original Report Manager URL.
Minimizes network traffic.
Which three tasks should you perform? (Each correct answer presents part of the solution. Choose three.)

A. Add a <HostName> element in the <Service> section of each RsReportServer.config file.


B. Add the same <machineKey> element in the <system.web> section of all web.config files for each Report
Server server.
C. Add a different <machineKey> element in the <system.web> section of all web.config files for each Report
Server server.
D. Modify the <UrlRoot> element in the <Service> section of each RsReportServer.config file.
E. Modify the <ReportServerUrl> element in the <UI> section of each RsReportServer.config file.
F. Modify the <sessionState> element in the <system.web> section of the web.config file in each
ReportManager folder.

Answer: ABD
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms156453(SQL.90).aspx
Configuring a Report Server Scale-Out Deployment
A scale-out deployment refers to an installation configuration that has multiple report server instances sharing a
single report server database.
Configuring View State Validation
- Open the Web.config file for Report Manager and set the <machineKey> element. You must specify the
validation key, decryption key, and the type of encryption used for validation of data.
- Repeat these steps for each report server in the scale-out deployment. Verify that all Web.Config files in the
\Reporting Services\Report Manager folders contain identical <machineKey> elements in the <system.web>
section.
Specifying the Virtual Server Name in Reporting Services Configuration files
If you configure the report server scale-out deployment to run on an NLB cluster, you must manually update
report server URL settings in the configuration files to use the virtual server name.
- Set the <ReportServerUrl> to the virtual server name and remove the entry for
<ReportServerVirtualDirectory>. This step ensures that all requests coming through Report Manager are load-
balanced to the report servers that are running in the scale-out deployment.
- Set the <UrlRoot> to the virtual server address. This step ensures that all hyperlinks in reports point back to
the scale-out deployment and are load-balanced accordingly. This setting is also used to complete report
delivery.
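As a rough sketch of these two settings (the key values and the virtual server name are placeholders, not values to copy literally):

<!-- Report Manager Web.config: must be identical on every node -->
<system.web>
  <machineKey validationKey="[same hex string on all nodes]"
              decryptionKey="[same hex string on all nodes]"
              validation="SHA1" decryption="AES" />
</system.web>

<!-- RSReportServer.config: point URLs at the NLB virtual server -->
<Service>
  <UrlRoot>http://myNLBserver/ReportServer</UrlRoot>
</Service>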
http://msdn.microsoft.com/en-us/library/ms157273.aspx
RSReportServer Configuration File
The RSReportServer.config file stores settings that are used by Report Manager, the Report Server Web
service, and background processing. All Reporting Services applications run within a single process that reads
configuration settings stored in the RSReportServer.config file.
Service section specifies the application settings that apply to the service as a whole. For more information
about the internal components of the service.
UrlRoot - Used by the report server delivery extensions to compose URLs that are used by reports delivered in
e-mail and file share subscriptions. The value must be a valid URL address to the report server from which the
published report is accessed. Used by the report server to generate URLs for offline or unattended access.
These URLs are used in exported reports, and by delivery extensions to compose a URL that is included in
delivery messages such as links in e-mails.
The report server determines URLs in reports based on the following behavior:
- When UrlRoot is blank (the default value) and there are URL reservations, the report server automatically
determines URLs the same way that URLs are generated for the ListReportServerUrls method. The first URL
returned by the ListReportServerUrls method is used. Or, if SecureConnectionLevel is greater than zero (0), the
first SSL URL is used.
- When UrlRoot is set to a specific value, the explicit value is used.
- When UrlRoot is blank and there are no URL reservations configured, the URLs in rendered reports and in e-
mail links are incorrect.

QUESTION 50
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to develop a report that
will use data obtained from a SQL Server 2008 Analysis Services (SSAS) instance. The solution has many key
performance indicators (KPIs). You need to ensure that users can perform the following tasks in the minimum
amount of time and by using the minimum amount of development effort:
Browse through the report offline.
View the KPIs that they want to see.
Which tool should you use?

A. Report Builder
B. Microsoft Excel 2007
C. SQL Server 2008 Reporting Services (SSRS)
D. a custom Microsoft ASP.NET application that has the ReportViewer control

Answer: B
Section: (none)

Explanation/Reference:
This could be achieved by other means, but only answer B (Microsoft Excel 2007) satisfies the condition of
"minimum amount of development effort".

QUESTION 51
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to design a report that
uses data obtained from a SQL Server 2008 Analysis Services (SSAS) instance.
The SSAS cube contains five parent-child key performance indicators (KPIs). Each KPI has nine children.
You need to create an executive dashboard in Microsoft Office SharePoint Server (MOSS) that displays the
KPIs and depicts the parent-child relationship.
Which technology should you use?

A. Microsoft Office Excel
B. MOSS KPI Library
C. MOSS Business Data Catalog
D. Microsoft Office PerformancePoint Server

Answer: D
Section: (none)

Explanation/Reference:
http://office.microsoft.com/en-us/performancepoint-server/
PerformancePoint Server 2007
A complete business intelligence and performance management solution. Use PerformancePoint Server to
plan, report, monitor, and analyze your organization’s performance.

http://office.microsoft.com/en-us/dashboard-designer-help/getting-started-performancepoint-dashboard-designer-HA100800792.aspx
PerformancePoint Dashboard Designer is a tool that you can use to create dashboards,
scorecards, and reports and publish them to a SharePoint site.
Dashboard Designer helps you build a dashboard that contains a single, consistent, and visible view of your
business. You can use this tool to provide business intelligence (BI) to the people in your organization so that
they can make informed business decisions. By using Dashboard Designer, you can create reports and
scorecards that display important metrics and key performance indicators (KPIs) that will help you measure
and track performance in your organization.

QUESTION 52
You design a Business Intelligence (BI) solution by using SQL Server 2008. You have a SQL Server 2008
Analysis Services (SSAS) cube. Some users only use Microsoft Excel 2007.
Some users only use Microsoft ProClarity.
You plan to add new key performance indicators (KPIs). You need to ensure that the KPIs meet the following
requirements:
Provide all users access to the new KPIs.
Minimize future maintenance.
What should you do?

A. Create the KPIs in ProClarity.
B. Create the KPIs in the Excel spreadsheets.
C. Create the KPIs in the SSAS cube.
D. Create SQL Server 2008 Reporting Services (SSRS) reports that contain the expected KPIs.

Answer: C
Section: (none)

Explanation/Reference:
Defining the KPIs once in the SSAS cube makes them available to every client tool that connects to the cube
(including Excel 2007 and ProClarity) and avoids maintaining separate copies of the KPIs in each client.
Microsoft PerformancePoint Server
PerformancePoint Server allows you to quickly create a centralized Web site with all of your company’s
performance metrics. The environment is designed to allow business analysts to create sophisticated
dashboards that are hosted in a SharePoint environment. These dashboards can contain SSRS reports and
visualizations of data from OLAP cubes as well as other data sources. It also has a strong set of products that
support business forecasting. PerformancePoint Server includes the functionality of the Business Scorecard
Manager
and ProClarity Analytics Server. Its purpose is to facilitate the design and hosting of enterprise-level scorecards
via rich data-visualization options such as charts and reports available in Reporting Services, Excel, and Visio.
PerformancePoint Server also includes some custom visualizers, such as the Strategy Map.
Note We are sometimes asked what happened to ProClarity, a company that had provided a specialized client for OLAP cubes. Its target
customer was the business analyst. Microsoft acquired ProClarity in 2006 and has folded features of its products into PerformancePoint
Server
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)
QUESTION 53
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to design a dimensional
modeling strategy for a new data warehouse. The data warehouse has a dimension table named Employees.
The Employees dimension table contains information about the employees and their departments.
Employees are moved to different departments frequently. You need to preserve the historical information of the
Employees table.
Which dimensional model should you use?

A. Role-Playing Dimension
B. Degenerated Dimension
C. Type I Slowly Changing Dimension
D. Type II Slowly Changing Dimension

Answer: D
Section: (none)

Explanation/Reference:
Slowly Changing Dimensions
For our cubes to provide meaningful information from year to year, we need to have dimensions whose
members are fairly constant. If the dimensions are changing drastically month to month, our analysis across the
time dimension becomes worthless. Therefore, we need mainly static dimensions. Some dimensions, however,
change over time. Salespeople move from one sales territory to another. The corporate organizational chart
changes as employees are promoted or resign. These are known as Slowly Changing Dimensions (SCD).
SCDs come in three varieties: Type 1, Type 2, and Type 3, as defined by the Business Intelligence community.
Not exciting names, but they didn’t ask for my input, so it’s what we are stuck with!
Type 1 Slowly Changing Dimensions
When a dimension is implemented as a Type 1 SCD, we don’t keep track of its history as it changes. The
members of the dimension represent the way things are right now. With a Type 1 SCD, it is impossible to go
back and determine the state of the dimension members at any time in the past.
Type 2 Slowly Changing Dimensions
When a dimension is implemented as a Type 2 SCD, four supplementary attributes are added to the dimension
to track the history of that dimension. These four attributes are:
- SCD Original ID: an alternative primary key for the dimension
- SCD Start Date: the date this dimension member became active
- SCD End Date: the date this dimension member ceased being active
- SCD Status: the current state of this dimension member, either active or inactive
Type 3 Slowly Changing Dimensions
A Type 3 SCD is similar to a Type 2 SCD with one exception. A Type 3 SCD does not track the entire history of
the dimension members. Instead, a Type 3 SCD tracks only the current state and the original state of a
dimension member.A Type 3 SCD is implemented using two additional attributes:
- SCD Start Date: the date the current state of this dimension member became active
- SCD Initial Value: the original state of this attribute
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))
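A minimal T-SQL sketch of a Type 2 employee dimension table built on these four attributes (all table and column names are illustrative):

CREATE TABLE dbo.DimEmployee (
    EmployeeKey    INT IDENTITY(1,1) PRIMARY KEY, -- surrogate key, one per version
    EmployeeID     INT           NOT NULL,        -- SCD Original ID (business key)
    EmployeeName   NVARCHAR(100) NOT NULL,
    DepartmentName NVARCHAR(50)  NOT NULL,        -- the tracked attribute
    ScdStartDate   DATETIME      NOT NULL,        -- when this version became active
    ScdEndDate     DATETIME      NULL,            -- NULL while the version is current
    ScdStatus      CHAR(1)       NOT NULL         -- 'A' = active, 'I' = inactive
);

When an employee moves to a different department, the current row is closed out (ScdEndDate and ScdStatus are updated) and a new row is inserted, so the history is preserved.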

QUESTION 54
You design a Business Intelligence (BI) solution by using SQL Server 2008. At the end of every business day,
an application records the inventory to the Products table. The business solution for the application must
accommodate the following features:
The content of the Products table varies every day.
Historical product attributes are not stored.
You need to identify an appropriate dimensional model to meet the business solution.
Which model should you use?

A. Degenerate Dimension
B. Parent-Child Dimension
C. Type I Slowly Changing Dimension
D. Type II Slowly Changing Dimension

Answer: C
Section: (none)

Explanation/Reference:
Type 1 Slowly Changing Dimensions
When a dimension is implemented as a Type 1 SCD, we don’t keep track of its history as it changes. The
members of the dimension represent the way things are right now. With a Type 1 SCD, it is impossible to go
back and determine the state of the dimension members at any time in the past.

QUESTION 55
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to design a dimensional
modeling strategy for a new data warehouse application.
The application contains the following dimensions:
Product
Time
Customer
SalesPerson
The application contains the following cubes:
Sales that contains all the dimensions
Products that contains the Product and the Time dimensions
Customers that contains the Customer and the Time dimensions
You need to design an appropriate dimensional modeling strategy for the Product and the Time dimensions.
Which dimensional model should you use?

A. Conformed dimensions
B. Degenerate dimensions
C. Parent-Child dimensions
D. Reference dimensions

Answer: A
Section: (none)

Explanation/Reference:
Conformed dimensions enable a dimension to be used in multiple data marts, such as a time dimension
or a product dimension. Surrogate keys are artificial keys that link dimension and fact tables and that are used
to ensure uniqueness and to improve performance.

http://blog.sqlauthority.com/2007/07/27/sql-server-data-warehousing-interview-questions-and-answers-part-2/ -
Excellent site
Conformed dimensions mean the exact same thing with every possible fact table to which they are
joined. They are common to the cubes.

QUESTION 56
You design a Business Intelligence (BI) solution by using SQL Server 2008. The data warehouse contains a
table named Employee Dimension.
The table contains the following three attributes:
EmployeeID
EmployeeName
ReportsTo
The ReportsTo attribute tracks the EmployeeID attribute of the manager that an employee reports to.
You need to ensure that sales data of only managers and the names of all employees reporting to each of these
managers are displayed. You want to achieve this goal by using a hierarchy model that provides the best
possible performance when the data warehouse is queried.
Which hierarchy model should you use?

A. Ragged Hierarchy
B. Balanced Hierarchy
C. Non-Natural Hierarchy
D. Parent-Child Dimensional Hierarchy

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms174846.aspx
Defining a Parent-Child Hierarchy
A parent-child hierarchy is a hierarchy in a standard dimension that contains a parent attribute. A parent
attribute describes a self-referencing relationship, or self-join, within a dimension main table. Parent-child
hierarchies are constructed from a single parent attribute. Only one level is assigned to a parent-child hierarchy,
because the levels present in the hierarchy are drawn from the parent-child relationships between members
associated with the parent attribute. The position of a member in a parent-child hierarchy is determined by the
KeyColumns and RootMemberIf properties of the parent attribute, whereas the position of a member in a level is
determined by the OrderBy property of the parent attribute. For more information about attribute properties, see
Attributes and Attribute Hierarchies.
Because of parent-child relationships between levels in a parent-child hierarchy, some nonleaf members can
also have data derived from underlying data sources, in addition to data aggregated from child members.
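The self-referencing relationship that underlies such a hierarchy can be sketched in T-SQL as follows (names are illustrative); in the SSAS dimension, the ReportsTo attribute is then marked as the parent attribute (Usage = Parent):

CREATE TABLE dbo.EmployeeDimension (
    EmployeeID   INT           NOT NULL PRIMARY KEY,
    EmployeeName NVARCHAR(100) NOT NULL,
    ReportsTo    INT           NULL, -- NULL for the top of the hierarchy
    CONSTRAINT FK_Employee_Manager
        FOREIGN KEY (ReportsTo) REFERENCES dbo.EmployeeDimension (EmployeeID)
);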

QUESTION 57
You design a Business Intelligence (BI) solution by using SQL Server 2008. You have deployed a SQL Server
2008 Reporting Services (SSRS) application. The number of users for the SSRS application increases. This
results in performance problems. The datasets for the reports are optimized.
You investigate the SSRS deployment and discover the following characteristics:
Users report that the reports take a long time to run.
There are a large number of graphical reports that summarize data.
Subscription processing affects the performance of reports that are run by interactive users.
You need to modify the SSRS infrastructure to resolve the performance problems.
Which scale-out strategy should you use?

A. Single SSRS server and a single reporting database server
B. Multiple SSRS servers and a single reporting database server
C. Single SSRS server and multiple reporting database servers
D. Multiple SSRS servers and multiple reporting database servers

Answer: B
Section: (none)

Explanation/Reference:
Scaling Out
The Enterprise edition of SSRS supports scaling out—that is, using more than one physical machine to support
the particular SSRS solution that runs from a common database.
A scale-out deployment cannot use multiple reporting database servers, and a single SSRS server will not
improve report performance.

QUESTION 58
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server
2008 Analysis Services (SSAS) application that resides on a single server.
Data in the SSAS database grows every month.
You need to provide a scalability strategy that meets the following requirements:
Maximizes the end-user throughput.
Minimizes daily downtime window due to processing.
Accommodates an unexpected increase in the number of users.
What should you do?

A. Use proactive caching on the server.
B. Add an additional server and use remote partitions.
C. Scale out the solution by adding more computers and use the Read-Only Database functionality.
D. Use the multidimensional online analytical processing (MOLAP)-enabled write-back capabilities of the
Analysis Services.

Answer: C
Section: (none)

Explanation/Reference:

QUESTION 59
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to deploy a new database
to the SQL Server 2008 Analysis Services (SSAS) instance. The database contains a cube. The cube contains
three Type 1 slowly changing dimensions. The database is updated throughout the day by adding 5,000 rows of
data every hour. You need to ensure that the cube always contains up-to-date data. You also need to ensure
that the users can access the cube during cube processing.
What should you do?

A. Use the relational online analytical processing (ROLAP) cube storage model.
B. Use the hybrid online analytical processing (HOLAP) cube storage model. Use the snapshot isolation level in
the relational database that the cube is built on.
C. Use the automatic multidimensional online analytical processing (MOLAP) cube storage model.
D. Use the hybrid online analytical processing (HOLAP) cube storage model. Use SQL Server 2008 Integration
Services (SSIS) pipeline tasks to schedule periodic cube updates.

Answer: A
Section: (none)

Explanation/Reference:
ROLAP
Relational OLAP (ROLAP) does not make a copy of the facts on SSAS. It reads this information from the star
schema source. Any aggregations that are designed are written back to tables
on the same star schema source system. Query performance is significantly slower than that of partitions using
MOLAP or HOLAP; however, particular business scenarios can be well served by using ROLAP partitions:
- Huge amounts of source data, such as cubes that are many TBs in size
- Need for near real-time data (for example, latency in seconds)
- Need for near 100 percent cube availability (for example, downtime because of processing limited to
minutes or seconds)
MOLAP (default) Source data (fact table rows) is copied from the star schema to the SSAS instance as
MOLAP data. Source metadata (which includes cube and dimension structure and dimension data) and
aggregations are copied (for dimension data) or generated (for all other metadata and aggregations). The
results are stored in MOLAP format on SSAS, and proactive caching is not used.
MOLAP (nondefault) Source data is copied. Metadata and aggregations are stored in MOLAP format on
SSAS. Proactive caching is enabled. This includes scheduled, automatic, and medium- and low-latency
MOLAP.
HOLAP Source data is not copied, metadata and aggregations are stored in MOLAP format on SSAS, and
proactive caching is enabled
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

http://msdn.microsoft.com/en-us/library/ms174915.aspx
Partition Storage Modes and Processing
MOLAP
The MOLAP storage mode causes the aggregations of the partition and a copy of its source data to be stored in
a multidimensional structure in Analysis Services when the partition is processed. This MOLAP structure is
highly optimized to maximize query performance. The storage location can be on the computer where the
partition is defined or on another computer running Analysis Services. Because a copy of the source data
resides in the multidimensional structure, queries can be resolved without accessing the partition's source data.
Query response times can be decreased substantially by using aggregations. The data in the partition's MOLAP
structure is only as current as the most recent processing of the partition.
As the source data changes, objects in MOLAP storage must be processed periodically to incorporate those
changes and make them available to users. Processing updates the data in the MOLAP structure, either fully or
incrementally. The time between one processing and the next creates a latency period during which data in
OLAP objects may not match the source data. You can incrementally or fully update objects in MOLAP storage
without taking the partition or cube offline.
ROLAP
The ROLAP storage mode causes the aggregations of the partition to be stored in indexed views in the
relational database that was specified in the partition's data source. Unlike the MOLAP storage mode, ROLAP
does not cause a copy of the source data to be stored in the Analysis Services data folders. Instead, when
results cannot be derived from the query cache, the indexed views in the data source are accessed to answer
queries. Query response is generally slower with ROLAP storage than with the MOLAP or HOLAP storage
modes. Processing time is also typically slower with ROLAP. However, ROLAP enables users to view data in
real time and can save storage space when you are working with large datasets that are infrequently queried,
such as purely historical data.
HOLAP
The HOLAP storage mode combines attributes of both MOLAP and ROLAP. Like MOLAP, HOLAP causes the
aggregations of the partition to be stored in a multidimensional structure in an SQL Server Analysis Services
instance. HOLAP does not cause a copy of the source data to be stored. For queries that access only summary
data in the aggregations of a partition, HOLAP is the equivalent of MOLAP. Queries that access source data—
for example, if you want to drill down to an atomic cube cell for which there is no aggregation data—must
retrieve data from the relational database and will not be as fast as they would be if the source data were stored
in the MOLAP structure. With HOLAP storage mode, users will typically experience substantial differences in
query times depending upon whether the query can be resolved from cache or aggregations versus from the
source data itself.
Partitions stored as HOLAP are smaller than the equivalent MOLAP partitions because they do not contain
source data and respond faster than ROLAP partitions for queries involving summary data. HOLAP storage
mode is generally suited for partitions in cubes that require rapid query response for summaries based on a
large amount of source data. However, where users generate queries that must touch leaf level data, such as
for calculating median values, MOLAP is generally a better choice.

QUESTION 60
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution has a cube that is
processed periodically. The cube takes several hours to process. Cube processing results in a considerable
amount of downtime. You need to minimize the downtime while maintaining the best possible query
performance of the cube.
What should you do?

A. Use the multidimensional online analytical processing (MOLAP) cube storage model.
Process the cube on a staging server.
Use database synchronization to copy the cube to a production server.
B. Use the relational online analytical processing (ROLAP) cube storage model.
Process the cube on a staging server.
Use database synchronization to copy the cube to a production server.
C. Use the hybrid online analytical processing (HOLAP) cube storage model.
Process the cube on a production server.
D. Partition the cube into several partitions.
Use the relational online analytical processing (ROLAP) cube storage model for each partition.
Process the cube on a production server.

Answer: A
Section: (none)

Explanation/Reference:
MOLAP
The MOLAP storage mode causes the aggregations of the partition and a copy of its source data to be stored in
a multidimensional structure in Analysis Services when the partition is processed. This MOLAP structure is
highly optimized to maximize query performance. The storage location can be on the computer where the
partition is defined or on another computer running Analysis Services. Because a copy of the source data
resides in the multidimensional structure, queries can be resolved without accessing the partition's source data.
Query response times can be decreased substantially by using aggregations. The data in the partition's MOLAP
structure is only as current as the most recent processing of the partition.
As the source data changes, objects in MOLAP storage must be processed periodically to incorporate those
changes and make them available to users. Processing updates the data in the MOLAP structure, either fully or
incrementally. The time between one processing and the next creates a latency period during which data in
OLAP objects may not match the source data. You can incrementally or fully update objects in MOLAP storage
without taking the partition or cube offline.

ROLAP will not improve performance. HOLAP storage mode is generally suited for partitions in cubes that
require rapid query response for summaries based on a large amount of source data

QUESTION 61
You design a Business Intelligence (BI) solution by using SQL Server 2008. You develop a SQL Server 2008
Analysis Services (SSAS) project and use Microsoft Visual Source Safe (VSS) as the source control system.
After making changes to the project, you check in the files to the VSS. Then, you deploy the project to a shared
server for testing. Four new developers are assigned to work in parallel on the project. You need to ensure that
the new developers can modify the most recent version of the project and test it without affecting the work of the
existing developers.
What should you do?

A. Deploy the project to a local server.
B. Set the Deployment mode of the project to Deploy All.
C. Set the Deployment mode of the project to Deploy Changes Only.
D. Increment the version number of the deployment server in build properties for each deployment.

Answer: A
Section: (none)

Explanation/Reference:
QUESTION 62
You design a Business Intelligence (BI) solution by using SQL Server 2008. Several developers work on a large
SQL Server 2008 Analysis Services (SSAS) project. Developers will work in parallel on the same cubes in the
solution.
You need to manage the cube definitions to ensure that each developer's work is not overwritten. You also need
to ensure that conflicts can be easily resolved.
What should you recommend the developers to do?

A. Work in online mode against a shared server.
B. Work in disconnected mode and deploy the solution to a shared server frequently.
C. Work in disconnected mode and check in the project to a source control system frequently.
D. Work in online mode against a local server and synchronize the SSAS database with a shared server
frequently.

Answer: C
Section: (none)

Explanation/Reference:

QUESTION 63
You are the lead developer for a SQL Server 2008 data warehousing project. The source database for the
project is an online transaction processing (OLTP) system. The OLTP system executes 4,000 transactions
every minute during business hours. The OLTP system records the date and time of insertion for new rows
only, not for updates of existing rows.
You plan to design an extract, transform, and load (ETL) process for the project that populates a data
warehouse from the source database.
The ETL process must be configured in the following manner:
To run after business hours
To capture new rows and existing rows that have been modified
You need to ensure that only new rows or modified rows from the database tables are processed by the ETL
process.
What should you do?

A. Configure the data warehouse database to support the Type I Slowly Changing Dimension transformation.
B. Configure the data warehouse database to support the Type II Slowly Changing Dimension transformation.
C. Configure the Change Data Capture feature on all the source database tables that will be processed by the
ETL process.
D. Configure the Change Data Capture feature on all the data warehouse database tables that will be
processed by the ETL process.

Answer: C
Section: (none)

Explanation/Reference:
Change Data Capture
One of the biggest challenges of the Extract, Transform, and Load (ETL) process is determining which records
need to be extracted from the source data and loaded into the data mart. For smaller dimensional tables that
are not used to populate slowly changing dimensions, we may choose to truncate the target table and refill it
with all of the data from the source with every load.
There are several methods for determining which data has changed since the last extract. They include:
- Adding create and last update fields to the database table
- Adding flag fields to indicate when records have been extracted
- Creating triggers or stored procedures to replicate changes to change capture tables
If our source data is coming from a SQL Server 2008 database, we have a new feature to make this process
much easier. That feature is known as change data capture (CDC).
The transaction information is converted into a more readily usable format and stored in a change table. One
change table is created for each table that is being tracked by change data capture.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://msdn.microsoft.com/en-us/library/bb522489.aspx
Change Data Capture
Change data capture is designed to capture insert, update, and delete activity applied to SQL Server tables, and to make the details of the
changes available in an easily consumed relational format. The change tables used by change data capture contain columns that mirror
the column structure of a tracked source table, along with the metadata needed to understand the changes that have occurred.
Change data capture is available only on the Enterprise, Developer, and Evaluation editions of SQL Server.
Change data capture provides information about DML changes on a table and a database. By using change data capture, you eliminate
expensive techniques such as user triggers, timestamp columns, and join queries.
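Enabling CDC is a two-step operation, sketched below for a hypothetical dbo.Orders source table:

-- Step 1: enable CDC at the database level (requires sysadmin)
USE SourceOLTP;
GO
EXEC sys.sp_cdc_enable_db;
GO
-- Step 2: enable CDC for each source table the ETL process reads
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL; -- NULL = no gating role required to read the change data
GO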

QUESTION 64
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a data
warehouse that has an online transaction processing (OLTP) database as the data source.
The tables in the OLTP database do not include date or time information. On each execution, the SQL Server
2008 Integration Services (SSIS) package that copies the data must reload and process the entire dataset.
You plan to improve the process of loading the data warehouse. You need to ensure that the following
requirements are met:
Only new and modified data is processed.
All modifications of the rows caused by insert, update, and delete activities are processed.
The impact of the loading process on the source system is minimal.
Which action should you perform on the tables that are involved in the load process?

A. Set up the Change Tracking feature.
B. Set up the Change Data Capture (CDC) feature.
C. Create timestamp columns.
D. Create Data Manipulation Language (DML) triggers.

Answer: B
Section: (none)

Explanation/Reference:
Change Data Capture
Change data capture is designed to capture insert, update, and delete activity applied to SQL Server tables, and to make the details of the
changes available in an easily consumed relational format. The change tables used by change data capture contain columns that mirror
the column structure of a tracked source table, along with the metadata needed to understand the changes that have occurred.
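At load time the package can then read only the changes since the previous run, along the lines of this sketch (the capture instance name dbo_Products assumes a hypothetical dbo.Products source table; in practice the starting LSN would be the value saved at the end of the previous load):

DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn(N'dbo_Products');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Products(@from_lsn, @to_lsn, N'all');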

QUESTION 65
You design a Business Intelligence (BI) solution by using SQL Server 2008. The instance contains a SQL
Server 2008 Analysis Services (SSAS) database. The SSAS database contains a cube named Sales. The Sales
cube has a dimension named Geography and a role named roleEurope.
The Geography dimension has a hierarchy that contains the following members:
Continent
Region
City
You plan to design the security configuration for the Sales cube. You need to enable the Read permissions for
the roleEurope role. You also need to ensure that the roleEurope role can access only the Fact rows that are
members of the Europe continent. Which Multidimensional Expressions (MDX) statement should you use?
A. MEASURES.CURRENTMEMBER IS EUROPE
B. MEASURES.CURRENTMEMBER[CONTINENT] IS EUROPE
C. ANCESTOR(GEOGRAPHY.CURRENTMEMBER) IS EUROPE
D. ANCESTOR(GEOGRAPHY.CURRENTMEMBER,[CONTINENT]) IS EUROPE

Answer: D
Section: (none)

Explanation/Reference:
The Ancestor function returns the parent, grandparent, great-grandparent, and so forth of the specified
member. The Ancestor function requires two parameters, both of which must be placed within the parentheses.
The first parameter is the member that serves as the starting point for the function. The second parameter is
either the hierarchy level where the ancestor is to be found or an integer specifying the number of levels to
move upward to find the ancestor.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://intelligent-bi.blogspot.com/2009/12/ancestor-mdx-function.html
http://technet.microsoft.com/en-us/library/ms145616.aspx
ANCESTOR MDX Function
A function that returns the ancestor of a specified member at a specified level or at a specified distance from the
member.
Level syntax
Ancestor(Member_Expression, Level_Expression)
Member_Expression - A valid Multidimensional Expressions (MDX) expression that returns a member.
Level_Expression - A valid Multidimensional Expressions (MDX) expression that returns a level.
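Applied to this question, the allowed member set for the roleEurope role on the Geography attribute would be built from an expression of roughly this shape (a sketch only; the exact dimension, level, and member names, including the &[Europe] key, are assumptions):

Ancestor(
    [Geography].CurrentMember,
    [Geography].[Continent]
) IS [Geography].[Continent].&[Europe]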

QUESTION 66
You design a Business Intelligence (BI) solution by using SQL Server 2008. You are building an extract,
transform, and load (ETL) process by using SQL Server 2008 Integration Services (SSIS).
You stage data in a SQL Server 2008 database. You load a data warehouse that has 15 dimension tables and 3
fact tables.
You need to design a control flow that meets the following requirements:
Each table must be loaded with its own SSIS package.
All packages must be controlled from a master package.
Dimension tables must have no interdependencies.
Dimension tables must load without error before the Fact tables are loaded.
Master package parallelism must be maximized.
What should you do?

A. Place the dimension packages in a sequence container and connect them by using a precedence constraint
set to Success.
Use a precedence constraint set to Success to connect another sequence container holding the Fact
packages.
B. Place the dimension packages in a sequence container.
Use a precedence constraint set to Success to connect another sequence container holding the Fact
packages.
Set the FailParentOnFailure property to True for each dimension package.
C. Connect the dimension packages by using a precedence constraint set to Success.
Connect the Fact packages to the end of the dimension packages by using a precedence constraint set to
Success.
Set the FailPackageOnFailure property to True for each dimension package.
D. Place an Execute Package task inside a Foreach Loop container and change the connection string for the
Execute Package task on each iteration.
Connect the Fact packages to the end of the Foreach Loop container by using a precedence constraint set
to Success.
Set the FailPackageOnFailure property to True for the Foreach Loop container.
Answer: B
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms139855.aspx
Sequence Container
The Sequence container defines a control flow that is a subset of the package control flow. Sequence
containers group the package into multiple separate control flows, each containing one or more tasks and
containers that run within the overall package control flow.
There are many benefits of using a Sequence container:
- Disabling groups of tasks to focus package debugging on one subset of the package control flow.
- Managing properties on multiple tasks in one location by setting properties on a Sequence container instead of
on the individual tasks.
- Providing scope for variables that a group of related tasks and containers use.
You can set a transaction attribute on the Sequence container to define a transaction for a subset of the
package control flow. In this way, you can manage transactions at a more granular level. For example, if a
Sequence container includes two related tasks, one task that deletes data in a table and another task that
inserts data into a table, you can configure a transaction to ensure that the delete action is rolled back if the
insert action fails.
FailParentOnFailure - A Boolean value that specifies whether the parent container fails if an error occurs
in the container. The default value for this property is False.
Precedence Constraints - Precedence constraints link containers and tasks within the same parent
container into an ordered control flow.
http://msdn.microsoft.com/en-us/library/ms141261.aspx
Precedence constraints link executables, containers, and tasks in packages into a control flow, and specify
conditions that determine whether executables run. An executable can be a For Loop, Foreach Loop, or
Sequence container; a task; or an event handler. Event handlers also use precedence constraints to link their
executables into a control flow.
A precedence constraint links two executables: the precedence executable and the constrained executable. The
precedence executable runs before the constrained executable, and the execution result of the precedence
executable may determine whether the constrained executable runs. The following diagram shows two
executables linked by a precedence constraint.

QUESTION 67
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to transform sales data
from a retail sales outlet database to a SQL Server 2008 data warehouse by using SQL Server 2008 Integration
Services (SSIS). The retail sales database is an online transaction processing (OLTP) database that processes
large amounts of transactions twenty-four hours a day.
You need to design the structure of the SSIS packages such that the performance of the source system is
minimally affected.
What should you do?

A. Load and transform data from the source directly to the data warehouse once a day.
B. Load data from the source to a staging database once a day. Then, transform the data to the data
warehouse.
C. Load and transform data from the source directly to the data warehouse four times a day at regular intervals
of time.
D. Load data from the source to a staging database four times a day at regular intervals of time.
Then, transform the data to the data warehouse once a day.

Answer: D
Section: (none)

Explanation/Reference:
Using a Staging Server
In most situations (whether we’ve chosen to use a dedicated SSIS server or not), we create one SSIS package
per data source. We load all types of source data—flat files, Excel, XML, relational, and so on—into a series of
staging tables in a SQL Server instance. We then perform subsequent needed processing, such as validation,
cleansing, and translations using SSIS processes. It is important to understand that the data stored on this SQL
Server instance is used only as a pass-through for cleansing and transformation and should never be used for
end-user queries. If you use SQL Server 2008 as a staging database, you can take advantage of several new
relational features that can help you create more efficient load and update staging processes.
MERGE logic is also useful for building load packages that redirect data depending on whether it is new. You
can think of MERGE as alternative to the built-in Slowly Changing Dimension (SCD) transformation for those
types of business scenarios. MERGE performs a validation of existing data versus new data on load, to avoid
duplicates among other issues. MERGE uses ID values to do these comparisons. Therefore, pay careful
attention to using correct (and unique) ID values in data sources that you intend to merge.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)
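A minimal T-SQL sketch of that MERGE pattern, assuming a hypothetical staging table stg.Customer and dimension table dbo.DimCustomer:

MERGE dbo.DimCustomer AS tgt
USING stg.Customer AS src
    ON tgt.CustomerID = src.CustomerID     -- comparison on the unique business ID
WHEN MATCHED THEN
    UPDATE SET tgt.CustomerName = src.CustomerName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, CustomerName)
    VALUES (src.CustomerID, src.CustomerName);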

QUESTION 68
You design a Business Intelligence (BI) solution by using SQL Server 2008. You have a SQL Server 2008
Integration Services (SSIS) package that runs against a SQL Server 2008 data source. The package contains
an opening Execute SQL task that runs the BEGIN TRANSACTION command. This is followed by a Sequence
task that contains additional Execute SQL tasks, each with the FailParentOnFailure property set to TRUE.
There are two Execute SQL tasks. The first task is connected to a Success precedence constraint that runs the
COMMIT TRANSACTION command. The next task is connected to a Failure precedence constraint that runs
the ROLLBACK TRANSACTION command. The package fails but the transaction is not rolled back. You need
to ensure that the transaction is successfully rolled back if the package fails.
What should you do?

A. Modify the RetainSameConnection property as True for the Connection Object.
B. Modify the TransactionOption property as Required for the Sequence Container.
C. Modify the TransactionOption property as Required for each Execute SQL task.
D. Modify the IsolationLevel property as ReadCommitted for each Execute SQL task.

Answer: A
Section: (none)

Explanation/Reference:
You can use Execute SQL tasks to issue your own
BEGIN TRANSACTION and COMMIT TRANSACTION
commands to manually start and stop the transactions. Be aware that, if you take this approach, you must set
the RetainSameConnection property of the connection manager to True.
This is because the default behavior in SSIS is to drop the connection after each task completes, so that
connection pooling can be used.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

QUESTION 69
You design a Business Intelligence (BI) solution by using SQL Server 2008. You design a solution that analyzes
the usage of discount vouchers issued by the company. The data warehouse of the company contains a
dimension table named Vouchers. The dimension table contains two columns named VoucherNumber and
CustomerFullName. A value for the CustomerFullName column is not available till the voucher is used. You
need to configure the Slowly Changing Dimension transformation to load the Vouchers dimension even if there
is a NULL value in the CustomerFullName column.
What should you do?

A. Enable the support for inferred members.
B. Set the Change Type option to Fixed Attribute for the CustomerFullName column.
C. Set the Change Type option to Historical Attribute for the VoucherNumber column.
D. Set the Change Type option to Changing Attribute for the VoucherNumber column.

Answer: A
Section: (none)

Explanation/Reference:
The Inferred Dimension Members page lets us specify whether we can infer information for dimension
members that do not yet exist. When the wizard completes, it adds a number of transformations to the package.
These additional transformations provide the functionality to make the slowly changing dimension update work
properly.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://msdn.microsoft.com/en-us/library/ms186969.aspx
Inferred Dimension Members (Slowly Changing Dimension Wizard)
Use the Inferred Dimension Members dialog box to specify options for using inferred members. Inferred
members exist when a fact table references dimension members that are not yet loaded. When data for the
inferred member is loaded, you can update the existing record rather than create a new one.
Inferred member indicates that the row is an inferred member record in the dimension table. An inferred
member exists when a fact table references a dimension member that is not yet loaded. A minimal inferred-
member record is created in anticipation of relevant dimension data, which is provided in a subsequent loading
of the dimension data. The Slowly Changing Dimension transformation directs these rows to an output named
Inferred Member Updates.

QUESTION 70
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to transform vehicle
survey data from a flat file to a data warehouse. You need to design a solution that performs the following tasks:
Redirect data of vehicle owners to a table named DimOwner.
Redirect data of vehicle enthusiasts to a table named DimEnthusiast.
Log each record's key along with the data transfer date to an audit table.
What should you do?

A. Use a Conditional Split component to redirect data to the audit table and to a Multicast component.
Use the Multicast component to redirect the data to the DimOwner and DimEnthusiast tables.
B. Use a Conditional Split component to redirect data to the audit table and to a second Conditional Split
component.
Use the second Conditional Split component to redirect the data to the DimOwner and DimEnthusiast tables.
C. Use a Multicast component to redirect data to the audit table and to a second Multicast component.
Use the second Multicast component to redirect the data to the DimOwner and DimEnthusiast tables.
D. Use a Multicast component to redirect data to the audit table and to a Conditional Split component.
Use the Conditional Split component to redirect the data to the DimOwner and DimEnthusiast tables.

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms137701(SQL.100).aspx
Multicast Transformation
The Multicast transformation distributes its input to one or more outputs. This transformation is similar to the
Conditional Split transformation. Both transformations direct an input to multiple outputs. The difference
between the two is that the Multicast transformation directs every row to every output, and the Conditional Split
directs a row to a single output.
Using the Multicast transformation, a package can create logical copies of data. This capability is useful when
the package needs to apply multiple sets of transformations to the same data. For example, one copy of the
data is aggregated and only the summary information is loaded into its destination, while another copy of the
data is extended with lookup values and derived columns before it is loaded into its destination.
This transformation has one input and multiple outputs. It does not support an error output.

QUESTION 71
You design a Business Intelligence (BI) solution by using SQL Server 2008. You develop a SQL Server 2008
Integration Services (SSIS) package to perform an extract, transform, and load (ETL) process from a Microsoft
Access database to a SQL Server 2008 data warehouse. The package is developed on a computer that runs a
32-bit operating system. You deploy the package to a server that runs a 64-bit operating system. You create a
SQL Server Agent job to run the package. The package fails to run when the job starts. You need to ensure that
the package runs successfully.
What should you do?

A. Redeploy the package to the Program Files (x86) folder.
B. Enable the Use 32 bit runtime option in the job step of the SQL Server Agent job.
C. Rebuild the package on a computer that runs a 64-bit operating system. Redeploy the package to the
server.
D. Modify the project of the package by setting the Run64BitRuntime property to TRUE . Rebuild and redeploy
the package to the server.

Answer: B
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms141766.aspx
64-bit Considerations for Integration Services
Selecting 32-bit or 64-bit Package Execution in a SQL Server Agent Job
To run a package in 32-bit mode from a 64-bit version of SQL Server Agent, select Use 32 bit runtime on the
Execution options tab of the New Job Step dialog box.

QUESTION 72
You are managing a Business Intelligence (BI) infrastructure that uses SQL Server 2008 Integration Services
(SSIS). Your infrastructure has many SSIS solutions that contain several packages. The current backup
strategy includes nightly backups of all databases on the server. You need to develop a deployment strategy
that meets the following requirements:
Deploys only the packages that have been modified.
Includes all packages in the current backup strategy.
What should you do?

A. Use the Package Installation Wizard to deploy packages to SQL Server.
B. Use the Package Installation Wizard to deploy packages to the file system.
C. Create a reusable deployment script by using dtutil.exe to deploy packages to the msdb database.
D. Create a reusable deployment script by using dtutil.exe to deploy packages to the SSIS Package Store.

Answer: C
Section: (none)

Explanation/Reference:
DTUTIL
DTUTIL.exe is another command-line utility; it is used for SSIS package deployment and management. With
DTUTIL, developers and administrators can move or copy packages to the msdb database, to the SSIS
Package Store (which allows you to further group packages into child folders viewable in SSMS), or to any file
system folder. You can also use DTUTIL to encrypt packages, set package passwords, and more. Unlike
DTEXEC, DTUTIL has no corresponding graphical utility, unfortunately, but SQL Server Books Online contains
a comprehensive reference to the available switches and options.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

QUESTION 73
You design a Business Intelligence (BI) solution by using SQL Server 2008. You have created an extract,
transform, and load (ETL) solution by using SQL Server 2008 Integration Services (SSIS). The solution contains
10 child packages and a parent package that executes the child packages in sequence.
You plan to deploy the solution to 20 locations that are not connected to each other. You need to deploy the
solution by configuring the connection managers of all packages with the appropriate settings. You need to
achieve this goal by using the minimum amount of administrative effort.
What should you do?

A. Create an XML configuration file each for the parent package and the child packages.
B. Create an XML configuration file for the parent package. Configure the child packages by using Parent
package variables.
C. Create a SQL Server configuration each for the parent package and the child packages in a central SQL
Server 2008 database.
D. Create a SQL Server configuration for the parent package in a central SQL Server 2008 database.
Configure the child packages by using Parent package variables.

Answer: B
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms141682.aspx
Package Configurations
SQL Server Integration Services provides package configurations that you can use to update the values of
properties at run time. A configuration is a property/value pair that you add to a completed package. Typically,
you create a package and set properties on the package objects during package development, and then add the
configuration to the package. When the package runs, it gets the new values of the property from the
configuration. For example, by using a configuration, you can change the connection string of a connection
manager, or update the value of a variable.
Package configurations provide the following benefits:
- Configurations make it easier to move packages from a development environment to a production
environment. For example, a configuration can update the path of a source file, or change the name of a
database or server.
- Configurations are useful when you deploy packages to many different servers. For example, a variable in the
configuration for each deployed package can contain a different disk space value, and if the available disk
space does not meet this value, the package does not run.
- Configurations make packages more flexible. For example, a configuration can update the value of a variable
that is used in a property expression.
Package Configuration Types
- XML configuration file - An XML file contains the configurations. The XML file can include multiple
configurations.
- Environment variable - An environment variable contains the configuration.
- Registry entry - A registry entry contains the configuration.
- Parent package variable- A variable in the package contains the configuration. This configuration type is
typically used to update properties in child packages.
- SQL Server table - A table in a SQL Server database contains the configuration. The table can include multiple
configurations.

QUESTION 74
You design a Business Intelligence (BI) solution by using SQL Server 2008. You develop 10 SQL Server 2008
Integration Services (SSIS) packages. You plan to include package configuration for all the packages.
The package configuration has the following requirements:
All configurations are stored in a single location.
Configuration variables are easily backed up and restored.
Indirect configuration is used.
The database administrators will use SQL Server client tools to change the configuration values. You need to
create package configurations for the packages to meet the configuration requirements.
What should you do?

A. Store all configuration information in a SQL Server table.
Specify configuration database connectivity settings in an environment variable.
B. Store all configuration information in a SQL Server table.
Specify configuration database connectivity settings in the SQL Server table.
C. Use XML configuration files for all packages.
Store each XML configuration file in a common folder.
Specify the XML configuration file location in an environment variable.
D. Use XML configuration files for all packages.
Store each XML configuration file in a common folder.
Specify the XML configuration file location in the configuration settings.

Answer: A
Section: (none)

Explanation/Reference:
SQL Server table configuration Type
A table in a SQL Server database contains the configuration. The table can include multiple configurations.
If you select the SQL Server configuration type, you specify the connection to the SQL Server database in which
you want to store the configurations. You can save the configurations to an existing table or create a new table
in the specified database.
Direct and Indirect Configurations
Integration Services provides direct and indirect configurations. If you specify configurations directly, Integration
Services creates a direct link between the configuration item and the package object property. Direct
configurations are a better choice when the location of the source does not change. For example, if you are sure
that all deployments in the package use the same file path, you can specify an XML configuration file.
Indirect configurations use environment variables. Instead of specifying the configuration setting directly, the
configuration points to an environment variable, which in turn contains the configuration value. Using indirect
configurations is a better choice when the location of the configuration can change for each deployment of a
package.

QUESTION 75
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to design a logging
strategy for all SQL Server 2008 Integration Services (SSIS) packages for your company. You want to log errors
that occur in all existing and future packages to a SQL Server 2008 table.
You need to design the strategy to meet the following requirements:
The logging mechanism must be reused by each package.
Changes to the logging mechanism must be applied to all packages by using the minimum amount of
administrative effort.
What should you do?

A. Enable and configure logging in a package.
Create all other packages by using the first package as the template.
B. Create an event handler in a package.
Configure the event handler to perform logging.
Create all other packages by using the first package as the template.
C. Enable and configure logging in a package.
Save the log settings to an XML file.
Enable logging in all other packages.
Load the log settings on each package by using the XML file.
D. Create an event handler in a package.
Configure the event handler to perform logging.
Enable package configurations in the package.
Store the properties of the event handler in an XML configuration file.
Configure all the packages to use the configuration file during execution.

Answer: C
Section: (none)

Explanation/Reference:
Logging
Because Integration Services packages are, for the most part, designed for unattended operation, it can be
extremely important to create a log documenting the execution of the package. This type of execution log can
also be helpful for testing and debugging during the creation of the package. We control the logging performed
by an Integration Services package using the Configure SSIS Logs dialog box.
We can create the following types of logs:
- Comma-separated values text file
- File to be read by the SQL Profiler
- SQL Server table named sysdtslog90
- Windows Event Log
- Extensible Markup Language (XML) text file
All of the log types, with the exception of the Windows Event Log, need to be configured to specify exactly
where the logged information is to be stored.
Finally, we need to determine which events should be logged for the package or for a package item.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

QUESTION 76
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to design a logging
strategy for a SQL Server 2008 Integration Services (SSIS) solution. The SSIS solution contains 15 packages.
You want to log detailed information about each package. You need to ensure that custom events specific to
each control flow task in each package are logged.
What should you do?

A. Configure logging for each control flow task in each package for the required events.
B. Enable the Log Events window in Business Intelligence Development Studio (BIDS) that has the SSIS
solution loaded.
C. Enable event handling for each control flow task in each package for the required events. Create custom
code to perform the logging by using a Script component.
D. Create a custom assembly that writes to the log, and use the assembly in a Script task. Ensure that the
Script task is connected to the appropriate control flow task by using a Failure precedence constraint.

Answer: A
Section: (none)

Explanation/Reference:
Custom log events are defined by the individual control flow tasks themselves, so they can be captured only by
configuring logging on each control flow task and selecting the required events.

QUESTION 77
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Analysis Services (SSAS) solution by using SQL Server 2008. The solution contains a dimension named
DimProduct. The DimProduct dimension contains attributes named Product, Color, Sub-Category, and
Category. The Product attribute is the key attribute for DimProduct.
A sample data set of the solution is as shown in the following table.
Product  Color   Sub-Category  Category
A001     Blue    Jeans         Clothing
A002     Red     Jeans         Clothing
A003     Yellow  Couch         Furniture
A004     Red     T-shirt       Clothing
A005     Black   Chair         Furniture
You discover that the DimProduct dimension has performance issues. You need to design attribute relationships
on the DimProduct dimension for optimal performance.
Which set of relationships should you use?

A. Source Attribute Related Attribute
Product Color
Product Category
Product Sub-Category
B. Source Attribute Related Attribute
Product Color
Product Sub-Category
Sub-Category Category
C. Source Attribute Related Attribute
Product Color
Color Sub-Category
Sub-Category Category
D. Source Attribute Related Attribute
Product Color
Product Category
Product Sub-Category
Sub-Category Category

Answer: B
Section: (none)

Explanation/Reference:
Attribute Relationships
Before we examine the new attribute relationship designer, let’s take a minute to define the term attribute
relationship. We know that attributes represent aspects of dimensions.
Hierarchies are roll-up groupings of one or more attributes. Most often, measure data is aggregated (usually
summed) in hierarchies.
Measure data is loaded into the cube via rows in the fact table. These are loaded at the lowest level of
granularity.
It’s important to understand that the SSAS query processing engine is designed to use or calculate aggregate
values of measure data at intermediate levels in dimensional hierarchies.
If you’re creating natural hierarchies, the SSAS query engine can use intermediate aggregations if and only if
you define the attribute relationships between the level members. These intermediate aggregations can
significantly speed up MDX queries to the dimension. To that end, the new Attribute Relationships tab in the
dimension editor lets you visualize and configure these important relationships correctly.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

http://msdn.microsoft.com/en-us/library/ms166553.aspx
Specifying Attribute Relationships Between Attributes in a User-Defined Hierarchy
As you have already learned in this tutorial, you can organize attribute hierarchies into levels within user
hierarchies to provide navigation paths for users in a cube. A user hierarchy can represent a natural hierarchy,
such as city, state, and country, or can just represent a navigation path, such as employee name, title, and
department name. To the user navigating a hierarchy, these two types of user hierarchies are the same.
With a natural hierarchy, if you define attribute relationships between the attributes that make up the levels,
Analysis Services can use an aggregation from one attribute to obtain the results from a related attribute. If
there are no defined relationships between attributes, Analysis Services will aggregate all non-key attributes
from the key attribute. Therefore, if the underlying data supports it, you should define attribute relationships
between attributes. Defining attribute relationships improves dimension, partition, and query processing
performance.
When you define attribute relationships, you can specify that the relationship is either flexible or rigid. If you
define a relationship as rigid, Analysis Services retains aggregations when the dimension is updated. If a
relationship that is defined as rigid actually changes, Analysis Services generates an error during processing
unless the dimension is fully processed. Specifying the appropriate relationships and relationship properties
increases query and processing performance.

http://msdn.microsoft.com/en-us/library/ms174878.aspx
Defining Attribute Relationships
In Microsoft SQL Server Analysis Services, attributes are the fundamental building block of a dimension. A
dimension contains a set of attributes that are organized based on attribute relationships.
For each table included in a dimension, there is an attribute relationship that relates the table's key attribute to
other attributes from that table. You create this relationship when you create the dimension.
An attribute relationship provides the following advantages:
- Reduces the amount of memory needed for dimension processing. This speeds up dimension, partition, and
query processing.
- Increases query performance because storage access is faster and execution plans are better optimized.
- Results in the selection of more effective aggregates by the aggregation design algorithms, provided that user-
defined hierarchies have been defined along the relationship paths.
Attribute Relationship Considerations
When the underlying data supports it, you should also define unique attribute relationships between attributes.
To define unique attribute relationships, use the Attribute Relationships tab of Dimension Designer.
Any attribute that has an outgoing relationship must have a unique key relative to its related attribute. In other
words, a member in a source attribute must identify one and only one member in a related attribute. For
example, consider the relationship, City -> State. In this relationship, the source attribute is City and the related
attribute is State. The source attribute is the “many” side and the related attribute is the “one” side of the many-to-
one relationship. The key for the source attribute would be City + State.

QUESTION 78
You design a SQL Server 2008 Analysis Services (SSAS) solution. The solution has dimensions named
Account and Scenario. The Scenario dimension has the keys numbered 1 and 2 for the members named Actual
and Budget, respectively. The Account dimension has the key numbered 40 for the member named Income.
You create a key performance indicator (KPI) named Net Income that has the following parameters:
KPI Value: ( [Account].[Accounts].&[40], [Scenario].[Scenario].&[1], [Measures].[Amount] )
KPI Goal: ( [Account].[Accounts].&[40], [Scenario].[Scenario].&[2], [Measures].[Amount] )
If the net income is less than 70 percent of the budgeted value, the performance is considered bad. If the net
income is greater than or equal to 90 percent of the budgeted value, the performance is considered good. You
need to calculate the performance at a specific point in time.
What should you do?

A. Set the Trend expression in the KPI to the following code segment:
Case
When KpiValue( "Net Income" ) / KpiGoal( "Net Income" ) >= .90 Then 1
When KpiValue( "Net Income" ) / KpiGoal( "Net Income" ) < .90 And
KpiValue( "Net Income" ) / KpiGoal( "Net Income" ) >= .70 Then 0
Else -1
End
B. Set the Trend expression in the KPI to the following code segment:
Case
When KpiGoal( "Net Income" ) / KpiValue( "Net Income" ) >= .90 Then 1
When KpiGoal( "Net Income" ) / KpiValue( "Net Income" ) < .90 And
KpiGoal( "Net Income" ) / KpiValue( "Net Income" ) >= .70 Then 0
Else -1
End
C. Set the Status expression in the KPI to the following code segment:
Case
When KpiValue( "Net Income" ) / KpiGoal( "Net Income" ) >= .90 Then 1
When KpiValue( "Net Income" ) / KpiGoal( "Net Income" ) < .90 And
KpiValue( "Net Income" ) / KpiGoal( "Net Income" ) >= .70 Then 0
Else -1
End
D. Set the Status expression in the KPI to the following code segment:
Case
When KpiGoal( "Net Income" ) / KpiValue( "Net Income" ) >= .90
Then 1
When KpiGoal( "Net Income" ) / KpiValue( "Net Income" ) < .90 And
KpiGoal( "Net Income" ) / KpiValue( "Net Income" ) >= .70 Then 0
Else -1
End

Answer: C
Section: (none)

Explanation/Reference:
The Status expression evaluates how a KPI is performing at a specific point in time (returning 1 for good, 0 for
acceptable, and -1 for bad), whereas the Trend expression describes how the KPI is changing over time.
Because performance is measured as actual net income against the budgeted value, the Status expression
must divide KpiValue by KpiGoal, which is what option C does.
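The KPI can also be verified directly from MDX through the KPI functions KpiValue, KpiGoal, and KpiStatus.
The following query is only a sketch; the cube name [Finance] is a placeholder, not an object defined in this
question:
SELECT
{ KPIValue("Net Income"),
  KPIGoal("Net Income"),
  KPIStatus("Net Income") } ON COLUMNS
FROM [Finance] // hypothetical cube name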

QUESTION 79
You design a SQL Server 2008 Analysis Services (SSAS) solution that contains a cube. The solution has a
measure group that contains different measures aggregated by different dimensions. Users often browse the
cube by using Microsoft Excel. You need to enable users to view additional row-level information of the
aggregated measures from Excel by using the minimum amount of development effort.
Which Action should you create?

A. DataSet
B. Statement
C. Proprietary
D. Drillthrough

Answer: D
Section: (none)

Explanation/Reference:
Drillthrough Action: Defines a dataset to be returned as a drillthrough to a more detailed level.
Creating Drillthrough Actions
For the most part, Drillthrough Actions have the same properties as Actions. Drillthrough Actions do not have
Target Type or Target Object properties. In their place, the Drillthrough Action has the following:
- Drillthrough Columns: Defines the objects to be included in the drillthrough dataset.
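A drillthrough action ultimately issues an MDX DRILLTHROUGH statement against the cell the user selects. As
a rough sketch of the statement's shape (the cube, dimension, and measure-group names below are illustrative
assumptions, not objects from this question):
DRILLTHROUGH MAXROWS 100
SELECT ([Measures].[Sales Amount]) ON COLUMNS
FROM [Sales] // hypothetical cube name
WHERE ([Date].[Fiscal Year].&[2008])
RETURN [$Customer].[Customer],      // dimension columns use the [$Dimension] prefix
       [Fact Sales].[Sales Amount]  // measure-group columns
The RETURN list corresponds to the Drillthrough Columns property of the action.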

QUESTION 80
You design a SQL Server 2008 Analysis Services (SSAS) solution. Your solution has a cube. The structure of
the cube is as shown in the exhibit. (Click the Exhibit button.)
Each branch does transactions by using the local currency of the country in which it is located. Daily exchange
rates for all local currencies are recorded against the U.S. dollar in the FactCurrencyRate measure group.
All transactions must be reported in U.S. dollars.
You need to prepare the cube to define currency conversion.
What should you do?

Exhibit:

A. Create a reference relationship between FactSales and Currency.
B. Create a reference relationship between FactCurrencyRate and Branch.
C. Create a many-to-many relationship between FactSales and Currency.
D. Create a many-to-many relationship between FactCurrencyRate and Branch.

Answer: A
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms175439.aspx
All Microsoft SQL Server Analysis Services dimensions are groups of attributes based on columns from tables
or views in a data source view. Dimensions exist independent of a cube, can be used in multiple cubes, can be
used multiple times in a single cube, and can be linked between Analysis Services instances. A dimension that
exists independent of a cube is called a database dimension and an instance of a database dimension within a
cube is called a cube dimension.
Dimension based on a Snowflake Schema Design
Frequently, a more complex structure is required because information from multiple tables is required to define
the dimension. In this structure, called a snowflake schema, each dimension is based on attributes from
columns in multiple tables linked to each other and ultimately to the fact table by primary key - foreign key
relationships.

Snowflake Dimension
The most common variation we see in production cubes is the snowflake design. As mentioned, this is usually
implemented via a dimension that links to the fact table through another dimension. To establish this type of
relationship on the Dimension Usage tab of the cube designer, you simply select the Referenced relationship
type. To create a referenced (or snowflake) dimension, you must model the two source tables with a common
key.
(Smart Business Intelligence Solutions with Microsoft SQL Server® 2008, Copyright © 2009 by Kevin Goff and Lynn Langit)

QUESTION 81
You design a SQL Server 2008 Analysis Services (SSAS) solution that contains a cube. The cube uses an
English (EN-US) locale.
The cube has calculated members that use the members named Day of Week and Month Name of a dimension
named Date to perform date-related operations. You implement translation in Spanish for the cube. You
discover that the calculated members are not working correctly only when the cube is browsed in Spanish. You
need to ensure that the calculated members work correctly when the cube is browsed in Spanish.
What should you do?

A. Set the language property of the Date dimension to Spanish.
B. Set the case sensitive value to true for the collation property of the Date dimension.
C. Ensure that the Day of Week and Month Name members have translations defined in the Date dimension.
D. Change the calculated members to use the numeric values of the Day of Week and Month Name members.

Answer: D
Section: (none)

Explanation/Reference:
Translations change only the captions that members display in each language, so calculations that depend on
the Day of Week and Month Name strings break when the cube is browsed in Spanish. Basing the calculated
members on the numeric values of those members makes them language-independent.

QUESTION 82
You design a Business Intelligence (BI) solution by using SQL Server 2008. You design a SQL Server 2008
Analysis Services (SSAS) solution. Customer data is stored in the tables named CustomerDetails and
CustomerContact.
The solution uses the following two data sources from two different servers:
Contoso that accesses the CustomerDetails table
ContosoCRM that accesses the CustomerContact table
You plan to create a dimension named DimCustomer to analyze customer data. You need to ensure that the
DimCustomer dimension represents the tables as a snowflake schema to include attributes from the two tables.
What should you do?

A. Create a data source view named DsvContoso that is associated with the two data sources and add the
tables to the data source view.
B. Create a data source view named DsvContoso that is associated with the two data sources and create a
named query in the data source view to merge the tables.
C. Create a data source view named DsvCustomer that is associated with the Contoso data source and add
the CustomerDetails table to the data source view.
Create a data source view named DsvCustomerContact that is associated with the ContosoCRM data
source and add the CustomerContact table to the data source view.
D. Create a data source view named DsvCustomer that is associated with the Contoso data source and create
a named query in the data source view to select data from the CustomerDetails table. Create a data source
view named DsvCustomerContact that is associated with the ContosoCRM data source and create a named
query in the data source view to select data from the CustomerContact table.

Answer: A
Section: (none)

Explanation/Reference:
A single data source view can reference tables from multiple data sources, so one data source view that
contains both tables lets the DimCustomer dimension combine their attributes as a snowflake schema.

QUESTION 83
You design a Business Intelligence (BI) solution by using SQL Server 2008. You design a SQL Server 2008
Analysis Services (SSAS) solution by using SQL Server 2008. The solution uses a source database that
contains a table named Customer. The Customer table has multiple columns.
You have read-only access to the database.
You plan to reduce the number of columns in the Customer table. You need to split the Customer table to be
distributed across multiple table definitions.
What should you do?

A. Create multiple data sources for the SSAS solution.
B. Create multiple named queries for the SSAS solution.
C. Create multiple data source views for the SSAS solution.
D. Create multiple database views for the source database.

Answer: B
Section: (none)

Explanation/Reference:
http://technet.microsoft.com/en-us/library/ms175683.aspx
Defining Named Queries in a Data Source View (Analysis Services)
A named query is a SQL expression represented as a table. In a named query, you can specify an SQL
expression to select rows and columns returned from one or more tables in one or more data sources. A named
query is like any other table in a data source view with rows and relationships, except that the named query is
based on an expression.
A named query lets you extend the relational schema of existing tables in a data source view without modifying
the underlying data source. For example, a series of named queries can be used to split up a complex
dimension table into smaller, simpler dimension tables for use in database dimensions. A named query can also
be used to join multiple database tables from one or more data sources into a single data source view table.
When you create a named query, you specify a name, the SQL query returning the columns and data for the
table, and optionally, a description of the named query. The SQL expression can refer to other tables in the data
source view. After the named query is defined, the SQL query in a named query is sent to the provider for the
data source and validated as a whole. If the provider does not find any errors in the SQL query, the column is
added to the table.
Tables and columns referenced in the SQL query should not be qualified or should be qualified by the table
name only. For example, to refer to the SaleAmount column in a table, SaleAmount or Sales.SaleAmount is
valid, but dbo.Sales.SaleAmount generates an error.

QUESTION 84
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Analysis Services (SSAS) solution. Your database has a table named DimCustomer that contains columns
named FirstName and LastName. You belong only to the db_datareader role in the database. You have added
DimCustomer to a data source view. You need to design a solution that includes the following requirements:
A column named FullName in DimCustomer by using the values from FirstName and LastName
The data source view allows you to delete columns from DimCustomer
What should you do?

A. Implement a named calculation for FullName in the data source view.
B. Redesign DimCustomer to have a computed column named FullName.
C. Replace DimCustomer with a named query in the data source view. Create FullName as a column
expression in the named query.
D. Implement a view in the database with FullName as a column expression. Replace DimCustomer with the
view in the data source view.

Answer: C
Section: (none)

Explanation/Reference:
A named calculation can add the FullName column, but it cannot remove columns from DimCustomer, and with
only db_datareader access you cannot alter the table or create views in the source database. Replacing
DimCustomer with a named query meets both requirements without modifying the underlying data source.

QUESTION 85
You design a Business Intelligence (BI) solution by using SQL Server 2008. A SQL Server 2008 Analysis
Services (SSAS) solution contains a cube that has the following objects:
Dimensions named DimCustomer, DimProduct, and DimGeography
Measures named InternetSales and TotalSales
Users run reports against all dimensions and measures by authenticating with their Windows accounts.
You need to provide a basic view of data to the users to display only DimGeography, DimProduct, and
TotalSales by using the least amount of storage space.
What should you do?

A. Create a new perspective for the current cube.
Select DimGeography, DimProduct, and TotalSales.
B. Create a new cube.
Add DimGeography, DimProduct, and TotalSales.
C. Create a new role.
Grant access only to DimGeography, DimProduct, and TotalSales.
D. Create a new data source view.
Add the tables used for DimGeography, DimProduct, and TotalSales.

Answer: A
Section: (none)

Explanation/Reference:
A perspective is a subset of the information in the model. Usually, a perspective coincides with a particular
job or work area within an organization. If a plus sign is to the left of the model, the model contains one or more
perspectives. Click the plus sign to view the perspectives. If you select one of these perspectives as the data
source for your report, only the entities in that perspective will be available to your report. Because perspectives
reduce the number of entities you have to look through to find the data you need on your report, it is usually a
good idea to choose a perspective, rather than using the entire Report Model.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

http://msdn.microsoft.com/en-us/library/ms345316.aspx
For models that contain many subject areas, for example, Sales, Manufacturing, and Supply data, it might be
helpful to Report Builder users if you create perspectives of the model.
A perspective is a sub-set of a model. Creating perspectives can make navigating the model easier for users.

QUESTION 86
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Analysis Services (SSAS) solution by using SQL Server 2008. The solution contains a dimension named
DimCustomer that represents customers. The solution provides a list of top 10 customers based on the sales
amount. End users of the solution analyze data by using filters in a Microsoft Excel worksheet. You need to
ensure that the list is updated when the filters are applied.
Which named set expression should you use?

A. CREATE SET CURRENTCUBE.[Top 10 Customer] AS
TOPCOUNT([DimCustomer].[Customer].MEMBERS,10,[Measures].[SalesAmount])
B. CREATE DYNAMIC SET CURRENTCUBE.[Top 10 Customer] AS
TOPCOUNT([DimCustomer].[Customer].MEMBERS,10,[Measures].[SalesAmount])
C. CREATE HIDDEN SET CURRENTCUBE.[Top 10 Customer] AS
TOPCOUNT([DimCustomer].[Customer].MEMBERS,10,[Measures].[SalesAmount])
D. CREATE SESSION SET CURRENTCUBE.[Top 10 Customer] AS
TOPCOUNT([DimCustomer].[Customer].MEMBERS,10,[Measures].[SalesAmount])

Answer: B
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms145963.aspx
CREATE SET Statement (MDX)
CREATE [SESSION] [ STATIC | DYNAMIC ] [HIDDEN] SET
CURRENTCUBE | Cube_Name.Set_Name AS 'Set_Expression'
[,Property_Name = Property_Value, ...n]
Set Evaluation
Set evaluation can occur either only once, at set creation, or every time the set is used:
- STATIC - Indicates that the set is evaluated only once at the time the CREATE SET statement is evaluated.
- DYNAMIC - Indicates that the set is to be evaluated every time it is used in a query.
Set Visibility - The set can be either visible or not to other users who query the cube.
- HIDDEN - Specifies that the set is not visible to users who query the cube.
Scope - A user-defined set can occur within one of the scopes listed in the following table.
- Query scope - The visibility and lifetime of the set is limited to the query. The set is defined in an individual
query. Query scope overrides session scope. For more information, see Creating Query-Scoped Named Sets
(MDX).
- Session scope - The visibility and lifetime of the set is limited to the session in which it is created. (The lifetime
is less than the session duration if a DROP SET statement is issued on the set.) The CREATE SET statement
creates a set with session scope. Use the WITH clause to create a set with query scope.
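The difference matters once a filter is applied. In the sketch below (the cube name [Sales] and the geography
members are assumptions for illustration), the DYNAMIC set is re-evaluated in the context of the query, so the
WHERE clause changes which ten customers qualify; a STATIC set would keep the members it computed
when the set was created:
CREATE DYNAMIC SET CURRENTCUBE.[Top 10 Customer] AS
TOPCOUNT([DimCustomer].[Customer].MEMBERS, 10, [Measures].[SalesAmount]);

// In a subsequent user query, the set is re-evaluated against the slicer
// and returns the top 10 customers for Canada only.
SELECT [Measures].[SalesAmount] ON COLUMNS,
       [Top 10 Customer] ON ROWS
FROM [Sales]
WHERE [DimGeography].[Country].&[Canada]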

QUESTION 87
You design a SQL Server 2008 Analysis Services (SSAS) solution. Your solution has a date dimension named
Date and measures named Sales Amount and Total Product Cost.
You want to create a calculated measure named Profit. You also want to calculate the differences between the
first half and second half of the year for all the measures. You run the following Multidimensional Expressions
(MDX) query:
WITH
MEMBER [Measures].[Profit] AS
([Measures].[Sales Amount] - [Measures].[Total Product Cost])/[Measures].[Sales Amount], Format_String = "Percent"
MEMBER [Date].[Fiscal Semester of Year].[Half Year Difference] AS
[Date].[Fiscal Semester of Year].[FY H2] - [Date].[Fiscal Semester of Year].[FY H1]
SELECT
{ [Measures].[Sales Amount], [Measures].[Total Product Cost], [Measures].[Profit] } ON COLUMNS,
{ [Date].[Fiscal Semester of Year].[FY H1], [Date].[Fiscal Semester of Year].[FY H2],
  [Date].[Fiscal Semester of Year].[Half Year Difference] } ON ROWS
FROM [Adventure Works]
The Profit calculated measure calculates an incorrect value as shown in the exhibit. (Click the Exhibit button.)

You need to ensure that the MDX query calculates the correct value. Which code segment should you use to
replace the WITH clause in the MDX query?

A. WITH
MEMBER [Measures].[Profit] AS
([Measures].[Sales Amount] - [Measures].[Total Product Cost])/[Measures].[Sales Amount], Format_String = "Percent", SOLVE_ORDER = 1
MEMBER [Date].[Fiscal Semester of Year].[Half Year Difference] AS
[Date].[Fiscal Semester of Year].[FY H2] - [Date].[Fiscal Semester of Year].[FY H1], SOLVE_ORDER = 2
B. WITH
MEMBER [Measures].[Profit] AS
([Measures].[Sales Amount] - [Measures].[Total Product Cost])/[Measures].[Sales Amount], Format_String = "Percent", SOLVE_ORDER = 2
MEMBER [Date].[Fiscal Semester of Year].[Half Year Difference] AS
[Date].[Fiscal Semester of Year].[FY H2] - [Date].[Fiscal Semester of Year].[FY H1], SOLVE_ORDER = 1
C. WITH
MEMBER [Measures].[Profit] AS
([Measures].[Sales Amount] - [Measures].[Total Product Cost])/[Measures].[Sales Amount], Format_String = "Percent", SOLVE_ORDER = 1
MEMBER [Date].[Fiscal Semester of Year].[Half Year Difference] AS
[Date].[Fiscal Semester of Year].[FY H2] - [Date].[Fiscal Semester of Year].[FY H1], SOLVE_ORDER = 2, SCOPE_ISOLATION = CUBE
D. WITH
MEMBER [Measures].[Profit] AS
([Measures].[Sales Amount] - [Measures].[Total Product Cost])/[Measures].[Sales Amount], Format_String = "Percent", SOLVE_ORDER = 1
MEMBER [Date].[Fiscal Semester of Year].[Half Year Difference] AS
[Date].[Fiscal Semester of Year].[FY H2] - [Date].[Fiscal Semester of Year].[FY H1], SCOPE_ISOLATION = CUBE

Answer: A
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms145539.aspx
Understanding Pass Order and Solve Order (MDX)
When a cube is calculated as the result of an MDX script, it can go through many stages of computation
depending on the use of various calculation-related features. Each of these stages is referred to as a calculation
pass.
A calculation pass can be referred to by an ordinal position, called the calculation pass number. The count of
calculation passes that are required to fully compute all the cells of a cube is referred to as the calculation pass
depth of the cube.
Fact table and writeback data only impact pass 0. Scripts populate data after pass 0; each assignment and
calculate statement in a script creates a new pass. Outside the MDX script, references to absolute pass 0 refer
to the last pass created by the script for the cube.
Calculated members are created at all passes, but the expression is applied at the current pass. Prior passes
contain the calculated measure, but with a null value.
Solve Order
Solve order determines the priority of calculation in the event of competing expressions. Within a single pass,
solve order determines two things:
* The order in which Microsoft SQL Server Analysis Services evaluates dimensions, members, calculated
members, custom rollups, and calculated cells.
* The order in which Analysis Services calculates custom members, calculated members, custom rollup,
and calculated cells.
The member with the highest solve order takes precedence.
Note: The exception to this precedence is the Aggregate function. Calculated members with the Aggregate
function have a lower solve order than any intersecting calculated measure.

Understanding SOLVE_ORDER
When you create calculated members on both the row and column axes, and one depends on the other, you
need to tell MDX in what order to perform the calculations. In our experience,
it’s quite easy to get this wrong, so we caution you to verify the results of your SOLVE_ORDER keyword.
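As a rough illustration of the precedence rule (all object names below are placeholders, not objects from this
question), consider two calculated members that intersect in one cell. The member with the higher
SOLVE_ORDER takes precedence, so its formula defines the shared cell:
WITH
MEMBER [Measures].[Margin] AS
  [Measures].[Profit Amount] / [Measures].[Sales Amount],
  FORMAT_STRING = "Percent", SOLVE_ORDER = 1
MEMBER [Date].[Calendar Year].[Growth] AS
  [Date].[Calendar Year].&[2009] - [Date].[Calendar Year].&[2008],
  SOLVE_ORDER = 2
SELECT { [Measures].[Sales Amount], [Measures].[Margin] } ON COLUMNS,
       { [Date].[Calendar Year].&[2008], [Date].[Calendar Year].&[2009],
         [Date].[Calendar Year].[Growth] } ON ROWS
FROM [Adventure Works]
// At ([Margin], [Growth]) the higher SOLVE_ORDER wins: the cell is
// Margin(2009) - Margin(2008), the change in the ratio, rather than
// a ratio computed over the two differences.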

QUESTION 88
You design a SQL Server 2008 Analysis Services (SSAS) solution. Your solution has a measure named Sales
Amount and a dimension named Date. The Date dimension has a hierarchy named Fiscal that has levels
named Fiscal Year, Fiscal Quarter, and Fiscal Month.
You need to create a calculated member to analyze the Sales Amount share percentage at different levels
compared to the total sales for a given Fiscal Year.
Which code segment should you use?

A. Case
When [Date].[Fiscal].CurrentMember.Level.Ordinal = 0 Then 1
Else
[Measures].[Sales Amount]/([Measures].[Sales Amount], Ancestor([Date].[Fiscal].CurrentMember, [Date].[Fiscal].[Fiscal Year]))
End
B. Case
When [Date].[Fiscal].CurrentMember.Level.Ordinal = 0 Then 1
Else
[Measures].[Sales Amount]/([Measures].[Sales Amount], [Date].[Fiscal].CurrentMember.Parent)
End
C. Case
When [Date].[Fiscal].CurrentMember.Level.Ordinal = 0 Then 1
Else
[Measures].[Sales Amount]/([Measures].[Sales Amount], Descendants([Date].[Fiscal].CurrentMember, [Date].[Fiscal].[Fiscal Year], SELF_AND_AFTER))
End
D. Case
When [Date].[Fiscal].CurrentMember.Level.Ordinal = 0 Then 1
Else
[Measures].[Sales Amount]/([Measures].[Sales Amount], [Date].[Fiscal].[Fiscal Year].CurrentMember)
End

Answer: A
Section: (none)

Explanation/Reference:
ANCESTOR MDX Function
A function that returns the ancestor of a specified member at a specified level or at a specified distance from the
member.
Level syntax
Ancestor(Member_Expression, Level_Expression)
Member_Expression - A valid Multidimensional Expressions (MDX) expression that returns a member.
Level_Expression - A valid Multidimensional Expressions (MDX) expression that returns a level.
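Applied to the scenario above, the expression from option A can be tested with a query along these lines (a
sketch against the Adventure Works sample cube; the hierarchy and measure names are assumed to exist
there):
WITH MEMBER [Measures].[Share of Fiscal Year] AS
  Case
    When [Date].[Fiscal].CurrentMember.Level.Ordinal = 0 Then 1
    Else [Measures].[Sales Amount] /
         ([Measures].[Sales Amount],
          Ancestor([Date].[Fiscal].CurrentMember, [Date].[Fiscal].[Fiscal Year]))
  End, FORMAT_STRING = "Percent"
SELECT { [Measures].[Sales Amount], [Measures].[Share of Fiscal Year] } ON COLUMNS,
       [Date].[Fiscal].[Fiscal Quarter].MEMBERS ON ROWS
FROM [Adventure Works]
// Each quarter's sales are divided by the sales of its ancestor at the
// Fiscal Year level, giving that quarter's share of the year's total.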

QUESTION 89
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Analysis Services (SSAS) solution. The solution contains a cube that has a measure named SalesAmount. The
measure contains customer sales data for the last six months. The cube has a single partition that has the
storage property set to real-time hybrid online analytical processing (HOLAP).
Queries against the cube must return current sales data that is entered one hour before cube processing. The
partition takes two hours to process and the response time for the queries is slow. You need to improve the
cube processing and query response time.
What should you do?

A. Change the storage setting of the partition to multidimensional online analytical processing (MOLAP).
B. Change the storage setting of the partition to real-time relational online analytical processing (ROLAP).
C. Create a partition for each customer. Set the storage setting of every partition to low-latency
multidimensional online analytical processing (MOLAP).
D. Create a partition for every month. Set the storage setting of the partition for the current month to low-latency
multidimensional online analytical processing (MOLAP) and that of the other partitions to MOLAP.

Answer: D
Section: (none)

Explanation/Reference:
Low-Latency MOLAP: Detail data and aggregates are in multidimensional storage. When Analysis Services is
notified that the aggregates are out-of-date, it waits for a silence interval of ten seconds before beginning
processing. It uses a silence override interval of ten minutes. While the cube is processing, queries are sent to a
proactive cache. If processing takes longer than 30 minutes, the proactive cache is dropped and queries are
sent directly to the relational data source. This provides fast query response, unless processing takes longer
than 30 minutes. Maximum latency is 30 minutes. This setting is best in situations where query performance is
important but data must remain fairly current.
Medium-Latency MOLAP: Detail data and aggregates are in multidimensional storage. When Analysis Services
is notified that the aggregates are out-of-date, it waits for a silence interval of ten seconds before it starts
processing. It uses a silence override interval of ten minutes. While the cube is processing, queries are sent to a
proactive cache. If processing takes longer than four hours, the proactive cache is dropped and queries are sent
directly to the relational data source. This provides fast query response, unless processing takes longer than
four hours. Maximum latency is four hours. This setting is best in situations where query performance is
important and a bit more latency can be tolerated.
Automatic MOLAP: Detail data and aggregates are in multidimensional storage. When Analysis Services is
notified that the aggregates are out-of-date, it waits for a silence interval of ten seconds before it starts
processing. It uses a silence override interval of ten minutes. While the cube is processing, queries are sent to a
proactive cache. The proactive cache is not dropped, no matter how long processing takes. This provides fast
query response at all times, but it can lead to a large latency if processing is long-running. This setting is best in
situations where query performance is the most important factor and a potentially large latency can be tolerated.
(McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

QUESTION 90
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008
Analysis Services (SSAS) solution that has a dimension table named DimCustomer.
The DimCustomer table has the following attributes:
Gender
Address
Marital Status
Phone Number
You discover that DimCustomer takes a long time to process. You need to reduce the processing time of
DimCustomer. You also need to reduce the disk space required for the DimCustomer dimension table.
What should you do?

A. Set the ProcessingGroup property of DimCustomer to ByTable.
B. Set the ProcessingGroup property of DimCustomer to ByAttribute.
C. Set the AttributeHierarchyEnabled property of the Gender and Marital Status attributes to false.
D. Set the AttributeHierarchyEnabled property of the Phone Number and Address attributes to false.

Answer: D
Section: (none)

Explanation/Reference:
http://msdn.microsoft.com/en-us/library/ms166717(SQL.90).aspx
Hiding and Disabling Attribute Hierarchies
By default in Microsoft SQL Server 2005 Analysis Services (SSAS), an attribute hierarchy is created for every
attribute in a dimension, and each hierarchy is available for dimensioning fact data. This hierarchy consists of an
All level and a detail level containing all members of the hierarchy. As you have already learned, you can
organize attributes into user-defined hierarchies to provide navigation paths in a cube. Under certain
circumstances, you may want to disable or hide some attributes and their hierarchies. For example, certain
attributes such as social security numbers or national identification numbers, pay rates, birth dates, and login
information are not attributes by which users will dimension cube information. Instead, this information is
generally only viewed as details of a particular attribute member. You may want to hide these attribute
hierarchies, leaving the attributes visible only as member properties of a specific attribute. You may also want to
make members of other attributes, such as customer names or postal codes, visible only when they are viewed
through a user hierarchy instead of independently through an attribute hierarchy. One reason to do so may be
the sheer number of distinct members in the attribute hierarchy. Finally, to improve processing performance,
you should disable attribute hierarchies that users will not use for browsing.
The value of the AttributeHierarchyEnabled property determines whether an attribute hierarchy is created. If this
property is set to False, the attribute hierarchy is not created and the attribute cannot be used as a level in a
user hierarchy; the attribute hierarchy exists as a member property only. However, a disabled attribute hierarchy
can still be used to order the members of another attribute. If the value of the AttributeHierarchyEnabled
property is set to True, the value of the AttributeHierarchyVisible property determines whether the attribute
hierarchy is visible independent of its use in a user-defined hierarchy.
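Note that an attribute whose AttributeHierarchyEnabled property is set to False remains reachable as a member
property of the key attribute. A minimal sketch, assuming a cube named [Sales] and a member property named
Phone Number (both placeholders):
WITH MEMBER [Measures].[Customer Phone] AS
  [DimCustomer].[Customer].CurrentMember.Properties("Phone Number")
SELECT [Measures].[Customer Phone] ON COLUMNS,
       [DimCustomer].[Customer].MEMBERS ON ROWS
FROM [Sales]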
