
CMPT 354, Simon Fraser University, Fall 2008, Martin Ester

Database Systems I

Data Warehousing
Introduction
Increasingly, organizations are analyzing
current and historical data to identify useful
patterns and support business strategies
(Decision Support).
Emphasis is on complex, interactive,
exploratory analysis of very large datasets
created by integrating data from across all
parts of an enterprise; data is fairly static.
Contrast such On-Line Analytic Processing
(OLAP) with traditional On-line Transaction
Processing (OLTP): mostly long queries, instead
of short update transactions.
DBS for Decision Support
Data Warehouse: Consolidate data from many
sources in one large repository.
Loading, periodic synchronization of replicas.
Semantic integration.
OLAP:
Complex SQL queries and views.
Queries based on multidimensional view of data
and spreadsheet-style operations.
Interactive and online (manual) analysis.
Data Mining: Automatic discovery of
interesting trends and other patterns.
Data Warehousing
A Data Warehouse is a subject-oriented,
integrated, time-variant, non-volatile collection
of data for the purpose of decision support.
Integrates data from several operational
(OLTP) databases.
Keeps (relevant part of the) history of the data.
Views data at a more abstract level than OLTP
systems (aggregate over many detail records).
Data Warehouse Architecture
EXTERNAL DATA SOURCES
  → EXTRACT, INTEGRATE, TRANSFORM, LOAD / REFRESH
  → DATA WAREHOUSE (with Metadata Repository)
  → supports OLAP and DATA MINING
Data Warehousing
Integrated data spanning long time periods,
often augmented with summary information.
Data warehouse keeps the history. Therefore,
several gigabytes to terabytes common.
Interactive response times expected for
complex queries.
On the other hand, ad-hoc updates uncommon.
Data Warehousing Issues
Semantic integration: When getting data from
multiple sources, must eliminate mismatches, e.g.,
different currencies, DB schemas.
Heterogeneous sources: Must access data from a
variety of source formats and repositories.
Replication capabilities can be exploited here.
Load, refresh, purge: Must load data, periodically
refresh it, and purge too-old data.
Metadata management: Must keep track of source,
loading time, and other information for all data in the
warehouse.
Multidimensional Data Model
Consists of a collection of dimensions
(independent variables) and (numeric)
measures (dependent variables).
Each entry (cell) aggregates the value(s) of the
measure(s) for all records that fall into that cell,
i.e. for all records that in each dimension have
attribute values corresponding to the value of
the cell in this dimension.
Example: dimensions Product (pid), Location
(locid), and Time (timeid) and measure Sales.
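For instance, the value of a single cell is the measure aggregated over all matching detail records. A minimal SQL sketch, assuming the fact table Sales(pid, timeid, locid, sales) used in the following slides (with the example data shown below, where each combination occurs once, the result is 25):

SELECT SUM(S.sales)
FROM Sales S
WHERE S.pid = 11 AND S.timeid = 1 AND S.locid = 1;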
Multidimensional Data Model
Tabular representation (one row per fact-table record):

pid   timeid   locid   sales
11    1        1       25
11    2        1       8
11    3        1       15
12    1        1       30
12    2        1       20
12    3        1       50
13    1        1       8
13    2        1       10
13    3        1       10
11    1        2       35

Multidimensional representation (the slice locid = 1 is shown; rows = pid, columns = timeid):

pid \ timeid    1     2     3
11             25     8    15
12             30    20    50
13              8    10    10

Multidimensional Data Model
For each dimension, the set of values can be
organized in a concept hierarchy (subset
relationship), e.g.
PRODUCT:   pname → category
TIME:      date → week;  date → month → quarter → year
LOCATION:  city → state → country
Multidimensional Data Model
Multidimensional data can be stored physically in a
(disk-resident, persistent) array; called MOLAP (multi-
dimensional OLAP) systems.
Alternatively, can store as a relation; called ROLAP
(relational OLAP) systems.
The main relation, which relates dimensions to a
measure, is called the fact table.
Each dimension can have additional attributes and an
associated dimension table.
E.g., fact table Transactions(pid, locid, timeid, sales)
and (one of the) dimension tables Products(pid, pname,
category, price).
Fact tables are much larger than dimension tables.
OLAP Queries
Influenced by SQL and by spreadsheets.
A common operation is to aggregate a measure
over one or more dimensions.
Find total sales.
Find total sales for each city, or for each state.
Find top five products ranked by total sales.
We can aggregate at different levels of a
dimension hierarchy. A roll-up operation
aggregates along the next higher level of the
dimension hierarchy.
E.g., given total sales by city, we can roll-up to get
sales by state.
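As an illustration, total sales per city and the corresponding roll-up to states might be written as follows (a sketch assuming the star schema with Sales and Locations introduced later; column names are taken from that schema):

-- total sales for each city
SELECT L.city, SUM(S.sales)
FROM Sales S, Locations L
WHERE S.locid = L.locid
GROUP BY L.city;

-- roll-up: total sales for each state
SELECT L.state, SUM(S.sales)
FROM Sales S, Locations L
WHERE S.locid = L.locid
GROUP BY L.state;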
OLAP Queries
Drill-down: The inverse of roll-up.
E.g., given total sales by state, can drill-down to get
total sales by city.
E.g., can also drill-down on different dimension to
get total sales by product for each state.
Pivoting: Aggregation on selected dimensions.
E.g., pivoting on Location and Time
yields this cross-tabulation:

Slicing and Dicing: Equality
and range selections on one
or more dimensions (see the sketch after the cross-tabulation below).
          WI     CA    Total
1995      63     81     144
1996      38    107     145
1997      75     35     110
Total    176    223     399
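A slice or dice simply adds equality or range selections on dimension values before aggregating. A sketch using the same schema (the year values are taken from the cross-tabulation above):

-- dice: restrict to state CA and to the years 1995 and 1996
SELECT T.year, SUM(S.sales)
FROM Sales S, Times T, Locations L
WHERE S.timeid = T.timeid AND S.locid = L.locid
  AND L.state = 'CA' AND T.year BETWEEN 1995 AND 1996
GROUP BY T.year;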
Comparison with SQL Queries
The cross-tabulation obtained by pivoting can
also be computed using a collection of SQL
queries, e.g.
SELECT SUM(S.sales)
FROM Sales S, Times T, Locations L
WHERE S.timeid=T.timeid AND S.locid=L.locid
GROUP BY T.year, L.state
SELECT SUM(S.sales)
FROM Sales S, Times T
WHERE S.timeid=T.timeid
GROUP BY T.year
SELECT SUM(S.sales)
FROM Sales S, Locations L
WHERE S.locid=L.locid
GROUP BY L.state
SELECT SUM(S.sales) FROM Sales S
The Cube Operator
Generalizing the previous example, if there are
d dimensions, we have 2^d possible SQL GROUP
BY queries that can be generated through
pivoting on a subset of dimensions (without
considering selections of specific values for
certain dimensions).
A Data Cube is a multidimensional model of a
data warehouse where the domain of each
dimension is extended by the special value
ALL, with the semantics of aggregating over
all values of the corresponding dimension.
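Many SQL dialects expose this directly through the SQL:1999 CUBE grouping; a sketch over year and state (support and exact syntax vary by system):

SELECT T.year, L.state, SUM(S.sales)
FROM Sales S, Times T, Locations L
WHERE S.timeid = T.timeid AND S.locid = L.locid
GROUP BY CUBE (T.year, L.state);
-- rows with NULL in year or state play the role of the special value ALL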
The Cube Operator
An entry of a data cube is called a cell.
The number of cells of a data cube with d
dimensions is
    ∏_{i=1}^{d} (|Domain_i| + 1),
since each dimension contributes its domain values
plus the special value ALL (e.g., with 3 products,
3 time ids and 2 locations this gives 4 · 4 · 3 = 48 cells).
Each SQL group corresponds to a data cube cell.
A single one of the 2^d different SQL GROUP BY
queries can compute the measures for multiple
data cube cells.
The Cube Operator
The Cube Operator computes the measures for
all cells (evaluates all possible GROUP BY
queries) at the same time.
It can be processed much more efficiently than
the set of all corresponding (independent) SQL
GROUP BY queries.
Observation: The results of more generalized
queries (with fewer GROUP BY attributes) can
be derived from more specialized queries (with
more GROUP BY attributes) by aggregating
over the irrelevant GROUP BY attributes.
The Cube Operator
Process more specialised queries first and,
based on their results, determine the outcome
of more generalised queries.
Significant reduction of I/O cost, since
intermediate results are much smaller than the
original (fact) table.
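For example, if the (year, state) totals have already been computed and stored, the per-state totals can be derived from them instead of re-scanning the fact table. A sketch, assuming a hypothetical intermediate table SalesByYearState(year, state, total):

SELECT state, SUM(total)
FROM SalesByYearState
GROUP BY state;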

The Cube Operator
Lattice of GROUP-BY queries of a CUBE query
w.r.t. derivability of the results
Example

{pid, locid, timeid}

{pid, locid} {pid, timeid} {locid, timeid}

{pid} {locid} {timeid}

{}
{A,B,...}: set of GROUP BY attributes; X → Y: Y derivable from X
Implementation Issues
In the following, we adopt a ROLAP
implementation.
Fact table normalized (redundancy free).
Dimension tables un-normalized.
Dimension tables are small;
updates/inserts/deletes are rare. So, anomalies
less important than query performance.
This kind of schema is very common in OLAP
applications, and is called a star schema;
computing the join of all these relations is
called a star join.

Implementation Issues
Example star schema

SALES(pid, timeid, locid, sales)
PRODUCTS(pid, pname, category, price)
TIMES(timeid, date, week, month, quarter, year, holiday_flag)
LOCATIONS(locid, city, state, country)
Fact table: Sales
Dimension tables: Times, Products, Locations
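A minimal sketch of this star schema as SQL DDL, plus a star join; the data types are assumptions for illustration, and column names such as date or year may need quoting in some systems:

CREATE TABLE Locations (
  locid   INTEGER PRIMARY KEY,
  city    VARCHAR(30),
  state   VARCHAR(30),
  country VARCHAR(30)
);
CREATE TABLE Products (
  pid      INTEGER PRIMARY KEY,
  pname    VARCHAR(30),
  category VARCHAR(30),
  price    DECIMAL(10,2)
);
CREATE TABLE Times (
  timeid       INTEGER PRIMARY KEY,
  date         DATE,
  week         INTEGER,
  month        INTEGER,
  quarter      INTEGER,
  year         INTEGER,
  holiday_flag CHAR(1)
);
CREATE TABLE Sales (
  pid    INTEGER REFERENCES Products,
  timeid INTEGER REFERENCES Times,
  locid  INTEGER REFERENCES Locations,
  sales  DECIMAL(10,2)
);

-- star join: the fact table joined with all dimension tables
SELECT P.category, T.year, L.country, SUM(S.sales)
FROM Sales S, Products P, Times T, Locations L
WHERE S.pid = P.pid AND S.timeid = T.timeid AND S.locid = L.locid
GROUP BY P.category, T.year, L.country;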
Bitmap Indexes
New indexing techniques: Bitmap indexes,
Join indexes, array representations,
compression, precomputation of aggregations,
etc.
Example Bitmap index:
custid  name  sex  rating     sex bitmap (M, F)     rating bitmap (1, 2, 3, 4, 5)
112     Joe   M    3          1 0                   0 0 1 0 0
115     Ram   M    5          1 0                   0 0 0 0 1
119     Sue   F    5          0 1                   0 0 0 0 1
112     Woo   M    4          1 0                   0 0 0 1 0

Bit-vector: 1 bit for each possible value. One row per record.
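Standard SQL has no bitmap index DDL; systems that support bitmap indexes use vendor syntax. An Oracle-style sketch, assuming the rows above are stored in a table Customers(custid, name, sex, rating):

CREATE BITMAP INDEX cust_sex_bidx    ON Customers (sex);
CREATE BITMAP INDEX cust_rating_bidx ON Customers (rating);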
Bitmap Indexes
Selections can be processed using (efficient!)
bit-vector operations.
Example 1: Find all male customers.
Read off the bit-vector for sex = M: 1 1 0 1 (Joe, Ram, Woo).

Example 2: Find all male customers with a rating of 3.
AND the relevant bit-vectors from the bitmap indexes for
sex and rating: (1 1 0 1) AND (1 0 0 0) = (1 0 0 0),
i.e., only Joe qualifies.
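The bit-vector operations above evaluate an ordinary selection query such as the following (again assuming a table Customers holding the rows shown earlier):

SELECT C.custid, C.name
FROM Customers C
WHERE C.sex = 'M' AND C.rating = 3;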

Join Indexes
Consider the join of Sales, Products, Times, and
Locations, possibly with additional selection
conditions (e.g., country=USA).
A join index can be constructed to speed up
such joins (in a relatively static data
warehouse). It basically materializes the result
of a join.
The index contains [s,p,t,l] if there are tuples
with sid s in Sales, pid p in Products, timeid t
in Times and locid l in Locations that satisfy
the join (and selection) conditions.
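Conceptually, such a join index could be materialized as an ordinary table of matching key combinations; a rough sketch for a country = 'USA' selection (real systems store record identifiers rather than keys, and CREATE TABLE ... AS SELECT syntax varies by system):

CREATE TABLE SalesUSA_jidx AS
SELECT S.pid, S.timeid, S.locid
FROM Sales S, Locations L
WHERE S.locid = L.locid AND L.country = 'USA';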

Join Indexes
Problem: Number of join indexes can grow
rapidly.
In order to efficiently support all possible
selections in a data cube, you need one join
index for each subset of the set of dimensions.
E.g., one join index each for
[s,p,t,l], [s,p,t], [s,p,l], [s,t,l], [s,p], [s,t], [s,l]

Bitmapped Join Indexes
A variation of join indexes addresses this
problem, using the concept of Bitmap indexes.
For each attribute of each dimension table with
an additional selection (e.g., country), build a
Bitmap index.
Index contains, e.g., entry [c,s] if a dimension
table tuple with value c in the selection column
joins with a Sales tuple with sid s. Note that s
denotes the compound key of the fact table,
e.g. [pid, timeid, locid].
The Bitmap index version is especially efficient
(Bitmapped Join Index).
Bitmapped Join Indexes






Consider a query with conditions price=10 and
country=USA. Suppose tuple (with sid) s in Sales
joins with a tuple p with price=10 and a tuple l with
country = USA. There are two (Bitmap) join indexes;
one containing [10,s] and the other [USA,s].
Intersecting these indexes tells us which tuples in Sales
are in the join and satisfy the given selection.
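Some systems let you declare such bitmapped join indexes directly; an Oracle-style sketch for the two indexes mentioned above (vendor-specific syntax, shown only as an illustration):

CREATE BITMAP INDEX sales_price_bjix
ON Sales (Products.price)
FROM Sales, Products
WHERE Sales.pid = Products.pid;

CREATE BITMAP INDEX sales_country_bjix
ON Sales (Locations.country)
FROM Sales, Locations
WHERE Sales.locid = Locations.locid;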
Sequences in SQL
SQL-92 supports only (unordered) sets of tuples.
Trend analysis is difficult to do in SQL-92, e.g.:
Find the % change in monthly sales
Find the top 5 products by total sales
Find the trailing n-day moving average of sales
The first two queries can be expressed with
difficulty, but the third cannot even be expressed
in SQL-92 if n is a parameter of the query.
The WINDOW clause in SQL:1999 allows us to
formulate such queries over a table viewed as a
sequence of tuples (implicitly, based on user-
specified sort keys).
The WINDOW Clause
A window is an ordered group of tuples around
each (reference) tuple of a table.
The order within a window is determined based
on an attribute specified by the SQL statement.
The width of the window is also specified by the
SQL statement.
The tuples of the window can be aggregated
using the standard (set-oriented) SQL aggregate
functions (SUM, AVG, COUNT, . . .).
SQL:1999 also introduces some new (sequence-
oriented) aggregate functions, in particular
RANK, DENSE_RANK, PERCENT_RANK.
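For instance, the earlier "top 5 products by total sales" query becomes straightforward with RANK; a sketch (SQL:1999 ranking functions and derived tables assumed):

SELECT pid, total_sales, sales_rank
FROM (SELECT S.pid, SUM(S.sales) AS total_sales,
             RANK() OVER (ORDER BY SUM(S.sales) DESC) AS sales_rank
      FROM Sales S
      GROUP BY S.pid) ranked
WHERE sales_rank <= 5;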

The WINDOW Clause







Let the result of the FROM and WHERE clauses be
Temp.
Conceptually, Temp is partitioned according to the
PARTITION BY clause.
Similar to GROUP BY, but the answer has one tuple for
each tuple in a partition, not one tuple per partition!
Each partition is sorted according to the ORDER BY
clause.
SELECT L.state, T.month, AVG(S.sales) OVER W AS movavg
FROM Sales S, Times T, Locations L
WHERE S.timeid=T.timeid AND S.locid=L.locid
WINDOW W AS (PARTITION BY L.state
ORDER BY T.month
RANGE BETWEEN INTERVAL '1' MONTH PRECEDING
AND INTERVAL '1' MONTH FOLLOWING);
The WINDOW Clause








For each tuple in a partition, the WINDOW clause creates
a window of nearby (preceding or succeeding) tuples.
Definition of window width can be value-based, as in example,
using RANGE.
Can also be based on the number of tuples to include in the window,
using the ROWS clause (see the sketch below the query).
The aggregate function is evaluated for each tuple in the
partition based on the corresponding window.
SELECT L.state, T.month, AVG(S.sales) OVER W AS movavg
FROM Sales S, Times T, Locations L
WHERE S.timeid=T.timeid AND S.locid=L.locid
WINDOW W AS (PARTITION BY L.state
ORDER BY T.month
RANGE BETWEEN INTERVAL '1' MONTH PRECEDING
AND INTERVAL '1' MONTH FOLLOWING);
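A ROWS-based variant of the same query, counting tuples instead of using a value range; the window then spans the previous, current, and next tuple in each partition (a sketch):

SELECT L.state, T.month, AVG(S.sales) OVER W AS movavg
FROM Sales S, Times T, Locations L
WHERE S.timeid=T.timeid AND S.locid=L.locid
WINDOW W AS (PARTITION BY L.state
             ORDER BY T.month
             ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING);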
Top N Queries
Sometimes, want to find only the best answers (e.g.,
web search engines).
If you want to find only the 10 (or so) cheapest cars, the
DBMS should avoid computing the costs of all cars
before sorting to determine the 10 cheapest.
Idea: Guess a cost c such that
the 10 cheapest cars all cost less than c, and that
not too many other cars cost less than c.
Then add the selection cost<c and evaluate the query.
If the guess is right: we avoid computation for cars that
cost more than c.
If the guess is wrong: need to reset the selection and
recompute the original query.
Top N Queries











Cut-off value c is chosen by query optimizer
SELECT TOP 10 P.pid, P.pname, S.sales
FROM Sales S, Products P
WHERE S.pid=P.pid AND S.locid=1 AND S.timeid=3
ORDER BY S.sales DESC
SELECT P.pid, P.pname, S.sales
FROM Sales S, Products P
WHERE S.pid=P.pid AND S.locid=1 AND S.timeid=3
AND S.sales > c
ORDER BY S.sales DESC
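TOP is not standard SQL; SQL:2008 systems express the same query with FETCH FIRST (and many others use LIMIT):

SELECT P.pid, P.pname, S.sales
FROM Sales S, Products P
WHERE S.pid=P.pid AND S.locid=1 AND S.timeid=3
ORDER BY S.sales DESC
FETCH FIRST 10 ROWS ONLY;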
Online Aggregation
Consider an aggregate query, e.g., finding the
average sales by state.
If we do not have a corresponding (materialized)
data cube, processing this query from scratch
can be very expensive.
In general, we have to scan the entire fact table.
But the user expects interactive response time.
An approximate result may be acceptable to the
user.
Online Aggregation
Can we provide the user with some approximate
results before the exact average is computed for
all states?
Can show the current running average for each
state as the computation proceeds.
Even better, we can use statistical techniques and
sample tuples to aggregate, instead of simply
scanning the entire table being aggregated.
E.g., we can provide bounds such as: the
average for Wisconsin is 2000 ± 102 with 95%
probability.
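One way to obtain such an approximate answer is to aggregate over a small sample of the fact table; a sketch using the SQL:2003 TABLESAMPLE clause with a 1% system sample (support and exact syntax vary by system):

SELECT L.state, AVG(S.sales) AS approx_avg_sales
FROM Sales AS S TABLESAMPLE SYSTEM (1), Locations L
WHERE S.locid = L.locid
GROUP BY L.state;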
Summary
Decision support is an emerging, rapidly
growing subarea of database systems.
Involves the creation of large, consolidated data
repositories called data warehouses.
Warehouses exploited using sophisticated
analysis techniques: complex SQL queries and
OLAP multidimensional queries (or automatic
data mining methods).
New techniques for database design, indexing,
view maintenance, and interactive (online)
querying need to be developed.
