Granularity denotes the level of detail of the records available in the OLAP database. The more
detailed the available records, the higher the granularity.
Yes. There can be more than one physical data source (even of different DB types).
23. Can a business model have logical table source from different physical data sources?
Yes. If different types of physical database sources are used, then the SAS has to perform the
merge using the data sets retrieved from the disparate DBs.
24. Can a Presentation Catalog have tables from different Business Model?
No. It is not possible.
25. List a few reasons to have a Presentation Layer.
Enables grouping the dimensions and facts based on organizational requirements.
Example: Departmental/Cross-Departmental Subject Areas.
To implement object level security.
To remove columns that serve the ONLY purpose of a key in the Logical Model.
Siebel Analytics ODBC Client
This tool facilitates testing the business model by executing logical queries against a repository.
It also allows connecting to a physical database through ODBC and executing SQL queries.
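As an illustration, a logical query issued through this client might look like the following sketch (the subject area and column names are invented for the example):

    SELECT Customer."Region", "Sales Measures"."Dollars"
    FROM "Sales Subject Area"
    ORDER BY Customer."Region"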
29. Describe the complete process when a Siebel Answers Request is executed.
When a client application executes a Siebel Answers request, the respective SAW
communicates with the designated SAS. Using the metadata in the repository, the SAS derives
the dynamic SQL and forwards it to the physical database server. When the database
server returns the result set to the SAS, it is sent back to the SAW, which eventually presents
it to the Web client.
NQSConfig.INI
This is the configuration file that contains startup parameters required to start the SAS.
Modification of this file requires SAS re-start.
DBFeatures.INI
This file allows enabling/disabling database-specific features.
Modification of this file requires SAS re-start.
NQServer.Log
This is the log file generated by the SAS upon start up and thereafter.
NQQuery.log This is the log file generated by the SAS for Siebel requests executed by users
whose Logging Level is greater than 0.
.RPD files
Repository files that are created using Siebel Analytical Administration Tool.
Cache When caching is enabled in NQSConfig.INI, the SAS caches SQL query results in
the folder specified in the NQSConfig.INI file.
What is Caching? List the merits of Caching.
Caching is a SAS feature that caches query results against the UserId for a business model.
This feature is used to improve performance by caching the results of requests that are executed
repeatedly. It reduces the workload on the database. It reduces network traffic.
43. Name the parameters that affect the Caching feature and their significance.
Parameter - Description
DATA_STORAGE_PATHS - Indicates the folder(s) where the query results are cached in files,
and the maximum size of the cache. Usually placed on a faster hard drive to reduce response time.
METADATA_FILE - Specifies the file name, with location, that is used to store cache metadata.
REPLACE_ALGORITHM - The algorithm used to replace cache entries when the maximum
cache size is reached. The only algorithm currently supported is LRU.
BUFFER_POOL_SIZE - Size of the memory used to store cache entries in memory to
improve cache response time.
MAX_ROWS_PER_CACHE_ENTRY - Indicates the maximum rows that will be cached against a
user/BMM/query.
MAX_CACHE_ENTRY_SIZE - Signifies the maximum allowable size for a cache entry.
MAX_CACHE_ENTRIES - The maximum number of entries allowed in the cache.
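Pulling these together, a sketch of the CACHE section of NQSConfig.INI might look like the following (paths and sizes are illustrative values only, not recommendations; exact parameter sets vary by release):

    [ CACHE ]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "C:\SiebelAnalyticsData\Cache" 500 MB;
    METADATA_FILE = "C:\SiebelAnalyticsData\Cache\metadata.dat";
    REPLACE_ALGORITHM = LRU;
    BUFFER_POOL_SIZE = 1 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000;
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;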
44. When there are four tables involved in a query but one of the tables does not have Make
table cacheable enabled, will the SAS cache the result of this query?
No.
4. Ensure that the ETL process makes appropriate entries in the event polling table upon
successful load.
4. After making changes in the repository, invoke the File -> Multi User Checkout -> Merge
Local Changes menu. This places a lock on the master repository in the shared folder, creates
a log entry, and copies the master repository into the local system's Repository folder.
5. After the merge is completed, invoke the File -> Multi User Checkout -> Publish to
Network menu. This copies the master repository back into the shared folder, releases the
lock on it, and makes a log entry marking the completion of the changes.
Enables the SAS to combine data from different physical data sources (tables) into a single
logical table.
61. What is the significance of Number of elements at this level in the Logical Level dialog
box?
It signifies how many rows exist for the respective dimensional attribute in the database.
This is an approximate value, but it helps the SAS determine the best execution method.
This is a noticeable performance factor.
the dimensional level attribute used in the request. Aggregate navigation is implemented by
creating a logical fact table that contains multiple logical table sources and assigning an
aggregation level to each LTS based on a dimensional hierarchy.
Example: W_Sales_Revenue_Fact at the appropriate level of aggregation.
Is a way to model multiple sources of data that specifies what data is located where (e.g.,
which source holds the LastName column).
Allows you to account for data changes that occur over a period in a dimension table.
Example: Country Manager Effective Period
Type 3 - One version of history is maintained (CURRENT and PREVIOUS value columns).
67. What is an Extension table?
An extension table is used to implement physical schema changes made in the OLTP system.
1. Invoke the Time Series Wizard by right-clicking on the BMM.
2. All physical dimension tables are listed.
3. Select the required period table.
4. Select the period key that will be used for comparison (Year Ago, Quarter Ago, Month
Ago, or Week Ago).
5. The sales measures and calculations are then listed. Choose the desired measures and types of
calculations; aliases for the fact table will be added into the physical layer and included as
logical table sources of the original fact table.
70. What are the Time Series Calculations available?
Change
Percent Change
Index
Percent
71. How do you override the lexical sorting of a column? Example: Month Name sorted not
alphabetically but in month order.
By assigning another column in the Sort Order Column field of the logical column. When
this column is used in a Siebel Answers request, the sort is performed based on the column
specified in the Sort Order Column field.
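The effect on the generated physical SQL is roughly the following sketch (the time-dimension table and columns here are hypothetical; month_name is displayed while month_num drives the sort):

    -- Alphabetical sort would give Apr, Aug, Dec, ...; sorting on the
    -- companion numeric column yields Jan, Feb, Mar, ...
    SELECT month_name
    FROM w_month_d
    ORDER BY month_num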
72. What is authentication?
Authentication is the process that validates the credentials of the user who logs into
Siebel Analytics.
73. What are the types of Authentication supported by SAS?
LDAP (Lightweight Directory Access Protocol).
Database Authentication.
2. Create users that match the login IDs in the database. There is no need to maintain
passwords in the repository.
6. When the user logs in through Siebel Analytics, the SAS attempts to log in to the database
server. If the login succeeds, the user is connected with Siebel Analytics; otherwise they
are not.
76. How does External Table Authentication work?
User credentials and attributes are maintained in an external database table and mapped to
session variables such as USER, GROUP, LOGLEVEL etc.
5. Create an initialization block that selects values from the respective database table(s)
into those session variables.
Set AUTHENTICATION_TYPE = BYPASS_NQS in the SECURITY section of NQSConfig.INI.
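A hedged sketch of the initialization block SQL for external table authentication; the table and column names are hypothetical, and :USER / :PASSWORD are the placeholders filled in at login:

    -- The returned columns populate the session variables mapped to the
    -- block, e.g. USER, GROUP and LOGLEVEL in that order.
    SELECT usr_id, grp_name, log_level
    FROM sec_users
    WHERE usr_id = ':USER'
      AND pwd = ':PASSWORD'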
OS Authentication enables the SAS to use the trusted connection feature of the OS. This
authentication is not supported when the user logs in using the Siebel Analytics Web client;
it is only applicable for applications that connect to the SAS through ODBC.
80. What Types of Repository Variables are available?
Type
Details
Static
The value persists until changed by the administrator.
Example: RSV_DW_DSN, RSV_DW_USER
Dynamic
The value is refreshed by an initialization block on a schedule, similar to variables in any
programming language.
Example: RDV_CURRENT_YEAR, RDV_CURRENT_QTR, RDV_CURRENT_MONTH, RDV_CURRENT_WEEK

What Types of Session Variables are available?
Type
Details
System
System session variables are pre-declared ones; the SAS does NOT allow creating a new
system session variable. Only the pre-declared session variables can be used. An
Initialization Block must be assigned to populate the system session variables for each
user session.
Example: GROUP, WEBGROUPS, LOGLEVEL
Non-System
Example: SV_POSITION_ID, SV_ORG_ID
5. Set the Refresh time to schedule refreshing the values dynamically. Only applicable for
Dynamic Repository variables.
84. What is the significance of Row-wise Initialization?
This feature enables initializing session variables from a database table (i.e., both the
variable name and its values are retrieved from the database).
This feature can also be used to initialize a variable with a list of values (i.e., more than
one value). Such variables can be used in set operations in SQL (in conjunction with the IN
clause).
This feature is available only for session variables.
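A hedged example of row-wise initialization SQL (table and column names are hypothetical): every row returned contributes one value, so the variable ends up holding the whole list for use with IN.

    -- Column 1 is the variable name, column 2 one of its values; three
    -- matching rows would give LEVELS a list of three values.
    SELECT 'LEVELS', sec_level
    FROM user_sec_levels
    WHERE usr_id = ':USER'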
85. What is the difference between the SQL used in the Initialization Block and the rest of the SAS?
The SQL used in the IB refers directly to physical database objects. SQL elsewhere in the SAS
refers to logical objects and is converted to physical SQL automatically.
A Sales Regional Mgr can see only the sales made by reps reporting to him/her.
A Country Mgr can see all the sales made by people reporting to him/her.
90. What is object level security?
Object level security is granting or restricting access to repository objects and Web objects.
Usage tracking is a feature in Siebel Analytics to capture statistics about the usage of the system.
93. What are the table types that come with Siebel Relationship Management Warehouse?
Internal
Staging
Dimension
Fact
Subset Dimension
Mini Dimension
Aggregate
Extension
94. What is an Image Table? How many image tables are used in SRMW?
An image table captures the data changes that occurred after the last ETL load. The following
are the image tables: S_ETL_D_IMG (1 to 83).
95. What are Staging Tables?
Staging tables store the incremental data extracted from OLTP transactions. Before each ETL
process these staging tables are truncated prior to loading; the data in the staging tables is
then transformed and loaded into the appropriate target tables by the ETL process.
96. What is an Aggregate Table?
An aggregate table stores precomputed measures summarized over a set of dimensional
attributes, to speed up summary queries.
What is a Mini Dimension Table?
In the SRMW, the Mini Dimension tables are created using the combination of the most
frequently used attributes of their parent dimension. The purpose of a Mini Dimension table is
to improve query performance by keeping these frequently used attributes in a small table.
One ODBC data source for database connectivity. The connection pool in the physical
layer will use this data source to establish the connection between the SAS and the database
server.
One ODBC data source of Siebel Analytics Server type. The SAW will use this data source
to connect to the SAS.
102. What is the default port number used in the ODBC data source used by the SAW?
9703. This should match the RPC_SERVICE_OR_PORT entry in the SERVER section of
NQSConfig.INI.
106. How does the SAW determine which SAS it should communicate with?
The DSN element of the instanceconfig.xml file holds the name of the ODBC data source
pointing to the SAS.
A Global filter is used in the dashboard to prompt the user to select report criteria; it
applies to all requests on the dashboard that have the same column or set of columns with Is
Prompted assigned as the filter.
112. What are the Prompts available?
Dashboard Prompt (Global Filter), Page Prompt, Column Filter Prompt, and Image Prompt.
A Page Prompt is used in the dashboard to enable the user to select criteria for the requests
currently viewed by the user. The scope of a Page Prompt is limited to the dashboard page it
is associated with.
114. What is a Column Filter Prompt?
A Column Filter Prompt is associated with a request. When the request is assigned to a
dashboard, the prompt enables the user to select the criteria for that column, then generates
the request using the criteria selected.
115. What is an Image Prompt?
An Image Prompt and a Column Filter Prompt are functionally the same; instead of showing a
list of values, an image prompt allows the user to select a region (co-ordinate) in the image
and takes the value associated with that region. When an image prompt is created, column
values are assigned to particular co-ordinates of the image.
128. Can a single Answers request be created using objects from different Subject Areas?
It is not possible to create a single Answers request using objects from different Subject
Areas; however, using the Combine Request feature, a request can be combined with other
request(s) created using a different subject area. The data types of the columns should match,
otherwise Siebel Analytics will throw an Inconsistent data type error. Security rights should
also be properly handled.
129. When a report is printed from the dashboard page by clicking on the 'Print' report link,
the prompts that were applied to the results in that report should also be printed. Is this
possible?
Yes. Add the Filter View to the request so that the applied prompts are shown. But this
displays the filter name instead of its content if the request is associated with any shared
filter.
130. A dashboard prompt is shared by all requests in the dashboard with the Equal operator, but
for one request the operator should function as <= rather than =. How can this be made to work?
When userB drills down from the 1st level column, the request drills down to the 3rd level
column.
Ensure that only for the highest granularity (detail level) does the system access the fact
table, and for lower levels (group by a dimension attribute) it accesses the aggregated fact
table.
Enable the cache feature.
The SAS determines this by using the hierarchy level associated with the fact table in the
logical table source.
136. What is the relationship between the Group and WebGroup?
If a Webgroup is created with the same name as a Group in the Siebel Administrator Tool,
then the users assigned to that Group automatically inherit the privileges assigned to the
Web Group.
137. Can a Global filter be created using a computed column?
Not possible.
138. What are types of indexes available?
B-tree and Bitmap (Only in Oracle from 9i onwards).
139. When is a bitmap index used?
Bitmap indexes are used in OLAP databases for columns having low selectivity, for example a
Status or Gender column with only a few distinct values.
Yet to find.
144. How do you decide when to use the Repository or the Webcat file to implement object level
security?
Only when there is a requirement to restrict access to a particular table or column in a
subject area.
148. What kinds of fact tables did you use in your recent project?
Regular fact and factless fact tables (to support negative analysis reports).
149. What is materialized view?
A materialized view (MV) is similar to a view, but the data is actually stored in the
database (a view that materializes). Materialized views are often used for summary and
pre-joined tables, or just to make a snapshot of a table.
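As a sketch, a summary materialized view in Oracle could be defined like this (table and column names are illustrative):

    -- Pre-aggregates revenue by region so summary queries can avoid
    -- scanning the detail fact table.
    CREATE MATERIALIZED VIEW mv_region_revenue
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS SELECT region, SUM(revenue) AS total_revenue
       FROM w_sales_f
       GROUP BY region;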
Create a Pivot Table view, move the Region column into the Sections area of the pivot table,
and check the Chart Pivoted Results check box.
151. How do you create a request that returns the TOP 10 values of a report?
The request can be created with a TOP N operator filter.
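As a hedged illustration, the logical SQL behind such a filter can be sketched with the RANK function (subject area and column names are made up; the exact function Answers emits for a Top N filter may differ):

    SELECT Customer."Region", "Sales Measures"."Dollars"
    FROM "Sales Subject Area"
    WHERE RANK("Sales Measures"."Dollars") <= 10
    ORDER BY 2 DESC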
152. List the Best Practices recommended by Siebel for Dashboard development
1. No horizontal scrolling.
The Activity Dimension is created in the Logical Layer with data from Activity_F and
W_LOV_D.
Why does the Activity Star have no Activity Dimension in the Physical Layer?
The Activity Star was wisely modeled using a Combo table. With a Combo table, Analytics can
logically convert a single physical table into both a logical dimension and a logical fact.
By using a combo table, a) the loads are easier, b) less storage is required, and c) no join
is needed between _F and _D.
This is a good practice when dealing with transactional records that are used as both
dimensions and facts - it reduces work, errors, and complexity, and, especially for large
tables, can greatly improve performance.
Tabletype
_A - Aggregate
_D - Dimension
_DX - Dimension Extension
_DS - Dimension Staging
_DSX - Dimension Staging Extension
_DH - Dimension Hierarchy
_F - Fact
_FX - Fact Extension
_FS - Fact Staging
_FSX - Fact Staging Extension
_G, _GS, _S - Internal Table
_H - Helper table
_MD - Mini dimension
_SCD - Slowly-changing Dimension
Where are the passwords for userid, LDAP, and external table authentication stored, respectively?
o Passwords for user IDs are stored in the Siebel Analytics Server repository; for LDAP
authentication, in the LDAP server; for external table authentication, in a table in the
external database.
Can you bypass Siebel Analytics Server security? If so, how?
o Yes, you can bypass it by setting the authentication type in the NQSConfig file in the
security section as authentication_type=bypass_nqs. instanceconfig.xml and NQSConfig.ini are
the 2 places where this is configured.
Where can you add new groups and set permissions?
o You can add groups by going to Manage > Security > Add New Groups. You can give
permissions to a group for query limitation and filter conditions.
What are the things you can do in the BMM layer?
o Aggregation navigation, level-based metrics, time series wizard, creating new logical
columns, complex joins.
What is the difference between a Single Logical Table Source and Multiple Logical Table
Sources?
o If a logical table in the BMM layer has only one table as the source table then it is a Single LTS.
o If the logical table in the BMM layer has more than one table as its sources then it is called
Multiple LTS.
o Ex: Usually a Fact table has Multiple LTS, for which the sources come from different
physical tables.
Can you let me know how many aggregate tables you have in your project? On what basis have
you created them?
o Aggregate tables are added as additional logical table sources for the corresponding Fact table.
o This is done by dragging and dropping the aggregate table into the corresponding fact table.
After doing that, establish the column mappings and set the aggregation levels.
How do you know which report is hitting which table, either the fact table or the aggregate
table?
o After running the report, go to the Administration tab and click on Manage Sessions. There
you can find the queries that were run, and in the View Log option in Session Management
you can find which report is hitting which table.
Suppose I have a report which typically runs for about 3 minutes. What is the first step you
take to improve the performance of the query?
o Find the SQL query of the report in Admin -> Manage Sessions, run the SQL query in TOAD,
read the explain plan output, and modify the SQL based on the explain plan output.
Suppose you have a report which has the option of running on an aggregate table. How does the
tool know to hit the aggregate table, and what steps do you follow to configure that?
o Explain the process of Aggregate navigation.
Have you heard of Implicit Facts? If so, what are they?
o An implicit fact column is a column that will be added to a query when it contains columns
from two or more dimension tables and no measures. You will not see the column in the
results. It is used to specify a default join path between dimension tables when there are several
possible
alternatives.
o For example, there might be many star schemas in the database that have the Campaign
dimension and the Customer dimension, such as the following stars:
Campaign History star. Stores customers targeted in a campaign.
Campaign Response star. Stores customer responses to a campaign.
Campaign Order star. Stores customers who placed orders as a result of a campaign.
In this example, because Campaign and Customer information might appear in many
segmentation catalogs, users selecting to count customers from the targeted campaigns catalog
would be expecting to count customers that have been targeted in specific campaigns.
To make sure that the join relationship between Customers and Campaigns is through the
campaign history fact table, a campaign history implicit fact needs to be specified in Campaign
History segmentation catalog. The following guidelines should be followed in creating
segmentation catalogs:
Each segmentation catalog should be created so that all columns come from only one physical
star.
Because the Marketing module user interface has special features that allow users to specify
their aggregations, level-based measures typically should not be exposed to segmentation users
in a segmentation catalog.
(Assume you are in BMM layer) We have 4 dimension tables, in that, 2 tables need to have
hierarchy, then in such a case is it mandatory to create hierarchies for all the dimension tables?
o No, it's not mandatory to define hierarchies for the other dimension tables.
Can you have multiple data sources in Siebel Analytics?
o Yes.
How do you deal with case statements and expressions in Siebel Analytics?
Do you know about Initialization Blocks? Can you give me an example where you used them?
o Init blocks are used for instantiating session variables, for example, when a user logs in.
What is the query repository tool/utility in the Siebel/OBIEE Admin tool?
o It allows you to examine the repository metadata and search for objects in the repository
based on name and type.
o Examine relationships between metadata objects, like which column in the presentation layer
maps to which table in the physical layer.
Oracle doesn't recommend Opaque Views because of performance considerations, so why/when do
we use them?
o An opaque view is a physical layer table that consists of a SELECT statement. An opaque
view should be used only if there is no other solution.
Can you migrate the presentation layer to a different server?
How do you identify what are the dimension tables and how do you decide them during the
Business/Data
modeling?
o Dimension tables contain descriptions that data analysts use as they query the database. For
example, the Store table contains store names and addresses; the Product table contains product
packaging information; and the Period table contains month, quarter, and year values. Every
table contains a primary key that consists of one or more columns; each row in a table is
uniquely identified by its primary-key value or values
Why do we have multiple LTS in the BMM layer? What is the purpose?
What is the full form of rpd?
o There is no full form for rpd as such; it is just a repository file (RapidFile Database).
How do you disable cache for only particular tables in the rpd?
o In the physical layer, right-click on the table; there we have the option which says
cacheable.
How do you split a table in the rpd given a condition? (The condition given was Broker and
Customer in the same table.) Split Broker and Customer.
Can you use a physical join in the BMM layer?
o Yes, we can use a physical join in the BMM layer. When there is an SCD type 2, we need a
complex join in the BMM layer.
Can you use an outer join in the BMM layer?
o Yes, we can. When doing a complex join in the BMM layer, there is a Type option in which
outer join is available.
What is cache management? Name all the techniques and their uses. For an event polling table,
do you need the table in your physical layer?
o Monitoring and managing the cache is cache management. There are three ways to do that:
o Disable caching for the system (NQSConfig.INI file), Cache persistence time for specified
physical tables, and Setting the event polling table.
o Disable caching for the system (NQSConfig.INI file):
You can disable caching for the whole system by setting the ENABLE parameter to NO in the
NQSConfig.INI file and restarting the Siebel Analytics Server. Disabling caching stops all new
cache entries and stops any new queries from using the existing cache. Disabling caching
allows you to enable it at a later time without losing any entries already stored in the cache.
o Cache persistence time for specified physical tables:
You can specify a cachable attribute for each physical table; that is, if queries involving the
specified table can be added to the cache to answer future queries. To enable caching for a
particular physical table, select the table in the Physical layer of the Administration Tool and
select the option Make table cachable in the General tab of the Physical Table properties dialog
box. You can also use the Cache Persistence Time settings to specify how long the entries for
this table should persist in the query cache. This is useful for OLTP data sources and other data
sources that are updated frequently, potentially down to every few seconds.
o Setting the event polling table:
Siebel Analytics Server event polling tables store information about updates in the underlying
databases. An application (such as an application that loads data into a data mart) could be
configured to add rows to an event polling table each time a database table is updated. The
Analytics server polls this table at set intervals and invalidates any cache entries corresponding
to
the
updated
tables.
o The event polling table is a standalone table and doesn't require to be joined with other
tables in the physical layer.
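A hedged sketch of how an ETL job could notify the server through such a table (the column list follows the commonly documented S_NQ_EPT layout; verify against your schema):

    -- One row per physical table touched by the load; the server polls
    -- this table and purges the matching cache entries.
    INSERT INTO s_nq_ept
      (update_type, update_ts, database_name, catalog_name,
       schema_name, table_name, other_reserved)
    VALUES
      (1, SYSDATE, 'OLAP_DB', NULL, 'SIEBEL', 'W_SALES_F', NULL);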
What is data level security?
o This controls the type and amount of data that you can see in a report. When multiple users
run the same report, the results returned to each depend on their access rights and roles in
the organization. For example, a sales vice president sees results for all regions, while a
sales representative for a particular region sees only data for that region.
What is the difference between Data Level Security and Object Level Security?
o Data level security controls the type and amount of data that you can see in reports.
Object level security provides security for objects stored in the Siebel Analytics Web
catalog, like dashboards, dashboard pages, folders, and reports.
How do you implement security using External Tables and LDAP?
o Instead of storing user IDs and passwords in a Siebel Analytics Server repository, you can
maintain lists of users and their passwords in an external database table and use this table for
authentication purposes. The external database table contains user IDs and passwords, and
could contain other information, including group membership and display names used for
Siebel Analytics Web users. The table could also contain the names of specific database catalogs
or schemas to use for each user when querying data
o Instead of storing user IDs and passwords in a Siebel Analytics Server repository, you can
have the Siebel Analytics Server pass the user ID and password entered by the user to an
LDAP(Lightweight Directory Access Protocol ) server for authentication. The server uses clear
text passwords in LDAP authentication. Make sure your LDAP servers are set up to allow this.
If you have 2 facts and you want to report on one at quarter level and the other at month
level, how do you do that with just one time dimension?
What is Stand-Alone Siebel Analytics?
o Deploying the Siebel Analytics platform without other Siebel applications is called Siebel
Analytics Stand-Alone. If your deployment includes other Siebel Analytics Applications, it is
called integrated analytics.
How do you sort columns in the rpd and on the web?
o On the web, sort on the column directly; in the rpd, set its Sort Order Column property.
If you want to create a new logical column, where will you create it (in the repository or the
dashboard), and why?
o I will create the new logical column in the repository, because if it is in the repository
you can use it for any report. If you create the new logical column in the dashboard, it only
affects the reports on that dashboard; you cannot use that new logical column for another
dashboard (or request).
What is a complex join, and where is it used?
o We can join a dimension table and a fact table in the BMM layer using a complex join. When
there is an SCD type 2, we have to use a complex join in the BMM layer.
If you have dimension tables like customer, item, time and a fact table like sale, and you
want to find out how often a customer comes to the store and buys a particular item, what
will you do?
o Write a query such as: SELECT customer_name, item_name, sale_date, SUM(qty) FROM
customer_dim a, item_dim b, time_dim c, sale_fact d WHERE d.cust_key = a.cust_key AND
d.item_key = b.item_key AND d.time_key = c.time_key GROUP BY customer_name, item_name,
sale_date
Have you worked on a standalone or an integrated system?
o Standalone.
If you want to limit the users by a certain region to access only certain data, what would
you do?
o Using data level security.
o In the Siebel Analytics Administrator tool: go to Manage -> Security; in the left hand pane
you will find Users, Groups, LDAP servers, Hierarchy.
Select the user, right-click and go to Properties; you will find two tabs named Users and
Logon. Go to the User tab and click the Permissions button in front of the user name you have
selected. As soon as you click Permissions you will get a new window, User/Group Permissions,
having three tabs named General, Query Limits and Filters. You can specify your condition on
the Filters tab, in which you can select presentation tables, presentation columns, logical
tables and logical columns, and apply the condition according to your requirement for the
selected user or group.
If there are 100 users accessing data, and you want to know the logging details of all the
users, where can you find that?
o To set a user's logging level:
1. In the Administration Tool, select Manage > Security. The Security Manager dialog box appears.
2. Double-click the user's user ID. The User dialog box appears.
3. Set the logging level by clicking the Up or Down arrows next to the Logging Level field
How do you implement an event polling table?
o Siebel Analytics Server event polling tables store information about updates in the underlying
databases. An application (such as an application that loads data into a data mart) could be
configured to add rows to an event polling table each time a database table is updated. The
Analytics server polls this table at set intervals and invalidates any cache entries corresponding
to the updated tables.
Can you migrate the presentation layer only to a different server?
o No, we can't migrate only the presentation layer. (Ask for more information and use one of
the above answers.)
o Create an ODBC connection on the different server and access the layer.
How do you define the relationship between facts and dimensions in the BMM layer?
o Using a complex join, we can define relationships between facts and dimensions in the BMM layer.
What is the Time Series Wizard? When and how do you use it?
o We can do comparisons of certain measures (revenue, sales etc.) for current year vs
previous year; we can do it for month, week and day as well.
o Identify the time periods that need to be compared, and then the period table keys to the
previous time period.
o The period table needs to contain a column that will contain Year Ago information.
o The fact tables need to have year ago totals.
o To use the Time Series Wizard: after creating your business model, right-click the business
model and click on Time Series Wizard.
o The Time Series Wizard prompts you to create names for the comparison measures that it
adds to the business model.
o The Time Series Wizard prompts you to select the period table used for the comparison
measures
o Select the column in the period table that provides the key to the comparison period. This
column would be the column containing Year Ago information in the period table.
o Select the measures you want to compare and then Select the calculations you want to
generate. For ex: Measure: Total Dollars and calculations are Change and Percent change.
o Once the Time Series Wizard is run, the output will be:
a) Aliases for the fact tables (in the physical layer)
o In the General tab of the Logical Table Source etc. you can find Generated by Time Series
Wizard in the description section.
o Then you can add these comparison measures to the presentation layer for your reports.
o Ex: Total sales of current qtr vs previous qtr vs same qtr year ago.
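For example, the comparison measures the wizard generates amount to logical-column formulas of roughly this shape (the measure names are illustrative; the Year Ago measure maps to the fact-table alias):

    -- Change and Percent Change derived from a base and a Year Ago measure
    Change         = "Total Dollars" - "Total Dollars Year Ago"
    Percent Change = 100.0 * ("Total Dollars" - "Total Dollars Year Ago")
                           / "Total Dollars Year Ago"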
What are other ways of improving summary query reports, other than Aggregate Navigation
and Cache Management?
Indexes
Join algorithms
3. If you have more than 3 repository files mentioned in your NQSConfig.ini file as default,
which one gets loaded to memory when the BI Server is started?
Ex: Star = SamplerRepository1.rpd, DEFAULT;
Logical: which Logical table sources to join together from the Logical tables.
In the Physical layer.
12. Is it mandatory to have hierarchies defined in your repository? If Yes, where does it
help? If No, why not?
A.) Level 1: Logs the SQL statement issued from the client application and logs elapsed times
for query compilation, query execution, query cache processing, and back-end database
processing. Logs the query status (success, failure, termination, or timeout). Logs the user
ID, session ID, and request ID for each query.
A.) Level 2: Logs everything logged in Level 1 and, additionally, the repository name,
business model name, presentation catalog (called Subject Area in Answers) name, SQL for the
queries issued against physical databases, queries issued against the cache, the number of
rows returned from each query against a physical database and from queries issued against the
cache, and the number of rows returned to the client application.
19. What are the different places (files) to view the physical SQL generated by an Answers
report?
together from the database into your physical layer. Is this relationship still preserved in
the OBIEE physical layer?
A.) Yes, it will be.
22. Same as the question above, but what happens if you import each table separately?
A.) Keys will be affected but not the joins.
23. If Table 1 and Table 2 are dragged from the physical layer to the BMM layer, which table
becomes a Fact Table and which becomes a Dimension Table?
A.) The table with the primary key becomes the Dimension and the table with the foreign key
becomes the Fact table.
24. What if the tables (Table 1 and Table 2) are not joined? What happens in the BMM layer?
A.) Both act like Fact tables in the BMM.
25. How many server instances can coexist in an OBIEE cluster?
A.) There are two types of server instances:
Master server. A master server is a clustered Oracle BI Server to which the Administration Tool
connects for online repository changes. In the NQClusterConfig.INI file, the parameter
MASTER_SERVER specifies the Oracle BI Server that functions as the master server.
Slave server. A slave server is a clustered Oracle BI Server that does not allow online repository
changes. It is used in load balancing of ODBC sessions to the Oracle BI Server cluster. If the
master server is ever down, the Administration Tool will connect to an available slave server,
but in read-only mode.
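For reference, a hedged sketch of the NQClusterConfig.INI entries involved (host names are invented and the parameter list is abbreviated; check the file shipped with your install):

    [ NQ_CLUSTER ]
    ENABLE_CONTROLLER = YES;
    SERVERS = BISRV1, BISRV2;
    MASTER_SERVER = BISRV1;
    SERVER_POLL_SECONDS = 5;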
26.Aggregation rules are set on top of columns (Physical Columns or Logical
Columns or Both)
A.)Logical Columns.
31. What are Chronological Keys in OBIEE? How are they different from Logical Keys?
A.) A chronological key is the key which uniquely identifies the data at a particular level;
it is mostly used in time dimensions where time series functions are used. A logical key, on
the other hand, is used to define the unique elements in each logical level. A logical level
may have more than one level key; when that is the case, specify the key that is the primary
key of that level. Level keys are used to specify the columns used for drill down and those
used as primary keys.
32. What are the different ways to authenticate a user in an OBIEE system? Can OBIEE
authenticate a user passing through multiple authentication methods?
34. You are trying to open a repository using the Admin tool, and when you click Open Online
a dialogue box pops up saying your rpd is available in read-only mode. How can you edit this
repository by opening it online?
A.) We can avoid this error by deleting the .log and .sav files in the repository directory
and restarting the services.
35. What is the default configuration for caching in the NQSConfig.ini file? What method does
OBIEE use for clearing its cache?
simultaneously and then check in changes. This can be done by setting up the multi user
development environment.
What are the different caches that can be used to serve customers faster? (Remember, we are
not talking about the cache in the BI Server only.) How does a dashboard request get served
from all the available caches?
enforce naming standards for all customer-facing apps. What happens to all the dashboard
requests written prior to this change? Do they function properly or do they appear broken? If
yes, they will function; how do they work? If not, and the reports appear broken, what can you
do to fix this? Give examples.
A.) If an Alias table is available for the Presentation table, then all the reports work fine.
44. What is a federated query? How does OBIEE develop these federated queries?
A.) Federated queries are queries where data is brought from multiple databases and
consolidated/joined in the business layer/logical layer. OBIEE does this quite a lot no matter
where the data is; all it needs is a relation between the tables coming from multiple databases.
45. What is an in-memory query? How is this implemented in OBIEE?
A.) I don't think OBIEE does in-memory queries. It compensates for this by using features
like caching; for OBIEE, caching is present in two places, the presentation cache and the
server cache.
The opaque views are tables that are created with a join or other query data that contains
SELECT query output. Opaque views make the logical understanding simple for implementation,
but there are heavy performance constraints; they are used only when there is no other way to
get to the final solution.
How can you map each of the reports across to the different tables that are being accessed?
The Admin tool has the Manage Sessions tab which gives you access to the logs that are
generated for each session. After the report generation sessions, you can easily view the log
to map each request to the corresponding tables and databases.
How can you migrate the presentation layers across to different servers?
The presentation layer is dependent on the database underlying each server. Therefore the
presentation layer alone cannot be migrated as a stand-alone aspect of the database. What we
can do instead is have an ODBC or similar database connection established from the different
servers to the particular main system, and then carry over the presentation semantics from
the other server with the database-oriented changes in the logical layer.
How will you impose access limitation to the database according to the region of access?
Data level security imposed according to the data in a certain column can be used to restrict
access by region.
The Siebel Analytics admin tool gives you control over user access to the objects.
Creating a logical column at the dashboard level affects the tables only at that view level
and not on other dashboards and requests. Logical columns created at the repository level
take effect on all the requests and reports from different view levels. So it is always
preferable to create the logical column at the repository level.
How can you improve and quicken the way of dealing with summary query reports?
Implementing algorithms for joins at the business layer helps you get better speed.
Rewriting the views and other related queries according to your specific requirements.
The report designs should contain exactly what is needed, nothing more and nothing less.
How does the data pass through the three layers of view?
The three layers involved in data access and modeling are:
Physical layer: This is the layer where the actual raw data is stored in the form of tables.
These are very descriptive data and are meant for use by the business layer of logic.
Business layer: The higher-level interface to the data sources; it makes the data meaningful
to the business model.
Presentation layer: This is what is given out to the user. All the processed and categorized
data gives a clear picture of the real-world entities, using the raw data or combinations of
multiple data from the physical sources through the business layer logic.
Is it possible for inserting a new column in the BMM layer? How does it help?
The new column can be added to the fact table in the BMM layer by right clicking the specific table and selecting add New
logical Column. This comes in handy when we are dealing with same standard calculation for all the rows in the table. The
data fetches can be made quicker with the already stored calculated data in the logical layer.
What would you do if you are provided with multiple dimensions and multiple fact tables to connect to?
We create the logical fact table on the higher level BMM layer. This logical table will have the source pointing to the
multiple fact tables. This logical table will be used to be connected with the multiple dimensions.
60. How do you deal with a situation like this when data is coming from a snowflaked data warehouse:
Fact > Dimension 1 >-< Dimension 2 >-< Dimension 3
Dimension 1 and Dimension 2 have a M:M relationship, and the same for Dimension 2 and Dimension 3.
61. How do you resolve a M:M relationship other than using a bridge table?
62. Let's say that you have three tables joined to each other which have been set to be cacheable at the physical layer, with Table 1
set at cache persistence time 45 min, Table 2 at 60 min and Table 3 at 30 min. You ran your Answers request at 9 AM,
again at 9:15 AM and again at 9:45 AM. Is the result set the same for all these 3 runs at different times? If so, why? If
not, why not? There are transactions going on and data is being updated in these tables almost every 10 min.
63. Let's say you are on your local box with an rpd and want to make sure that it can be edited only in offline mode. How can
you accomplish this? Is this possible? What settings would you change?
64. Assume there is no MUDE in your environment. Three developers have been working on three separate projects and they
have developed their rpds. As the Server Admin, you were asked to promote these three rpds. What are the next steps for you as
an admin to move them to QA and production? Are there any OBIEE tools that can be handy in this situation?
65. How do you get this type of interaction in your dashboard report? When clicking on a report column, I want multiple
options for drill down. Remember that I did not create any hierarchies in my rpd.
66. Let's say that you want to include a prompt in your dashboard with Start Date, End Date and some measures and
dimension attributes. You want to use the SQL Results feature to automatically populate Start Date and End Date, with Start
Date as trunc(sysdate - 1) and End Date as trunc(sysdate). What would you do? Will you encounter any errors? How do you
rectify this problem?
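One hedged way to approach this: set the prompt's default to SQL Results and compute the date in logical SQL, for example (the subject area and column names are invented; the CASE WHEN 1=0 clause just ties the expression to a date column):

    SELECT CASE WHEN 1 = 0 THEN Periods."Calendar Date"
           ELSE TIMESTAMPADD(SQL_TSI_DAY, -1, CURRENT_DATE) END
    FROM "Sales Subject Area"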
67. How many business models can a presentation catalog refer to? How many presentation catalogues can be created from a
single business model?
68. How can we create nested presentation folders (nested presentation tables) in your presentation catalog? Let's say we have
Facts all lumped together in one folder and want to subdivide them into Facts Logical and Facts Strategic folders. How
would you create this nested structure in the presentation catalog?
69. What are logical keys? Why would you need to create them? Does the physical key get automatically converted to a
logical key when the table is moved from the physical layer to the business model?
70. Let's say you have a report with 4 dimensional attributes and 2 fact measures in the report. What's the default sort behavior
of OBIEE when you try to run the report? On what column/columns does it sort? How do you know this?
71. In the above scenario, is it better to have at least one sort column defined in your criteria manually, or just leave it without any
sort criteria mentioned? What's the difference in performance?
for new entries. The techniques below can be broadly distinguished as manual, automatic and
programmatic ways to manage caching:
Cache Seeding
There are several means to do this business:
1) Set the global cache parameter on - this will cache queries running on any physical
tables. By default, every table in the repository is cacheable.
2) Switch on the Cacheable property - this provides a table-level benefit and extra
customisation of which physical tables should participate in generating the query cache. E.g.:
sometimes a user would be more interested in caching a giant Fact table rather than tiny
dimension tables.
3) Scheduling an iBot - an iBot can be properly configured and used for cache seeding purposes.
This will silently build the cache without any manual intervention, possibly triggered in
a time window after the daily ETL load finishes, or further customised and automated based
on the result retrieved from another iBot in a chained request. The second iBot necessarily
pings the DB to identify whether a database update has been done (after the ETL finishes)
before passing the request on to trigger its chained counterpart. This builds the data and
query cache for a dashboard request, not for the entire set of tables.
4) Running the nQCmd utility:
Another fantastic way to handle query caching which doesn't have any dependency on the ETL
load, but with the overhead of accumulating the superset of the actual physical queries fired
against a request/report and putting them down in a single file to pass as a parameter to
nQCmd.exe. This necessarily needs to be invoked after the ETL run, by the ETL job itself. It
can be done using remote login to the BI server to trigger nQCmd automatically, and thus the
iBot scheduling time dependency can be avoided.
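A hedged example of such an invocation (the DSN, credentials and file names are placeholders):

    nqcmd -d AnalyticsWeb -u Administrator -p password -s seed_queries.sql -o seed_output.log

Here seed_queries.sql would contain the logical SQL of the reports whose cache you want to warm, one statement per line.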
Cache Purging
A very important and crucial mechanism, which must work well for caching to be a success story:
1) Manual purging - usually a dedicated, dumb administration job, kind of an overhead for a
company. This can be done simply by deleting the existing cache TBL files or by firing a purge
from the BI Admin Tool. Purging can be done by category, i.e. Repository, Subject Areas, Users
or physical tables, from the Admin tool in online mode.
2) Calling ODBC Extensions - bundled ODBC extension functions like SAPurgeCacheByDatabase(),
SAPurgeCacheByQuery(), SAPurgeCacheByTable() and SAPurgeAllCache() can be called to free the
cache for specific queries, tables, databases, or all of it; see the Oracle documentation for
details. These should be called using the nQCmd utility, just after the ETL load and before
the cache seeding, to ensure there is no time-window gap or dependency.
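For instance, a purge script fed to nQCmd could contain calls of this shape (the argument values are placeholders; consult the Oracle documentation for the exact signatures):

    {call SAPurgeCacheByTable('OLAP_DB', NULL, 'SIEBEL', 'W_SALES_F')};
    {call SAPurgeAllCache()};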
3) Event polling table - a nice and robust concept, but not so nice to manage, and it leads to
extra overhead. Still, it is a good technique to make the BI server aware that a DB update has
been done so that it can carry on with its business of purging. The cache polling frequency is
an important setting and should be predefined and robust to make it a success. The poll table
is populated by an auto-insert DB trigger each time a target DB table is updated; the
Analytics server polls that table at a specific set of intervals and invalidates cache entries
corresponding to the updated tables.
4) Calling iBots to purge the cache - this can be done by calling custom java scripts, which
in turn call nQCmd and the ODBC extensions to free the cache. However, the catch in this
feature is again that the iBots need to be scheduled just after the ETL run and before the
cache seed, so you cannot be sure about stale data if the ETL doesn't meet its SLA. Again,
this can be done by setting a chained iBot to trigger the purging activity at the proper time.
So the ground rule is: never rely on the iBot schedule time alone; pick the status from the DB
to trigger it.
Trade-off
Not purging the outdated caches , known as Stale Caches , can potentially return inaccurate
results over time .Think of a situation where your Cache didnt get purged on time after ETL
load finishes. In this scenario though database has been updated but the change is not going to be
reflected in your cached data as the seeded cache having outdated data at your filesystem and
thus results a stale data which would throw inconsistent and wrong result .This potentially will
cause huge confusion to the users mind .Thus Cache retention ,refresh time is important.
Not only that: on a large platform, caching should be separated and spread across multiple
folders/mountpoints for better utilization of I/O across the filesystems. The query cache
storage should be on local, high-performance, highly reliable storage devices. The size
consumed by the cache files can become a bottleneck for performance across disk usage space;
it should be used with a proper cache replacement algorithm and the policy for the maximum
number of cache entries defined in the NQSConfig.INI file. A potential cache-related problem
is found in clustered environments, where the cache files built on one native host are not
sharable with another; this leads to the same cache being built across cluster participants,
as it cannot be ensured which BI server will handle which request. Until the request gets
processed, a cluster participant cannot determine that there was already a cache hit based on
the generated query, so that the request need not be processed by the other clustered BI
server. Again, purging needs to be done on both clustered servers. The Cluster Aware Cache is
propagated across clustered servers only when the cache is seeded.