
Major changes in OBIEE 11g:

1. Hierarchical Columns and Enhancements to Pivot Table Views

You can include more than one hierarchical column in a pivot table, and you can mix and match attribute columns and hierarchical columns in the same view. You can also nest hierarchical columns within each other, such as an analysis where I've nested Times (a time dimension) within the Staff (parent-child) hierarchical column. Pivot tables themselves have had a revamp in this release, with one of the key features being the ability to swap dimensions around while the pivot table is displayed in the dashboard (in 10g, you had to return to Answers to rearrange the layout). You can also sort the pivot table by clicking the up and down arrows that appear over columns or along rows, or you can right-click anywhere in the pivot table and access a contextual menu from there. Another feature in this new release is the ability to create dynamic groups (often referred to as custom aggregates). Hierarchical columns also bring another bonus, in the form of being able to access alternate hierarchies in a dimension.

There's one other major change with the introduction of hierarchical columns: filtering. For attribute columns, you can still filter in the same way, picking the column and then setting up the filter (Product Name = Shoes, or Amount < 100, for example). With hierarchical columns, though, you set up step-by-step filters, which will seem familiar to anyone who used Oracle BI Beans or Discoverer for OLAP in the past.

2. New Visualizations, Dashboard Controls and Interactions

An updated graphing engine, new graph formatting tools and the introduction of the slider control.
Master-detail linking of visualizations: this allows you to set up views within an analysis that respond to values being clicked on in other views.
New dashboard prompts and built-in mapping capability.
Enhanced set of dashboard prompt controls: in the 10g release we could use drop-down menus, text boxes, date pickers and multi-select controls to pass parameters to dashboard requests, but in 11g this has been expanded to include radio buttons and list boxes.

3. Basic System Administration

A big change is around security. Users and Application Roles (roughly analogous to groups in 10g) are now defined in the WebLogic Server admin console, and you use the Security Manager to define additional links through to other LDAP servers, register custom authenticators, and set up filters and other constraints. There are also two additional default users: OracleSystemUser is used by the various OBIEE web services to communicate with the BI Server, and BISystemUser is used by BI Publisher to connect to the BI Server as a data source (both default to the same password as the weblogic admin user you set up during the install).

4. New Functions in the RPD

CALCULATEDMEMBER, AGGREGATE AT, ISCHILD, ISPARENT, ISROOT, ISANCESTOR and ISDESCENDANT, PERIODROLLING, EVALUATE_ANALYTIC

CALCULATEDMEMBER: This function provides the ability to generate custom calculated members within a hierarchy. For example, it is possible to create a calculated member from two members at two different levels in a hierarchy.
AGGREGATE AT: This function provides the same functionality as a level-based measure, i.e. filters applied on the dimension at whose level the measure is aggregated will not be applied to that measure.
Hierarchical functions: BI EE 11g now supports hierarchical functions such as ISROOT, ISCHILD etc. that can be used to traverse a parent-child hierarchy.
PERIODROLLING: This is a new time-series function that can be used to do rolling time-series analysis. All time-series functions are now supported directly from Answers.
EVALUATE_ANALYTIC: This is a new Evaluate function that can be used to function-ship Oracle database analytic functions. (A hedged logical SQL sketch of AGGREGATE AT and PERIODROLLING appears after item 14 below.)

5. Support for standby databases in the Physical layer.
6. Multi-User Development has been enhanced significantly.
7. Support for vertical and horizontal clustering.
8. RPD compression: There will be a significant difference in the size of RPDs in the 11g release due to a new compression feature that is enabled by default. Also, now that users/groups are no longer stored in the repository, that will further reduce the size.
9. Import of metadata from the connection pool directly.
10. Ability to control write-back in the RPD, and support for presentation-layer hierarchies.
11. Patching of repositories: BI EE 11g now provides more variations of merge for doing incremental migration. This uses the concept of merge and generates incremental XML patch files which can then be applied to the repository that needs to be patched.
12. Ability to hide level-based measures while browsing a hierarchy.
13. Upgrading the BI EE 10g repository and Web Catalog: A direct copy and paste into the 11g version will not work. The upgrade process involves the following steps:
    1. Install the new 11g version on either the same machine or a different machine.
    2. Run the Upgrade Assistant utility to upgrade the repository and Web Catalog.
    3. Migrate other 10g-specific customizations manually to the 11g instance. These customizations include: a. any static files added to the 10g app server; b. custom XML messages; c. styles and skins (these upgrades will have to be done carefully as some CSS files have changed in this release).
    4. Upgrade the Scheduler schema (both BI Delivers and BI Publisher) through the upgrade utility.
    5. Upgrade BI Publisher.
14. New Features in BI Publisher 11gR1: In 10g, the data model was defined as part of the report definition, whilst in 11gR1 it's a separate object that can be re-used across multiple report definitions. If you're running BI Publisher 11gR1 integrated with OBIEE 11gR1, report definitions (including the data model, templates and the definition itself) are stored in the web catalog along with Answers analyses, Agent definitions, dashboards, KPIs and scorecards, keeping everything in one place. There's also now the concept of style templates and sub-templates, something that'll be useful for organizations churning out lots of reports with a similar look and feel and with re-usable, modular elements. Templates can now be created and edited online, removing the dependency on Microsoft Word or other offline template editors. There's a new Interactive Viewer that allows you to view and interact with reports online. Templates created with the Online Template Builder and viewed using the Interactive Viewer can take advantage of the same ADF DVT graphing and visualization engine used by OBIEE 11gR1, giving a consistent look and feel to reports across both products.
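
To make the AGGREGATE AT and PERIODROLLING descriptions in item 4 concrete, here is a minimal logical SQL sketch of the kind you might issue from an Answers analysis. The "Sales" subject area, the presentation column names and the Year/Month level names are all hypothetical, and the exact quoting and level qualification should be verified against the 11g logical SQL reference:

    -- Revenue pinned at the Year level of the Time dimension, so filters below
    -- Year on that dimension do not change it (behaves like a level-based measure)
    SELECT "Time"."Year", AGGREGATE("Sales Facts"."Revenue" AT Year)
    FROM "Sales"

    -- Rolling three-period total: from two periods back up to the current period
    SELECT "Time"."Month", PERIODROLLING("Sales Facts"."Revenue", -2, 0)
    FROM "Sales"

The parent-child functions (ISROOT, ISCHILD, ISANCESTOR, ISDESCENDANT and so on) are typically used inside filter conditions on a parent-child hierarchy rather than in the SELECT list.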

OBIEE Testing:

1) First, run the report, then go to Settings > Administration > Manage Sessions and open the log view to verify your query. You can then run the equivalent query manually in a tool such as TOAD (i.e. directly against your database). Compare the report data with the database results: if both results match, your report is correct; otherwise it is not.

2) OBIEE testing has two stages: 1. UI testing and 2. Data testing.
1. UI testing: check all the column labels, filters, hyperlinks and conditional formats; all of these come under UI testing.
2. Data testing: take the query from the log, write your own equivalent query, check that both return the same data, and make sure that both queries hit the same tables.

Testing challenges:

Source to target mapping: Source to target mapping helps the designers, developers and testers in the project understand where the data comes from and how it is transitioned and/or transformed into the final form displayed to end users. The source to target mapping sheet should identify the original source column name, any filter conditions or transformation rules used in the ETL processes, the destination column name in the data warehouse or data mart, and the definitions used in the repository (RPD file) for each metric/dimension.

Categorizing the metrics: It is important to classify the metrics from multiple perspectives, such as their frequency of use, potential performance impacts, and the complexity of the calculations involved.

Authentication and authorization: The project's security requirements should clearly document the authorization and authentication needs. The security test cases have to be written from the perspective of different user roles. At times these tests can be complex, for example when the data is accessed across firewalls and some portion of the application is open to customers or suppliers via the internet.

Dashboard charts and filter criteria: User interface testing should encompass tests with multiple options in the available filter criteria. OBIEE gives enough drill-down features to verify the underlying data on the clickable components of the charts. Test cases should be detailed enough to verify data aggregated at the various layers.

Testing in hops: In a typical OBIEE project, it is advisable to test in multiple hops rather than attempting to test everything at once.
a) The first set of tests can verify the accuracy of the column-to-column transport of the data between the source and target. This verification is typically done using SQL statements on the source and target databases (a hedged sketch of this kind of comparison appears after this section).
b) The next step is to verify the accuracy of the repository (the RPD file). These tests include testing with appropriate dimensional filters on the metrics and the formulas used to compute those metrics. Testers can build two sets of comparable queries within the repository interface: the first set uses the metrics, and the second set uses the base measures (and the formula used to compute the corresponding metric). The formulas defined within the source to target mapping can be used to generate the second set of queries. These tests verify the metrics defined within the repository.
c) The next step is to verify the dashboards/reports against comparable queries on repository metrics. In these tests, testers verify dashboard charts/reports against corresponding results from queries they execute on the metrics of the repository.
d) Finally, the functional interface tests cover lookups, performance, ease of use, look and feel, etc. The first three types of tests are performed by testers who can create simple SQL statements.

User acceptance criteria: Users typically have an existing legacy mechanism to verify whether what is displayed in the new solution makes sense. Testers should dig into this and understand how the end users built the project acceptance criteria. Testers should challenge the assumptions made by the business community in deriving the acceptance criteria. This activity helps build an end-user perspective into the testing effort from early on.
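
As a purely illustrative example of hop (a) and of data testing in general, the queries below compare a row count and an aggregated measure between a hypothetical source table and a hypothetical warehouse star schema. All schema, table and column names are assumptions; substitute the ones recorded in your source to target mapping sheet:

    -- Row count comparison: source vs. target (hypothetical names throughout)
    SELECT COUNT(*) FROM src.orders WHERE order_date >= DATE '2011-01-01';

    SELECT COUNT(*)
      FROM dw.fact_sales f
      JOIN dw.dim_date d ON f.date_key = d.date_key
     WHERE d.calendar_date >= DATE '2011-01-01';

    -- Measure comparison: the totals per region from both queries should match,
    -- and should also match the figures shown on the dashboard report
    SELECT s.region, SUM(o.order_amount) AS total_amount
      FROM src.orders o
      JOIN src.stores s ON o.store_id = s.store_id
     GROUP BY s.region;

    SELECT d.region, SUM(f.sales_amount) AS total_amount
      FROM dw.fact_sales f
      JOIN dw.dim_store d ON f.store_key = d.store_key
     GROUP BY d.region;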

Some of the items that can be included for testing are:
- Validate access to the system (security)
- Validate the navigation to a particular screen, dashboard, page and report
- Validate access to the right data, screens, dashboards, pages and reports
- Validate the response time to access the application
- Validate the response time to access a particular dashboard, page and report
- Validate the capability and ease of using prompts to select data on a report with specific attributes
- Validate the use of drill-down on reports
- Validate the use of navigation on reports
- Validate that the attributes used to display the metrics are correct
- Validate that the metrics on the reports provide helpful and valuable information to help users measure and manage their business processes

OBIEE Performance Checklist:
- Using aggregates: enabling OBI to generate queries against smaller, summarized tables (see the sketch after this list).
- Using aggregate navigation: allowing OBI to transparently intercept queries and rewrite them against optimized data sources.
- Constraining results using a WHERE clause: limiting the rows returned from a data source.
- Caching: fulfilling a query from a local cache as opposed to processing the query through a data source.
- Limiting the number of initialization blocks: initialization block queries are executed when OBI is started and when users log in.
- Connection pools.
- Limiting select table types (opaque views): reduces the number of SELECT statements executed by OBI and may avoid lengthy SQL queries.
- Modeling dimension hierarchies correctly: ensuring that OBI chooses the most economical source.
- Turning off logging: no overhead for OBI to generate log files.
- Setting query limits: enabling OBI to track and cancel runaway queries.
- Pushing calculations to the database: automatically pushing certain operations to the database based on database feature entries.
- Exposing materialized views in the Physical layer: exposing materialized views explicitly guarantees that OBI chooses the most economical table source to satisfy a query.
- Using database hints: forcing the database query optimizer to execute the statement in a more efficient way.
- Setting the NQSConfig.ini parameters that affect OBI performance:
  - SORT_MEMORY_SIZE: specifies the maximum amount of memory to be used for each sort operation.
  - SORT_BUFFER_INCREMENT_SIZE: specifies the increment by which the sort memory is increased as more memory is needed.
  - VIRTUAL_TABLE_PAGE_SIZE: specifies the size of a memory page for OBI internal processing.
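
As a rough sketch of the first two items (aggregates and aggregate navigation), the statement below builds a summary table at the Month/Brand grain from a hypothetical detail fact table; all table and column names are illustrative:

    -- Detail grain: one row per day / product / store in fact_sales
    CREATE TABLE agg_sales_month_brand AS
    SELECT d.year_month,
           p.brand,
           SUM(f.sales_amount) AS sales_amount
      FROM fact_sales  f
      JOIN dim_date    d ON f.date_key    = d.date_key
      JOIN dim_product p ON f.product_key = p.product_key
     GROUP BY d.year_month, p.brand;

Once this table is imported into the Physical layer and mapped as an additional logical table source at the Month and Brand levels, aggregate navigation lets OBI satisfy summary requests from the smaller table transparently.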

Different approaches to data modeling


Data models are about capturing and presenting information. Every organization has information that is typically either in operational form (such as OLTP applications) or informational form (such as the data warehouse). Entity-relationship (E/R) and dimensional modeling, although related, are extremely different. Of course, all dimensional models are also really E/R models; however, when we refer to E/R models here, we mean normalized E/R models. Dimensional models are denormalized. People use E/R modeling primarily when designing highly transaction-oriented OLTP applications. When working with data warehousing applications, E/R modeling may be good for reporting and fixed queries, but dimensional modeling is typically better for ad hoc query and analysis. For OLTP applications, the goal of a well-designed E/R data model is to get data into the database (insert, update, delete) quickly and efficiently. On the data warehousing side, the goal of the (dimensional) data model is to get data out of the warehouse (select).

Dimensional modeling
Dimensional modeling exists to overcome performance issues with large queries in the data warehouse: it provides a way to improve query performance for summary reports without affecting data integrity. The main disadvantage is that this performance comes at the cost of extra storage space: a dimensional database generally requires much more space than its relational counterpart. However, with the ever-decreasing cost of storage, that cost is becoming less significant.

What is a dimensional model?


A dimensional model is also commonly called a star schema. It is very popular in data warehousing because it can provide much better query performance. It typically consists of a large table of facts (known as the fact table), surrounded by a number of other tables that contain descriptive data, called dimensions.
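
For illustration, a typical star schema query joins the fact table to two of its dimensions and aggregates a measure; the table and column names below are hypothetical:

    SELECT d.calendar_year,
           s.region,
           SUM(f.sales_amount) AS total_sales
      FROM fact_sales f
      JOIN dim_date   d ON f.date_key  = d.date_key
      JOIN dim_store  s ON f.store_key = s.store_key
     GROUP BY d.calendar_year, s.region;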

Fact table characteristics


- The fact table contains numerical values of what you measure. For example, a fact value of 20 might mean that 20 widgets have been sold.
- Each fact table contains the keys to the associated dimension tables. These are called foreign keys in the fact table.
- Fact tables typically contain a small number of columns.
- Compared to dimension tables, fact tables have a large number of rows.
- The information in a fact table has characteristics such as: it is numerical and is used to generate aggregates and summaries; data values need to be additive, or semi-additive, to enable summarization of a large number of values.
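
A minimal sketch of a fact table with these characteristics, using hypothetical names: the three keys are foreign keys to the surrounding dimension tables, the column count is small, and both measures are additive:

    CREATE TABLE fact_sales (
        date_key      INTEGER NOT NULL REFERENCES dim_date (date_key),
        product_key   INTEGER NOT NULL REFERENCES dim_product (product_key),
        store_key     INTEGER NOT NULL REFERENCES dim_store (store_key),
        quantity_sold INTEGER,              -- additive measure
        sales_amount  DECIMAL(12,2)         -- additive measure
    );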

Dimension table characteristics


- Dimension tables contain the details about the facts. That, for example, is what enables business analysts to better understand the data and their reports.
- The dimension tables contain descriptive information about the numerical values in the fact table; that is, they contain the attributes of the facts. For example, the dimension tables for a marketing analysis application might include attributes such as time period, marketing region, and product type.
- Because the data in a dimension table is denormalized, it typically has a large number of columns.
- The dimension tables typically contain significantly fewer rows of data than the fact table.
- The attributes in a dimension table are typically used as row and column headings in a report or query results display.
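
A matching sketch of one denormalized dimension for the hypothetical fact table above: attributes such as brand and category, which would live in separate normalized tables in an E/R model, are flattened into one wide table whose columns become report headings:

    CREATE TABLE dim_product (
        product_key   INTEGER PRIMARY KEY,   -- surrogate key referenced by fact_sales
        product_name  VARCHAR(100),
        product_type  VARCHAR(50),
        brand         VARCHAR(50),
        category      VARCHAR(50),
        pack_size     VARCHAR(30),
        list_price    DECIMAL(10,2)
    );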
