

5.08

This is the file that will be loaded into our InfoCube.

We have already created the above InfoCube.

7.32

Then click the Extraction tab:

If you have a situation where you need to extract from the files every day, dynamically and without the intervention of any consultant, then you have the option of writing an ABAP routine to facilitate this, in which case the system will pick up the file automatically.

Give the name of the Routine as under and click Editor

12.29

The help of the ABAPer is to be taken for the coding.
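For illustration, a minimal sketch of such a routine body is given below. The FORM skeleton with the CHANGING parameters p_filename and p_subrc is generated by the system when you click Editor; only the body is written by us, and the path and file-name pattern here are assumptions.

* Sketch of the routine body only. The surrounding FORM skeleton
* (CHANGING p_filename p_subrc) is generated by the system.
* The path C:\BW\ and the date-stamped name are assumptions.
  CONCATENATE 'C:\BW\Tr_' sy-datum '.csv' INTO p_filename.
  p_subrc = 0.    " 0 = file name determined successfully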

24.15

The system has proposed the following fields. If you want, you can retain them. But normally you would want to use the Template InfoObjects so that the fields are the ones that you created. See the proposed fields first.

You rework it by copying the names of the fields into the Template column, and that will reset the parameters as follows:

Next go to the Preview tab:

Activate it.
There will be a log display as follows. There appears to be no serious problem. Continue.

With this your DataSource is ready.

If you select and click the rule editor as follows:

You will get the rule properties as follows:

Please observe the Target currency and Source Currency. They are different objects.
Now have a look at the rule editor for quantity:

You will observe that the data in the transaction table is additive/cumulative. Therefore there is no option other than Summation here.
Now let us look at the Revenue field:

As you do not want the Revenue field in the Target to be filled directly from the Source but instead want to apply a formula, select Revenue as highlighted. Then right-click the selection in the source and delete it.
29.08
You need to first decide the source fields that are to be used in the calculation of the revenue field and connect them as shown below:

As there are too many joins, you may be a bit confused, in which case you can go to the Rule Details screen and sort out the issues as follows:

In the above screen, select the field you do not need and remove it as highlighted. Next you need to apply the formula, so first change the Rule Type as below and select Formula from the drop-down list.

The moment you select Formula you are presented with the following screen, where you will select the desired fields from the sources and use the operator fields. In our case, PRC X QTY.

After this, go back to the previous screen.
31.50


That is Transformation.

There are further options in Transformation: Rule Group, Start Routine and End Routine.

The above example represents two kinds of Rule Groups.


For this kind of situation you need to create a Routine with the help of ABAPers. The Routine in the above case is created on CNo. So double-click on CNO as follows:

That will open the following screen:

When you select Routine from the drop-down list under Rule Type, you get the following coding page:
36.58

We will start creating one more Rule Group.


Select the Revenue as under:

Note that the Rule Group is now Standard Group

Click on New Rule Group as above

Click Continue and the screen will appear as follows:

Note that the Rule Group is now RG1


If you want to delete a Rule Group, click the button as under:

That will delete the Rule Group RG1


Normally the help of ABAP programmers is taken to write the code for the Rule Groups.
38.58
That's about Rule Group.

Consider the following as our Source system Data

And this data needs to be represented in our Target as follows:

In the source you will find that C1 appears as one record, but in the Target we need the C1 record to split into two records. This can be achieved only by an ABAP routine. 43.40
Here the ABAP routine is executed before the Transformation itself, and therefore it is called a START ROUTINE.

Assume another case where the Target should hold only records where the Revenue is more than 1 lakh. Here the Transformation takes place first and then the condition (ABAP code) is applied. This is a case of an END ROUTINE.

48.55

You can access Expert Routine as follows:

When you click on the Expert Routine button as above, you will get this popup asking whether you want to delete all the existing transformation rules:

Though Start Routines and End Routines are used frequently, Expert Routines are rarely used.
51.20
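For completeness, a hedged sketch of an expert routine body is shown below: it reads SOURCE_PACKAGE and fills RESULT_PACKAGE directly, replacing all individual rules. The generated types _ty_s_SC_1 / _ty_s_TG_1 come from the skeleton; the field names are assumptions.

* Sketch of an expert routine body: maps source to target in
* one step, bypassing all other transformation rules.
  FIELD-SYMBOLS <ls_src> TYPE _ty_s_sc_1.
  DATA ls_result TYPE _ty_s_tg_1.

  LOOP AT SOURCE_PACKAGE ASSIGNING <ls_src>.
    ls_result-cno     = <ls_src>-cno.
    ls_result-qty     = <ls_src>-qty.
    ls_result-revenue = <ls_src>-prc * <ls_src>-qty.
    APPEND ls_result TO RESULT_PACKAGE.
  ENDLOOP.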
That means our Transformation is now ready.
The Cube gets data from the Transformation, and the Transformation gets data from the DataSource. Now you can observe the data flow from the following:

That will throw the following screen

Click on Monitor to view the processing stages.

53.50

Now you have to trigger DTP

Consider the following example.

In the above example, CNo is the semantic key. As the first record with that CNo is in error, it will go to the error stack; and though the next two records are entered correctly, they still go to the error stack, as their semantic key is identical to the one with the error. The rest will go to the Target.
Now consider the following example. Note that there are two semantic keys now, CNO and MNO.

In this case only the first record will go to the error stack and the rest go to the Target.
That means the larger the number of semantic keys, the smaller the error stack will be.
We will just retain the suggested semantic keys and click Continue:

Now come to the Update tab.

Ensure that you tick this box.


You would think that transaction data will not be uploaded if there is no master data record. But the truth is that SAP does upload transaction data without master data. It will first load the key figures without any attribute information. From the key values it will generate a SID table, and from the SIDs it will generate the Dimension. But this will degrade loading performance. Therefore you should always tick this box.
Now, just to show how it works, let us see the P table of our Material by opening another instance:

See there is no Master data here.


But see what will happen after triggering the DTP
Go to Execute Tab:

Click on the Execute button. Activate it.

Now go back and see the P Table which was empty earlier.

That is the whole process of DTP for you. And the data has been loaded into the InfoCube.

Click on Content

Here two options are available: InfoCube Content and Fact Table. When you click on Fact Table you will get the following screen:

Click Execute

Here the data is stored as dimension IDs.
3.00
This is total gibberish for us.

So let us look at our InfoCube Content

After field selection, when you execute as above, you are back on the previous screen:

Click Execute. You are presented with the following output

10.23
Observe that the Request ID has been added automatically by the system.

Meaning every time a request is loaded, the fact table is partitioned. The number of partitions in the InfoCube is N (number of requests) + 1; for example, after three requests the F table has four partitions.

Note that the table name is FYIC_DEMO. It is the F table. Click on Edit.

15.5
That's about loading data into an InfoCube.
Let us assume that after some time there have been some changes and a new file needs to be uploaded. The file is like this:

C1 is an old record that has just been updated, but C6 is an entirely new record. The file is saved as ModifiedTr.csv. This file needs to be loaded into the InfoCube. Everything is ready; we only need to do the scheduling, which is done in the InfoPackage.

Click on the InfoPackage.

Do the above changes in the Extraction tab and go to Schedule.

Click on Start, and when you get the message that the data has been requested, click on Monitor.

Data has come to the PSA. Now you have to trigger a DTP.

Now, if you want to see the Cube, go to InfoCube Manage.

Click on Contents Tab and you get the following screen

Click on InfoCube content. You are presented with the following screen

Click on Field Selection for Output.

Select the desired fields for output and click Execute. Then click Execute again on the previous screen. You will be presented with the following screen:

Observe the last two records, which have been updated now.
Here the problem is with regard to C1. It is not getting updated, because there is a mismatch: the Request IDs are different. If the Request ID is removed and a report is generated, C1 M1 will add the new quantity to the old and show the quantity as 7. If the 5 already includes the old record, then this amounts to an error, as the system is not able to identify whether the data coming in from the DataSource is a modified record or a new record.
24.13
The DataSource is not maintaining a reference in the form of an image.

41.40
44.2

Developing a report on a DSO is detailed-level reporting, and developing a report on an InfoCube is aggregated-level reporting.

20.48

It may be added that though the Write-Optimised DSO and the Direct Update DSO each have one Active Data table, the method of loading is different, as in the Direct Update DSO it is mostly done manually.
32.26
The Write-Optimised DSO is also called the corporate memory, as a huge amount of data can be stored in it.

In the BI 3.5 version, DSOs were called ODS (Operational Data Stores), and there used to be only two types of ODS: Standard and Transactional (Direct Update) ODS. 37.00

You might also like