If you have a situation where you need to extract data from the files every day, dynamically and without the intervention of any consultant, then you have the option of writing ABAP code to facilitate this, in which case the system will pick up the file automatically.
The system has proposed the following fields. If you want, you can retain them, but normally you want to use the Template so that the fields are the ones that you created. Look at the proposed fields first.
Rework it by copying the names of the fields into the Template; that will reset the parameters as follows:
Activate it
There will be a log display as follows. There appears to be no serious problem. Continue.
Please observe the Target currency and Source Currency. They are different objects.
Now have a look at the rule editor for quantity:
You will observe that the data in the transaction table is additive/cumulative. Therefore there is no option other than Summation here.
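The effect of the Summation rule on additive data can be sketched as follows. This is a minimal Python illustration with made-up records and field names, not the actual DataSource structure:

```python
from collections import defaultdict

# Hypothetical transaction records: (customer, material, quantity).
source = [
    ("C1", "M1", 2),
    ("C1", "M1", 3),
    ("C2", "M2", 4),
]

# Summation aggregation: quantities for the same key are cumulated,
# which is why Summation is the natural rule for additive key figures.
cube = defaultdict(int)
for cno, mno, qty in source:
    cube[(cno, mno)] += qty

print(cube[("C1", "M1")])  # 5
```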
Now let us look at the Revenue field:
As you do not want the Revenue field in the Target to be filled directly from the Source but instead want to apply a formula, select Revenue as highlighted, then right-click the selection in the source and delete it. You need to first decide the source fields that are to be used in the calculation of the revenue field and connect them as shown below:
As there are too many joins you may be a bit confused, in which case you can go to the Rule Detail screen and sort out the issues as follows:
In the above screen select the field you do not need and remove it as highlighted. Next you need to apply the formula, so first change the Rule Type as below and select Formula from the drop-down list.
The moment you select Formula you are presented with the following screen, where you select the desired fields from the sources and use the operator fields. In our case, PRC × QTY.
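The formula rule amounts to deriving the target field from source fields instead of mapping it directly. A small sketch with assumed field names (PRC, QTY, REVENUE) and illustrative values:

```python
# Hypothetical source rows; field names are assumptions for illustration.
rows = [
    {"PRC": 10.0, "QTY": 3},
    {"PRC": 2.5, "QTY": 4},
]

# Formula rule: Revenue is not mapped from the source but computed
# from the source fields, here as PRC x QTY.
for row in rows:
    row["REVENUE"] = row["PRC"] * row["QTY"]

print(rows[0]["REVENUE"])  # 30.0
```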
The Routine: when you select Routine from the drop-down list under Rule Type, you get the following coding page.
In the source you will find that C1 appears as one record, but in the Target we need the C1 record to split into two records. This can be achieved only by an ABAP routine.
Here the ABAP routine is executed before the Transformation itself, and therefore it is called a START ROUTINE.
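The start routine works on the whole source package before the transformation runs. In ABAP this would be coded in the routine's source-package loop; the sketch below mimics the idea in Python with an invented split rule (halving the quantity of the C1 record), purely for illustration:

```python
# A start routine processes the entire source package before the
# transformation. Here one C1 record is split into two target records;
# the field names and the halving logic are illustrative assumptions.
def start_routine(source_package):
    result = []
    for rec in source_package:
        if rec["CNO"] == "C1":
            half = rec["QTY"] / 2
            result.append({**rec, "QTY": half})
            result.append({**rec, "QTY": rec["QTY"] - half})
        else:
            result.append(rec)
    return result

package = [{"CNO": "C1", "QTY": 10}, {"CNO": "C2", "QTY": 4}]
print(len(start_routine(package)))  # 3
```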
In another case, suppose the Target should hold only records where the Revenue is more than 1 lakh. Here the Transformation takes place first and then the condition (the ABAP code) is applied. This is a case of an END ROUTINE.
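An end routine, by contrast, runs on the result package after the transformation. A minimal sketch of the revenue filter described above, with assumed field names and illustrative data:

```python
# An end routine runs after the transformation, on the result package.
# Here it keeps only records whose revenue exceeds 1 lakh (100,000);
# the REVENUE field name is an assumption for illustration.
def end_routine(result_package):
    return [rec for rec in result_package if rec["REVENUE"] > 100_000]

result = [{"CNO": "C1", "REVENUE": 250_000},
          {"CNO": "C2", "REVENUE": 40_000}]
print([r["CNO"] for r in end_routine(result)])  # ['C1']
```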
When you click on the Expert Routine button as above, you will get this popup asking whether you want to delete all the existing transformations:
Though the Start Routine and End Routine are used frequently, Expert Routines are rarely used.
That means now our Transformation is ready.
The Cube is getting data from the Transformation and the Transformation from the DataSource. Now you can observe the data flow from the following:
In the above example, CNo is the Semantic Key. As the first record with this CNo is in error, it goes to the Error Stack; though the next two records are entered correctly, they also go to the Error Stack because their semantic key is identical to the one in error. The rest go to the Target.
Now consider the following example. Note that there are two Semantic Keys now, CNO and MNO. In this case only the first record goes to the Error Stack and the rest go to the Target.
That means the larger the number of Semantic Keys, the smaller the Error Stack will be.
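Both examples can be sketched in one small Python simulation. The data and field names are made up; the point is only the routing rule: every record sharing a semantic key with an erroneous record is held back in the error stack:

```python
# Sketch of DTP error-stack behaviour under semantic keys.
# Records sharing a semantic key with an erroneous record are held in
# the error stack so records for one key are never loaded out of order.
def route(records, key_fields):
    bad_keys = {tuple(r[f] for f in key_fields)
                for r in records if r["error"]}
    stack, target = [], []
    for r in records:
        key = tuple(r[f] for f in key_fields)
        (stack if key in bad_keys else target).append(r)
    return stack, target

records = [
    {"CNO": "C1", "MNO": "M1", "error": True},
    {"CNO": "C1", "MNO": "M2", "error": False},
    {"CNO": "C1", "MNO": "M3", "error": False},
    {"CNO": "C2", "MNO": "M1", "error": False},
]

# With CNO alone as the semantic key, all three C1 records are stacked.
print(len(route(records, ["CNO"])[0]))         # 3

# With CNO and MNO as semantic keys, only the bad record is stacked.
print(len(route(records, ["CNO", "MNO"])[0]))  # 1
```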
We will just retain the suggested Semantic Keys and click Continue.
Now go back and see the P table, which was empty earlier.
That is the whole DTP process for you, and the data has been loaded into the InfoCube.
Click on Content
Here two options are available: InfoCube Content and Fact Table. When you click on Fact Table you will get the following screen:
Click Execute
Here data is stored as Dimension IDs. This is total gibberish for us.
After field selection, when you Execute as above, you are back on the previous screen:
Observe that the Request ID has been added automatically by the system.
Meaning every time a Request ID is loaded, the Fact table is partitioned. The number of partitions in the InfoCube is N (the number of requests) + 1.
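The partition count above is simple arithmetic, but it is worth pinning down: one partition per loaded request, plus one open partition. A one-line sketch:

```python
# Each loaded request gets its own fact-table partition, plus one open
# partition for incoming data: partitions = N + 1.
def partitions(n_requests: int) -> int:
    return n_requests + 1

print(partitions(3))  # 4
```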
That's about loading data into the InfoCube.
Let us assume that after some time there have been some changes and a new file needs to be uploaded.
The file is like this:
C1 is an old record that has just been updated, but C6 is an entirely new record. The file is saved as ModifiedTr.csv. This file needs to be loaded into the InfoCube. Everything is ready; we only need to do the scheduling, which is done in the InfoPackage.
Click on Infopackage
Click on Start, and when you get the message that the data has been requested, click on Monitor.
Click on InfoCube content. You are presented with the following screen
Select the desired fields for output and click Execute.
And again Execute on the previous screen. You will be presented with the following screen
Observe the last two records, which have been updated now.
Here the problem is with regard to C1. It is not getting updated because there is a mismatch: the Request IDs are different. If the Request ID is removed and a report generated, C1 M1 will add the new quantity to the old and show the quantity as 7. If 5 was meant to be the value after the update, then this amounts to an error, as the system is not able to identify whether the data coming in from the DataSource is a modified record or a new record. The DataSource does not maintain a reference in the form of an image.
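The double-count problem can be sketched as follows. The data is illustrative: the cube only cumulates, so without a before/after image from the DataSource it cannot distinguish a modified record from a new one:

```python
# Sketch of the additive-update problem with illustrative data.
cube = {("C1", "M1"): 2}        # quantity already loaded for C1/M1

incoming = {("C1", "M1"): 5}    # modified record carrying the full
                                # new value 5, not a delta of +3

# The cube can only cumulate; it has no delta image to consult.
for key, qty in incoming.items():
    cube[key] = cube.get(key, 0) + qty

# The report now shows 7 instead of the intended 5.
print(cube[("C1", "M1")])  # 7
```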
It may be added that though the Write-Optimised DSO and the Direct Update DSO each have one Active Data Table, the method of loading is different: in the Direct Update DSO it is mostly done manually.
The Write-Optimised DSO is also called the Corporate Memory, as a huge amount of data can be stored in it.
In the BI 3.5 version, DSOs were called ODS (Operational Data Stores), and there used to be only two types of ODS: Standard and Transactional (Direct Update).