Credits - Lakshminarasimhan N

ABAP performance tuning for SAP BW system

Applies to:
SAP BW 7.x system

In an SAP BW system we use ABAP in many places; the most common are the start routine, end routine and expert routine. This document points out ways to fine-tune the ABAP code written in an SAP BW system.

Rule 1 – Never use “select *”. “Select *” should be avoided, and “select ... endselect” loops must be avoided at any cost.

Rule 2 – Always check that the internal table is not empty before using “For all entries”. When you use a select statement with “for all entries”, make sure the driver internal table is not empty; if it is empty, the WHERE condition is ignored and the select fetches every record from the database table.


Hence we need to check that the internal table is not empty, and only if it is not empty proceed with the select statement:

IF result_package IS NOT INITIAL.
  SELECT OI_EBELN OI_EBELP PDLV_QTY
    FROM /BI0/APUR_O0100
    INTO TABLE lt_pur
    FOR ALL ENTRIES IN result_package
    WHERE OI_EBELN = result_package-OI_EBELN.
ENDIF.

Rule 3 – Always use the “Code Inspector” and “Extended Program Check”. Double-click the transformation, choose “Display Generated Program” from the menu, and the entire generated program is displayed. From there run the “Code Inspector” and the “Extended Program Check”, and correct the warning and error messages they report.


Rule 4 – Always use the “TYPES” statement to declare a local structure in the program; the same structure can then be used in the select statement.
Example –
If you want to read the PO number, PO item and actual quantity delivered from the purchasing DSO, first create a local structure using a TYPES statement:

TYPES : BEGIN OF lty_pur,
          OI_EBELN TYPE /BI0/OIOI_EBELN,
          OI_EBELP TYPE /BI0/OIOI_EBELP,
          PDLV_QTY TYPE /BI0/OIPDLV_QTY,
        END OF lty_pur.
DATA : lt_pur TYPE STANDARD TABLE OF lty_pur. " Internal table declared based on the local type

SELECT OI_EBELN OI_EBELP PDLV_QTY FROM /BI0/APUR_O0100 INTO TABLE lt_pur.

Rule 5 – Always try to use “Hashed” and “Sorted” internal tables in the routines. When you cannot and must fall back on a “Standard” internal table, make sure you SORT the table in ascending order by the keys used in the READ statement, and then use BINARY SEARCH in the READ statement. This improves the READ performance. When a sorted standard table is used this way, make sure the READ key matches the sort order, otherwise you will not get the correct result.

Example –
DATA : la_pur TYPE lty_pur.

SELECT OI_EBELN OI_EBELP PDLV_QTY FROM /BI0/APUR_O0100 INTO TABLE lt_pur.
IF sy-subrc = 0.
  SORT lt_pur BY OI_EBELN OI_EBELP.  " Sort by the keys used in the READ statement
  LOOP AT result_package ASSIGNING <result_fields>.
    READ TABLE lt_pur INTO la_pur
         WITH KEY OI_EBELN = <result_fields>-OI_EBELN
                  OI_EBELP = <result_fields>-OI_EBELP
         BINARY SEARCH.
    IF sy-subrc = 0.
      "<logic to populate the fields>
    ENDIF.
  ENDLOOP.
ENDIF.

Rule 6 – Never use “into corresponding fields of table”. Follow Rule 4 to declare a structure via a TYPES statement and use it to create an internal table; in the select statement do not use “into corresponding fields of table”.

Example  --
Never declare and select the way shown below; follow the example in Rule 4 instead.
Data : lt_pur type standard table of /BI0/APUR_O0100.
Select OI_EBELN OI_EBELP PDLV_QTY from /BI0/APUR_O0100 into corresponding fields of table lt_pur.

Rule 7 – In the select statement make sure you use the primary keys. For DSOs with a huge volume of data, make sure you create secondary indexes and use them in the select statement.

Rule 8 – Never use Include program in your transformations.

Rule 9 – Try to minimize the use of 'RSDRI_INFOPROV_READ'. If you do need it, make sure you request only the necessary characteristics and key figures, and make sure the cube is compressed.
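A minimal sketch of such a call, reading a single characteristic and a single key figure. The InfoCube name ZSALES and the target structure are illustrative assumptions; in a real load the call is usually repeated with a package size, checking E_END_OF_DATA.

```abap
TYPES : BEGIN OF lty_data,            " structure matching the aliases below
          material TYPE /bi0/oimaterial,
          quantity TYPE /bi0/oiquantity,
        END OF lty_data.

DATA : lt_sfc  TYPE rsdri_th_sfc,     " characteristics to read
       lt_sfk  TYPE rsdri_th_sfk,     " key figures to read
       lt_rng  TYPE rsdri_t_range,    " selection restrictions
       lt_data TYPE STANDARD TABLE OF lty_data,
       la_sfc  TYPE rsdri_s_sfc,
       la_sfk  TYPE rsdri_s_sfk,
       lv_end  TYPE rs_bool.

" Request only the characteristics that are really needed
la_sfc-chanm    = '0MATERIAL'.
la_sfc-chaalias = 'MATERIAL'.
INSERT la_sfc INTO TABLE lt_sfc.

" Request only the key figures that are really needed
la_sfk-kyfnm    = '0QUANTITY'.
la_sfk-kyfalias = 'QUANTITY'.
la_sfk-aggr     = 'SUM'.
INSERT la_sfk INTO TABLE lt_sfk.

CALL FUNCTION 'RSDRI_INFOPROV_READ'
  EXPORTING
    i_infoprov    = 'ZSALES'          " InfoCube name: an assumption
    i_th_sfc      = lt_sfc
    i_th_sfk      = lt_sfk
    i_t_range     = lt_rng
  IMPORTING
    e_t_data      = lt_data
    e_end_of_data = lv_end.
```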

Rule 10 – Make sure to CLEAR the work areas and temporary variables before they are used inside a loop.
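A short sketch of why this matters, reusing the names from the Rule 4/5 examples: without the CLEAR, a failed READ leaves the values of the previous record in the work area.

```abap
LOOP AT result_package ASSIGNING <result_fields>.
  " Reset the work area on every pass, otherwise a failed READ below
  " silently carries the previous record's values forward
  CLEAR la_pur.
  READ TABLE lt_pur INTO la_pur
       WITH KEY OI_EBELN = <result_fields>-OI_EBELN
                OI_EBELP = <result_fields>-OI_EBELP
       BINARY SEARCH.
  <result_fields>-PDLV_QTY = la_pur-PDLV_QTY.
ENDLOOP.
```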

Rule 11 – Always rely on field symbols rather than work areas when changing internal tables. This way you can avoid the “modify” statement.
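A sketch of the difference; la_result is an assumed work area of the result_package row type.

```abap
" With a work area: each row is copied out, changed and copied back
LOOP AT result_package INTO la_result.
  la_result-PDLV_QTY = la_result-PDLV_QTY * 2.
  MODIFY result_package FROM la_result.
ENDLOOP.

" With a field symbol: the row is changed in place, no MODIFY needed
LOOP AT result_package ASSIGNING <result_fields>.
  <result_fields>-PDLV_QTY = <result_fields>-PDLV_QTY * 2.
ENDLOOP.
```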

Rule 12 – When the code in the transformation is huge and complicated, make sure the DTP package size is reduced for a faster data load.

Rule 13 – Never use hard-coded “BREAK-POINT” in the transformation.

Rule 15 – Add plenty of comments in the transformation, along with the developer name, functional owner, technical change, CR number, etc.

Rule 16 – Delete duplicates before you use “For all entries”.

Example –

You select the “status profile” from CRM DSO.

Select CSM_CASE CSM_EXID CSM_SPRO from  /BIC/AZCSM_AGE00 into table lt_csm_pro.

Let us assume that there are 1 million records and all of them come into the table lt_csm_pro.
Now we need to extract from another table using the “status profile”:

Select CSM_TYPE CSM_CATE from /BIC/AZCSM_BHF00 into table lt_csm_bhf for all entries in
lt_csm_pro where CSM_SPRO = lt_csm_pro-CSM_SPRO.

The above select statement will take a very long time to execute because lt_csm_pro has 1 million entries.
We know that the status profile has duplicates, and once the duplicates are removed only about 90
status profiles remain. So the best approach is to remove the duplicates first and then use the table in “For all entries”.
Copy the table lt_csm_pro to another internal table lt_csm_pro_1:

lt_csm_pro_1[] = lt_csm_pro[].

Sort lt_csm_pro_1 by CSM_SPRO.

Delete adjacent duplicates from lt_csm_pro_1 comparing CSM_SPRO.

After the delete statement lt_csm_pro_1 will contain only 90 records, so the select statement below will run fast.

Select CSM_TYPE CSM_CATE from /BIC/AZCSM_BHF00 into table lt_csm_bhf for all entries in
lt_csm_pro_1 where CSM_SPRO = lt_csm_pro_1-CSM_SPRO.

Rule 17 – Always use the method new_record__end_routine to add new records to the result_package. We could manually sort the result_package by record number and then append records, but it is recommended to use the method new_record__end_routine instead.
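A sketch of how the method is typically called in an end routine. The parameter names (source_segid, source_record, record_new) follow common usage of the generated routine class and may vary by release, so treat them as assumptions; the derived field /bic/zflag is illustrative.

```abap
DATA : la_new TYPE _ty_s_tg_1,                     " generated target structure
       lt_new TYPE STANDARD TABLE OF _ty_s_tg_1.

LOOP AT result_package ASSIGNING <result_fields>.
  " Build the additional record from the current one
  la_new = <result_fields>.
  la_new-/bic/zflag = 'X'.                         " illustrative derived field

  " Let the framework assign a valid record number instead of
  " sorting and renumbering result_package manually
  me->new_record__end_routine(
    EXPORTING
      source_segid  = 1
      source_record = <result_fields>-record
    IMPORTING
      record_new    = la_new-record ).

  APPEND la_new TO lt_new.
ENDLOOP.

" Append outside the loop so the new rows are not processed again
APPEND LINES OF lt_new TO result_package.
```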

Rule 18 – Use the “global declaration” part to declare internal tables only when you need to retain records between the start routine, the transformation routines and the end routine.

Rule 19 – Make use of “Documents” to write down detailed steps related to the code in the transformation, dependent loads and any other details.

Rule 20 – Try to use the “DTP filter” and “DTP filter routines” to filter the incoming data from the source InfoProvider.

Rule 21 – Try to use SAP-provided features such as the master data read and DSO read rule types in the transformations, rather than coding the lookup in ABAP yourself! :-)

Rule 22 – Before writing code, check the volume of data in the PRD system and how frequently it grows; this allows you to foresee challenges and write better code.

Rule 23 – Make sure you use BAdIs instead of CMOD user exits, and write classes and methods instead of function modules and subroutines.

Rule 24 – Always use the MONITOR_REC table to capture exceptional records, instead of updating them into a Z table.

Rule 25 – Use the exceptions cx_rsrout_abort and cx_rsbk_errorcount cautiously.

Rule 26 – Within the start and end routines, don't add a new "loop at result_package..endloop" for every small change. Avoid multiple "loop at result_package..endloop" blocks and reuse the existing one; try to place the entire logic within a single "loop at..endloop". This keeps the code uniform and clear.
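A sketch of the idea with illustrative field names:

```abap
" Instead of one loop per small change ...
LOOP AT result_package ASSIGNING <result_fields>.
  <result_fields>-/bic/zfld1 = 'A'.
ENDLOOP.
LOOP AT result_package ASSIGNING <result_fields>.
  <result_fields>-/bic/zfld2 = 'B'.
ENDLOOP.

" ... do all the derivations in a single pass
LOOP AT result_package ASSIGNING <result_fields>.
  <result_fields>-/bic/zfld1 = 'A'.
  <result_fields>-/bic/zfld2 = 'B'.
ENDLOOP.
```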

Rule 27 - Use "constants", which are easy to maintain. It is even better to maintain the constant values in an InfoObject master data table and read them in the ABAP lookup. Parameter tables can also be used.

Rule 28 - "For all entries" removes duplicate rows from the result set, so there can be data loss where an inner join would fetch all records; hence "for all entries" must be used cautiously. Make sure you select all the primary key fields, so that rows are not unintentionally collapsed, before you use the internal table in "For all entries".
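A sketch of the caveat, reusing the purchasing DSO names from Rule 4; lt_qty is an assumed table of quantities.

```abap
" Risky: only the quantity is selected; FOR ALL ENTRIES removes duplicate
" result rows, so two PO items with the same quantity collapse into one row
SELECT PDLV_QTY FROM /BI0/APUR_O0100
  INTO TABLE lt_qty
  FOR ALL ENTRIES IN result_package
  WHERE OI_EBELN = result_package-OI_EBELN.

" Safe: the full key (PO number + PO item) keeps every row unique
SELECT OI_EBELN OI_EBELP PDLV_QTY FROM /BI0/APUR_O0100
  INTO TABLE lt_pur
  FOR ALL ENTRIES IN result_package
  WHERE OI_EBELN = result_package-OI_EBELN.
```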

Final Rule - Avoid ABAP code as much as possible! :-) The reason is very simple: when you go to power your BW system with HANA, transformations that contain ABAP code will not be executed in the HANA database.

All About.... Common Production Failures Encountered at BW Production Support

Author at SDN by Devakar Reddy TatiReddy

Dimension design: A different perspective

Pre-requisites: An infocube is already created, active, and filled with data, which will be used for the analysis of its dimension tables.

Dimension to Fact Ratio Computation: This ratio is the number of records in the dimension table expressed as a percentage of the number of records in the fact table; in other words, what percentage of the fact table size a dimension table is. Mathematically, the equation is:

          Ratio = No of rows in Dimension table X 100 / No of rows in Fact Table

Dimension Table Design Concept: We have been reading and hearing over and over again that characteristics should go into the same dimension if they share a 1:1 or 1:M relation, and into separate dimensions if they share an M:M relation. What is this 1:1 or 1:M? It is the relation the characteristics share with each other.
For instance, if one plant can have only one storage location and one storage location can belong to only one plant at any given point of time, the relation between them is 1:1.
If one functional location can have many equipments but one equipment can belong to only one functional location, the relation between functional location and equipment is 1:M.
If one sales order can have many materials and one material can exist in different sales orders, there is absolutely no dependence between the two, and the relation is many-to-many, or M:M.

Challenges in understanding the relationship: Often we SAP BI consultants depend on the functional consultants to help us out with the relationship shared between these characteristics / fields. Due to time constraints we generally cannot dedicate time to educate the functional consultants on the purpose of this exercise, and it takes a lot of time to understand the relationship thoroughly.

Scenario: An infocube ZPFANALYSIS had a few dimensions that were far larger than the preferred 20% ratio. These had to be redesigned so that the ratio came under 20%.
The ratio can be derived manually by comparing the number of entries in the desired dimension table (/BIC/D<infocube name><dimension number>) to the fact table (/BIC/F<infocube name> or /BIC/E<infocube name>), or the program SAP_INFOCUBE_DESIGNS can be executed in SE38, which reports this ratio for all dimensions of all infocubes in the system.

We can see from the report that the fact table holds 643,850 rows in total. Dimension 2 (/BIC/DZPFANLSYS2) has around 640,430 rows, which is 99% (99.49%) of the fact table rows, and dimension 4 (/BIC/DZPFANLSYS4) has around 196,250 rows, which is 30% (30.48%) of the fact table rows.



Step 1: Analyse the dimension table /BIC/DZPFANLSYS2 to plan how to reduce its number of records.

The dimension table holds one record more than the fact table.
View the data in the table /BIC/DZPFANLSYS2 (the table for dimension 2) in SE12 and sort all the fields. This sorting helps spot the rows that repeat values across many columns, which eventually leads to understanding the relationship between the characteristics (the columns of the dimension table).

Identifying the relationships:
Once the sorting is done, we need to look at how many values repeat across the columns. All the repeating records could have been collapsed into a single row with one dimension ID if every column had the same data. The repetition is the result of one or more columns that contribute a unique value to each row; if such columns are removed from the table, the number of rows comes down.

In the screenshot below, I have highlighted in green the rows that repeat themselves with new dimension IDs: only the two columns SID_ZABNUM and SID_0NPLDA have a new value in every row. These two columns force the rest of the columns to repeat, which in turn inflates the dimension table. Hence these two columns do not belong in this dimension table, and the related characteristics (ZABNUM and 0NPLDA) need to be moved out of this dimension.
A few rows repeat themselves for most of the columns but occasionally have a new value in some columns, as highlighted in yellow in the screenshot below. This indicates that these columns share a 1:M relation with the rest of the repeated columns, and they can be left in the same dimension.
Conclusion: the columns highlighted in yellow can remain in this dimension table, while the columns SID_ZABNUM and SID_0NPLDA (highlighted in green) need to move to other dimension tables.
Step 2: Create a copy infocube C_ZPFAN and create new dimensions to accommodate ZABNUM and 0NPLDA.
ZABNUM was added to dimension C_ZPFAN8 and 0NPLDA to dimension C_ZPFAN7. These were marked as line item dimensions, as each has only one characteristic under it.
Dimension 4 was analysed in a similar way, and other dimensions were changed to help the situation.

Post changes, loaded the data into the copy infocube C_ZPFAN and found the number of records in the dimension table /BIC/DC_ZPFAN2 to be 40286.

Ratio: 40286 / 657400 * 100 = 6.12 %


Dimension2 of the copy infocube: /BIC/DC_ZPFAN2
Even now there are a few repeated rows and columns, but the ratio is within 20%. We can create up to 13 customer dimensions, but it is always better to keep a dimension or two free for future enhancements.

Hope this was helpful.