Transaction Codes For Filling Setup Tables LO Extractors
Credits: Todor Peev

An overview of DataSources and the programs that fill the relevant setup tables (named MC*SETUP). With this handy table you can find the status of your current job or of previous initialization jobs through SM37.

T-Code    Purpose
OLI1BW    INVCO Stat. Setup: Material Movements
OLI2BW    INVCO Stat. Setup: Stor. Loc. Stocks
OLI3BW    Reorg. PURCHIS BW Extract Structures
OLI4BW    Reorg. PPIS Extract Structures
OLI4KBW   Initialize Kanban Data
OLI6BW    Recompilation Appl. 06 (Inv. Ver.)
OLI7BW    Reorg. of VIS Extr. Struct.: Order
OLI8BW    Reorg. VIS Extr. Str.: Delivery
OLI9BW    Reorg. VIS Extr. Str.: Invoices
OLIABW    Setup: BW Agency Business
OLIB      PURCHIS: Stat. Update, Header Doc. Level
OLID      SIS: Stat. Setup - Sales Activities
OLIE      Statistical Setup - TIS: Shipments
OLIFBW    Reorg. Rep. Manuf. Extr. Structs
OLIGBW    Reconstruct GT: External TC
OLIH      MRP Data Procurement for BW
OLIIBW    Reorg. of PM Info System for BW
OLIKBW    Setup GTM: Position Management
OLILBW    Setup GTM: Position Mngmt w. Network
OLIM      Periodic Stock Qty - Plant
OLIQBW    QM Info System Reorganization for BW
OLISBW    Reorg. of CS Info System for BW
OLIX      Stat. Setup: Copy/Delete Versions
OLIZBW    INVCO Setup: Invoice Verification

DataSource                 T-Code    Program
2LIS_02*                   OLI3BW    RMCENEUA
2LIS_03_BX                 MCNB      RMCBINIT_BW
2LIS_03_BF                 OLI1BW    RMCBNEUA
2LIS_03_UM                 OLIZBW    RMCBNERP
2LIS_04* (orders)          OLI4BW    RMCFNEUA
2LIS_04* (manufacturing)   OLIFBW    RMCFNEUD
2LIS_05*                   OLIQBW    RMCQNEBW
2LIS_08*                   VTBW      VTRBWVTBWNEW
2LIS_08* (costs)           VIFBW     VTRBWVIFBW
2LIS_11_V_ITM              OLI7BW    RMCVNEUA
2LIS_11_VAITM              OLI7BW    RMCVNEUA
2LIS_11_VAHDR              OLI7BW    RMCVNEUA
2LIS_12_VCHDR              OLI8BW    RMCVNEUL
2LIS_12_VCITM              OLI8BW    RMCVNEUL
2LIS_12_VCSCL              OLI8BW    RMCVNEUL
2LIS_13_VDHDR              OLI9BW    RMCVNEUF
2LIS_13_VDITM              OLI9BW    RMCVNEUF
2LIS_17*                   OLIIBW    RMCINEBW
2LIS_18*                   OLISBW    RMCSNEBW
2LIS_45*                   OLIABW    RMCENEUB

Document update is where the transaction (document) data is updated in the application tables. This update is normally synchronous, i.e. if the update does not go through for whatever reason, the complete transaction is rolled back.
Statistical update is the update of statistics for the transaction, such as LIS or the extractors for BW.
V1 (synchronous update): all tables are updated, and if any one update fails, all are rolled back. Used for all transactions plus critical statistics such as credit management.
V2 (asynchronous update): transactions are updated, and the statistical updates are done when the processor has free resources. If a statistical update fails, the transaction still goes through, and these failures have to be addressed separately.
V3 (batch update): statistics are updated by a periodic batch job, e.g. every hour or at the end of the day. Failure behavior is the same as for V2 updates.
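The difference between V1 and V2 semantics can be illustrated with a minimal plain-Python sketch (not SAP code; the table names and helper functions are invented for illustration): V1 rolls back the whole transaction when any update fails, while V2 commits the document update even if the statistical update fails.

```python
def v1_update(tables, updates):
    """Synchronous (V1-style): apply every update or roll all of them back."""
    snapshot = {name: dict(rows) for name, rows in tables.items()}
    try:
        for name, key, value in updates:
            if name not in tables:
                raise KeyError(name)       # simulate a failing table update
            tables[name][key] = value
    except KeyError:
        tables.clear()
        tables.update(snapshot)            # roll back the whole transaction
        return False
    return True

def v2_update(tables, doc_update, stat_update):
    """Asynchronous (V2-style): the document update stays committed
    even if the statistical update fails."""
    name, key, value = doc_update
    tables[name][key] = value              # document update commits first
    try:
        s_name, s_key, s_value = stat_update
        tables[s_name][s_key] = s_value    # statistical update, may fail
    except KeyError:
        return "stats_failed"              # must be fixed separately
    return "ok"
```

With a failing statistics table, `v1_update` leaves the tables unchanged, while `v2_update` keeps the committed document row and only reports the statistics failure.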

Statistical update is also used to describe the initial setup of the statistical tables for LO/LIS. When old transactions are loaded into LO/LIS as a one-time exercise, this is also called a statistical update. Once these tables are up to date with all transactions, every new transaction is updated in them using V1, V2 or V3.

By Martin Grob


In reality, dimensions in an InfoCube are often designed around business terms (like material, customer, etc.). This often leads to the impression that InfoCube dimensions should be designed based on business constraints. This, however, should not be the leading criterion and shouldn't drive the decision.
Aside from the data volume, which depends on the granularity of the data in the InfoCube, performance depends very much on how the InfoObjects are arranged in the dimensions. Although this has no impact on the size of the fact table, it certainly has one on the size of the dimensions.

How is a dimension then designed?

The main goal when distributing the InfoObjects across dimensions must be to keep the dimensions as small as possible. The decision on how many dimensions to use and which InfoObjects go where is purely technically driven. In some cases this matches the organisational view, but that is a coincidence, not the goal.

There are a few guidelines to consider when assigning InfoObjects to dimensions:
  • Use as many dimensions as necessary, but it is more important to minimize dimension size than the number of dimensions.
  • Within a dimension, only characteristics that have a 1:n relation should be combined (e.g. material and product hierarchy).
  • Within a dimension there shouldn't be n:m relations (e.g. product hierarchy and customer).
  • Document-level InfoObjects or very large characteristics should be modeled as line-item dimensions. Line-item dimensions are not true dimensions; they have a direct link between the fact table and the SID table.
  • The most selective characteristics should be at the top of the dimension table.
  • Don't mix in characteristics whose values change frequently, as this causes large dimension tables (e.g. material and promotions).
  • Also consider combining unrelated characteristics; it can improve performance by reducing the number of table joins (there are only 13 freely definable dimensions, so combine the small ones).

As a help, the report SAP_INFOCUBE_DESIGNS (run via SE38) can be used.
A dimension marked yellow in its output should be converted into a line-item dimension if it contains a document-level characteristic; otherwise it is simply bad design.

The maximum number of entries a dimension can potentially have is calculated as the Cartesian product of the distinct SID counts of its characteristics (e.g. 10,000 customers and 1,000 product hierarchy nodes lead to 10,000,000 possible combinations in the dimension table). It is unlikely that this maximum is reached, but it should be considered while designing the dimension, in this case by analyzing how likely it is that every customer buys every product.
Where there is an m:n relationship it usually means there is a missing entity between the two characteristics, and therefore they should be stored in different dimensions.
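The Cartesian-product estimate is simple enough to check with a few lines of plain Python (the function name is mine, not an SAP tool):

```python
from math import prod

def max_dimension_entries(*distinct_sid_counts: int) -> int:
    """Theoretical maximum number of rows in a dimension table:
    the product of the distinct-value counts of its characteristics."""
    return prod(distinct_sid_counts)

# 10,000 customers x 1,000 product hierarchy nodes:
max_dimension_entries(10_000, 1_000)   # 10,000,000 possible combinations
```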
Once data is loaded into the InfoCube, the actual number of records in each dimension table should be checked against the number of records in the fact table. As a rule of thumb, the ratio should be between 1:10 and 1:20.
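As a sketch, the rule of thumb can be expressed as a small hypothetical check (the helper name and the 10% threshold parameter are mine; the 1:10 limit comes from the text):

```python
def dimension_too_large(dim_rows: int, fact_rows: int,
                        max_ratio: float = 0.1) -> bool:
    """Flag a dimension whose row count exceeds max_ratio (default 1:10)
    of the fact-table row count."""
    return dim_rows > fact_rows * max_ratio

# 5,000 dimension rows against a 1,000,000-row fact table is 1:200 -> fine
dimension_too_large(5_000, 1_000_000)     # False
# 200,000 rows against the same fact table is 1:5 -> degenerated candidate
dimension_too_large(200_000, 1_000_000)   # True
```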

Degenerated Dimensions

If a dimension table grows to almost the size of the fact table, measured by the number of rows, it is a degenerated dimension. The OLAP processor then has to join two big tables, which is bad for query performance. Such dimensions can be marked as line-item dimensions, which causes the database not to create an actual dimension table. Checking the fact table /BIC/F<INFOCUBE> will then show that, instead of the DIMID dimension key, the SID of the degenerated dimension is placed in the fact table (field name RSSID). With this, the join of the two tables is eliminated. Such a dimension can only hold one InfoObject, as a 1:1 relationship must exist between the SID value and the DIMID.
Dimensions with a lot of unique values can be set to high cardinality, which changes the way the dimension is indexed (Oracle databases only): the bitmap index is replaced by a B-tree index.
Defining a dimension as Line Item Dimension / High Cardinality


Finding the optimal model and balancing the size and the number of dimensions is a delicate exercise.
Dimensions in a MultiProvider do not have to follow the definitions of the underlying InfoCubes. They can be focused on the end users' needs and structured according to their business meaning. This does not affect performance, as a MultiProvider has no physical data model of its own on the database.
Designing the dimensions of an InfoCube correctly can bring a significant improvement in performance!

By Mohanavel

Objective:    Introducing a time delay in a process chain with the help of the standard SAP program RSWAITSEC.

Background:  With the interrupt process type in a process chain, we can delay the subsequent process types until the interrupt condition is met. But when we use the standard interrupt process type, we have to specify a date and time or an event name.
In many cases an interrupt step might not help: if an interrupt step is introduced to delay the subsequent processes by a fixed period of time, and all the steps above the interrupt complete early, then instead of passing the trigger to the subsequent step after the desired wait time, the interrupt forces the chain to wait until its conditions are satisfied.
To delay the trigger flow from one process type to another by a fixed amount of time, without a date/time condition or an event, we can use the program RSWAITSEC.

Scenario:   In our project, one of the master data chains is scheduled at 23:00 IST. This load supplies data to a report based on 0CALWEEK. The data load and an ABAP program in the process chain make use of SY-DATUM, so if a load that starts on Sunday at 23:00 does not complete by 23:59:59 (a one-hour window), the entire data set is wrongly mapped to the next week, causing a data discrepancy.
So the chain had to be scheduled at 23:00 IST every day except Sunday, and at 22:45 IST (15 minutes earlier) on Sundays.

Different Ways to Achieve This:
1.        Creating two different process chains: scheduling the 1st process chain at 23:00 IST for Monday to Saturday (using a factory calendar) and the 2nd at 22:45 IST only for Sunday.
Disadvantage of the 1st method:
Two chains are created unnecessarily for the same loads, which multiplies the number of chains in the system.

   2.   Scheduling the same chain at 22:45 IST and adding a decision step that inserts a 15-minute interruption for Monday to Saturday, so that on Sunday the load starts at 22:45.

Process Chain with Interrupt Process Type:

Description: PC with Interrupt Step(1).jpg
Disadvantage of the 2nd method:
If you want to execute this chain immediately at some other time, the interrupt step will still wait until 23:00 IST before starting the Monday-to-Saturday loads.

Better Way of Achieving with RSWAITSEC program:

Schedule the chain at 22:45 and add a decision step to determine whether it is Sunday. If it is Sunday, the next step is directly the local chain; if the day is Monday to Saturday, the next step is an ABAP process running RSWAITSEC (a standard SAP program). In the program variant we specify the desired delay in seconds (900 seconds).

     Compared to the two methods above, this is the better way to achieve the desired behavior. Even if the process chain is started immediately on a day other than Sunday, the local chain does not wait until 23:00 IST; it waits for 15 minutes and is then triggered.
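The decision logic described above can be sketched in plain Python (not ABAP; the function name is hypothetical): the chain starts at 22:45 IST every day, and on Monday to Saturday a wait of 900 seconds is inserted before the load, while on Sunday the load starts immediately. RSWAITSEC itself simply waits for the number of seconds given in its variant.

```python
import datetime
import time

def seconds_to_wait(run_date: datetime.date, delay_seconds: int = 900) -> int:
    """Sunday: start the load immediately; Monday-Saturday: wait 15 minutes.
    date.weekday() returns 6 for Sunday."""
    return 0 if run_date.weekday() == 6 else delay_seconds

# What the RSWAITSEC step effectively does on a Mon-Sat run:
# time.sleep(seconds_to_wait(datetime.date.today()))
```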

As this program is provided by SAP, there is no need to move a transport for it; it can be used directly, even in production.

Process chain with RSWAITSEC:
Description: PC with RSWAITSEC program(2).jpg

ABAP process type with the RSWAITSEC program (shown in the above PC):

Description: Program Variant for RSWAITSEC(3).jpg

Setting the Variant value (required time):

In the variant value we need to specify the desired delay in seconds. The requirement here is a 15-minute delay, so 900 seconds is entered as the variant value.

Description: Variant Value Screen(4).jpg

So we can use this program at any stage of a process chain to introduce a fixed period of delay.

Hope this will be helpful.