

By: Anonymous
I want to continue my series for beginners new to SAP BI. In this blog I describe the steps needed to create a process chain that loads data with an InfoPackage and with a DTP, and how to activate and schedule the chain.

1.)    Call transaction RSPC

RSPC

RSPC is the central transaction for all your process chain maintenance. On the left you find existing process chains sorted by “application components”. The default mode is the planning view. Two other views are available: the check view and the protocol (log) view.
2.)    Create a new process chain
To create a new process chain, press the “Create” icon in the planning view. In the following pop-up window you have to enter a technical name and a description for your new process chain.

name chain

The technical name can be up to 20 characters long. Usually it starts with a Z or Y; see your project's internal naming conventions.
3.)    Define a start process
After entering a process chain name and description, a new window pops up asking you to define a start variant.
 Start variant


That's the first step in your process chain! Every process chain has exactly one starting step, so a new step of type “Start process” will be added. To define a unique start process for your chain you have to create a start variant. The same procedure applies to every subsequent step: first drag a process type onto the design window, then define a variant for this type, and you have created a process step. The formula is:
 Process Type + Process Variant = Process Step!
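The formula above can be sketched as a tiny data model (a Python illustration with invented names; RSPC itself is ABAP, so this is only a conceptual sketch):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessStep:
    """A process chain step = process type + process variant."""
    process_type: str  # e.g. "TRIGGER" for the start process (name assumed)
    variant: str       # technical name of the variant

# The start step of a chain: the start process type plus its
# start variant yields one concrete step.
start_step = ProcessStep(process_type="TRIGGER", variant="ZSTART_DAILY")
print(start_step.process_type, start_step.variant)
```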
If you save your chain, the process chain name is saved into table RSPCCHAIN. The process chain definition with its steps is stored in table RSPCPROCESSCHAIN as a modified version. So press the “Create” button, and a new pop-up appears:

start variant name

Here you define a technical name and a description for the start variant. In the next step you define when the process chain will start. You can choose between direct scheduling and starting via meta chain or API. With direct scheduling you can start either immediately upon activating and scheduling, or at a defined point in time, as you know it from job scheduling in any SAP system. With “start using meta chain or API” you are able to start this chain as a subchain or from an external application via the function module RSPC_API_CHAIN_START. Press Enter, choose an existing transport request or create a new one, and you have successfully created the first step of your chain.
 4.)    Add a loading step
Once you have defined the starting point of your chain, you can add a loading step for loading master data or transaction data. In both cases choose “Execute InfoPackage” from the available process types. See the picture below:

loading step

You can move this step with drag & drop from the left side into your design window. A new pop-up window appears, in which you choose the InfoPackage you want to use (you can't create a new one here). Press F4 help and a new window pops up with all available InfoPackages sorted by use: at the top the InfoPackages already used in this process chain, followed by all other available InfoPackages. Choose one and confirm. The step is now added to your process chain, which should look like this:

first steps

How do you connect these two steps? One way is to right-click the first step and choose Connect with -> Load Data, then pick the InfoPackage you want as the successor.

 connect step

Another possibility is to select the starting point and keep the left mouse button pressed. Then move the mouse down to your target step; an arrow follows your movement. Release the mouse button and a new connection is created. From the start process to every second step the connection is a black line.
5.)    Add a DTP process
In BI 7.0 systems you can also add a DTP to your chain. From the process type window (see above) choose “Data Transfer Process” and drag & drop it onto the design window. You will be asked for a variant for this step. As with InfoPackages, press F4 help and choose the DTP you want to execute from the list of available DTPs. Confirm your choice and a new step for the DTP is added to your chain. Now you have to connect this step with one of its possible predecessors. As described above, use the context menu and Connect with -> Data transfer process. This time a new pop-up window appears.

connection red green 
Here you can choose whether this successor step shall be executed only if the predecessor was successful, only if it ended with errors, or always, regardless of the outcome. With this connection type you control the behaviour of your chain in case of errors. Whether a step ends successfully or with errors is defined in the process step itself. To see the settings for each step, go to Settings -> Maintain Process Types in the menu. In this window you see all defined (standard and custom) process types. Choose Data transfer process and display the details in the menu. In the new window you can see:
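The three connection types can be modelled as a small decision function (a sketch with invented names; the actual behaviour is configured in RSPC, not coded):

```python
def successor_runs(predecessor_outcome: str, link_type: str) -> bool:
    """Decide whether a successor step executes.

    predecessor_outcome: "success" or "error"
    link_type: "on_success", "on_error", or "always"
    """
    if link_type == "always":
        return True
    if link_type == "on_success":
        return predecessor_outcome == "success"
    if link_type == "on_error":
        return predecessor_outcome == "error"
    raise ValueError(f"unknown link type: {link_type}")

# An error-handling branch only fires when the predecessor failed:
print(successor_runs("error", "on_error"))    # True
print(successor_runs("error", "on_success"))  # False
```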

dtp setting

A DTP has the possible events “Process ends successful” and “Process ends incorrect”, has ID @VK@ (which refers to its icon), and appears under category 10, “Load process and post-processing”. Your process chain can now look like this:

two steps


You can now add all other necessary steps. By default the process chain itself suggests successors and predecessors for each step. For loading transaction data with an InfoPackage it usually adds steps for deleting and creating the indexes on a cube. You can switch off this behaviour in the menu under “Settings -> Default Chains”: in the pop-up choose “Do not suggest Process” and confirm.

default chains

Then you have to add all necessary steps yourself.
6.)    Check chain
Now you can check your chain via menu “Goto -> Checking View” or by pressing the “Check” button. The chain is checked for whether all steps are connected and have at least one predecessor. Logical errors are not detected; that's your responsibility. If the check returns warnings or is OK you can activate the chain. If the check returns errors you have to fix them first.
7.)    Activate chain
After a successful check you can activate your process chain. In this step the entries in table RSPCPROCESSCHAIN are converted into an active version. You can activate your chain via menu “Process chain -> Activate” or by pressing the activation button in the toolbar. You will find your new chain under application component "Not assigned". To assign it to another application component you have to change it: choose the "application component" button in change mode of the chain, save and reactivate it, then refresh the application component hierarchy. Your process chain will now appear under the new application component.
8.)    Schedule chain
After successful activation you can schedule your chain. Press the “Schedule” button or use menu “Execution -> Schedule”. The chain is scheduled as a background job, which you can see in SM37 as a job named “BI_PROCESS_TRIGGER”. Unfortunately every process chain is scheduled with a job of this name; the job variant tells you which process chain will be executed. During execution the steps defined in RSPCPROCESSCHAIN are executed one after another, with the next step triggered by events defined in the table. You can watch SM37 for new jobs starting with “BI_” or look at the protocol view of the chain.
9.)    Check protocol for errors
You can check chain execution for errors in the protocol or process chain log. Choose in the menu “Go to -> Log View”. You will be asked for the time interval for which you want to check chain execution. Possible options are today, yesterday and today, one week ago, this month and last month or free date. For us option “today” is sufficient.
Here is an example of another chain that ended incorrectly:
  chain log

On the left side you see when the chain was executed and how it ended; on the right side you see for every step whether it ended successfully or not. As you can see, the first two steps were successful and the “Load Data” step of an InfoPackage failed. You can check the reason via the context menu with “Display messages” or “Process monitor”. “Display messages” shows the job log of the background job and the messages created by the request monitor. With “Process monitor” you get to the request monitor and see detailed information on why the loading failed. The logs are stored in tables RSPCLOGCHAIN and RSPCPROCESSLOG. Examining the request monitor will be the topic of an upcoming blog.


 10.) Comments
Here just a little feature list with comments.
- You can search for chains, but it does not work properly (at least in BI 7.0 SP15).
- You can copy existing chains to new ones. That works really fine.
- You can create subchains and integrate them into so-called meta chains. But the application component menu does not reflect this structure. There is no function available to find all meta chains for a subchain or vice versa list all subchains of a meta chain. This would be really nice to have for projects.
- Nice to have would be the possibility to schedule chains with a user defined job name and not always as "BI_PROCESS_TRIGGER".
But now it's your turn to create process chains.


INFOCUBES - ULTIMATE SAP BIW CONCEPTS EXPLANATION - by "Shrikanth Muthyala"


INFOCUBE
1.  INFOCUBE-INTRODUCTION:
2.  INFOCUBE - STRUCTURE
3.  INFOCUBE TYPES
3.1  Basic Cube: 2 Types
3.1.1  Standard InfoCube
3.1.2  Transactional InfoCube
3.2  Remote Cubes: 3 Types
3.2.1  SAP Remote Cube
3.2.2  General Remote Cube
3.2.3  Remote Cube With Services
4.  INFOCUBE TABLES- F,E,P,T,U,N
5.  INFOCUBE-TOOLS
5.1  PARTITIONING
5.2  ADVANTAGES OF PARTITIONING:
5.3  CLASSIFICATION OR TYPES OF PARTITIONING
5.3.1  PHYSICAL PARTITIONING/TABLE/LOW LEVEL
5.3.2  LOGICAL PARTITIONING/HIGH LEVEL PARTITIONING
5.3.3  EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCYEAR
5.3.3.1  ERRORS ON PARTITIONING
5.3.4  REPARTITIONING
5.3.4.1  REPARTITIONING TYPES
5.3.5  Repartitioning - Limitations- errors
5.3.6  EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCYEAR
5.4  COMPRESSION OR COLLAPSE
5.5  INDEX/INDICES
5.6  RECONSTRUCTION
5.6.1  ERRORS ON RECONSTRUCTION
5.6.2  Key Points to remember while going for reconstruction
5.6.3  Why Errors Occur in Reconstruction?
5.7  STEPS FOR RECONSTRUCTION
5.8  ROLLUP
5.9  LINE ITEM DIMENSION/DEGENERATE DIMENSION
5.9.1  LINE ITEM DIMENSION ADVANTAGES
5.9.2  LINE ITEM DIMENSION DISADVANTAGES
5.10  HIGH CARDINALITY
6.  INFOCUBE DESIGN ALTERNATIVES
6.1  ALTERNATIVE I :  TIME DEPENDENT NAVIGATIONAL ATTRIBUTES
6.2  ALTERNATIVE II :  DIMENSION CHARACTERISTICS
6.3  ALTERNATIVE III : TIME DEPENDENT ENTIRE HIERARCHIES
6.4  OTHER ALTERNATIVES:
6.4.1  COMPOUND ATTRIBUTE
6.4.2  LINE ITEM DIMENSION
7.  FEW QUESTIONS ON INFOCUBES


1. INFOCUBE-INTRODUCTION:
The central objects upon which reports and analyses in BW are based are called InfoCubes, and they can be seen as InfoProviders. An InfoCube is a multidimensional data structure and a set of relational tables that contain InfoObjects.

2. INFOCUBE - STRUCTURE
The structure of an InfoCube is an ESS (Extended Star Schema, or Snowflake Schema), which contains:
• 1 Fact Table with key figures
• n Dimension Tables with characteristics
• n Surrogate ID (SID) tables, which link the master data tables and hierarchy tables
• n Master Data Tables
Master data tables are time dependent and can be shared by multiple InfoCubes. A master data table contains attributes that are used for presenting and navigating reports in the SAP BW system.

3. INFOCUBE TYPES:

• Basic cubes reside on the same database
• Remote cubes reside on a remote system
• An SAP RemoteCube resides on another SAP R/3 system and uses the service API (SAPI)
• A general remote cube resides on a non-SAP system and uses a BAPI
• A remote cube with services can reside on any system (SAP or non-SAP)

3.1. BASIC CUBE: 2 TYPES: These are physically available in the same BW system in which they are specified and in which their metadata exists.
3.1.1. STANDARD INFOCUBE: FREQUENTLY USED
Standard InfoCubes are the most common type. They are optimized for read access, have update rules that enable transformation of the source data, and their loads can be scheduled.

3.1.2. TRANSACTIONAL INFOCUBE
Transactional InfoCubes are less common and are used only by certain applications such as SEM and APO. Data is written directly into such cubes, bypassing the update rules.

3.2. REMOTE CUBES: 3 TYPES
Remote cubes reside on a remote system; in BW they contain only metadata, which is why they are also considered virtual cubes. These are the remote cube types:

3.2.1. SAP REMOTE CUBE: The cube resides on another SAP R/3 system, and communication is via the service API (SAPI).

3.2.2. GENERAL REMOTE CUBE: The cube resides on a non-SAP source system, and communication is via a BAPI.

3.2.3. REMOTE CUBE WITH SERVICES: The cube resides on any remote system, SAP or non-SAP, and is accessed via a user-defined function module.

4. INFOCUBE TABLES - F, E, P, T, U, N
Transaction code: LISTSCHEMA
In LISTSCHEMA, enter the name of an InfoCube, e.g. 0SD_C03, and execute. Upon execution the primary (fact) table is displayed as an unexpanded node. Expand the node to see the screen.
These are the tables we can see under expanded node:


5. INFOCUBE-UTILITIES
5.1. PARTITIONING
Partitioning is the method of dividing a table into multiple smaller, independent or related segments (either column-wise or row-wise) based on the available fields, which enables quick access to the intended field values in the table.
To partition a data set, at least one of the two partitioning criteria 0CALMONTH or 0FISCPER must be present in the InfoCube.

5.2. ADVANTAGES OF PARTITIONING
• Partitioning allows parallel reads of multiple partitions, speeding up query execution.
• Reporting performance is enhanced because it is easier to search in smaller tables, and maintenance becomes much easier.
• Old data can be quickly removed by dropping a partition.
You can set up partitioning in InfoCube maintenance under Extras > Partitioning.

5.3. CLASSIFICATION OR TYPES OF PARTITIONING
5.3.1. PHYSICAL PARTITIONING/TABLE/LOW LEVEL
Physical Partitioning also called table/low level partitioning is restricted to Time Characteristics and is done at Data Base Level, only if the underlying database allows it.
Ex: Oracle, Informix, IBM, DB2/390
A common way of partitioning is to create ranges. An InfoCube can be partitioned on a time slice using time characteristics, as below.
• FISCAL YEAR (0FISCYEAR)
• FISCAL YEAR VARIANT (0FISCVARNT)
• FISCAL YEAR/PERIOD (0FISCPER)
• POSTING PERIOD (0FISCPER3)
By this physical partitioning old data can be quickly removed by dropping a partition.
Note: no partitioning in BI 7.0, except on DB2 (which supports it).

5.3.2. LOGICAL PARTITIONING/HIGH LEVEL PARTITIONING
Logical partitioning is done at the MultiCube (several InfoCubes joined into a MultiCube) or MultiProvider level, i.e. at the data target level. In this case related data are separated into several InfoCubes and joined via a MultiCube.
Here you are not restricted to time characteristics; you can also partition on plan vs. actual data, regions, business areas, etc.
Advantages:
• The MultiCube uses parallel sub-queries, which ultimately improves query performance.
• Logical partitioning does not consume any additional database space.
• When a sub-query hits a constituent InfoProvider, only a reduced set of data is read from the smaller InfoCube instead of one large InfoCube, even in the absence of a MultiProvider.

5.3.3. EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCYEAR
There are two partitioning criteria:
• calendar month (0CALMONTH)
• fiscal year/period (0FISCPER)
A dataset can be partitioned using only one of these two criteria at a time, and to partition an InfoCube at least one of the two InfoObjects must be contained in it.
If you want to partition an InfoCube using the fiscal year/period (0FISCPER) characteristic, you have to set the fiscal year variant characteristic to a constant.
After activating the InfoCube, the fact table is created on the database with the number of partitions corresponding to the value range. You can set the value range yourself.
Partitioning InfoCubes using Characteristic 0CALMONTH:
Choose the partitioning criterion 0CALMONTH and give the value range as
From=01.1998
to=12.2003
So how many partitions are created after partitioning?
6 years * 12 months + 2 = 74 partitions are created
2 partitions are for values that lie outside the range, i.e. < 01.1998 or > 12.2003.
You can also determine how many partitions are created as a maximum on the database for the fact table of the InfoCube.
You choose 30 as the maximum number of partitions.
Resulting from the value range:
6 years *12 calendar months + 2 marginal partitions (up to 01.1998, from 12.2003)= 74 single values.
The system groups three months at a time together in a partition
4 Quarters Partitions = 1 Year
So, 6 years * 4 partitions/year + 2 marginal partitions = 26 partitions are created on the database.
The performance gain is only achieved for the partitioned InfoCube if its time dimension is consistent: with partitioning via 0CALMONTH, all values of the 0CAL* characteristics of a data record in the time dimension must match each other.
Note: You can only change the value range when the InfoCube does not contain any data.
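The partition arithmetic above can be checked with a short helper (a sketch; the actual grouping is done by BW and the database, and the month arithmetic here is simplified to whole years):

```python
import math
from typing import Optional

def partition_count(years: int, max_partitions: Optional[int] = None) -> int:
    """Partitions for a 0CALMONTH range spanning `years` full years,
    plus 2 marginal partitions for values outside the range.
    If max_partitions is set, months are grouped together so the
    total stays at or below the limit."""
    values = years * 12  # one candidate partition per month
    if max_partitions is None:
        return values + 2
    # group enough months together to respect the limit
    group = math.ceil(values / (max_partitions - 2))
    return math.ceil(values / group) + 2

print(partition_count(6))      # 74 (6 years * 12 months + 2)
print(partition_count(6, 30))  # 26 (quarters: 6 * 4 + 2)
```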

PARTITIONING INFOCUBES USING THE CHARACTERISTIC 0FISCPER
The mandatory prerequisite here is to set the value of the 0FISCVARNT characteristic to a constant.

5.3.4. STEPS FOR PARTITIONING AN INFOCUBE USING 0CALDAY & 0FISCPER
Administrator Workbench
   >InfoSet maintenance
     >double click the InfoCube
        >Edit InfoCube
           >Characteristics screen
              >Time Characteristics tab
                 >Extras
                    >IC Specific Properties of InfoObject
                       >Structure-Specific Properties dialog box
                         >Specify constant for the characteristic 0FISCVARNT
                           >Continue
                              >In the dialog box enter the required details

5.3.5. Partition Errors:
F fact tables of a partitioned InfoCube have empty partitions, or the empty partitions do not have a corresponding entry in the related package dimension.
Solution 1: The report SAP_PARTITIONS_INFO_GET_DB4 helps you analyze these problems. The empty partitions of the F fact table are reported, and the system issues an information message if there is no corresponding entry for a partition in the package dimension table (orphaned partitions).
When the affected InfoCube was compressed, a database error occurred in DROP PARTITION after the actual compression, but this error was not reported to the application. The compression logs do not display any error messages, and the error is not reported in the developer trace (transaction SM50), the system log (transaction SM21) or the job overview (transaction SM37) either.
The application assumes the data in the InfoCube is correct, but the data of the affected requests or partitions is not displayed in reporting because they have no corresponding entry in the package dimension.
Solution 2: Use the report SAP_DROP_FPARTITIONS to remove the orphaned or empty partitions from the affected F fact tables, as described in note 1306747, to ensure that the database limit of 255 partitions per database table is not reached unnecessarily.

5.3.6. REPARTITIONING
Repartitioning is partitioning applied to an InfoCube that is already partitioned and has loaded data. Over a period of time, due to data archiving, the existing partitions may contain little or no data, which is when repartitioning becomes useful.
You can access repartitioning in the Data Warehousing Workbench via Administration > context menu of your InfoCube.
5.3.6.1. REPARTITIONING - 3 TYPES:
A) Complete repartitioning
B) Adding partitions to an E fact table that is already partitioned
C) Merging empty or almost empty partitions of an E fact table that is already partitioned

5.3.7. REPARTITIONING - LIMITATIONS - ERRORS
SQL Server 2005 partitioning limit issue: an error appears in SM21 every minute once the limit on the number of partitions per table in SQL Server 2005 (1000) is reached.

5.4. COMPRESSION OR COLLAPSE
Compression reduces the number of records by combining records with the same key that have been loaded in separate requests.
Compression is critical, because compressed data can no longer be deleted from the InfoCube using its request IDs. You must be certain that the data loaded into the InfoCube is correct.
A user-defined partition only affects the compressed E fact table.
By default the F fact table contains the data, and SAP allocates a request ID for each posting made. Using the request ID we can delete or select data.
The E fact table is compressed and the F fact table is uncompressed. When you compress, data from the F fact table is transferred to the E fact table and all request IDs are lost (deleted/set to null).
After compression, the space used by the E fact table is considerably less than that of the F fact table.
The F fact table (uncompressed) uses bitmap indexes; the E fact table (compressed) uses B-tree indexes.
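Compression can be illustrated with plain dictionaries: records with the same characteristic key are combined and the request ID is dropped (a simplified sketch, not the actual E/F table layout):

```python
from collections import defaultdict

# Uncompressed F fact table: each load keeps its request ID.
f_table = [
    {"request_id": 101, "material": "M1", "quantity": 10},
    {"request_id": 102, "material": "M1", "quantity": 5},
    {"request_id": 102, "material": "M2", "quantity": 7},
]

def compress(records):
    """Combine records with the same key; request IDs are lost."""
    e_table = defaultdict(int)
    for rec in records:
        e_table[rec["material"]] += rec["quantity"]  # request_id dropped
    return dict(e_table)

print(compress(f_table))  # {'M1': 15, 'M2': 7}
```

After this step the two requests for M1 can no longer be told apart, which is exactly why deletion by request ID is impossible after compression.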

5.5. INDEX/INDICES
PRIMARY INDEX
The primary Index is created automatically when the table is created in the database.
SECONDARY INDEX(Both Bitmap & B-Tree are secondary indices)
Bitmap indexes are created by default on each dimension column of a fact table
& B-Tree indices on ABAP tables.

5.6. RECONSTRUCTION
Reconstruction is the process by which you load data into the same cube/ODS or a different cube/ODS from the PSA. The main purpose: after requests have been deleted (for example by compression/collapse), if we need those requests again, old or new, we don't have to go back to the source system or flat files; we can get them from the PSA.
Reconstruction of a cube is a common requirement and is needed when:
1) The structure of the cube changes: deletion of characteristics/key figures, or new characteristics/key figures that can be derived from existing ones
2) Update rules change
3) Master data was missing and the request was manually turned green; once the master data has been maintained and loaded, the request(s) should be reconstructed.

5.6.1. KEY POINTS TO REMEMBER WHILE GOING FOR RECONSTRUCTION
• Reconstruction must occur during posting-free periods.
• Users must be locked.
• Terminate all scheduled jobs that affect the application.
• Deactivate the start of the RMBWV3nn update report.

5.6.2. WHY DO ERRORS OCCUR IN RECONSTRUCTION?
Errors occur due to document postings made during the reconstruction run, which display incorrect values in BW because the before- and after-image logic no longer matches.

5.6.3. STEPS FOR RECONSTRUCTION
Transaction codes:
LBWE  : LO DATA EXTRACTION: CUSTOMIZING COCKPIT
LBWG  : DELETE CONTENTS OF SETUP TABLES
LBWQ  : DELTA QUEUED
SM13   : UPDATE REQUESTS/RECORDS
SMQ1  : CLEAR EXTRACTOR QUEUES
RSA7  : BW DELTA QUEUE MONITOR
SE38/SA38  : DELETE UPDATE LOG

STEPS:
1. Mandatory - Lock the users.
2. Mandatory - The reconstruction (set-up) tables for the application must be empty:
 enter transaction LBWG with application = 11 for SD sales documents.
3. Depending on the selected update method, check below queues:
 SM13 – serialized or un-serialized V3 update
 LBWQ – Delta queued
 Start updating the data from the Customizing Cockpit (transaction LBWE) or
 start the corresponding application-specific update report RMBWV3nn (nn = application  number) directly  in transaction SE38/SA38 .
4. Enter RSA7 and clear the BW delta queues if they still contain data.
5. Load delta data from R/3 to BW
6. Start the reconstruction for the desired application.
 If you are carrying out a complete reconstruction, delete the contents of the  corresponding data targets in  your BW (cubes and ODS objects).
7. Use Init request (delta initialization with data transfer) or a full upload to load the data  from the reconstruction into BW.
8. Run the RMBWV3nn update report again.

5.6.4. ERRORS ON RECONSTRUCTION
Below you can see various errors on reconstruction. I have read the SAP Help website and SCN and summarized them to make the concepts easy to understand.
ERROR 1: After completing the reconstruction, duplicate documents appear. Why?
Solution: The reconstruction programs write data additively into the set-up tables.
If a document is entered twice from the reconstruction, it also appears twice in the set-up table. Therefore, the reconstruction tables may contain the same data from your current reconstruction and from previous reconstruction runs (for example, tests). If this data is loaded into BW, you will usually see multiple values in the queries (exception: Key figures in an ODS object whose update is at “overwrite”).

ERROR 2: Incorrect data in BW, for individual documents for a period of reconstruction run. Why?
Solution: Documents were posted during the reconstruction.
Documents created during the reconstruction run then exist in the reconstruction tables as well as in the update queues. This results in the creation of duplicate data in BW.
Example: Document 4711, quantity 15
Data in the PSA:
ROCANCEL DOCUMENT QUANTITY
‘ ‘ 4711 15 delta, new record
‘ ‘ 4711 15 reconstruction
Query result:
4711 30
Documents that are changed during the reconstruction run display incorrect values in BW because the logic of the before and after images no longer match.
Example: Document 4712, quantity 10, is changed to 12.
Data in the PSA:
ROCANCEL DOCUMENT QUANTITY
X 4712 10- delta, before image
‘ ‘ 4712 12 delta, after image
‘ ‘ 4712 12 reconstruction
Query result:
4712 14
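The duplicate-data effect in Error 2 can be reproduced by summing the PSA rows, where ROCANCEL = 'X' marks a before image that reverses the quantity (a simplified sketch of the delta logic):

```python
def query_quantity(psa_rows):
    """Sum quantities; a before image (ROCANCEL='X') carries a negative sign."""
    total = 0
    for rocancel, document, quantity in psa_rows:
        total += -quantity if rocancel == "X" else quantity
    return total

# Document 4712: changed from 10 to 12 during the reconstruction run.
rows = [
    ("X", 4712, 10),   # delta, before image -> counts as -10
    (" ", 4712, 12),   # delta, after image
    (" ", 4712, 12),   # reconstruction (duplicate!)
]
print(query_quantity(rows))  # 14 instead of the correct 12
```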

ERROR 3: After you perform the reconstruction and restart the update, you find duplicate documents in BW.
Solution: The reconstruction ignores the data in the update queues. A newly-created document is in the update queue awaiting transmission into the delta queue. However, the reconstruction also processes this document because its data is already in the document tables. Therefore, you can use the delta initialization or full upload to load the same document from the reconstruction and with the first delta after the reconstruction into BW.
After you perform the reconstruction and restart the update, you find duplicate documents in BW.
Solution: The same as point 2; there, the document is in the update queue, here, it is in the delta queue. The reconstruction also ignores data in the delta queues. An updated document is in the delta queue awaiting transmission into BW. However, the reconstruction processes this document because its data is already contained in the document tables. Therefore, you can use the delta initialization or full upload to load the same document from the reconstruction and with the first delta after the reconstruction into BW.

ERROR 4: Document data from the time of the delta initialization request is missing from BW.
Solution: The RMBWV3nn update report was not deactivated. As a result, data from the update queue LBWQ or SM13 can be read while the data of the initialization request is being uploaded. However, since no delta queue (yet) exists in RSA7, there is no target for this data and it is lost.

5.7. ROLLUP
Rollup loads newly added requests into the aggregates of an InfoCube whenever new data is loaded.

5.8. LINE ITEM DIMENSION/DEGENERATE DIMENSION
If the size of a dimension of a cube is more than 20% of the fact table, we define that dimension as a line item dimension.
Ex: the sales document number dimension in a sales cube.
A sales cube has the sales document number, so the dimension size and the fact table size will be roughly the same; with the added overhead of lookups for DIMIDs/SIDs, performance becomes very slow.
By flagging it as a line item dimension, the system puts the SID in the fact table instead of a DIMID for the sales document number.
This avoids one lookup into the dimension table, which is not created in this case. The advantage is that you not only save space because no dimension table is created, but also that a join is made between only two tables, fact and SID table (diagram 3), instead of three tables: fact, dimension and SID tables (diagram 2).
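The 20% rule of thumb can be expressed as a tiny check (a hypothetical helper; in practice the decision is made in InfoCube maintenance, not in code):

```python
def is_line_item_candidate(dim_rows: int, fact_rows: int,
                           threshold: float = 0.20) -> bool:
    """Flag a dimension as a line item candidate if its table would
    hold more than `threshold` of the fact table's row count."""
    return dim_rows > threshold * fact_rows

# A sales-document dimension with almost one entry per fact row:
print(is_line_item_candidate(dim_rows=950_000, fact_rows=1_000_000))  # True
print(is_line_item_candidate(dim_rows=50_000, fact_rows=1_000_000))   # False
```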

Below image is for illustration purpose only( ESS Extended Star Schema)

Dimension Table, DIMID=Primary Key
Fact Table, DIMID-Foreign Key
Dimension Table Links Fact Table And A Group Of Similar Characteristics
Each Dimension Table Has One DIMID & 248 Characteristics In Each Row

5.8.1. LINE ITEM DIMENSION ADVANTAGES:
Saves space by not creating Dimension Table

5.8.2. LINE ITEM DIMENSION DISADVANTAGES:• Once a Dimension is flagged as Line Item, You cannot ass additional Characteristics.
• Only one characteristic is allowed per Line Item Dimension & for (F4) help, the Master Data is displayed, which takes more time.

5.9. HIGH CARDINALITY
A high cardinality dimension is one with a very large number of potential occurrences; if a dimension exceeds roughly 10% of the size of the fact table, you can flag it as high cardinality, and the database is adjusted accordingly.
A B-tree index is then used rather than a bitmap index; in general, if the cardinality is expected to exceed one fifth of the fact table, it is advisable to check this flag.
NOTE: SAP converts from a bitmap index to a B-tree index if we flag a dimension as high cardinality.

6. INFOCUBE DESIGN ALTERNATIVES:
Refer: SAP R/3 BW Step-by-Step Guide by Biao Fu  & Henry Fu
InfoCube design alternatives help us deal with changes to the InfoCube over time, for example changes in office/region/sales representative assignments.
6.1. ALTERNATIVE I   :  TIME DEPENDENT NAVIGATIONAL ATTRIBUTES
6.2. ALTERNATIVE II  :  DIMENSION CHARACTERISTICS METHOD
6.3. ALTERNATIVE III  : TIME DEPENDENT ENTIRE HIERARCHIES
6.4. OTHER ALTERNATIVE:
6.4.1. COMPOUND ATTRIBUTE
6.4.2. LINE ITEM DIMENSION

7. FEW QUESTIONS ON INFOCUBES
What are InfoCubes?
What is the structure of InfoCube?
What are InfoCube types?
Are the InfoCubes DataTargets? How?
What are virtual Cubes(Remote Cubes)?
How many Cubes you had designed?
What are the advantages of InfoCube?
Which cube do SAP implements?
What are InfoCube tables?
What are Sap Defined Dimensions?
How many tables are formed when you activate the InfoCube structure?
What are the tools or utilities of an InfoCube?
What is meant by table partitioning of an InfoCube?
What is meant by Compression of an InfoCube
Do you go for partitioning or Compression?
Advantages and Disadvantages of an InfoCube partitioning?
What happens to E-Fact Table and F Fact Table if you make partition on an InfoCube?
Why do u go for partitioning?
What is Repartitioning?
What are the types of Repartitioning?
What is Compression? Why you go for Compression?
What is Reconstruction? Why you go for Reconstruction?
What are the mandatory steps to do effective error free reconstruction, while going Reconstruction?
What are the errors occur during Reconstruction?
What is Rollup of an InfoCube?
How can you measure the InfoCube size?
What is Line Item Dimension?
What is Degenerated Dimension?
What is High Cardinality?
How can you analyze that the cube as a LineItem Dimension or HighCardinality?
What are the InfoCube design alternatives?
Can you explain the alternative time dependent navigational attributes in InfoCube design?
Can you explain the alternative dimension characteristics in InfoCube design?
Can you explain the alternative time dependent entire hierarchies in InfoCube design?
What are the other techniques of InfoCube design alternatives?
What is Compound Attribute?
What is LineItem Dimension? Will it affect designing an InfoCube?
What is the maximum number of partitions you can create on an InfoCube?
What is LISTSCHEMA?
I want to see the tables of an InfoCube. How? Is there any Transaction Code?
When are the InfoCube tables created?
Are the tables created after activating or after saving the InfoCube structure?
Have you implemented a RemoteCube? Can you explain the scenario?
Can you consider InfoCube as Star Schema or Extended Star Schema?
Is Repartitioning available in BW 3.5 or BI 7.0? Why?
On what basis you assign Characteristics to Dimensions?

By:
Chandiraban singu 
sdn.sap.com

Interviewers and the questions will not remain the same, but you can find the pattern.

Brief of your profile
Brief of what you done in the project
Your challenging and complex situations
Problems you faced regularly and what you did to prevent them permanently
Interviewers may pose a complex or recent situation for your analysis.

Someone may also ask about:
Your system landscape
System architecture
Release management
Team size, org structure, ...

If your experience includes production support, expect questions about your roles, authorizations, and commonly faced errors.
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/b0c1d94f-b825-2c10-15ae-ccfc59acb291

About data source enhancement
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/00c1f726-1dc2-2c10-f891-ddfbffdb1a46

About data flow during delta
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f03da665-bb6f-2c10-7da7-9e8a6684f2f9


If your experience includes implementation:
Modules which you have implemented
Methodology adopted
https://weblogs.sdn.sap.com/pub/wlg/13745
Approach to implementation
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8917
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8920
Testing system
Business scenario
How did you do the data modelling: why a standard LO DataSource? Why a DSO? Why this many layers? ...
Documentation: what do your functional spec and technical spec templates contain, and what information goes into them?

Design a SAP NetWeaver - Based System Landscape
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50a9952d-15cc-2a10-84a9-fd9184f35366
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/8877

BI - Soft yet Hard Challenges
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/9068

Best Practice for a new BI project
https://www.sdn.sap.com/irj/sdn/thread?threadID=775458&tstart=0

Guidelines to Make Your BI Implementations Easier
http://www.affine.co.uk/files/Guidelines%20to%20Make%20Your%20BI%20Implementations%20Easier.pdf


Specific bw interview questions

https://www.sdn.sap.com/irj/scn/advancedsearch?query=SAP+BW+INTERVIEW+QUESTIONS&cat=sdn_all
200 BW Questions and Answers for INTERVIEWS
http://sapdocs.info/sap-overview/sap-interview-questions/
http://www.erpmastering.com/bwfaq.htm
http://www.allinterview.com/showanswers/33349.html
http://searchsap.techtarget.com/generic/0,295582,sid21_gci1182832,00.html
http://prasheelk.blogspot.com/2008_05_12_archive.html

Best of luck for your interviews. Be clear about what you have done.
http://saptutions.com/SAPBW/BW_openhub.asp
http://www.scribd.com/doc/6343052/BW-EXAM


 

LO COCKPIT STEP BY STEP

Here is LO Cockpit Step By Step
LO EXTRACTION
- Go to Transaction LBWE (LO Customizing Cockpit)
1). Select Logistics Application
       SD Sales BW
            Extract Structures
2). Select the desired Extract Structure and deactivate it first.
3). Give the Transport Request number and continue
4). Click on 'Maintenance' to maintain the Extract Structure
       Select the fields of your choice and continue
             Maintain DataSource if needed
5). Activate the extract structure
6). Give the Transport Request number and continue
- Next step is to Delete the setup tables
7). Go to T-Code SBIW
8). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Delete the content of Setup tables (T-Code LBWG)
vi. Select the application (01 – Sales & Distribution) and Execute
- Now, Fill the Setup tables
9). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Filling the Setup tables
vi. Application-Specific Setup of statistical data
vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
        Specify a run name, time and date (use a future date)
             Execute
- Check the data in Setup tables at RSA3
- Replicate the DataSource
Use of setup tables:
You fill the setup tables in the R/3 system (the setup table maintenance is in SBIW) and extract that data to BW; after initializing the extractor, you can do delta extractions.
Full loads are always taken from the setup tables.
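As a quick sanity check before replicating, the setup-table row count can be inspected directly. This is a minimal ABAP sketch, not a standard SAP program: the table name MC11VA0ITMSETUP (application 11, item level) is an assumption based on the usual MC<structure>SETUP naming convention, so verify the exact name in your system via SE11 before running.

```abap
* Hedged sketch: count the rows collected in an SD setup table.
* Table name MC11VA0ITMSETUP is assumed (application 11, item level) -
* check the exact name in your system via SE11 first.
DATA: lv_count TYPE i.

SELECT COUNT(*) FROM mc11va0itmsetup INTO lv_count.

WRITE: / 'Rows in setup table MC11VA0ITMSETUP:', lv_count.
```

A count of zero after running the setup job (OLI7BW) usually means the setup run was restricted too narrowly or has not finished; RSA3 remains the standard check for the actual extractor output.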





1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local so as to analyze any filters that can be removed and moved to the global definition of the query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries---for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
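Before applying the tips above, it helps to know which queries are actually slow. Besides ST03N and RSRT, the OLAP statistics can be read directly. The sketch below is an assumption-laden illustration: RSDDSTAT holds query runtime statistics in BW 3.x (BI 7.0 moved them to RSDDSTAT_OLAP and related tables), and the field names INFOCUBE / QTIMEOLAP / QTIMEDB should be verified in SE11 for your release.

```abap
* Hedged sketch: pull query runtime statistics from the BW 3.x table
* RSDDSTAT and sort by database time. Table and field names
* (RSDDSTAT, INFOCUBE, QTIMEOLAP, QTIMEDB) are assumptions for your
* release - verify them in SE11 (BI 7.0 uses RSDDSTAT_OLAP instead).
TYPES: BEGIN OF ty_stat,
         infocube  TYPE rsddstat-infocube,
         qtimeolap TYPE rsddstat-qtimeolap,  " OLAP processing time
         qtimedb   TYPE rsddstat-qtimedb,    " database read time
       END OF ty_stat.

DATA: lt_stat TYPE TABLE OF ty_stat.

SELECT infocube qtimeolap qtimedb
  FROM rsddstat
  INTO TABLE lt_stat
  UP TO 1000 ROWS.

* Sort client-side so the worst database times come first.
SORT lt_stat BY qtimedb DESCENDING.
```

The same numbers are available more conveniently through the BW statistics technical content, but a direct read like this is handy for ad hoc analysis.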

By: leela naveen

These are questions I faced. If you have any screenshots for any of the questions, please provide those as well.
1. We have standard InfoObjects given in SAP; why did you create Z InfoObjects? Can you tell me the business scenario?
2. We have standard InfoCubes given in SAP; why did you create Z InfoCubes? Can you tell me the business scenario?
3. In a key figure, what is meant by cumulative value, non-cumulative value with change, and non-cumulative value with inflow and outflow?
4. When you create an InfoObject it offers "reference" and "template"; what are they?
5. What is meant by a compounding attribute? Tell me the scenario.
6. I have 3 cubes for which I created a MultiProvider and a report, but I didn't get data in the report. What happened?
7. I have 10 cubes in a MultiProvider but I want data from only 1 cube. What do you do?
8. What is meant by safety upper limit and safety lower limit in deltas? Tell me one by one for time stamp, calendar day and numeric pointer.
9. I have 80 queries; how can I find which query is taking so much time, and how can I solve it?
10. During compression all requests become zero; which data is compressed? Tell me in detail.
11. What is meant by a flat aggregate? Explain in detail.
12. I created a process chain; on the 1st day it took 10 minutes, after the 1st week it took 1 hour, and later it took 1 day with the same loads. What happened, and how can you reduce the loading time?
13. How can you find the cube size? Show me in detail if you have screenshots.
14. Where can we find transport return codes?
15. I have a report that takes a long time; how can I rectify it?
16. What is an offset? Can we create queries without offsets?
17. I told him I have nearly 600 process chains; he asked how I monitor them. I said I check RSPCM and BWCCMS. He asked whether there are any third-party tools for this. Are there such tools? Tell me what they are.
18. How do clients access the reports?
19. I don't have master data; is it still possible to load transaction data? If it is possible, are there any other steps to do it?
20. What is a structure in reporting?
21. Based on which objects did you create the extended star schema?
22. What is a line item dimension? Tell me briefly.
23. What is high cardinality? Tell me briefly.
24. A process chain is running; I have to stop the process for 1 hour and then re-run the process from where it stopped. How?
Can I use aggregation in a MultiProvider?
25. What is direct scheduling and what is a meta chain?
26. Which patch are you using presently? How can I find out which patch it is?
27. How can we increase the data packet size?
28. Why are hierarchies not there in BI?
29. Is remodeling applied only to InfoCubes? Why not to DSO/ODS?
30. In jump queries, can we jump to any transaction such as RSA1, SM37 etc.? Is it possible or not?
31. Why does ODS activation fail? What types of failures are there? What are the steps to handle them?
32. A process chain is running and the InfoPackage gets an error. Without processing the error of that InfoPackage, can you run the dependent variants? Is it possible?

Give me some performance, loading, or support issues:
reporting errors, loading errors, process chain errors.
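For question 17 above (monitoring ~600 process chains without opening RSPCM for each one), the chain run logs can also be read from the log table directly. This is a hedged sketch: RSPCLOGCHAIN is the standard chain-run log table (the chain definition itself sits in RSPCCHAIN), but the chain name ZMY_CHAIN is a placeholder and the field names CHAIN_ID / DATUM / ZEIT should be verified in SE11.

```abap
* Hedged sketch: list the most recent runs of one process chain from
* the chain log table RSPCLOGCHAIN. The chain name ZMY_CHAIN is a
* placeholder, and the field names CHAIN_ID / DATUM / ZEIT are
* assumptions - verify them in SE11 before use.
DATA: lt_log TYPE TABLE OF rspclogchain.

SELECT * FROM rspclogchain
  INTO TABLE lt_log
  WHERE chain_id = 'ZMY_CHAIN'.

* Most recent runs first.
SORT lt_log BY datum DESCENDING zeit DESCENDING.
```

Looping over all chain IDs from RSPCCHAIN instead of a single name would give a simple custom monitor across the whole landscape.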





BI Certification Info links

Links and information about the certifications offered by SAP in the BI Space. Common questions regarding BI certification are addressed and relevant links provided.
There are lots of questions floating around related to BI certification. This wiki is for those who have already decided to take the exam. If you are still doubtful, check here: https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/9941.


SAP provides three levels of Certification in BI.
1. C_TBW45_04S
Certification Syllabus SAP Consultant Certification Solution Consultant SAP NetWeaver 2004s - Business Intelligence
For more details, check here: http://www.sap.com/services/education/certification/certroles/certificationtest.epx?context=FFC760B8923D16BB5150DAE63E7C1A6B331AF0B9E3A8F73CE3A9B7046E051044503600C911DBA13DCE978D3AC9057626D2B68111A7CD2D707E2EEC31213097E46EB790DD0106435EE0756F7B22F3FA4B4FF0645C06954BF3A150E023B4164DA2C3943DE02E599735DB9B4334B30B38FF20A1DC779D8F55E5F7A6893BDBFA38B94CF455E2A3E0E6851014966C90C80E173937CF7C2372A6FC%7cDA891B3C877030D9765B85CAE6AC82FC3EB6BC7DA10B7335


2. C_TBW45_70 - SAP Certified Application Associate - Business Intelligence with SAP NetWeaver 7.0
check here
http://www.sap.com/services/education/catalog/netweaver/certificationtest.epx?context=FFC760B8923D16BB5150DAE63E7C1A6B331AF0B9E3A8F73CE3A9B7046E051044503600C911DBA13DCE978D3AC9057626D2B68111A7CD2D707E2EEC31213097E46EB790DD0106435EE0756F7B22F3FA4B4FF0645C06954BF39AD9826DEEE08116%7cDA891B3C877030D9C4305205DF5BCE39


3. P_BIE_70 - SAP Certified Application Professional - BI Enterprise DataWarehousing with SAP Net Weaver 7.0
check here:
http://www.sap.com/services/education/certification/certificationrole.epx?context=%5b%5bROLE_P_BIE_70%5d%5d%7c

These certification exams vary in their level of difficulty and in their pattern. For example, C_TBW45_04S has 80 questions to be answered in 3 hours, whereas C_TBW45_70 has 90 questions in 3 hours. Also, you will get partial credit in the former but no such privilege in the latter exam.
There are no better study materials than the TBWs. To check which TBWs to read, see the syllabus here: http://www.sap.com/services/education/certification/certificationtest.epx?context=%5b%5bC_TBW45_04S%5d%5d%7c


These can be downloaded from the Internet. Besides the TBWs, there are some important links, which are listed below:
Simulation Test:
http://www.sapbw.info/SAP-BW-Certification-Test-Simulation.html
http://www.hometutorials.com/SAP-BW
Some Important Documents on SDN:
https://www.sdn.sap.com/irj/sdn/advancedsearch?query=+BI+7.0+documents+&cat=sdn_all
Threads Related to Certification:
1. https://www.sdn.sap.com/irj/sdn/thread?threadID=857655&tstart=0
2. https://www.sdn.sap.com/irj/sdn/thread?messageID=5582263#5582263
3. https://www.sdn.sap.com/irj/sdn/thread?messageID=6053753#6053753
Miscellaneous Links:
http://www.psimedia.ws/
http://www.sap.com/uk/services/education/courses/bw.epx
http://www50.sap.com/useducation/curriculum/print.asp?jc=1&rid=285
http://www50.sap.com/useducation/curriculum/print.asp?jc=1&rid=458
http://www50.sap.com/useducation/certification/examcontent.asp
http://www50.sap.com/useducation/certification/curriculum.asp?rid=506&vid=5
http://www50.sap.com/useducation/certification/curriculum.asp?rid=420
http://csc-studentweb.lrc.edu/swp/Berg/BB_index_main.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c27a9990-0201-0010-a393-e6e8bed520fe
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/79f6d190-0201-0010-ec8b-810a969028ec
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/620b406d-0601-0010-3b9a-ac51c445860f
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5f243d6d-0601-0010-2aae-abe0a4dcfadb
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f0d16261-80c0-2910-149a-97b017a900e4
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20b0d390-0201-0010-408c-f27f82427e23
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9be229f6-0901-0010-8288-824675320301
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/508d0387-001d-2a10-a394-faa4e57f6751