
INFOCUBES - ULTIMATE SAP BIW CONCEPTS EXPLANATION - by "Shrikanth Muthyala"

INFOCUBE
1.  INFOCUBE-INTRODUCTION:
2.  INFOCUBE - STRUCTURE
3.  INFOCUBE TYPES
3.1  Basic Cube: 2 Types
3.1.1  Standard InfoCube
3.1.2  Transactional InfoCube
3.2  Remote Cubes: 3 Types
3.2.1  SAP Remote Cube
3.2.2  General Remote Cube
3.2.3  Remote Cube With Services
4.  INFOCUBE TABLES - F, E, P, T, U, N
5.  INFOCUBE-UTILITIES
5.1  PARTITIONING
5.2  ADVANTAGES OF PARTITIONING:
5.3  CLASSIFICATION OR TYPES OF PARTITIONING
5.3.1  PHYSICAL PARTITIONING/TABLE/LOW LEVEL
5.3.2  LOGICAL PARTITIONING/HIGH LEVEL PARTITIONING
5.3.3  EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCPER
5.3.4  STEPS FOR PARTITIONING
5.3.5  PARTITION ERRORS
5.3.6  REPARTITIONING
5.3.6.1  REPARTITIONING TYPES
5.3.7  REPARTITIONING - LIMITATIONS - ERRORS
5.4  COMPRESSION OR COLLAPSE
5.5  INDEX/INDICES
5.6  RECONSTRUCTION
5.6.1  KEY POINTS TO REMEMBER WHILE GOING FOR RECONSTRUCTION
5.6.2  WHY ERRORS OCCUR IN RECONSTRUCTION
5.6.3  STEPS FOR RECONSTRUCTION
5.6.4  ERRORS ON RECONSTRUCTION
5.7  ROLLUP
5.8  LINE ITEM DIMENSION/DEGENERATE DIMENSION
5.8.1  LINE ITEM DIMENSION ADVANTAGES
5.8.2  LINE ITEM DIMENSION DISADVANTAGES
5.9  HIGH CARDINALITY
6.  INFOCUBE DESIGN ALTERNATIVES
6.1  ALTERNATIVE I :  TIME DEPENDENT NAVIGATIONAL ATTRIBUTES
6.2  ALTERNATIVE II :  DIMENSION CHARACTERISTICS
6.3  ALTERNATIVE III : TIME DEPENDENT ENTIRE HIERARCHIES
6.4  OTHER ALTERNATIVES:
6.4.1  COMPOUND ATTRIBUTE
6.4.2  LINE ITEM DIMENSION
7.  FEW QUESTIONS ON INFOCUBES


1. INFOCUBE-INTRODUCTION:
The central objects upon which reports and analyses in BW are based are called InfoCubes, and they can be seen as InfoProviders. An InfoCube is a multidimensional data structure: a set of relational tables that contain InfoObjects.

2. INFOCUBE - STRUCTURE
The structure of an InfoCube follows the Extended Star Schema (ESS)/Snowflake Schema, which contains:
• 1 Fact Table with key figures
• n Dimension Tables with characteristics
• n Surrogate ID (SID) tables, which link the master data tables & hierarchy tables
• n Master Data Tables
Master data tables can be time dependent and can be shared by multiple InfoCubes. A master data table contains attributes that are used for presenting and navigating reports in the SAP BW system.

3. INFOCUBE TYPES:

• Basic Cubes reside on the same database
• Remote Cubes reside on a remote system
• SAP Remote Cube resides on another R/3 system and uses SAPI
• General Remote Cube resides on a non-SAP system and uses BAPI
• Remote Cube with Services resides on any remote system, SAP or non-SAP

3.1. BASIC CUBES: 2 TYPES: These are physically available in the same BW system in which their metadata exists.
3.1.1. STANDARD INFOCUBE: FREQUENTLY USED
Standard InfoCubes are common & are optimized for read access; they have update rules that enable transformation of the source data, and loads can be scheduled.

3.1.2. TRANSACTIONAL INFOCUBE:
Transactional InfoCubes are not frequently used; they are used only by certain applications such as SEM & APO. Data is written directly into such cubes, bypassing update rules.

3.2. REMOTE CUBES: 3 TYPES:
Remote cubes reside on a remote system; only their metadata lives in the BW system, which is why they are considered Virtual Cubes. These are the remote cube types:

3.2.1. SAP REMOTE CUBE: The cube resides on another SAP R/3 system & communication is via the Service API (SAPI).

3.2.2. GENERAL REMOTE CUBE: The cube resides on a non-SAP source system & communication is via BAPI.

3.2.3. REMOTE CUBE WITH SERVICES: The cube resides on any remote system, SAP or non-SAP, & is accessed via a user-defined function module.

4. INFOCUBE TABLES - F, E, P, T, U, N
Transaction Code: LISTSCHEMA
LISTSCHEMA > enter the name of the InfoCube 0SD_C03 & execute. Upon execution the primary (fact) table is displayed as an unexpanded node. Expand the node and see the screen.
These are the tables we can see under the expanded node:


5. INFOCUBE-UTILITIES
5.1. PARTITIONING
Partitioning is the method of dividing a table into multiple smaller, independent or related segments (either column-wise or row-wise) based on the available fields, which enables quick access to the intended values in the table.
To partition a dataset, at least one of the two partitioning criteria, 0CALMONTH or 0FISCPER, must be present in the InfoCube.

5.2. ADVANTAGES OF PARTITIONING:
• Partitioning allows you to perform parallel reads of multiple partitions, speeding up query execution.
• Reporting performance is enhanced because it is easier to search in smaller tables, and maintenance becomes much easier.
• Old data can be quickly removed by dropping a partition.
You can set up partitioning in InfoCube maintenance: Extras > Partitioning.

5.3. CLASSIFICATION OR TYPES OF PARTITIONING
5.3.1. PHYSICAL PARTITIONING/TABLE/LOW LEVEL
Physical partitioning, also called table/low-level partitioning, is restricted to time characteristics and is done at the database level, only if the underlying database allows it.
Ex: Oracle, Informix, IBM DB2/390
A common way of partitioning is to create ranges: the InfoCube can be partitioned on a time slice using time characteristics such as the following.
• FISCAL YEAR (0FISCYEAR)
• FISCAL YEAR VARIANT (0FISCVARNT)
• FISCAL YEAR/PERIOD (0FISCPER)
• POSTING PERIOD (0FISCPER3)
By this physical partitioning old data can be quickly removed by dropping a partition.
Note: No partitioning in BI 7.0, except on DB2 (which supports it).

5.3.2. LOGICAL PARTITIONING/HIGH LEVEL PARTITIONING
Logical partitioning is done at the MultiCube (several InfoCubes joined into a MultiCube) or MultiProvider level, i.e. at the DataTarget level. In this case related data are separated & joined into a MultiCube.
Here the restriction to time characteristics does not apply; you can also partition on Plan & Actual data, Regions, Business Area, etc.
Advantages:
• A MultiCube uses parallel sub-queries, ultimately improving query performance.
• Logical partitioning does not consume any additional database space.
• When a sub-query hits a constituent InfoProvider, a reduced set of data is read from a smaller InfoCube instead of one large InfoCube, even in the absence of a MultiProvider.

5.3.3. EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCPER
There are two partitioning criteria:
• calendar month (0CALMONTH)
• fiscal year/period (0FISCPER)
A dataset can be partitioned using only one of the two criteria at a time, and to partition at all, at least one of the two InfoObjects must be contained in the InfoCube.
If you want to partition an InfoCube using the fiscal year/period (0FISCPER) characteristic, you have to set the fiscal year variant characteristic to a constant.
After activating the InfoCube, the fact table is created on the database with a number of partitions corresponding to the value range. You can set the value range yourself.
Partitioning InfoCubes using the characteristic 0CALMONTH:
Choose the partitioning criterion 0CALMONTH and give the value range as
From = 01.1998
To = 12.2003
How many partitions are created after partitioning?
6 years * 12 months + 2 = 74 partitions are created, where the 2 extra partitions hold values that lie outside the range, i.e. < 01.1998 or > 12.2003.
You can also determine the maximum number of partitions created on the database for the fact table of the InfoCube.
Suppose you choose 30 as the maximum number of partitions. The value range yields 6 years * 12 calendar months + 2 marginal partitions (before 01.1998, after 12.2003) = 74 single values. Since this exceeds the maximum, the system groups three calendar months at a time into one partition (4 quarterly partitions = 1 year), so 6 years * 4 partitions/year + 2 marginal partitions = 26 partitions are created on the database.
The performance gain is only realized if the time dimension of the partitioned InfoCube is consistent: with partitioning via 0CALMONTH, all values of the 0CAL* characteristics of a data record in the time dimension must fit together.
Note: You can only change the value range when the InfoCube does not contain any data.
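As a sanity check, the partition arithmetic above can be sketched in a few lines (an illustration only, not SAP code; the function names are invented, and the grouping rule follows the example in the text):

```python
# Sketch of the partition arithmetic described above: a range of calendar
# months plus 2 marginal partitions, optionally grouped (1, 2, 3, ... months
# per partition) so the total stays under a maximum.

def month_count(frm, to):
    """Number of calendar months in the inclusive range 'MM.YYYY'..'MM.YYYY'."""
    fm, fy = (int(x) for x in frm.split("."))
    tm, ty = (int(x) for x in to.split("."))
    return (ty - fy) * 12 + (tm - fm) + 1

def partitions(frm, to, max_partitions=None):
    """Single values + 2 marginal partitions; group months until it fits."""
    months = month_count(frm, to)
    group = 1
    while max_partitions and -(-months // group) + 2 > max_partitions:
        group += 1                     # e.g. 3 months -> quarterly partitions
    return -(-months // group) + 2     # ceiling division + 2 marginals

print(partitions("01.1998", "12.2003"))                     # 74
print(partitions("01.1998", "12.2003", max_partitions=30))  # 26
```

With the maximum set to 30, the sketch lands on the same three-month grouping the text describes: 24 quarterly partitions plus the 2 marginal ones.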

PARTITIONING INFOCUBES USING THE CHARACTERISTIC 0FISCPER
Mandatory here: set the value of the 0FISCVARNT characteristic to a constant.

5.3.4. STEPS FOR PARTITIONING AN INFOCUBE USING 0CALMONTH & 0FISCPER:
Administrator Workbench
   >InfoSet maintenance
     >double click the InfoCube
        >Edit InfoCube
           >Characteristics screen
              >Time Characteristics tab
                 >Extras
                    >IC Specific Properties of InfoObject
                       >Structure-Specific Properties dialog box
                         >Specify constant for the characteristic 0FISCVARNT
                           >Continue
                              >In the dialog box enter the required details

5.3.5. Partition Errors:
Problem: F fact tables of a partitioned InfoCube have partitions that are empty, or partitions that do not have a corresponding entry in the related package dimension.
Solution 1: The report SAP_PARTITIONS_INFO_GET_DB4 helps you analyze these problems. The empty partitions of the F fact table are reported; in addition, the system issues an information message if a partition has no corresponding entry in the package dimension table (orphaned partitions).
When the affected InfoCube was compressed, a database error occurred in DROP PARTITION after the actual compression. However, this error was not reported to the application: the compression logs do not display any error messages, and the error is not reported in the developer trace (transaction SM50), the system log (transaction SM21) or the job overview (transaction SM37) either.
The application therefore assumes the data in the InfoCube is correct, but the data of the affected requests or partitions is not displayed in reporting because they have no corresponding entry in the package dimension.
Solution 2: Use the report SAP_DROP_FPARTITIONS to remove the orphaned or empty partitions from the affected F fact tables, as described in Note 1306747, to ensure that the database limit of 255 partitions per database table is not reached unnecessarily.

5.3.6. REPARTITIONING:
Repartitioning adapts the partitioning of an InfoCube that is already partitioned and already contains loaded data. It is useful, for example, when the actual data volume no longer matches what was planned when the cube was first partitioned, or when partitions hold little or no data because data has been archived over a period of time.
You can access repartitioning in the Data Warehousing Workbench via Administration > context menu of your InfoCube.
5.3.6.1. REPARTITIONING - 3 TYPES:
A) Complete repartitioning
B) Adding partitions to an E fact table that is already partitioned
C) Merging empty or almost empty partitions of an E fact table that is already partitioned

5.3.7. REPARTITIONING - LIMITATIONS - ERRORS:
SQL Server 2005 partitioning limit issue: an error is logged in SM21 every minute once the limit on the number of partitions per table in SQL Server 2005 (1000) is reached.

5.4. COMPRESSION OR COLLAPSE:
Compression reduces the number of records by combining records with the same key that have been loaded in separate requests.
Compression is critical, because compressed data can no longer be deleted from the InfoCube using request IDs. You must be certain that the data loaded into the InfoCube is correct.
A user-defined partition only affects the compressed E fact table.
By default, the F fact table holds the data, and SAP allocates a request ID for each posting made; using the request ID we can select or delete the data.
The E fact table is compressed & the F fact table is uncompressed. When you compress, data is transferred from the F fact table to the E fact table and all the request IDs are lost/deleted/set to null.
After compression, the space used by the E fact table is considerably less than that used by the F fact table.
The F fact table (uncompressed) uses bitmap indexes; the E fact table (compressed) uses B-tree indexes.
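The effect of compression can be sketched in plain Python (a hedged illustration with invented sample data and field names, not SAP code): records sharing the same dimension key are combined, key figures are summed, and the request ID is dropped.

```python
# Illustration of compression (collapse): rows from the uncompressed F fact
# table that share the same dimension key are combined into one E fact table
# row; key figures are summed and the request ID is lost.
from collections import defaultdict

# Invented sample rows: (request_id, dimension_key, quantity)
f_fact_table = [
    (1001, ("MAT_A", "PLANT_1"), 10),
    (1002, ("MAT_A", "PLANT_1"), 5),   # same key, loaded in a later request
    (1002, ("MAT_B", "PLANT_1"), 7),
]

def compress(f_rows):
    """Combine rows with the same key; the request ID is dropped entirely."""
    e_rows = defaultdict(int)
    for _request_id, key, quantity in f_rows:
        e_rows[key] += quantity        # key figures are aggregated
    return dict(e_rows)

print(compress(f_fact_table))
# {('MAT_A', 'PLANT_1'): 15, ('MAT_B', 'PLANT_1'): 7}
```

Note how, once compressed, there is no request ID left to delete by: that is exactly why the loaded data must be verified before compressing.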

5.5. INDEX/INDICES
PRIMARY INDEX
The primary Index is created automatically when the table is created in the database.
SECONDARY INDEX (both bitmap & B-tree are secondary indices)
Bitmap indexes are created by default on each dimension column of a fact table; B-tree indices are used on ABAP tables.

5.6. RECONSTRUCTION:
Reconstruction is the process of loading data into the same or a different cube/ODS from the PSA. The main purpose: if requests have been deleted (for example after compression/collapse), we do not need to go back to the source system or flat files to collect them again; we get them from the PSA.
Reconstruction of a cube is a common requirement when:
1) The structure of the cube changes: deletion of characteristics/key figures, or new characteristics/key figures that can be derived from existing ones
2) The update rules change
3) Master data was missing and the request was manually turned green; once the master data has been maintained and loaded, the request(s) should be reconstructed.

5.6.1. KEY POINTS TO REMEMBER WHILE GOING FOR RECONSTRUCTION:
• Reconstruction must occur during posting-free periods.
• Users must be locked.
• Terminate all scheduled jobs that affect the application.
• Deactivate the start of the RMBWV3nn update report.

5.6.2. WHY DO ERRORS OCCUR IN RECONSTRUCTION?
Errors occur due to document postings made during the reconstruction run, which lead to incorrect values in BW because the logic of the before and after images no longer matches.

5.6.3. STEPS FOR RECONSTRUCTION
Transaction Codes:
LBWE  : LO DATA EXTRACTION: CUSTOMIZING COCKPIT
LBWG  : DELETE CONTENTS OF SETUP TABLES
LBWQ  : DELTA QUEUED
SM13   : UPDATE REQUESTS/RECORDS
SMQ1  : CLEAR EXTRACTOR QUEUES
RSA7  : BW DELTA QUEUE MONITOR
SE38/SA38  : DELETE UPDATE LOG

STEPS:
1. Mandatory: lock the users.
2. Mandatory: the reconstruction (setup) tables for the application must be empty.
 Enter transaction LBWG with application = 11 for SD sales documents.
3. Depending on the selected update method, check the following queues:
 SM13 – serialized or unserialized V3 update
 LBWQ – delta queued
 Start updating the data from the Customizing Cockpit (transaction LBWE), or
 start the corresponding application-specific update report RMBWV3nn (nn = application number) directly in transaction SE38/SA38.
4. Enter RSA7 & clear the delta queue if it still contains data.
5. Load the delta data from R/3 to BW.
6. Start the reconstruction for the desired application.
 If you are carrying out a complete reconstruction, delete the contents of the corresponding data targets in your BW (cubes and ODS objects).
7. Use an init request (delta initialization with data transfer) or a full upload to load the data from the reconstruction into BW.
8. Run the RMBWV3nn update report again.

5.6.4. ERRORS ON RECONSTRUCTION:
Below are various errors that occur during reconstruction. I have read the SAP Help portal and SCN and condensed the material to make the concepts easy to understand.
ERROR 1: After completing the reconstruction, repeated documents appear. Why?
Solution: The reconstruction programs write data additively into the setup tables.
If a document is entered twice by the reconstruction, it also appears twice in the setup table. Therefore, the setup tables may contain the same data from your current reconstruction and from previous reconstruction runs (for example, tests). If this data is loaded into BW, you will usually see multiplied values in the queries (exception: key figures in an ODS object whose update is set to "overwrite").

ERROR 2: Incorrect data in BW for individual documents from the period of the reconstruction run. Why?
Solution: Documents were posted during the reconstruction.
Documents created during the reconstruction run exist in the reconstruction tables as well as in the update queues, which results in duplicate data in BW.
Example: Document 4711, quantity 15
Data in the PSA:
ROCANCEL DOCUMENT QUANTITY
‘ ‘ 4711 15 delta, new record
‘ ‘ 4711 15 reconstruction
Query result:
4711 30
Documents that are changed during the reconstruction run display incorrect values in BW because the logic of the before and after images no longer match.
Example: Document 4712, quantity 10, is changed to 12.
Data in the PSA:
ROCANCEL DOCUMENT QUANTITY
X 4712 10- delta, before image
‘ ‘ 4712 12 delta, after image
‘ ‘ 4712 12 reconstruction
Query result:
4712 14
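The wrong query results above follow from simple additive arithmetic, which can be sketched as follows (a hedged illustration, not SAP code; ROCANCEL = 'X' marks a before image whose key figure is reversed, and all PSA rows per document are added up):

```python
# Why documents posted or changed during a reconstruction run double up:
# the query adds all rows per document, and ROCANCEL = 'X' (before image)
# reverses the sign of the key figure.

def query_result(psa_rows):
    totals = {}
    for rocancel, document, quantity in psa_rows:
        sign = -1 if rocancel == "X" else 1
        totals[document] = totals.get(document, 0) + sign * quantity
    return totals

# Document 4711, quantity 15, posted during the reconstruction run:
rows_4711 = [(" ", 4711, 15),   # delta, new record
             (" ", 4711, 15)]   # reconstruction
# Document 4712 changed from quantity 10 to 12 during the run:
rows_4712 = [("X", 4712, 10),   # delta, before image (counts as -10)
             (" ", 4712, 12),   # delta, after image
             (" ", 4712, 12)]   # reconstruction

print(query_result(rows_4711))  # {4711: 30} instead of 15
print(query_result(rows_4712))  # {4712: 14} instead of 12
```

The before/after image pair would cancel correctly (-10 + 12 = +2 change) if the reconstruction row were not there; the extra reconstruction row is what breaks the arithmetic.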

ERROR 3: After you perform the reconstruction and restart the update, you find duplicate documents in BW.
Solution: The reconstruction ignores the data in the update queues. A newly created document sits in the update queue awaiting transmission into the delta queue, but the reconstruction also processes this document because its data is already in the document tables. Therefore, the delta initialization or full upload can load the same document once from the reconstruction and again with the first delta after the reconstruction.
The same applies to the delta queue: the reconstruction also ignores data in the delta queues, so an updated document awaiting transmission into BW is likewise processed by the reconstruction and ends up duplicated in the same way.

ERROR 4: Document data from the time of the delta initialization request is missing in BW.
Solution: The RMBWV3nn update report was not deactivated. As a result, data from the update queue (LBWQ or SM13) can be processed while the data of the initialization request is being uploaded. Since no delta queue yet exists in RSA7, there is no target for this data and it is lost.

5.7. ROLLUP
Rollup writes newly loaded data into the aggregates of an InfoCube; whenever new data is loaded, it must be rolled up to become visible in the aggregates.

5.8. LINE ITEM DIMENSION/DEGENERATE DIMENSION
If the size of a dimension of a cube is more than 20% of the fact table, we define that dimension as a Line Item Dimension.
Ex: Sales Document Number in a dimension of a Sales Cube.
A Sales Cube carries the sales document number, so the dimension size and the fact table size will be roughly the same, and the overhead of lookups for DIMIDs/SIDs makes performance very slow.
By flagging it as a Line Item Dimension, the system puts the SID in the fact table instead of a DIMID for the sales document number.
This avoids one lookup into the dimension table; in fact, the dimension table is not created at all. The advantage is that you not only save space because no dimension table is created, but a join is made between two tables, Fact & SID (diagram 3), instead of three tables, Fact, Dimension & SID (diagram 2).

Below image is for illustration purpose only( ESS Extended Star Schema)

Dimension Table, DIMID=Primary Key
Fact Table, DIMID-Foreign Key
Dimension Table Links Fact Table And A Group Of Similar Characteristics
Each Dimension Table Has One DIMID & 248 Characteristics In Each Row
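The lookup difference between a normal dimension and a line item dimension can be sketched in plain Python (all table contents and names below are invented for illustration; this is not SAP code):

```python
# Normal dimension: the fact row carries a DIMID, so resolving the sales
# document number needs two lookups: Fact -> Dimension -> SID table.
# Line item dimension: the fact row carries the SID directly: Fact -> SID.

dimension_table = {900001: 55}        # DIMID -> SID of the sales document
sid_table = {55: "DOC-4711"}          # SID -> sales document number

fact_row_normal = {"dimid_doc": 900001, "revenue": 100}
sid = dimension_table[fact_row_normal["dimid_doc"]]   # lookup 1: dimension
print(sid_table[sid])                                 # lookup 2: SID table

fact_row_line_item = {"sid_doc": 55, "revenue": 100}
print(sid_table[fact_row_line_item["sid_doc"]])       # single lookup
```

Both prints resolve to the same document; the line item variant simply skips the intermediate dimension table, which is why that table need not exist at all.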

5.8.1. LINE ITEM DIMENSION ADVANTAGES:
Saves space by not creating Dimension Table

5.8.2. LINE ITEM DIMENSION DISADVANTAGES:
• Once a dimension is flagged as line item, you cannot add additional characteristics.
• Only one characteristic is allowed per line item dimension, & for input help (F4) the master data is displayed, which takes more time.

5.9. HIGH CARDINALITY:
A High Cardinality dimension is one that has very many potential occurrences. If the cardinality of a dimension is expected to exceed one fifth (20%) of the size of the fact table, it is advisable to set this flag; the database is then adjusted accordingly.
A B-tree index is used rather than a bitmap index, because bitmap indexes perform poorly at such cardinality.
NOTE: SAP converts the bitmap index to a B-tree index when we flag a dimension as High Cardinality.

6. INFOCUBE DESIGN ALTERNATIVES:
Refer: SAP BW: A Step-by-Step Guide by Biao Fu & Henry Fu
These design alternatives help us handle changes in the InfoCube's master data over time, e.g. a sales representative being reassigned to another office/region.
6.1. ALTERNATIVE I   :  TIME DEPENDENT NAVIGATIONAL ATTRIBUTES
6.2. ALTERNATIVE II  :  DIMENSION CHARACTERISTICS METHOD
6.3. ALTERNATIVE III  : TIME DEPENDENT ENTIRE HIERARCHIES
6.4. OTHER ALTERNATIVES:
6.4.1. COMPOUND ATTRIBUTE
6.4.2. LINE ITEM DIMENSION

7. FEW QUESTIONS ON INFOCUBES
What are InfoCubes?
What is the structure of InfoCube?
What are InfoCube types?
Are the InfoCubes DataTargets? How?
What are virtual Cubes(Remote Cubes)?
How many cubes have you designed?
What are the advantages of an InfoCube?
Which cube type does SAP implement?
What are InfoCube tables?
What are Sap Defined Dimensions?
How many tables are formed when you activate the InfoCube structure?
What are the tools or utilities of an InfoCube?
What is meant by table partitioning of an InfoCube?
What is meant by Compression of an InfoCube?
Do you go for Partitioning or Compression?
What are the advantages and disadvantages of InfoCube partitioning?
What happens to the E fact table and the F fact table if you partition an InfoCube?
Why do you go for partitioning?
What is Repartitioning?
What are the types of Repartitioning?
What is Compression? Why do you go for Compression?
What is Reconstruction? Why do you go for Reconstruction?
What are the mandatory steps for an effective, error-free reconstruction?
What errors occur during Reconstruction?
What is Rollup of an InfoCube?
How can you measure the InfoCube size?
What is Line Item Dimension?
What is Degenerated Dimension?
What is High Cardinality?
How can you analyze whether a cube has a Line Item Dimension or High Cardinality?
What are the InfoCube design alternatives?
Can you explain the alternative time dependent navigational attributes in InfoCube design?
Can you explain the alternative dimension characteristics in InfoCube design?
Can you explain the alternative time dependent entire hierarchies in InfoCube design?
What are the other techniques of InfoCube design alternatives?
What is Compound Attribute?
What is LineItem Dimension? Will it affect designing an InfoCube?
What are the maximum number of partitions you can create on an InfoCube?
What is LISTSCHEMA?
I want to see the tables of an InfoCube. How? Is there a transaction code?
When are the InfoCube tables created?
Are the tables created after activating or after saving the InfoCube structure?
Have you implemented a RemoteCube? Explain the scenario.
Can you consider an InfoCube a Star Schema or an Extended Star Schema?
Is Repartitioning available in BW 3.5 or BI 7.0? Why?
On what basis you assign Characteristics to Dimensions?

The Tech details of Standard ODS / DSO in SAP DWH

Raj
A long time ago I was sitting back thinking about the architecture of the ODS and how invaluable it has been since its inception with BW 2.0 in the BW Data Warehouse (DWH) layer. This motivated me to write this blog with the necessary tech details of the ODS - well, call it DataStore Object (DSO) as of NW04s or BI 7.0.

"An Operational Data Store object (ODS object) is used to store consolidated and cleansed data (transaction data or master data, for example) on a document level (atomic level)" - referred from SAP Docs. It describes a consolidated dataset from one or more InfoSources / transformations (7.0), as illustrated below in Fig. 1.
In this blog we will look at the Standard DataStore Object. Other types are the DataStore Object with Direct Update (Transactional ODS in 3.x) and the Write-Optimized DataStore, new with BI 7.x, which contains only an active data table and is used to manage huge data loads, for instance. Here is the link from the Help portal: Write optimised DSO.

Architecture of Standard ODS /DSO (7.x)
"ODS objects consist of three tables, as shown in the architecture graphic below" - referred from SAP Docs:

image
Figure 1: ODS Architecture - Extracted from SAP Docs

TIP: The new data status is written to the active data table in parallel with writing to the change log, taking advantage of parallel processes, which can be customized globally or at the object level.

Let's go through a scenario.
In this example we will take the master data object material at plant (0MAT_PLANT, compounded with 0PLANT) with a few attributes for demonstration purposes. Now define an ODS/DSO as below, with material and plant as the key and the corresponding attributes as data fields.

image
Figure 2: ODS / DSO definition

Let's create a flat file DataSource, or an InfoSource with 3.x in this example to simplify the scenario, with all the InfoObjects we have defined in the ODS structure.

image
Figure 3: Info source definition
Let's check the flat file records; remember that the key fields are plant and material, and we have a duplicate record, as shown in Fig. 4 below. The 'Unique Data Records' option is unchecked, which means duplicate records are expected.
image
Figure 4: Flat file Records
Check the monitor entries: 3 records are transferred to the update rules and two records are loaded into the NEWDATA table, as we haven't activated the request yet. This is because we have a duplicate record for the key in the ODS, which gets overwritten (check the first two records in Fig. 4).
image
Figure 5: Monitor Entries
Now check the data in the NEWDATA / ACTIVATION QUEUE table: we have only two records, as the duplicate record gets overwritten by the most recent one, i.e. record 2 in the PSA overwrote record 1 because both have the same key (material and plant).
image
Figure 6: Activation Queue
image
Figure 7: PSA data for comparison
Tip: The key figures have the overwrite option by default; additionally there is a summation option to suit certain scenarios, while characteristics are always overwritten. The technical name of the new data / activation queue table is /BIC/A<name of ODS>40 for customer objects and /BI0/A<name of ODS>40 for SAP objects.
Once we activate the data we will have two records in the ODS active data table. As we see below, the active data table always contains the semantic key (Material, Plant).
image
Figure 8: Active Data Table
TIP: The name of the active table is /BIC/A<odsname>00 (and /BI0/A... for SAP objects).
The change log table has these 2 entries with the new image ('N'). Note the record mode; we will look into it later. The technical key (REQID, DATAPACKETID, RECORD NUMBER) is part of the change log.
image
Figure 9: Change Log Table
TIP: The technical name of the change log table is always /BIC/B<internally generated number>.
Now we will add two new records, material 75 plant 1 & material 80 plant 1, and change the existing record for the key material 1 and plant 1, as below.
image
Figure 10: Add more records
When we look at the monitor there will be 3 records in the activation queue table, as the duplicate record gets filtered out; in this example the first record in Fig. 10.
image
Figure 11: Monitor
Look at the new data table (activation queue): we have the 3 records that were updated, as seen in the monitor.
image
Figure 12: Activation Queue
How does the change log work?
We will check the change log table to see how the deltas are handled. The highlighted records are from the first request, uniquely identified by the technical key (request number, data packet number, partition value of the PSA and data record number).
image
Figure 13: Change log Table 1

With the second load, i.e. the second request, the change log table puts the before and after images for the relevant records (the non-highlighted part of Fig. 13).

In the above example Material (1) and Plant (1) has the before image with record mode 'X' (row 3 in the above Fig.), and all the key figures carry the '-' sign, as we opted for the overwrite option; the characteristics are always overwritten.
image
Figure 14: Change log Table 2
The after image ' ' reflects the change in the data record (check row 4 in the above Fig.). We changed the characteristic Profit Center from SECOND to SE, and the key figure Processing Time changed from 1 to 2. A new record (last row in the above Fig.) is added with status 'N', as it's a new record.
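The activation behaviour described above can be mimicked in a short sketch (plain Python, not SAP code; the attribute value for the new record is invented for illustration):

```python
# Sketch of standard DSO activation: compare the activation queue with the
# active table and emit change log entries with record modes
# 'N' (new image), 'X' (before image, key figures reversed) and ' ' (after image).

def activate(active, queue):
    change_log = []
    for key, (profit_center, processing_time) in queue.items():
        if key not in active:
            change_log.append(("N", key, profit_center, processing_time))
        elif active[key] != (profit_center, processing_time):
            old_pc, old_time = active[key]
            change_log.append(("X", key, old_pc, -old_time))   # before image
            change_log.append((" ", key, profit_center, processing_time))  # after image
        active[key] = (profit_center, processing_time)         # overwrite
    return change_log

# Keys are (material, plant); values are (profit_center, processing_time).
active_table = {(1, 1): ("SECOND", 1)}
activation_queue = {(1, 1): ("SE", 2),      # changed record from the text
                    (75, 1): ("PC75", 1)}   # new record; "PC75" is invented

for entry in activate(active_table, activation_queue):
    print(entry)
# ('X', (1, 1), 'SECOND', -1)
# (' ', (1, 1), 'SE', 2)
# ('N', (75, 1), 'PC75', 1)
```

The reversed key figure in the 'X' row is what lets an additive data target (such as an InfoCube fed from the change log) cancel the old value before the after image adds the new one.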

Summary
This gives us an overview of the standard ODS object and how the change log works. The various record modes available:
image
Figure 15: Record Modes
Check Note 399739 for the details of the record modes. The record mode(s) a particular DataSource uses for its delta mechanism largely depend on the type of extractor. Check the table RODELTM for the BW delta process methods with their available record modes, as well as our well-known table ROOSOURCE for the extractor-specific delta method.
For instance, LO Cockpit extractors use the 'ABR' delta method, which supplies after image, before image, new image and reverse image. Extractors in HR and Activity-Based Costing use the delta method 'ADD', i.e. record mode 'A' (additive image), and FI-GL/AR/AP extractors are based on the delta method 'AIE', i.e. record mode ' ' (after image). The list goes on...
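How a delta target could apply records by record mode can be roughly sketched as follows (a hedged illustration of the overwrite, additive and delete cases only, not SAP code):

```python
# Rough sketch of applying delta records by record mode (terminology per
# Note 399739): 'N' new image, ' ' after image, 'A' additive image,
# 'X'/'R' before/reverse image, 'D' delete. Overwrite semantics assumed.

def apply_delta(active, records):
    for mode, key, value in records:
        if mode in ("N", " "):        # new or after image: overwrite
            active[key] = value
        elif mode == "A":             # additive image: add to existing value
            active[key] = active.get(key, 0) + value
        elif mode in ("X", "R"):      # before/reverse image: cancels a prior
            pass                      # value; harmless under pure overwrite
        elif mode == "D":             # delete image: remove the record
            active.pop(key, None)
    return active

active = {}
apply_delta(active, [("N", "doc1", 10), (" ", "doc1", 12)])  # AIE-style
apply_delta(active, [("A", "doc2", 5), ("A", "doc2", 3)])    # ADD-style
print(active)  # {'doc1': 12, 'doc2': 8}
```

In an additive target the 'X'/'R' images would instead be added with their reversed signs, as in the change log example earlier; the overwrite shortcut above is the DSO-style simplification.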

Raj is an SAP-certified NW BI consultant.

Useful Transactions and Notes goes with NetWeaver 7.0

RSRD_ADMIN - Broadcasting Administration - Available in the BI system for the administration of Information Broadcasting.
CHANGERUNMONI - Using this Tcode we can monitor the status of the attribute change run.
RSBATCH - Dialog and Batch Processes. BI background management functions:
  • Managing background and parallel processes in BI
  • Finding and analyzing errors in BI
  • Reports for BI system management 
are available under Batch Manager.
RRMX_CUST - Make settings directly in this transaction to determine which BEx Analyzer version is called.
Note: 970002 - Which BEx Analyzer version is called by RRMX?
RS_FRONTEND_INT - Use the field QD_EXCLUSIVE_USER in this transaction to block new frontend components from migrating to the 7.0 version.
Note: 962530 - NW04s - How to restrict access to Query Designer 2004s.
WSCONFIG - This transaction is to create, test and release the Web Service definition.
WSADMIN - Administration Web Services - This transaction is to display and test the endpoint.
RSTCO_ADMIN - Use this transaction to install basic BI objects and check whether the installation was carried out successfully. If the installation status is red, restart the installation by calling transaction RSTCO_ADMIN again, and check the installation log.
Note 1000194 - Incorrect activation status in transaction RSTCO_ADMIN.
Note 1039381 - Error when activating the content Message no. RS062 (Error when installing BI Admin Cockpit).
Note 834280 - Installing technical BI Content after upgrade.
Note 824109 - XPRA - Activation error in NW upgrade. The XPRA installs technical BW Content objects that are necessary for the productive use of the BW system. (An error occurs during the NetWeaver upgrade in the RS_TCO_Activation_XPRA XPRA; the system ends the execution of the method with status 6.)
RSTCC_INST_BIAC - For activating the Technical Content for the BI admin cockpit
Run report RSTCC_ACTIVATE_ADMIN_COCKPIT in the background
Note 934848 - Collective note - (FAQ) BI Administration Cockpit
Note 965386 - Activating the technical content for the BI admin cockpit
Attachment for report RSTCC_ACTIVATE_ADMIN_COCKPIT source code
When Activating Technical Content Objects terminations and error occurs
Note 1040802 - Terminations occur when activating Technical Content Objects
RSBICA - BI Content Analyzer - Check programs to analyze inconsistencies and errors of custom-defined InfoObject, InfoProviders, etc - With central transaction RSBICA, schedule delivered check programs for the local system or remote system via RFC connection. Results of the check programs can be loaded to the local or remote BI systems to get single point of entry for analyzing the BI landscape.
RSECADMIN - Transaction for maintaining new authorizations. Management of Analysis Authorizations.
Note 820123 - New Authorization concept in BI.
Note 923176 - Support situation authorization management BI70/NW2004s.
RSSGPCLA - For the regeneration of RSDRO_* objects. Set the status of the programs belonging to the program classes "RSDRO_ACTIVATE", "RSDRO_UPDATE" and "RSDRO_EXTRACT" to "Generation required". To do this, select the program class and then choose the "Set statuses" button.
Note 518426 - ODS Object - System Copy, migration
RSDDBIAMON - BI Accelerator - Monitor with administrator tools.
  • Restart BIA server: restarts all the BI accelerator servers and services.
  • Restart BIA Index Server: restart the index server.
  • Reorganize BIA Landscape: If the BI accelerator server landscape is unevenly distributed, redistributes the loaded indexes on the BI accelerator servers.
  • Rebuild BIA Indexes: If a check discovers inconsistencies in the indexes, delete and rebuild the BI accelerator indexes.
RSDDSTAT - For Maintenance of Statistics properties for BEx Query, InfoProvider, Web Template and Workbook.
Note 964418 - Adjusting ST03N to new BI-OLAP statistics in Release 7.0
Note 934848 - Collective Note (FAQ) BI Administration Cockpit.
Note 997535 - DB02 : Problems with History Data.
Note 955990 - BI in SAP NetWeaver 7.0: Incompatibilities with SAP BW 3.X.
Note 1005238 - Migration of workload statistics data to NW2004s.
Note 1006116 - Migration of workload statistics data to NW2004s (2).
DBACOCKPIT - This new transaction replaces the old transactions ST04 and DB02; it comes with Support Package 12 and is used for database monitoring and administration.
Note 1027512 - MSSQL: DBACOCKPIT for basis release 7.00 and later.
Note 1072066 - DBACOCKPIT - New function for DB monitoring.
Note 1027146 - Database administration and monitoring in the DBA Cockpit.
Note 1028751 - MaxDB/liveCache: New functions in the DBA Cockpit.
BI 7.0 iView Migration Tool
Note 1128730 - BI 7.0 iView Migration Tool
Attachments for iView Migration Tool:
  • bi migration PAR
  • bi migration SDA
  • BI iView Migration Tool
For Setting up BEx Web
Note 917950 - SAP NetWeaver2004s : Setting Up BEx Web
Handy attachments for setting up BEx Web:
  • Problem Analysis
  • WDEBU7 Setting up BEx Web
  • System Upgrade Copy
  • Checklist
To Migrate BW 3.X Query Variants to NetWeaver 2004s BI:
Run report RSR_VARIANT_XPRA from transaction SE38 to fill the source table with the BW 3.X variants that need to be migrated to SAP NetWeaver 2004s BI. After upgrading the system to Support Package 12 or higher, run the migration report RSR_MIGRATE_VARIANTS to migrate the existing BW 3.X query variants to the new NetWeaver 2004s BI variant storage.
Note 1003481 - Variant Migration - Migrate all Variants
To check for missing elements and repairing the errors run report ANALYZE_MISSING_ELEMENTS.
Note 953346 - Problem with deleted InfoProvider in RSR_VARIANT_XPRA
Note 1028908 - BW Workbooks MSA: NW2004s upgrade looses generic variants
Note 981693 - BW Workbooks MSA: NW2004s upgrade looses old variants
For the Migration of Web Templates from BW 3.X to SAP NetWeaver 2004s:
Note 832713 - Migration of Web Templates from BW 3.X to NetWeaver 2004s
Note 998682 - Various errors during the Web Template migration of BW 3.X
Note 832712 - BW - Migration of Web items from 3.x to 7.0
Note 970757 - Migrating BI Web Templates to NetWeaver 7.0 BI  which contain chart
Upgrade Basis Settings for SAP NetWeaver 7.0 BI
SAP NetWeaver 7.0 BI applications on 32-bit architecture are reaching their limits; building high-quality reports on SAP NetWeaver BI sources requires an installation based on 64-bit architecture.
With the SAP NetWeaver 7.0 BI upgrade, move the basis parameter settings of the SAP kernel from the 32-bit to the 64-bit version. Given the added functionality, applications and BI reports with large data sets use a lot of memory, which adds load to the application server; the application server can even fail to start when the sum of all buffer allocations exceeds the 32-bit limit.
Note 996600 - 32 Bit platforms not recommended for productive NW2004s apps
Note 1044441 - Basis parameterization for NW 7.0 BI systems
Note 1044330 - Java parameterization for BI systems
Note 1030279 - Reports with very large result sets/BI Java
Note 927530 - BI Java sizing
Intermediate Support Packages for NetWeaver 7.0 BI
A BI Intermediate Support Package consists of an ABAP Support Package and a Frontend Support Package; the ABAP BI Intermediate Support Package is compatible with the delivered BI Java stack.
Note 1013369 - SAP NetWeaver 7.0 BI - Intermediate Support Packages
Microsoft Excel 2007 integration with NetWeaver 7.0 BI
Microsoft Excel 2007 functionality is now fully supported by NetWeaver 7.0 BI
Advanced filtering, Pivot table, Advanced formatting, New Graphic Engine, Currencies, Query Definition, Data Mart Fields
Note 1134226 - New SAP BW OLE DB for OLAP files delivery - Version 3
Full functionality for Pivot Table to analyze NetWeaver BI data
Microsoft Excel 2007 integrated with NetWeaver 7.0 BI for building new query, defining filter values, generating a chart and creating top n analysis from NetWeaver BI Data
Microsoft Excel 2007 now provides Design Mode, Currency Conversion and Unit of Measure Conversion

LO COCKPIT STEP BY STEP

Here is LO Cockpit Step By Step
LO EXTRACTION
- Go to Transaction LBWE (LO Customizing Cockpit)
1). Select Logistics Application
       SD Sales BW
            Extract Structures
2). Select the desired Extract Structure and deactivate it first.
3). Give the Transport Request number and continue
4). Click on 'Maintenance' to maintain the extract structure
       Select the fields of your choice and continue
             Maintain DataSource if needed
5). Activate the extract structure
6). Give the Transport Request number and continue
- Next step is to Delete the setup tables
7). Go to T-Code SBIW
8). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Delete the content of Setup tables (T-Code LBWG)
vi. Select the application (01 – Sales & Distribution) and Execute
- Now, Fill the Setup tables
9). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Filling the Setup tables
vi. Application-Specific Setup of statistical data
vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
        Specify a run name, and a date and time (use a future date)
             Execute
- Check the data in Setup tables at RSA3
- Replicate the DataSource
Use of setup tables:
Fill the setup tables in the R/3 system (setup-table maintenance is located in SBIW) and extract the data to BW; after that, you can run delta extractions by initializing the extractor.
Full loads are always taken from the setup tables.

Extraction Logistics Datasources (Setup Table Filling) - TCodes and Programs

An overview of DataSources and the programs that fill the relevant setup tables (named MC*SETUP). With this handy table you can find the status of your current job, or of previous initialization jobs, in SM37 by searching for the program name.

Datasource                  Tcode    Program
2LIS_02*                    OLI3BW   RMCENEUA
2LIS_03_BX                  MCNB     RMCBINIT_BW
2LIS_03_BF                  OLI1BW   RMCBNEUA
2LIS_03_UM                  OLIZBW   RMCBNERP
2LIS_04* (orders)           OLI4BW   RMCFNEUA
2LIS_04* (manufacturing)    OLIFBW   RMCFNEUD
2LIS_05*                    OLIQBW   RMCQNEBW
2LIS_08*                    VTBW     VTRBWVTBWNEW
2LIS_08* (COSTS)            VIFBW    VTRBWVIFBW
2LIS_11_V_ITM               OLI7BW   RMCVNEUA
2LIS_11_VAITM               OLI7BW   RMCVNEUA
2LIS_11_VAHDR               OLI7BW   RMCVNEUA
2LIS_12_VCHDR               OLI8BW   RMCVNEUL
2LIS_12_VCITM               OLI8BW   RMCVNEUL
2LIS_12_VCSCL               OLI8BW   RMCVNEUL
2LIS_13_VDHDR               OLI9BW   RMCVNEUF
2LIS_13_VDITM               OLI9BW   RMCVNEUF
2LIS_17*                    OLIIBW   RMCINEBW
2LIS_18*                    OLISBW   RMCSNEBW
2LIS_45*                    OLIABW   RMCENEUB
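For quick lookups (for example when scripting SM37 job checks), the Datasource/Tcode/Program mapping above can be captured in code. The sketch below is purely illustrative: the dictionary mirrors the table rows exactly, while the `setup_info` helper is a hypothetical convenience function, not an SAP API.

```python
# Lookup table mirroring the Datasource / Tcode / Program mapping above.
SETUP_PROGRAMS = {
    "2LIS_02*": ("OLI3BW", "RMCENEUA"),
    "2LIS_03_BX": ("MCNB", "RMCBINIT_BW"),
    "2LIS_03_BF": ("OLI1BW", "RMCBNEUA"),
    "2LIS_03_UM": ("OLIZBW", "RMCBNERP"),
    "2LIS_04* (orders)": ("OLI4BW", "RMCFNEUA"),
    "2LIS_04* (manufacturing)": ("OLIFBW", "RMCFNEUD"),
    "2LIS_05*": ("OLIQBW", "RMCQNEBW"),
    "2LIS_08*": ("VTBW", "VTRBWVTBWNEW"),
    "2LIS_08* (COSTS)": ("VIFBW", "VTRBWVIFBW"),
    "2LIS_11_V_ITM": ("OLI7BW", "RMCVNEUA"),
    "2LIS_11_VAITM": ("OLI7BW", "RMCVNEUA"),
    "2LIS_11_VAHDR": ("OLI7BW", "RMCVNEUA"),
    "2LIS_12_VCHDR": ("OLI8BW", "RMCVNEUL"),
    "2LIS_12_VCITM": ("OLI8BW", "RMCVNEUL"),
    "2LIS_12_VCSCL": ("OLI8BW", "RMCVNEUL"),
    "2LIS_13_VDHDR": ("OLI9BW", "RMCVNEUF"),
    "2LIS_13_VDITM": ("OLI9BW", "RMCVNEUF"),
    "2LIS_17*": ("OLIIBW", "RMCINEBW"),
    "2LIS_18*": ("OLISBW", "RMCSNEBW"),
    "2LIS_45*": ("OLIABW", "RMCENEUB"),
}

def setup_info(datasource):
    """Return (tcode, program) for a DataSource name, or None.

    Tries an exact key first, then wildcard keys such as "2LIS_02*"
    by prefix match. Ambiguous rows (the 2LIS_04 and 2LIS_08 variants
    map to different programs) need the exact table key.
    """
    if datasource in SETUP_PROGRAMS:
        return SETUP_PROGRAMS[datasource]
    for key, val in SETUP_PROGRAMS.items():
        if key.endswith("*") and datasource.startswith(key[:-1]):
            return val
    return None
```

For example, `setup_info("2LIS_02_ITM")` falls back to the `2LIS_02*` row and returns `("OLI3BW", "RMCENEUA")`, which is the program name to search for in SM37.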