Showing posts with label SAP BW INTERVIEW QUESTIONS. Show all posts

Credits - Lakshminarasimhan N


ABAP performance tuning for SAP BW system

Applies to:
SAP BW 7.x system



Details
In a SAP BW system we use ABAP in many places; the most common are the start routine, end routine and expert routine. This document points out ways to fine-tune the ABAP code written in a SAP BW system.

Rule 1 – Never use “SELECT *”. “SELECT *” should be avoided, and “SELECT ... ENDSELECT” loops must be avoided at any cost.

Rule 2 – Always check that the internal table is not empty before using “FOR ALL ENTRIES”. If the FOR ALL ENTRIES table is empty, the WHERE condition is ignored and the SELECT reads the entire database table.

Example:

SELECT
  CSM_CASE
  CSM_EXID
  CSM_CRDA
  CSM_TYPE
  CSM_CATE
  CSM_CLDA
FROM /BIC/AZCACSMDS00
INTO TABLE LIT_ZCACSMDS
FOR ALL ENTRIES IN RESULT_PACKAGE   " RESULT_PACKAGE must not be empty
WHERE CSM_CASE = RESULT_PACKAGE-CSM_CASE.

Hence we need to check that the internal table is not empty, and only then proceed with the SELECT statement.

IF RESULT_PACKAGE[] IS NOT INITIAL.
  SELECT
    CSM_CASE
    CSM_EXID
    CSM_CRDA
    CSM_TYPE
    CSM_CATE
    CSM_CLDA
  FROM /BIC/AZCACSMDS00
  INTO TABLE LIT_ZCACSMDS
  FOR ALL ENTRIES IN RESULT_PACKAGE
  WHERE CSM_CASE = RESULT_PACKAGE-CSM_CASE.
ENDIF.

Rule 3 – Always use the “Code Inspector” and the “Extended Program Check”. Double-click the transformation, then choose “Display Generated Program” from the menu. The entire generated program is displayed; from there run the “Code Inspector” and the “Extended Program Check” shown in the screenshot below.
Correct the warning and error messages reported.

Rule_3.png

Rule 4 – Always use the “TYPES” statement to declare a local structure in the program; the same structure can then be used in the SELECT statement.
Example –
From the purchasing DSO, suppose you want to read the PO number, PO item and actual quantity delivered. We then create a local structure using a TYPES statement.
TYPES : BEGIN OF lty_pur,
          OI_EBELN TYPE /BI0/OIOI_EBELN,
          OI_EBELP TYPE /BI0/OIOI_EBELP,
          PDLV_QTY TYPE /BI0/OIPDLV_QTY,
        END OF lty_pur.
DATA : lt_pur TYPE STANDARD TABLE OF lty_pur. " internal table declared based on the local type
SELECT OI_EBELN OI_EBELP PDLV_QTY FROM /BI0/APUR_O0100 INTO TABLE lt_pur.

Rule 5 – Always try to use “hashed” and “sorted” internal tables in the routines. When you cannot use them and fall back to a “standard” internal table, make sure you SORT the table in ascending order by the keys used in the “READ” statement, and then use “BINARY SEARCH” in the READ statement. This improves the READ performance. When a standard table is sorted and then read, make sure the READ key matches the sort order, otherwise you will get incorrect results.

Example –
SELECT OI_EBELN OI_EBELP PDLV_QTY FROM /BI0/APUR_O0100 INTO TABLE lt_pur.
IF sy-subrc = 0.
  SORT lt_pur BY OI_EBELN OI_EBELP.  " sort by the keys used in the READ statement
  LOOP AT result_package ASSIGNING <result_fields>.
    READ TABLE lt_pur INTO la_pur
         WITH KEY OI_EBELN = <result_fields>-OI_EBELN
                  OI_EBELP = <result_fields>-OI_EBELP
         BINARY SEARCH.
    IF sy-subrc = 0.
      <logic to populate the fields>
    ENDIF.
  ENDLOOP.
ENDIF.

Rule 6 – Never use “INTO CORRESPONDING FIELDS OF TABLE”. Follow Rule 4: declare a structure via the TYPES statement and use it to create the internal table. Do not use “INTO CORRESPONDING FIELDS OF TABLE” in the SELECT statement.

Example  --
Never use the way given below, follow the example of Rule 4
Data : lt_pur type standard table of /BI0/APUR_O0100.
Select OI_EBELN OI_EBELP PDLV_QTY from /BI0/APUR_O0100 into corresponding fields of table lt_pur.

Rule 7 – In the SELECT statement make sure you restrict on the primary keys. For DSOs with a huge volume of data, make sure you create indexes and use them in the SELECT statement.
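As an illustration of Rule 7 (the DSO name, field names and variables below are hypothetical), a lookup on a DSO active table should restrict on the leading key fields so the database can use the primary index:

```abap
" Hypothetical DSO active table /BIC/AZSD_O0100 with primary key
" DOC_NUMBER, DOC_ITEM.
SELECT DOC_NUMBER DOC_ITEM DLV_QTY
  FROM /BIC/AZSD_O0100
  INTO TABLE lt_sd
  WHERE DOC_NUMBER = lv_doc_number   " leading key field
    AND DOC_ITEM   = lv_doc_item.    " full key -> efficient index access
```

If the WHERE clause skips the leading key fields, the database falls back to a full table scan, which is what this rule warns against.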

Rule 8 – Never use Include program in your transformations.

Rule 9 – Try to minimize the use of 'RSDRI_INFOPROV_READ'. If you need it, make sure you request only the necessary characteristics and key figures, and make sure the cube is compressed.

Rule 10 – Make sure to clear the work areas and temporary variables before they are used inside a loop.
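A minimal sketch of Rule 10 (the variable names are illustrative, not from the original document): clearing the work area at the start of each pass prevents values from the previous row leaking into the current one when a READ fails.

```abap
LOOP AT result_package ASSIGNING <result_fields>.
  CLEAR : la_pur, lv_qty.                 " reset before reuse in this pass
  READ TABLE lt_pur INTO la_pur
       WITH KEY OI_EBELN = <result_fields>-OI_EBELN.
  IF sy-subrc = 0.
    lv_qty = la_pur-PDLV_QTY.
  ENDIF.
  " If the READ fails, la_pur and lv_qty are empty instead of holding
  " the values of the previous record.
ENDLOOP.
```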

Rule 11 – Always prefer field symbols to work areas. This way you avoid the “MODIFY” statement.
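A sketch of the difference Rule 11 describes (field names are illustrative): with a work area every change must be written back with MODIFY, while a field symbol changes the table row in place.

```abap
" With a work area - one extra MODIFY per row:
LOOP AT result_package INTO ls_result.
  ls_result-CSM_CATE = 'LEGAL'.
  MODIFY result_package FROM ls_result.
ENDLOOP.

" With a field symbol - the row is updated in place, no MODIFY needed:
LOOP AT result_package ASSIGNING <result_fields>.
  <result_fields>-CSM_CATE = 'LEGAL'.
ENDLOOP.
```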

Rule 12 – When the code in the transformation is huge and complicated, make sure the DTP package size is reduced for a faster data load.

Rule 13 – Never use hard-coded “BREAK-POINT” in the transformation.

Rule 15 – Add plenty of comments in the transformation, along with the developer name, functional owner, technical change, CR number, etc.

Rule 16 – Delete duplicates before you use “FOR ALL ENTRIES”.

Example –

You select the “status profile” from CRM DSO.

Select CSM_CASE CSM_EXID CSM_SPRO from  /BIC/AZCSM_AGE00 into table lt_csm_pro.

Let us assume that there are 1 million records and all of them land in the table lt_csm_pro.
Now I need to extract from another table using the “status profile”.
So,

Select 0CSM_TYPE 0CSM_CATE from /BIC/AZCSM_BHF00 into table lt_csm_bhf for all entries in
lt_csm_pro where CSM_SPRO = lt_csm_pro-CSM_SPRO.

The above SELECT statement will take a very long time to execute because there are 1 million records in the FOR ALL ENTRIES table.
We know that the status profile has duplicates; once we remove them, only about 90 distinct status profiles remain. So the best approach is to remove the duplicates first and then use the reduced table in “FOR ALL ENTRIES”.
Copy the table lt_csm_pro to another internal table lt_csm_pro_1.

lt_csm_pro_1[] = lt_csm_pro[].

Sort lt_csm_pro_1 by CSM_SPRO.

Delete adjacent duplicates from lt_csm_pro_1 comparing CSM_SPRO.

After the DELETE statement, lt_csm_pro_1 will contain only 90 records. Hence the statement below runs fast.

Select 0CSM_TYPE 0CSM_CATE from /BIC/AZCSM_BHF00 into table lt_csm_bhf for all entries in
lt_csm_pro_1 where CSM_SPRO = lt_csm_pro_1-CSM_SPRO.

Rule 17 – Always use the method new_record__end_routine to add new records to the result_package. We could manually sort the result_package by record number and append records ourselves, but using the method new_record__end_routine is recommended instead.
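A rough sketch of the call (this is an assumption, not a verified signature: the method is generated into the routine class, so check the generated program in your own system for the exact parameter names and types):

```abap
" Inside the end routine of the generated transformation class.
" Parameter and type names below are assumptions for illustration only.
DATA lv_record TYPE i.                   " assumed record-number type

me->new_record__end_routine(
  EXPORTING
    source_segid  = 1                    " assumed: segment of the source record
    source_record = <result_fields>-record
  IMPORTING
    record_new    = lv_record ).         " record number of the appended row
```

The point of the rule is that the method maintains the record numbering of result_package for you, instead of you sorting and renumbering manually.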

Rule 18 – Use the “global declaration” part to declare internal tables only when you need to keep records between the start, transformation and end routines.

Rule 19 – Make use of “Documents” to write down detailed steps related to the code in the transformation, dependent loads and any other details.

Example –
Rule_19.png

Rule_19_1.png


Rule 20 – Try to use the “DTP filter” and “DTP filter routines” to filter the incoming data from the source InfoProvider.

Rule 21 – Try to use SAP-provided features like the master data read and DSO read in the transformations rather than a lookup coded in ABAP! :-)

Rule 22 – Before writing code, check the volume of data in the PRD system and how fast it grows. This lets you foresee challenges and write better code.

Rule 23 – Use BAdIs instead of CMOD enhancements. Write methods and classes instead of function modules and subroutines.

Rule 24 – Always use the MONITOR_REC table to capture the exceptional records, instead of updating them into any Z table.

Rule 25 – Use the exceptions cx_rsrout_abort and cx_rsbk_errorcount cautiously.

Rule 26 -- Within the start and end routines, don't add a new "LOOP AT result_package..ENDLOOP" for every small change. Avoid multiple loops over result_package; reuse the existing "LOOP AT result_package...ENDLOOP" and try to keep the entire logic within a single "LOOP AT..ENDLOOP". This keeps the code uniform and clear, and the package is traversed only once.
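A sketch of Rule 26 (field values are illustrative): two small fixes share one loop instead of traversing result_package twice.

```abap
" One pass over result_package for all per-record logic:
LOOP AT result_package ASSIGNING <result_fields>.
  " change 1: default a missing category
  IF <result_fields>-CSM_CATE IS INITIAL.
    <result_fields>-CSM_CATE = 'UNKNOWN'.
  ENDIF.
  " change 2: derive the type from the category
  IF <result_fields>-CSM_CATE = 'LEGAL'.
    <result_fields>-CSM_TYPE = 'L'.
  ENDIF.
ENDLOOP.
```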

Rule 27 - Use "constants", which make maintenance easy. It is better still to maintain the constant values in an InfoObject master data table and read them via an ABAP lookup. Parameter tables can also be used.
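A minimal sketch of the CONSTANTS statement Rule 27 refers to (names and values are illustrative):

```abap
CONSTANTS : c_status_open   TYPE c LENGTH 1 VALUE 'O',
            c_status_closed TYPE c LENGTH 1 VALUE 'C'.

" The literal lives in exactly one place; if the business rule changes,
" only the constant is edited, not every comparison in the routine.
IF <result_fields>-CSM_TYPE = c_status_open.
  <result_fields>-CSM_CATE = 'ACTIVE'.
ENDIF.
```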

Rule 28 - "For all entries" will not fetch duplicate records, so there might be a data loss, but inner join would fetch all of the records and hence "for all entries" must be used cautiously. Make sure to use all Primary keys to fetch records before you use the internal table in "For all entries"
scn.sap.com/thread/2029157
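A sketch of the pitfall (table and field names reuse the Rule 4 example and are illustrative): FOR ALL ENTRIES removes duplicate rows from the result set, so if the SELECT list does not contain the full table key, distinct database rows collapse into one.

```abap
" Risky: only the quantity is selected; two items with the same
" quantity collapse into a single result row (data loss).
SELECT PDLV_QTY
  FROM /BI0/APUR_O0100
  INTO TABLE lt_qty
  FOR ALL ENTRIES IN result_package
  WHERE OI_EBELN = result_package-OI_EBELN.

" Safe: the full key OI_EBELN / OI_EBELP is in the field list,
" so every database row stays unique in the result.
SELECT OI_EBELN OI_EBELP PDLV_QTY
  FROM /BI0/APUR_O0100
  INTO TABLE lt_pur
  FOR ALL ENTRIES IN result_package
  WHERE OI_EBELN = result_package-OI_EBELN.
```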

Final Rule - Avoid as much ABAP code as possible! :-) The reason is very simple: when you power your BW system with HANA, transformations that contain ABAP code cannot be executed in the HANA database.

All about Attributes….

By:Sridevi Aduri
Attributes are InfoObjects that already exist and are logically assigned to the new characteristic.
Navigational Attributes
A navigational attribute is an attribute of a characteristic that is treated very much like a characteristic in the Query Designer, meaning you can perform drilldowns, filters, etc. on it while designing a query.
Imp Note:
  • While creating the InfoObject, switch on the navigational attributes on the Attributes tab page.
  • While designing the cube, check-mark the navigational attributes in order to make use of them.
Features / Advantages
  • A navigational attribute acts like a characteristic in reporting; all navigation functions in the OLAP processor are possible.
  • Filters, drilldowns and variables are possible in reporting.
  • It always holds the present truth, because the data is fetched from the master data tables and not from the InfoCube.
  • Historical views are therefore not possible through navigational attributes.
Disadvantages:
  • Leads to lower query performance.
  • In the enhanced star schema of an InfoCube, navigation attributes lie one join further out than characteristics. This means that a query with a navigation attribute has to run an additional join.
  • If a navigation attribute is used in an aggregate, the aggregate has to be adjusted using a change run as soon as new values are loaded for the navigation attribute.
  • http://help.sap.com/saphelp_nw04s/helpdata/EN/80/1a63e7e07211d2acb80000e829fbfe/frameset.htm
Transitive Attributes

A navigational attribute of a navigational attribute is called a transitive attribute. Tricky, right? Let me explain.
  • If a navigational attribute itself has further navigational attributes (as its attributes), those are called transitive attributes.
  • For example, consider a characteristic ‘Material’. It has ‘Plant’ as a navigational attribute, and ‘Plant’ in turn has a navigational attribute ‘Material group’. ‘Material group’ is then the transitive attribute. A drilldown is needed on both ‘Plant’ and ‘Material group’.
  • And again, we need both ‘Material’ and ‘Plant’ in the InfoCube to drill down. (To fetch data through navigational attributes we need the master data tables; hence we must check-mark both of them in the cube.) http://help.sap.com/saphelp_nw04s/helpdata/EN/6f/c7553bb1c0b562e10000000a11402f/frameset.htm
  • If the cube contains both ‘Material’ and ‘Plant’, the dimension table holding them has the DIM ID, the SID of Material and the SID of Plant. Since both SIDs exist, each navigational attribute is referenced correctly.
    • If the cube contains only ‘Material’,
    the dimension table has only the DIM ID and the SID of Material. Since the SID of the first-level navigational attribute (Plant) does not exist, the navigational attribute cannot be referenced correctly.
    Exclusive Attributes / Attribute Only / Display Attribute

    If you set the “Attribute Only” indicator (General tab page for characteristics; Additional Properties tab page for key figures) while creating a characteristic, it can only be used as a display attribute for another characteristic, not as a navigational attribute.
    Features:

    • It cannot be included in InfoCubes.
    • It can be used in DSOs, InfoSets and characteristics used as InfoProviders. In these InfoProviders, the characteristic is not visible during read access (at run time).
    • This means it is not available in the query. If the InfoProvider is used as the source of a transformation or DTP, the characteristic is not visible.
    • It is for display only in the query and cannot be used for drilldowns in reporting.
    Exclusive attributes:
    If you choose “exclusively attribute”, the created key figure can only be used as an attribute for another characteristic, not as a dedicated key figure in the InfoCube.
    • While creating a key figure -- tab page “Additional Properties” -- check box “Attribute Only”.





    How to Restore Query into Older Version

     

    • Added by Mahesh Kumar, last edited by Arun Varadarajan
      Once older version (3.x) queries are migrated to the latest version (7.0), a query can still be restored to 3.x after migration. A backup of any query created in 3.x is taken when it is first opened for editing in the 7.0 Query Designer. The backup contains the last change made to the query with the 3.x editor; any changes made in the 7.0 Query Designer are lost on restore. A query originally created in 7.0 cannot be restored to an older version, as no 3.x backup exists.

       Queries can be restored to 3.x version using program COMPONENT_RESTORE.

    Steps followed for Restoring Query with 3.x versions:

    Step 1 : Execute "COMPONENT_RESTORE" program in SE38.


    Step 2 : On the next screen, select the InfoProvider and the component type. Different component types are available.

    Step 3 : Select REP as the component type to revert a query.


    Step 4 : Execute (F8). The next screen displays all queries for that particular InfoProvider.

    Step 5 : Search for the query you want to revert to the older version.

    Step 6 : Then choose “Transfer Selection”. The message below appears in the system.

    Step 7 : Once we select Yes, the query is successfully restored to the older version.

    SAP BI Inventory Management

    By: Raj Kandula and Jitu Krishna
     Source: sdn.sap.com
    Hi,
         Few points in Inventory Management.

    BW inventory movement cubes are initialised in two stages.
    First you load and compress the initial on-hand stocks as of "today's" date;
    second you load and compress the historic stock movements.
    Once that's all done, you do your regular delta loads to keep the information up to date.
    The marker is used as a reference point during compression to keep a running total of what's on hand.
    When you initially load "today's" on-hand stocks, you UNCHECK the "No Marker Update" box, so that the marker records those stock levels.
    When you load the historic stock movements, you CHECK the "No Marker Update" box, so that these movements do not affect the marker (as those movements have already affected the current on-hand level).
    For the regular delta loads you UNCHECK the "No Marker Update" box again, so that the "future" movements net off against the marker as they go.



    The marker is used to reduce the time needed to fetch the non-cumulative key figures while reporting.

    Refer
    https://www.sdn.sap.com/irj/sdn/thread?messageID=4885115
    https://www.sdn.sap.com/irj/sdn/thread?messageID=4764397
    https://www.sdn.sap.com/irj/sdn/thread?messageID=4862257
    https://www.sdn.sap.com/irj/sdn/thread?messageID=3254753#3254753
    https://www.sdn.sap.com/irj/sdn/thread?threadID=422530
    Inventory management
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/How%20to%20Handle%20Inventory%20Management%20Scenarios.pdf
    How to Handle Inventory Management Scenarios in BW (NW2004)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    https://www.sdn.sap.com/irj/sdn/thread?threadID=776637&tstart=0

    • Refer to page 18 in the "Upgrade and Migration Aspects for BI in SAP NetWeaver 2004s" paper
    http://www.sapfinug.fi/downloads/2007/bi02/BI_upgrade_migration.pdf
    Non-Cumulative Values / Stock Handling
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/93ed1695-0501-0010-b7a9-d4cc4ef26d31
    Non-Cumulatives
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/da1640dc88e769e10000000a155106/frameset.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a62ebe07211d2acb80000e829fbfe/frameset.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a62f8e07211d2acb80000e829fbfe/frameset.htm
    Here you will find all the Inventory Management BI Contents:
    http://help.sap.com/saphelp_nw70/helpdata/en/fb/64073c52619459e10000000a114084/frameset.htm

    Hope this helps.

    LO COCKPIT STEP BY STEP

    Here is LO Cockpit Step By Step
    LO EXTRACTION
    - Go to Transaction LBWE (LO Customizing Cockpit)
    1). Select Logistics Application
           SD Sales BW
                Extract Structures
    2). Select the desired Extract Structure and deactivate it first.
    3). Give the Transport Request number and continue
    4). Click on 'Maintenance' to maintain the Extract Structure
           Select the fields of your choice and continue
                 Maintain DataSource if needed
    5). Activate the extract structure
    6). Give the Transport Request number and continue
    - Next step is to Delete the setup tables
    7). Go to T-Code SBIW
    8). Select Business Information Warehouse
    i. Setting for Application-Specific Datasources
    ii. Logistics
    iii. Managing Extract Structures
    iv. Initialization
    v. Delete the content of Setup tables (T-Code LBWG)
    vi. Select the application (01 – Sales & Distribution) and Execute
    - Now, Fill the Setup tables
    9). Select Business Information Warehouse
    i. Setting for Application-Specific Datasources
    ii. Logistics
    iii. Managing Extract Structures
    iv. Initialization
    v. Filling the Setup tables
    vi. Application-Specific Setup of statistical data
    vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
            Specify a Run Name and time and Date (put future date)
                 Execute
    - Check the data in Setup tables at RSA3
    - Replicate the DataSource
    Use of setup tables:
    You fill the setup tables in the R/3 system (the setup is in SBIW) and extract the data to BW; after that you can do delta extraction by initializing the extractor.
    Full loads are always taken from the setup tables.

    SAP BI Interview Questions Cont2....

    What is table partition?
    A: SAP uses fact table partitioning to improve performance. You can partition only on 0CALMONTH or 0FISCPER.
    How would you convert an InfoPackage group into a process chain?
    A: Double-click the InfoPackage group, click the ‘Process Chain Maint’ button and type in a name and description; the individual InfoPackages are inserted automatically.
    How do you replace a query result from a master query to a child query?

    A: If you select a characteristic value with a replacement path, it uses the result of the previous query. For example, assume query Q1 displays the top 10 customers, and query Q2 receives those top 10 customers through a variable on InfoObject 0CUSTOMER with a replacement path and displays a detailed report on the customer list passed from Q1.
    What is modeling?
    It is the art of designing the database. The design of the DB depends on the schema, and the schema is defined as the representation of the tables and their relationships.
    What is an info cube?
    An InfoCube is structured as an (extended) star schema where a fact table is surrounded by different dimension tables that are linked with DIM IDs. Data-wise, you will have aggregated data in the cubes.
    What is extended star schema?
    In Extended Star Schema, under the BW star schema model, the dimension table does not contain master data. But it is stored externally in the master data tables (texts, attributes, hierarchies).
    The characteristic in the dimensional table points to the relevant master data by the use of SID table. The SID table points to characteristics attribute texts and hierarchies.
    This multistep navigational task adds extra overhead when executing a query. However the benefit of this model is that all fact tables (info cubes) share common master data tables between several info cubes.
    Moreover the SID table concept allows users to implement multi languages and multi hierarchy OLAP environments. And also it supports slowly changing dimension.
    How do you delete a BEx query that is in the Production system?
    A) Using the RSZDELETE transaction
    How would you optimize the dimensions?
    • We should define as many dimensions as possible, and we have to take care that no single dimension exceeds 20% of the fact table size.

    What are Conversion Routines for units and currencies in the update rule?

    • Using this option we can write ABAP code for Units / Currencies conversion. If we enable this flag then unit of Key Figure appears in the ABAP code as an additional parameter. For example, we can convert units in Pounds to Kilos.
    Can an InfoObject be an InfoProvider, how and why?

    • Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
    What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
    How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination requirement.

    What is ODS?

    • ODS stands for Operational Data Store; it is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.
    What are BW Statistics and what is its use?

    • They are group of Business Content InfoCubes which are used to measure performance for Query and Load Monitoring. It also shows the usage of aggregates, OLAP and Warehouse management.
    What are the steps to extract data from R/3?
    • Replicate DataSources
    • Assign InfoSources
    • Maintain Communication Structure and Transfer rules
    • Create an InfoPackage
    • Load Data
    What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)

    What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors
    LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module

    What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
    o OLI*BW Fill the Setup tables
    o RSA3 Check extraction and the data in Setup tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor

    How to create a connection with LIS InfoStructures?

    • LBW0 Connecting LIS InfoStructures to BW
    What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
    • MultiProvider: Does not have physical data. It allows to access data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.

    What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.

    What is the difference between start routine and update routine, when, how and why are they called?
    • Start routine can be used to access InfoPackage while update routines are used while updating the Data Targets.
    What is Star Schema?

    In the star schema model, the fact table is surrounded by dimension tables. The fact table is usually very large, containing millions to billions of records, while the dimension tables are small, containing a few thousand to a few million records. In practice, the fact table holds transactional data and the dimension tables hold master data.
    The dimension tables are specific to one fact table. This means that dimension tables are not shared across other fact tables. When another fact table needs the same product dimension data, another dimension table specific to the new fact table is required.
    This situation creates data management problems such as master data redundancy, because the very same product is duplicated in several dimension tables instead of being shared from one single master data table. This problem is solved in the extended star schema.

    What is slowly changing dimension?
    Dimensions that change with time are called slowly changing dimensions.
    What is fact table?
    A fact table is a collection of facts and relations, that is, foreign keys to the dimensions. The fact table holds transactional data.
    What is dimension table?
    Dimension table is a collection of logically related descriptive attributes that means characteristics.
    How many tables does info cube contain?
    An InfoCube contains two fact tables: the E table and the F table.
    What is the maximum no. of dimensions in info cube?
    16 (3 are SAP-defined and 13 are customer-defined)
    What are the minimum no of dimensions in info cube?
    4 (3 SAP-defined and 1 customer-defined).
    What are the 3 SAP-defined dimensions?
    The 3 SAP-defined dimensions are:
    1. Data packet dimension (P) – contains 3 characteristics: (a) Request ID, (b) Record type, (c) Change run ID
    2. Time dimension (T) – contains time characteristics such as 0CALMONTH, 0CALDAY, etc.
    3. Unit dimension (U) – contains amount- and quantity-related units.

    What is the maximum no. of key figures?
    233
    What is the maximum no. of characteristics?
    248
    What is the model of the info cube?
    Info cube model is extended star schema.
    What are the data types for the characteristic info object?
    There are 4types:
    1. CHAR
    2. NUMC
    3. DATS
    4. TIMS

    How do you write a date in BW?
    YYYYMMDD

    By: leela naveen

    These are questions I faced. If you have screenshots for any of the questions, please provide them as well.
    1. We have standard InfoObjects given in SAP; why did you create Z InfoObjects? Can you tell me the business scenario?
    2. We have standard InfoCubes given in SAP; why did you create Z InfoCubes? Can you tell me the business scenario?
    3. In key figures, what is meant by cumulative value, non-cumulative value change, and non-cumulative value with in- and outflow?
    4. When you create an InfoObject it shows "reference" and "template"; what are they?
    5. What is meant by a compounding attribute? Tell me the scenario.
    6. I have 3 cubes for which I created a MultiProvider and a report, but I didn't get data in that report. What happened?
    7. I have 10 cubes and created a MultiProvider; I want data from only 1 cube. What do you do?
    8. What is meant by safety upper limit and safety lower limit in all the deltas? Tell me one by one for time stamp, calendar day and numeric pointer.
    9. I have 80 queries; how do you find which query is taking so much time, and how can you solve it?
    10. At compression all requests become zero; which data is compressed? Tell me in detail.
    11. What is meant by a flat aggregate? Explain in detail.
    12. I created a process chain; on the 1st day it took 10 minutes, after the 1st week 1 hour, and later 1 day with the same loads. What happened, and how can you reduce the loading time?
    13. How can you find the cube size? In detail; show me if you have screenshots.
    14. Where can we find transport return codes?
    15. I have a report that takes a long time; how can I rectify it?
    16. What is an offset? Can we create queries without offsets?
    17. I said I have nearly 600 process chains; he asked how I monitor them. I said I check RSPCM and BWCCMS; he asked whether there are any third-party tools for monitoring. If so, which ones?
    18. How do clients access the reports?
    19. I don't have master data; is it possible to load transaction data? If it is possible, what are the other steps to do it?
    20. What is a structure in reporting?
    21. Based on which objects did you create the extended star schema?
    22. What is a line item dimension? Tell me briefly.
    23. What is high cardinality? Tell me briefly.
    24. A process chain is running; I have to stop the process for 1 hour and then re-run it from where it stopped. How?
    Can I use aggregates in a MultiProvider?
    25. What is direct scheduling and what is a meta chain?
    26. Which patch do you use presently? How can I know which patch it is?
    27. How can we increase the data packet size?
    28. Are hierarchies not there in BI? Why?
    29. Is remodeling applied only to InfoCubes? Why not DSO/ODS?
    30. In jump queries, can we jump to any transaction, like RSA1, SM37, etc.? Is it possible or not?
    31. Why does ODS activation fail? What types of failures are there? What are the steps to handle them?
    32. A process chain is running and the InfoPackage gets an error; without processing the error of that InfoPackage, can you run the dependent variants? Is it possible?

    Give me any performance and loading issues or support issues
    Reporting errors, Loading errors, process chain errors?