
LO COCKPIT STEP BY STEP

Here is the LO Cockpit extraction process, step by step.
LO EXTRACTION
- Go to Transaction LBWE (LO Customizing Cockpit)
1). Select Logistics Application
       SD Sales BW
            Extract Structures
2). Select the desired Extract Structure and deactivate it first.
3). Give the Transport Request number and continue
4). Click on 'Maintenance' to maintain the Extract Structure
       Select the fields of your choice and continue
             Maintain DataSource if needed
5). Activate the extract structure
6). Give the Transport Request number and continue
- Next step is to Delete the setup tables
7). Go to T-Code SBIW
8). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Delete the content of Setup tables (T-Code LBWG)
vi. Select the application (01 – Sales & Distribution) and Execute
- Now, Fill the Setup tables
9). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Filling the Setup tables
vi. Application-Specific Setup of statistical data
vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
        Specify a run name, time, and date (use a future date)
             Execute
- Check the data in Setup tables at RSA3
- Replicate the DataSource
Use of setup tables:
You fill the setup tables in the R/3 system (they are maintained via SBIW) and extract that data to BW; after that, you can do delta extractions by initializing the extractor.
Full loads are always taken from the setup tables.
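Besides RSA3, you can sanity-check the fill by looking at a setup table directly. A minimal ABAP sketch, assuming the sales-order item setup table MC11VA0ITMSETUP as an example (setup tables follow the <extract structure> + SETUP naming pattern; adjust for your application):

REPORT z_check_setup_table.

* Row count of the setup table for 2LIS_11_VAITM (sales-order items).
* MC11VA0ITMSETUP is an example name; check SE11 for your application.
DATA lv_count TYPE i.

SELECT COUNT(*) FROM mc11va0itmsetup INTO lv_count.

WRITE: / 'Records in setup table:', lv_count.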





QUERY PERFORMANCE TUNING CHECKLIST

1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, order the filters with the highest-level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD and YTD (see the user-exit sketch after this list). This should generate better code than using overlapping restrictions to achieve the same thing. (Use an ABAPer to test and validate that this generates better code.)
7. When queries are written on MultiProviders, restrict to the InfoProvider in the global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to queries against standard InfoCubes, so hardcode the InfoProvider in the global filter whenever possible to eliminate this overhead.
8. Move all global calculated and restricted key figures to local ones so you can analyze which filters can be removed and moved to the global definition of the query. You can then change the calculated key figure and go back to using the global calculated key figure if desired.
9. If Alternative UOM solution is used, turn off query cache.
10. Set the read mode of the query to static or dynamic as appropriate. Reading data during navigation minimizes the impact on the R/3 database and application server resources, because only data that the user requires is retrieved. For queries involving large hierarchies with many nodes, it is wise to select the 'Read data during navigation and when expanding the hierarchy' option to avoid reading data for hierarchy nodes that are not expanded. Reserve the 'Read all data' mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies; they are always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Review the order of restrictions in formulas. Apply as many restrictions as you can before calculations, and avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
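As a minimal sketch of tip 6, here is what a user-exit variable can look like in the standard customer exit for BEx variables (enhancement RSR00001, function module EXIT_SAPLRRS0_001, include ZXRSRU01). ZYTD_RANGE is a hypothetical exit variable on 0CALDAY that returns a year-to-date interval:

* Include ZXRSRU01 - customer exit for BEx variables (RSR00001).
* ZYTD_RANGE is a hypothetical user-exit variable on 0CALDAY.
DATA ls_range TYPE rrrangesid.

CASE i_vnam.
  WHEN 'ZYTD_RANGE'.
    IF i_step = 2.                    " step 2 = after the variable popup
      ls_range-sign = 'I'.
      ls_range-opt  = 'BT'.
      CONCATENATE sy-datum(4) '0101' INTO ls_range-low.  " Jan 1 of this year
      ls_range-high = sy-datum.                          " today
      APPEND ls_range TO e_t_range.
    ENDIF.
ENDCASE.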

By: leela naveen

These are questions I faced. If you have screenshots for any of the questions, please provide them as well.
1. We have standard InfoObjects in SAP; why did you create Z InfoObjects? Can you tell me the business scenario?
2. We have standard InfoCubes in SAP; why did you create Z InfoCubes? Can you tell me the business scenario?
3. In key figures, what is meant by cumulative value, non-cumulative value change, and non-cumulative value with in- and outflow?
4. When you create an InfoObject, it offers 'reference' and 'template' options; what are they?
5. What is meant by a compounding attribute? Tell me a scenario.
6. I have 3 cubes, created a MultiProvider on them, and built a report on it, but I got no data in the report. What happened?
7. I have 10 cubes in a MultiProvider but want data from only 1 cube. What do you do?
8. What is meant by the safety upper limit and safety lower limit in deltas? Explain one by one for timestamp, calendar day, and numeric pointer.
9. I have 80 queries; how can you find which query is taking so much time, and how do you solve it?
10. During compression, all requests become zero; which data is compressed? Explain in detail.
11. What is meant by a flat aggregate? Explain in detail.
12. I created a process chain; on the first day it took 10 minutes, after the first week it took 1 hour, and the next time it took a whole day, with the same loads. What happened, and how can you reduce the loading time?
13. How can you find the cube size? Show me in detail, with screenshots if you have them.
14. Where can we find transport return codes?
15. I have a report that is taking too much time; how can I rectify it?
16. What is an offset? Can we create queries without offsets?
17. I said I monitor my nearly 600 process chains in RSPCM and BWCCMS; I was asked whether there are any third-party monitoring tools. Are there, and which ones?
18. How do clients access the reports?
19. If I don't have master data, is it possible to load transaction data? It is possible; what are the additional steps to do it?
20. What is a structure in reporting?
21. Based on which objects did you create the extended star schema?
22. What is a line item dimension? Explain briefly.
23. What is high cardinality? Explain briefly.
24. A process chain is running; I have to stop the process for 1 hour and then rerun it from where it stopped. How? Also, can I use aggregates with a MultiProvider?
25. What are direct scheduling and a meta chain?
26. Which patch level do you use at present? How can I find out which patch it is?
27. How can we increase the data packet size?
28. Are hierarchies not there in BI? Why?
29. Is remodeling applied only to InfoCubes? Why not to a DSO/ODS?
30. With jump queries, can we jump to any transaction, such as RSA1 or SM37? Is it possible or not?
31. Why does ODS activation fail? What types of failures are there? What are the steps to handle them?
32. A process chain is running and an InfoPackage gets an error; without processing that InfoPackage's error, can you run the dependent variants? Is that possible?

Give me any performance, loading, or support issues you have faced:
reporting errors, loading errors, process chain errors?





Summary
This document explains the basic idea of why the DTP replaced the IP in BI 7.0 for loading data to data targets and data marts. It highlights the main differences between DTP and IP and the significance of the DTP over the IP.
Author: Balamurugan A.J From SDN.

Introduction
DTP and IP are tools used to load data from external systems into BI. In BW 3.5 and below, the IP is used to load data from the source system all the way to the end data target, whereas in BI 7.0 the IP loads data only up to the PSA, and the DTP loads it into the data target and data mart. This document explains why the DTP replaced the IP in BI 7.0.
First let us understand the basic concept about DTP and IP.


DTP
Data Transfer Process
The data transfer process (DTP) loads data within BI from one object to another, applying transformations and filters. In short, a DTP determines how data is transferred between two persistent objects.
It is used to load data from the PSA to a data target (cube, DSO, or InfoObject); it thus replaced the data mart interface and the InfoPackage.
DTP fundamentals:
http://help.sap.com/saphelp_nw2004s/helpdata/en/42/f98e07cc483255e10000000a1553f7/content.htm


IP
InfoPackage
The InfoPackage is the entry point for SAP BI to request data from a source system. InfoPackages are tools for organizing data requests that are extracted from the source system and loaded into the BW system.
In short, loading data into BW is accomplished using InfoPackages.
IP Basics:
http://help.sap.com/saphelp_bw33/helpdata/en/44/420a3b7a3cf508e10000000a114084/frameset.htm


Reasons behind why DTP is Preferred over IP


Loading
A DTP follows a one-to-one mechanism, i.e., there is one DTP per data target, whereas an IP loads to all data targets at once. The same DTP can run both full and delta loads for the same target, which is not possible with the same IP.


Delta Mechanism



A DTP has a separate source-to-target delta handling mechanism. This makes the following possible:
• Different data selections for one DataSource via different DTPs (same or different targets), which is not possible with an IP.
• No DTP request exists in the Reconstruct tab, as each target has its own delta handling mechanism.
• No "Repair Full/Full Repair" concept in DTP, as it has a one-to-one delta mechanism.
• Deleting the data mart deletes the request itself, not just its status as in the case of an IP.


Filter
A DTP filters the data based on the semantic key, which is not possible with an IP.
Filters at IP level restrict data before it enters BI, i.e., the PSA contains only the filtered data.
A DTP filter, in contrast, applies only at the data target level, thereby keeping all the data in the PSA.


Debugging
Breakpoints can be set at data packet level at all stages (before extraction, before data transfer, and after data transfer), whereas breakpoints do not exist for an IP.


Error Handling
Temporary data storage and the error stack improve error handling in DTP; neither exists for an IP.
(The temporary storage area contains all the data, whereas the error stack holds only the erroneous records. Temporary storage can be switched on/off at each stage of the process.)
• Error handling for a DSO is possible only with a DTP.
• Reloading a bad request without deleting the request in the data target is possible only with a DTP, using the Manual Update option.

Performance
DTP is faster because its data loading is optimized through parallel processing.
Data loading through DTP generates no tRFCs/LUWs at runtime, which minimizes the loading time.


Migration
It is possible to migrate from an IP flow to a DTP flow if you are on BI 7.0 or above.
Create a DTP and set the processing mode to 'Serial Extraction, Immediate Parallel Processing' to switch from IP to DTP without data loss.
Migrating IP flows to DTP:
• Note 920644 - DataStore with emulated DataSource: InfoPackages/DTP
• http://help.sap.com/saphelp_nw2004s/helpdata/en/43/03c035dc7821abe10000000a1553f6/content.htm





Doc SDN Link: http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f0ada4bf-e963-2d10-aa89-abb9eeae7723






SAP BI 7 Interview Questions!!!! With Real-Time approach....

By: Chandiraban singu Source: SDN

Interviewers and questions will not remain the same, but look for the pattern.

A brief of your profile
A brief of what you did in the project
Your challenging and complex situations
Problems you faced regularly and what you did to prevent them permanently
Interviewers may also pose a complex or recent situation of their own for your analysis.

Someone may add:
Your system landscape
System architecture
Release management
Team size, org structure, ...

If your experience includes production support, then expect general questions about your roles, authorizations, and commonly faced errors.
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/b0c1d94f-b825-2c10-15ae-ccfc59acb291

About data source enhancement
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/00c1f726-1dc2-2c10-f891-ddfbffdb1a46

About data flow during delta
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f03da665-bb6f-2c10-7da7-9e8a6684f2f9


If your experience includes implementation, then:
Modules you have implemented
Methodology adopted
https://weblogs.sdn.sap.com/pub/wlg/13745
Approach to implementation
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8917
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8920
Testing system
Business scenario
How did you do the data modelling, e.g., why standard LO DataSources? Why a DSO? Why this many layers? ...
Documentation: what your functional spec and technical spec templates contain and how the information is organized.

Design a SAP NetWeaver - Based System Landscape
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50a9952d-15cc-2a10-84a9-fd9184f35366
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/8877
BI - Soft yet Hard Challenges
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/9068

Best Practice for a new BI project
https://www.sdn.sap.com/irj/sdn/thread?threadID=775458&tstart=0

Guidelines to Make Your BI Implementations Easier
http://www.affine.co.uk/files/Guidelines%20to%20Make%20Your%20BI%20Implementations%20Easier.pdf

Specific BW interview questions

https://www.sdn.sap.com/irj/scn/advancedsearch?query=SAP+BW+INTERVIEW+QUESTIONS&cat=sdn_all
200 BW Questions and Answers for INTERVIEWS
http://sapdocs.info/sap-overview/sap-interview-questions/
http://www.erpmastering.com/bwfaq.htm
http://www.allinterview.com/showanswers/33349.html
http://searchsap.techtarget.com/generic/0,295582,sid21_gci1182832,00.html
http://prasheelk.blogspot.com/2008_05_12_archive.html
http://bisharath.blogspot.com/

Best of luck for your interviews. Be clear about what you have done.
http://saptutions.com/SAPBW/BW_openhub.asp
http://www.scribd.com/doc/6343052/BW-EXAM





Types of Tickets in Production

By: Purushothama Reddy

What are the types of tickets and their importance?
This depends on the SLA. It can be like:
1. Critical.
2. Urgent.
3. High.
4. Medium
5. Low.
The response times and resolution times are again defined in the SLA, based on the client's requirements and the charges.
The following is from the viewpoint of the criticality of the problem faced by the client, as defined by SAP.
1) First Level Ticketing:
Not severe problems; routine errors. Mostly handled by the company's service desk arrangement (if it has one).
Eg: a) A credit limit block occurring on certain documents.
b) A pricing condition record not found even though the conditions are maintained.
c) Unable to print a delivery document or packing list.
PS: In the 4th phase of the ASAP Implementation Methodology (i.e., Final Preparation for go-live), SAP clearly specifies that a service desk needs to be arranged for any implementation, for better handling of production errors.
The service desk lies within the client.
2) Second Level Ticketing:
Somewhat serious problems that could not be solved by the service desk. These should be referred to the service company (or whoever is prescribed in the SLA).
Eg: a) Credit exposure (especially open values) does not update correctly to the KNKK table.
b) Intercompany billing is taking a wrong bill value.
c) A new order type is needed to handle a reservation process.
d) A new product has been added to our selling range and needs to be included in SAP (material masters, division attachments, stock handling, etc.).
3) Third Level Ticketing:
Problems that could not be solved by either of the above are referred to SAP's own Online Service Support (OSS). SAP tries to solve the problem, sometimes by providing the OSS Note that fits the error, and rarely by logging into our servers (via remote logon) to post-mortem the problem. (The client connections, login IDs, and passwords have to be provided to SAP whenever they need them, or at the time of opening the OSS message.)
There are lots of OSS Notes on each issue, SAP Top Notes, and Notes explaining the process of raising an OSS message.
Sometimes SAP charges the client/service company, depending on the agreement made at the time of buying the license from SAP.
Eg: 1) 'Business transaction for the currency EUR is not possible. Check OSS Note' - this comes up at the time of billing.
2) Transaction MMPI - periods cannot be opened; see the OSS Note.
There are many other examples on the issue.
4) Fourth Level Ticketing:
Problems rarely reach this level.
Such problems may need re-engineering of the business process due to a change in business strategy, or an upgrade to a new version. More or less, this leads to the end of that SAP implementation.

How to Test BW Cube Data with PSA Data (Data Reconciliation)

How to Test BW Cube data with PSA Data
Objective
You would like to compare the contents of your InfoCube with data loaded from your source system into the PSA. In our example, we will check data that has been loaded for a specific customer.


Step 1
In the PSA tree of the Administrator Workbench, generate the Export DataSource on the PSA you want to check against via the context menu.


Step 2
In the source system tree, display the DataSource overview of the MYSELF BW system and locate your PSA Export DataSource. The technical name is the original DataSource name (to which the PSA belongs) prefixed with a “7”.


Step 3
From the context menu of the DataSource, choose “Assign InfoSource”.

Step 4
Enter a name for your new InfoSource and press the “Create” button.

Step 5
Create an InfoSource for flexible update. This should be the default, so just press the green arrow to continue.

Step 6
Maintain the communication structure and transfer rules. Make sure that “REQUEST” is mapped to 0TCTREQUID (don't change the default). The transfer method will automatically be set to “IDOC”. Activate your InfoSource.

Step 7
Create a RemoteCube to access data in the PSA.


Step 8
Choose a name and a description for the InfoCube. As the InfoCube type, choose “SAP RemoteCube” and enter the name of the InfoSource you created in steps 4-6. Tick the “Clear Source System” checkbox, since the BW system itself is the only source system of the RemoteCube.


Step 9
Select characteristics and key figures, create dimensions and assign the characteristics to the dimensions (no special modeling guidelines have to be followed for a RemoteCube). Activate your RemoteCube.


Step 10
From the context menu of the RemoteCube, choose “Assign Source Systems”

Step 11
Select the source system (the BW system itself) and save the assignment.

Step 12
Create a MultiProvider.

Step 13
Choose a name and a description for the new MultiProvider and press the “Create” button.

Step 14
Select the InfoCube you want to check and the RemoteCube you have just created as the InfoProviders involved in the MultiProvider.

Step 15
Select the characteristics you need for your check reports. Make sure 0TCTREQUID is included.

Step 16
Identify characteristics and choose the key figures (from both InfoCubes) you would like to compare. Activate your MultiProvider.

Step 17
Create a query and include the characteristics you want to use, the Request ID from the DataPackage dimension and the Data Request (GUID) for the PSA request number. It is useful to include variables for your characteristics (otherwise you may get far too much data).

Step 18
Create selections for the Key Figures you want to check.

Step 19
In the first selection, use your Key Figures and the InfoProvider characteristic 0INFOPROV restricted to the InfoCube (0INFOPROV is automatically available in every MultiProvider).

Step 20
In the second selection, restrict the InfoProvider characteristic to the RemoteCube.

Step 21
Run the query. In our example, we are checking data for one customer.

Step 22
The result shows the data from PSA with corresponding Data Request (GUID) and the sum of the data from the InfoCube.

Step 23
If you drill down on Request ID, you get the data from the InfoCube displayed with the Request ID in the InfoCube (which you can see in the InfoCube Management).

Step 24
Optional: If you only want to check the data for the last request, you can add the variable 0LSTRQID to your InfoCube selection and…

Step 25
the variable 0MAPRQID to your RemoteCube selection. The latter is filled automatically with the PSA request ID corresponding to the InfoCube request ID filtered by the first variable.

Step 26
The result will only show data loaded with the last request.

Listed below are some of the frequently used Function Modules within BW.

Function Module Description (Function Group RRMX)
RRMX_WORKBOOK_DELETE Delete BW Workbooks permanently from Roles & Favorites
RRMX_WORKBOOK_LIST_GET Get list of all Workbooks
RRMX_WORKBOOK_QUERIES_GET Get list of queries in a workbook
RRMX_QUERY_WHERE_USED_GET Lists where a query has been used
RRMX_JUMP_TARGET_GET Get list of all Jump Targets
RRMX_JUMP_TARGET_DELETE Delete Jump Targets




MONI_TIME_CONVERT Used for Time Conversions.
CONVERT_TO_LOCAL_CURRENCY Convert Foreign Currency to Local Currency.
CONVERT_TO_FOREIGN_CURRENCY Convert Local Currency to Foreign Currency.
TERM_TRANSLATE_TO_UPPER_CASE Used to convert all texts to UPPERCASE
UNIT_CONVERSION_SIMPLE Used to convert any unit to another unit. (Ref. table : T006)
TZ_GLOBAL_TO_LOCAL Used to convert timestamp to local time
FISCPER_FROM_CALMONTH_CALC Convert 0CALMONTH or 0CALDAY to Financial Year or Period
RSAX_BIW_GET_DATA_SIMPLE Generic Extraction via Function Module
RSAU_READ_MASTER_DATA Used in Data Transformations to read master data InfoObjects

RSDRI_INFOPROV_READ
RSDRI_INFOPROV_READ_DEMO
RSDRI_INFOPROV_READ_RFC Used to read InfoCube or ODS data through RFC

DATE_COMPUTE_DAY
DATE_TO_DAY Returns the day of the week that a given date falls on.
DATE_GET_WEEK Will return a week that the day is in.
RP_CALC_DATE_IN_INTERVAL Add/Subtract Years/Months/Days from a Date.

RP_LAST_DAY_OF_THE_MONTHS
SLS_MISC_GET_LAST_DAY_OF_MONTH Determine Last Day of the Month.
RSARCH_DATE_CONVERT Used for date conversions; can be used in InfoPackage routines.
DATE_CHECK_PLAUSIBILITY Used to validate dates
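As a quick illustration, here is a sketch of two of the listed modules in use. The parameter names are the standard ones, but verify them in SE37 on your release:

DATA: lv_future TYPE sy-datum,
      lv_local  TYPE p DECIMALS 2.

* Add 30 days to today's date.
CALL FUNCTION 'RP_CALC_DATE_IN_INTERVAL'
  EXPORTING
    date      = sy-datum
    days      = 30
    months    = 0
    signum    = '+'
    years     = 0
  IMPORTING
    calc_date = lv_future.

* Convert 100 USD to EUR at today's exchange rate.
CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
  EXPORTING
    date             = sy-datum
    foreign_amount   = '100.00'
    foreign_currency = 'USD'
    local_currency   = 'EUR'
  IMPORTING
    local_amount     = lv_local
  EXCEPTIONS
    no_rate_found    = 1
    OTHERS           = 2.
IF sy-subrc <> 0.
  WRITE: / 'No exchange rate found for this date.'.
ENDIF.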

Avoid the SID Generation Error While Activating Data in a DSO

You might run into one of these error messages while activating data in a DataStore object (DSO) either manually or from a process chain:
• “Activation of M records from DataStore object terminated”
• “Resource error. No batch process available. Process terminated”
• “Time limit exceeded. No return of the split processes”
When you create a DSO, the system sets the SIDs Generation upon Activation flag by default; it is a checkbox option in the edit-mode settings of the DSO. If this option is checked, the system checks the SID values for all of the characteristics in the DSO, and if a SID value for a particular characteristic doesn't exist, the system generates it. The SIDs Generation upon Activation option therefore helps query performance, as the system doesn't have to generate SIDs at query runtime.
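For illustration, this is roughly the check the system performs during activation: a lookup in the characteristic's SID table. A minimal sketch, assuming 0MATERIAL as an example characteristic (its SID table is /BI0/SMATERIAL; custom InfoObjects use /BIC/S<name>, and the material number below is hypothetical):

* Does a SID already exist for this characteristic value?
DATA lv_sid TYPE rssid.

SELECT SINGLE sid FROM /bi0/smaterial
  INTO lv_sid
  WHERE material = 'TEST0001'.          " hypothetical material number
IF sy-subrc <> 0.
  WRITE: / 'No SID yet - one would be generated during DSO activation.'.
ENDIF.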



The general understanding is that the above error messages during activation of a DSO are due to the SIDs Generation upon Activation setting. However, the error messages are not due to this setting, but rather to incorrect parameterization of the processes that activate requests. This means that several background processes were running simultaneously (i.e., activation of requests in the DSO and SID creation), resulting in the termination of the request. If a process chain is used for activation of a DSO, all of the above processes still run simultaneously in the background. You can use transaction RZ04 to check how many background processes are available in the system at the time of the load.



You can change the runtime parameters for the affected DSO in transaction RSODSO_SETTINGS. Note that transaction RSCUSTA2 is obsolete in SAP NetWeaver BI 7.0. In the RSODSO_SETTINGS screen, select the DSO in question and click the Change button to change the runtime parameters. On the 'Maintenance of Runtime Param.' screen, click the 'Change Process Params.' button under 'Parameter for Activation', as the issue here is an activation error.

Alternatively, you can get to this screen from the context menu of the DSO by selecting Manage and then choosing the activation request that failed. Click the Activate button just as you would to activate a request loaded into the activation queue. The 'Activate Data in DSO ...' window pops up. Click the 'Activate in Parallel' button; a pop-up window displays the process type ODSACTIVAT.

'Maximum Wait Time for Process' is set to 300 seconds by default, but you can increase it to a higher value if you expect the system workload to be high. If you choose 'Dialog process' as the option, then SAP recommends that the wait time in SAP NetWeaver BI be three times higher than in R/3 (SAP Note 1118205). SAP Note 192658 also recommends setting the maximum runtime for the work process to 3,600 seconds. After you click the 'Change Process Params.' button, you see the settings window.


Enter the number of processes. Under 'Parallel Processing', select 'Dialog'. Select parallel generators in 'Server Group for Parallel Dialog Processing' and then click the save icon. You can now re-initiate the failed activation, and the data should activate without any issues.


After the successful activation of data in the DSO, you can revert to the normal settings for the DSO if necessary, to avoid having too many dialog processes. If the activation of data in the DSO is done through process chains (transaction RSPC), you can access the same settings there. If many DSOs are failing during activation and the above parameters have to be changed for all of them, then SAP recommends updating the table RSBATCHPARALLEL directly. However, this requires security access to change table data directly, and the steps described here are easier to perform for individual DSOs.

MAINTAINING DATA QUALITY IN BW USING ERROR STACK

By: Ramakrishna Gattikoppula (Infosys) Article (PDF 255 KB) 10 August 2010


OVERVIEW
This article explains the different ways in which incorrect data records can be moved to the error stack when a data record is processed in the routines (start, end, characteristic, or expert routines) of a transformation (in this case, a data record is marked as incorrect based on customer-specific requirements or conditions).

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/20ebeb43-9e8a-2d10-b28e-825c0142ad4f?QuickLink=index&overridelayout=true
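A minimal sketch of the idea from the article, as it might look in a characteristic routine: append an error entry to the MONITOR table and skip the record. The customer field check and the message class ZBW are hypothetical, and whether the record actually lands in the error stack depends on the DTP error handling setting (e.g., 'Valid Records Update'):

* Characteristic routine fragment: flag a record as erroneous.
DATA ls_monitor TYPE rstmonitor.

IF SOURCE_FIELDS-customer IS INITIAL.   " hypothetical validity check
  ls_monitor-msgid = 'ZBW'.             " hypothetical message class
  ls_monitor-msgty = 'E'.
  ls_monitor-msgno = '001'.
  APPEND ls_monitor TO MONITOR.
  RAISE EXCEPTION TYPE cx_rsrout_skip_record.
ENDIF.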

HOW TO USE AN EXPERT ROUTINE IN TRANSFORMATION

By: P Renjith Kumar SDN Article (PDF 382 KB) 18 August 2010

OVERVIEW
This document gives a basic overview of creating an expert routine in a transformation.


http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a056f726-59eb-2d10-9db3-eb11f0c1e9e4?QuickLink=index&overridelayout=true
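To make the overview concrete, here is a minimal expert-routine skeleton. An expert routine replaces the entire transformation logic, so you move SOURCE_PACKAGE to RESULT_PACKAGE yourself; _ty_s_sc_1 and _ty_s_tg_1 are the source and target structure types BW generates in the routine class:

METHOD expert_routine.
* Copy every source record to the result via identically named fields.
  FIELD-SYMBOLS: <ls_source> TYPE _ty_s_sc_1,
                 <ls_result> TYPE _ty_s_tg_1.

  LOOP AT SOURCE_PACKAGE ASSIGNING <ls_source>.
    APPEND INITIAL LINE TO RESULT_PACKAGE ASSIGNING <ls_result>.
    MOVE-CORRESPONDING <ls_source> TO <ls_result>.
    " insert your own derivations / filtering here
  ENDLOOP.
ENDMETHOD.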

IMPLEMENTING DYNAMIC FILTERS IN INFOPACKAGE - TWO SCENARIOS

By: Akashdeep Banerjee (IBM) SDN Article (PDF 647 KB) 28 September 2010


OVERVIEW
This article explains two different scenarios for implementing dynamic filters in InfoPackages built on hierarchy and transactional DataSources, for restricting undesired data in the BI system.

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a03c23b1-1cb4-2d10-f1ae-9f5ed92be246?QuickLink=index&overridelayout=true
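For flavor, a hedged sketch of what such a dynamic filter can look like in an InfoPackage data-selection routine (selection type 6, 'ABAP Routine', on the Data Selection tab). The FORM frame is generated by BW and also carries further USING parameters omitted here; the logic restricting 0CALDAY to the current month is illustrative:

FORM compute_calday
  TABLES   l_t_range STRUCTURE rssdlrange
  CHANGING p_subrc   LIKE sy-subrc.

  DATA ls_range TYPE rssdlrange.

* Restrict 0CALDAY to the current calendar month.
  READ TABLE l_t_range INTO ls_range WITH KEY fieldname = 'CALDAY'.
  IF sy-subrc = 0.
    ls_range-sign   = 'I'.
    ls_range-option = 'BT'.
    CONCATENATE sy-datum(6) '01' INTO ls_range-low.   " first day of month
    ls_range-high   = sy-datum.                       " today
    MODIFY l_t_range FROM ls_range INDEX sy-tabix.
  ENDIF.

  p_subrc = 0.
ENDFORM.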