
By Mohanavel at scn.sap.com

Objective:    Introduce a time delay in a process chain using the standard SAP program RSWAITSEC.


Background:  With the interrupt process type in a process chain, we can delay the subsequent process types once the interrupt step is reached. However, the standard interrupt process type requires us to specify a date and time or an event name.
In many cases the interrupt step does not help. Suppose an interrupt step is introduced to delay the subsequent processes by a fixed period of time: if all the steps above the interrupt complete early, then instead of passing the trigger to the subsequent step after the desired wait time, the interrupt forces the chain to wait until the conditions in the interrupt are satisfied.
To delay the trigger flow from one process type to another in a process chain, without a fixed-time condition or an event being raised, we can use the program RSWAITSEC.


Scenario:   In our project, one of the master data chains is scheduled at 23:00 IST. This load supplies data to a report based on 0CALWEEK. The data load and an ABAP program in the process chain use SY-DATUM, so if a load that starts on Sunday at 23:00 does not complete by 23:59:59 (a one-hour window), the entire data set is wrongly mapped to the next week, causing a data discrepancy.
So the chain had to be scheduled at 23:00 IST every day except Sunday, and at 22:45 IST (15 minutes earlier) on Sundays.
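The week-boundary effect can be sketched in Python. Here ISO calendar weeks stand in for 0CALWEEK (the actual SAP week numbering depends on system settings, so this is only an approximation), and the dates are hypothetical:

```python
from datetime import date

def calweek(d: date) -> int:
    """ISO calendar week, used here as a stand-in for 0CALWEEK."""
    return d.isocalendar()[1]

# A load started Sunday 23:00 that finishes before midnight stamps Sunday's date.
# If it spills past midnight, SY-DATUM-based logic picks up Monday's date instead,
# and Monday already belongs to the next calendar week.
sunday = date(2024, 1, 7)   # a Sunday (hypothetical run date)
monday = date(2024, 1, 8)   # the following Monday

print(calweek(sunday))  # 1
print(calweek(monday))  # 2 -> data is mapped to the next week
```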


Different Ways to Achieve the Above Situation:
1.        Creating two different process chains: scheduling the 1st process chain at 23:00 IST for Monday to Saturday (using a factory calendar), and scheduling the 2nd at 22:45 IST for Sunday only.
Disadvantage of the 1st method:
Two chains are created unnecessarily for the same loads, which leads to multiple chains in the system.


   2.   Scheduling the same chain at 22:45 IST and adding a decision step that introduces a 15-minute interruption for Monday to Saturday, so on Sunday the load starts at 22:45.


Process Chain with Interrupt Process Type:

[Screenshot: process chain with interrupt step]
Disadvantage of 2nd method:
If you want to execute this chain immediately at some other time, the interruption step will still wait until 23:00 IST to start the load for the Monday-to-Saturday runs.


Better Way of Achieving with RSWAITSEC program:


Schedule the chain at 22:45 and add a decision step to check whether it is Sunday. If it is Sunday, the next step goes directly to the local chain; if the day is Monday to Saturday, the next step runs the standard SAP program RSWAITSEC. In the program variant we specify the desired time delay in seconds (900 seconds).

Compared to the above two methods, this is the better way to achieve the desired output. Even if I run the process chain with the start process set to immediate on a day other than Sunday, the local chain does not wait until 23:00 IST; it waits 15 minutes and is then triggered.




As this is a standard SAP program, no transport needs to be moved for it; it can be used directly even in production.
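The decision logic of this setup can be sketched as a small Python function (a rough illustration only; in the real chain the branch is a decision step and the wait itself is performed by RSWAITSEC with 900 in its variant):

```python
from datetime import date

def delay_seconds(run_date: date) -> int:
    """Decision-step sketch: no extra wait on Sunday, 900 s (15 min) otherwise."""
    # Python's weekday(): Monday = 0 ... Sunday = 6
    return 0 if run_date.weekday() == 6 else 900

print(delay_seconds(date(2024, 1, 7)))  # Sunday -> 0   (chain starts at 22:45)
print(delay_seconds(date(2024, 1, 8)))  # Monday -> 900 (chain effectively starts at 23:00)
```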



Process chain with RSWAITSEC:
[Screenshot: process chain with RSWAITSEC program]


ABAP Process type with RSWAITSEC Program(which shown in the above PC):


[Screenshot: program variant for RSWAITSEC]


Setting the Variant value (required time):


In the variant value we specify the desired delay in seconds. My requirement is a 15-minute delay, so I entered 900 seconds as the variant value.


[Screenshot: variant value screen]



So we can use this program at any stage of a process chain to introduce a fixed delay.


Hope this will be helpful.



By: Anonymous
I want to continue my series for beginners new to SAP BI. In this blog I write down the necessary steps to create a process chain that loads data with an InfoPackage and with a DTP, including activation and scheduling of the chain.

1.)    Call transaction RSPC

RSPC

RSPC is the central transaction for all your process chain maintenance. On the left you find existing process chains sorted by “application components”. The default mode is the planning view. Two other views are available: the check view and the protocol view.
2.)    Create a new process chain
To create a new process chain, press the “Create” icon in the planning view. In the following pop-up window you have to enter a technical name and a description for your new process chain.

name chain

The technical name can be up to 20 characters long. Usually it starts with a Z or Y; see your project's internal naming conventions.
3.)    Define a start process
After entering a process chain name and description, a new window pops up. You are asked to define a start variant.
 Start variant


That’s the first step in your process chain! Every process chain has exactly one starting step. A new step of type “Start process” will be added. To define a unique start process for your chain you have to create a start variant. You will repeat this pattern for every subsequent step: first drag a process type onto the design window, then define a variant for this type, and thereby create a process step. The formula is:
 Process Type + Process Variant = Process Step!
If you save your chain, the process chain name is saved into table RSPCCHAIN. The process chain definition with its steps is stored in table RSPCPROCESSCHAIN as a modified version. So press the “Create” button, and a new pop-up appears:

start variant name

Here you define a technical name for the start variant and a description. In the next step you define when the process chain will start. You can choose between direct scheduling and “start using meta chain or API”. With direct scheduling you can define either to start immediately upon activating and scheduling, or at a defined point in time, as you know it from job scheduling in any SAP system. With “start using meta chain or API” you can start this chain as a subchain or from an external application via the function module RSPC_API_CHAIN_START. Press Enter, choose an existing transport request or create a new one, and you have successfully created the first step of your chain.
 4.)    Add a loading step
If you have defined the starting point for your chain, you can now add a loading step for master data or transaction data. For this, choose “Execute InfoPackage” from the available process types. See the picture below:

loading step

You can easily move this step with drag & drop from the left side into your design window. A new pop-up window appears. Here you choose which InfoPackage you want to use; you can’t create a new one here. Press F4 help and a new window pops up with all available InfoPackages sorted by use: at the top are InfoPackages used in this process chain, followed by all other available InfoPackages not used in it. Choose one and confirm. This step will now be added to your process chain. Your chain should now look like this:

first steps

How do you connect these two steps? One way is to right-click on the first step and choose Connect with -> Load Data, and then the InfoPackage you want as the successor.

 connect step

Another possibility is to select the starting point and keep the left mouse button pressed. Then move the mouse down to your target step; an arrow should follow your movement. Release the mouse button and a new connection is created. From the Start process to every second step it’s a black line.
5.)    Add a DTP process
In BI 7.0 systems you can also add a DTP to your chain. From the process type window (see above) choose “Data Transfer Process”. Drag & drop it onto the design window. You will be asked for a variant for this step. Again, as with InfoPackages, press F4 help and choose from the list of available DTPs the one you want to execute. Confirm your choice and a new step for the DTP is added to your chain. Now you have to connect this step with one of its possible predecessors. As described above, choose the context menu and Connect with -> Data Transfer Process. But now a new pop-up window appears.

connection red green 
Here you can choose whether this successor step shall be executed only if the predecessor was successful, only if it ended with errors, or always, regardless of the outcome. With this connection type you control the behaviour of your chain in case of errors. Whether a step ends successfully or with errors is defined in the process step itself. To see the settings for each step, go to Settings -> Maintain Process Types in the menu. In this window you see all defined (standard and custom) process types. Choose Data Transfer Process and display details in the menu. In the new window you can see:

dtp setting

A DTP can raise the possible events “Process ends successful” or “Process ends incorrect”, has ID @VK@ (which is actually its icon), and appears under category 10, “Load process and post-processing”. Your process chain can now look like this:

two steps


You can now add all other necessary steps. By default the process chain itself suggests successors and predecessors for each step. For loading transaction data with an InfoPackage it usually adds steps for deleting and creating indexes on a cube. You can switch off this behaviour in the menu under “Settings -> Default Chains”. In the pop-up choose “Do not suggest Process” and confirm.

default chains

Then you have to add all necessary steps yourself.
6.)    Check chain
Now you can check your chain via menu “Goto -> Checking View” or by pressing the “Check” button. The chain is checked to ensure all steps are connected and have at least one predecessor. Logical errors are not detected; that’s your responsibility. If the check returns warnings or is OK you can activate the chain. If the check returns errors you have to remove them first.
7.)    Activate chain
After a successful check you can activate your process chain. In this step the entries in table RSPCPROCESSCHAIN are converted into an active version. You can activate your chain via menu “Process chain -> Activate” or press the activation button in the symbol bar. You will find your new chain under the application component "Not assigned". To assign it to another application component you have to change it: choose the "application component" button in change mode of the chain, save, and reactivate it. Then refresh the application component hierarchy; your process chain will now appear under the new application component.
8.)    Schedule chain
After successful activation you can now schedule your chain. Press the “Schedule” button or menu “Execution -> Schedule”. The chain will be scheduled as a background job; you can see it in SM37 as a job named “BI_PROCESS_TRIGGER”. Unfortunately every process chain is scheduled under this job name; the job variant reveals which process chain will be executed. During execution the steps defined in RSPCPROCESSCHAIN are executed one after another, with the next process triggered by events defined in the table. You can watch SM37 for newly executed jobs starting with “BI_” or look at the protocol view of the chain.
9.)    Check protocol for errors
You can check chain execution for errors in the protocol or process chain log. Choose in the menu “Go to -> Log View”. You will be asked for the time interval for which you want to check chain execution. Possible options are today, yesterday and today, one week ago, this month and last month or free date. For us option “today” is sufficient.
Here is an example of another chain that ended with errors:
  chain log

On the left side you see when the chain was executed and how it ended; on the right side you see for every step whether it ended successfully. As you can see, the first two steps were successful and the “Load Data” step of an InfoPackage failed. You can now check the reason via the context menu entries “Display messages” or “Process monitor”. “Display messages” shows the job log of the background job and the messages created by the request monitor. With “Process monitor” you get to the request monitor and see detailed information on why the loading failed. The logs are stored in tables RSPCLOGCHAIN and RSPCPROCESSLOG. Examining the request monitor will be a topic of one of my upcoming blogs.


 10.) Comments
Here just a little feature list with comments.
- You can search for chains, but it does not work properly (at least in BI 7.0 SP15).
- You can copy existing chains to new ones. That works really fine.
- You can create subchains and integrate them into so-called meta chains. But the application component menu does not reflect this structure. There is no function available to find all meta chains for a subchain or vice versa list all subchains of a meta chain. This would be really nice to have for projects.
- Nice to have would be the possibility to schedule chains with a user defined job name and not always as "BI_PROCESS_TRIGGER".
But now it's your turn to create process chains.

By:
Chandiraban singu 
sdn.sap.com

Interviewers and the questions will not remain the same, but find the pattern.

Brief of your profile
Brief of what you have done in the project
Your challenging and complex situations
Problems you faced regularly and what you did to prevent them permanently
Interviewers may pose a complex, recent situation for your analysis.

Someone may add:
Your system landscape
System architecture
Release management
Team size, org structure, ...

If your experience includes production support, then expect general questions about your roles, authorizations, and commonly faced errors.
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/b0c1d94f-b825-2c10-15ae-ccfc59acb291

About data source enhancement
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/00c1f726-1dc2-2c10-f891-ddfbffdb1a46

About data flow during delta
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f03da665-bb6f-2c10-7da7-9e8a6684f2f9


If your experience includes implementation, then:
Modules which you have implemented
Methodology adopted
https://weblogs.sdn.sap.com/pub/wlg/13745
Approach to implementation
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8917
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8920
Testing system
Business scenario
How did you do your data modelling: why a standard LO DataSource? why a DSO? why this many layers? ...
Documentation: what do your functional spec and technical spec templates, content, and information look like?

Design a SAP NetWeaver - Based System Landscape
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50a9952d-15cc-2a10-84a9-fd9184f35366
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/8877

BI - Soft yet Hard Challenges
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/9068

*Best Practice for new BI project *
https://www.sdn.sap.com/irj/sdn/thread?threadID=775458&tstart=0

Guidelines to Make Your BI Implementations Easier
http://www.affine.co.uk/files/Guidelines%20to%20Make%20Your%20BI%20Implementations%20Easier.pdf


Specific bw interview questions

https://www.sdn.sap.com/irj/scn/advancedsearch?query=SAP+BW+INTERVIEW+QUESTIONS&cat=sdn_all
200 BW Questions and Answers for INTERVIEWS
http://sapdocs.info/sap-overview/sap-interview-questions/
http://www.erpmastering.com/bwfaq.htm
http://www.allinterview.com/showanswers/33349.html
http://searchsap.techtarget.com/generic/0,295582,sid21_gci1182832,00.html
http://prasheelk.blogspot.com/2008_05_12_archive.html

Best of luck for your interviews... Be clear about what you have done...
http://saptutions.com/SAPBW/BW_openhub.asp
http://www.scribd.com/doc/6343052/BW-EXAM


 

Useful Transactions and Notes for NetWeaver 7.0

RSRD_ADMIN - Broadcasting Administration - Available in the BI system for the administration of Information Broadcasting.
CHANGERUNMONI - Use this transaction to monitor the status of the attribute change run.
RSBATCH - Dialog and Batch Processes. BI background management functions:
  • Managing background and parallel processes in BI
  • Finding and analyzing errors in BI
  • Reports for BI system management 
are available under Batch Manager.
RRMX_CUST - Make settings directly in this transaction to determine which BEx Analyzer version is called.
Note: 970002 - Which BEx Analyzer version is called by RRMX?
RS_FRONTEND_INT - Use this transaction to block new frontend components in field QD_EXCLUSIVE_USER from migrating to the 7.0 version.
Note: 962530 - NW04s - How to restrict access to Query Designer 2004s.
WSCONFIG - This transaction is to create, test and release the Web Service definition.
WSADMIN - Administration Web Services - This transaction is to display and test the endpoint.
RSTCO_ADMIN - Use this transaction to install basic BI objects and check whether the installation was carried out successfully. If the installation status is red, restart the installation by calling transaction RSTCO_ADMIN again, and check the installation log.
Note 1000194 - Incorrect activation status in transaction RSTCO_ADMIN.
Note 1039381 - Error when activating the content Message no. RS062 (Error when installing BI Admin Cockpit).
Note 834280 - Installing technical BI Content after upgrade.
Note 824109 - XPRA - Activation error in NW upgrade. The XPRA installs technical BW Content objects that are necessary for productive use of the BW system. (An error occurs during the NetWeaver upgrade in the RS_TCO_Activation_XPRA XPRA; the system ends the execution of the method with status 6.)
RSTCC_INST_BIAC - For activating the Technical Content for the BI admin cockpit
Run report RSTCC_ACTIVATE_ADMIN_COCKPIT in the background
Note 934848 - Collective note - (FAQ) BI Administration Cockpit
Note 965386 - Activating the technical content for the BI admin cockpit
Attachment for report RSTCC_ACTIVATE_ADMIN_COCKPIT source code
Terminations and errors when activating Technical Content objects:
Note 1040802 - Terminations occur when activating Technical Content Objects
RSBICA - BI Content Analyzer - Check programs to analyze inconsistencies and errors of custom-defined InfoObjects, InfoProviders, etc. With the central transaction RSBICA, schedule the delivered check programs for the local system or a remote system via RFC connection. Results of the check programs can be loaded to the local or remote BI system to get a single point of entry for analyzing the BI landscape.
RSECADMIN - Transaction for maintaining new authorizations. Management of Analysis Authorizations.
Note 820123 - New Authorization concept in BI.
Note 923176 - Support situation authorization management BI70/NW2004s.
RSSGPCLA - For regenerating RSDRO_* objects. Set the status of the programs belonging to program classes "RSDRO_ACTIVATE", "RSDRO_UPDATE" and "RSDRO_EXTRACT" to "Generation required". To do this, select the program class and then press the "Set statuses" button.
Note 518426 - ODS Object - System Copy, migration
RSDDBIAMON - BI Accelerator - Monitor with administrator tools.
  • Restart BIA server: restarts all the BI accelerator servers and services.
  • Restart BIA Index Server: restart the index server.
  • Reorganize BIA Landscape: If the BI accelerator server landscape is unevenly distributed, redistributes the loaded indexes on the BI accelerator servers.
  • Rebuild BIA Indexes: If a check discovers inconsistencies in the indexes, delete and rebuild the BI accelerator indexes.
RSDDSTAT - For Maintenance of Statistics properties for BEx Query, InfoProvider, Web Template and Workbook.
Note 964418 - Adjusting ST03N to new BI-OLAP statistics in Release 7.0
Note 934848 - Collective Note (FAQ) BI Administration Cockpit.
Note 997535 - DB02 : Problems with History Data.
Note 955990 - BI in SAP NetWeaver 7.0: Incompatibilities with SAP BW 3.X.
Note 1005238 - Migration of workload statistics data to NW2004s.
Note 1006116 - Migration of workload statistics data to NW2004s (2).
DBACOCKPIT - This new transaction replaces the old transactions ST04 and DB02, and comes with Support Package 12 for database monitoring and administration.
Note 1027512 - MSSQL: DBACOCKPIT  for basis release 7.00 and later.
Note 1072066 - DBACOCKPIT - New function for DB monitoring.
Note 1027146- Database administration and monitoring in the DBA Cockpit.
Note 1028751 - MaxDB/liveCache: New functions in the DBA Cockpit.
BI 7.0 iView Migration Tool
Note 1128730 - BI 7.0 iView Migration Tool
Attachments for iView Migration Tool:
  • bi migration PAR
  • bi migration SDA
  • BI iView Migration Tool
For Setting up BEx Web
Note 917950 - SAP NetWeaver2004s : Setting Up BEx Web
Handy attachments for setting up BEx Web:
  • Problem Analysis
  • WDEBU7 Setting up BEx Web
  • System Upgrade Copy
  • Checklist
To migrate BW 3.X query variants to NetWeaver 2004s BI:
Run report RSR_VARIANT_XPRA from transaction SE38 to fill the source table with the BW 3.X variants that need to be migrated to SAP NetWeaver 2004s BI. After upgrading the system to Support Package 12 or higher, run the migration report RSR_MIGRATE_VARIANTS to migrate the existing BW 3.x query variants to the new NetWeaver 2004s BI variant storage.
Note 1003481 - Variant Migration - Migrate all Variants
To check for missing elements and repairing the errors run report ANALYZE_MISSING_ELEMENTS.
Note 953346 - Problem with deleted InfoProvider in RSR_VARIANT_XPRA
Note 1028908 - BW Workbooks MSA: NW2004s upgrade looses generic variants
Note 981693 - BW Workbooks MSA: NW2004s upgrade looses old variants
For the Migration  of Web Templates from BW 3.X to SAP NetWeaver 2004s:
Note 832713 - Migration of Web Templates from BW 3.X to NetWeaver 2004s
Note 998682 - Various errors during the Web Template migration of BW 3.X
Note 832712 - BW - Migration of Web items from 3.x to 7.0
Note 970757 - Migrating BI Web Templates to NetWeaver 7.0 BI  which contain chart
Upgrade Basis Settings for SAP NetWeaver 7.0 BI
SAP NetWeaver 7.0 BI applications with 32-bit architecture are reaching their limits. To build high-quality reports on SAP NetWeaver BI sources, an installation based on 64-bit architecture is needed.
With the SAP NetWeaver 7.0 BI upgrade, change the basis parameter settings of the SAP kernel from the 32-bit to the 64-bit version. Given the added functionality, applications and BI reports with large data sets use a lot of memory, which adds load to the application server; the application server can even fail to start because the sum of all buffer allocations exceeds the 32-bit limit.
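The number behind this limit is simple to check: a 32-bit address space caps all allocations at 4 GiB in total.

```python
# A 32-bit address space can address at most 2**32 bytes.
limit_bytes = 2 ** 32
print(limit_bytes)           # 4294967296
print(limit_bytes // 2**30)  # 4 (GiB) -- the ceiling for the sum of all buffers
```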
Note 996600 - 32 Bit platforms not recommended for productive NW2004s apps
Note 1044441 - Basis parameterization for NW 7.0 BI systems
Note 1044330 - Java parameterization for BI systems
Note 1030279 - Reports with very large result sets/BI Java
Note 927530 - BI Java sizing
Intermediate Support Packages for NetWeaver 7.0 BI
A BI Intermediate Support Package consists of an ABAP Support Package and a Front End Support Package, where the ABAP BI Intermediate Support Package is compatible with the delivered BI Java stack.
Note 1013369 - SAP NetWeaver 7.0 BI - Intermediate Support Packages
Microsoft Excel 2007 integration with NetWeaver 7.0 BI
Microsoft Excel 2007 functionality is now fully supported by NetWeaver 7.0 BI
Advanced filtering, Pivot table, Advanced formatting, New Graphic Engine, Currencies, Query Definition, Data Mart Fields
Note 1134226 - New SAP BW OLE DB for OLAP files delivery - Version 3
Full functionality for Pivot Table to analyze NetWeaver BI data
Microsoft Excel 2007 integrated with NetWeaver 7.0 BI for building new query, defining filter values, generating a chart and creating top n analysis from NetWeaver BI Data
Microsoft Excel 2007 now provides Design Mode, Currency Conversion and Unit of Measure Conversion





1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local ones in order to analyze any filters that can be removed and moved to the global definition of the query. Then you can change the calculated key figure back and utilize the global calculated key figure again if desired.
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries---for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.

By: leela naveen

These are questions I faced. If you have screenshots for any of these questions, please provide them as well.
1. We have standard InfoObjects in SAP; why did you create Z InfoObjects? Can you tell me the business scenario?
2. We have standard InfoCubes in SAP; why did you create Z InfoCubes? Can you tell me the business scenario?
3. In a key figure, what is meant by cumulative value, non-cumulative value change, and non-cumulative value with in- and outflow?
4. When you create an InfoObject it shows "reference" and "template"; what are they?
5. What is meant by a compounding attribute? Tell me the scenario.
6. I have 3 cubes for which I created a MultiProvider and a report on it, but I got no data in the report. What happened?
7. I have 10 cubes in a MultiProvider but want data from only 1 cube. What do you do?
8. What is meant by safety upper limit and safety lower limit in the deltas? Explain one by one for timestamp, calendar day, and numeric pointer.
9. I have 80 queries; how can you find which query is taking so much time, and how do you solve it?
10. In compression all requests become zero; which data is compressed? Explain in detail.
11. What is meant by a flat aggregate? Explain in detail.
12. I created a process chain; on the 1st day it takes 10 minutes, after the 1st week it takes 1 hour, and the next time 1 day with the same loads. What happened, and how can you reduce the loading time?
13. How can you find out the cube size? Explain in detail; show screenshots if you have them.
14. Where can we find transport return codes?
15. I have a report that takes a long time; how can I rectify it?
16. What is an offset? Can we create queries without offsets?
17. I said I have nearly 600 process chains; he asked how I monitor them. I said I check RSPCM and BWCCMS; he asked whether any third-party tools exist for this. Which tools are there?
18. How does the client access the reports?
19. I don't have master data; is it still possible to load transaction data? If so, what are the steps?
20. What is a structure in reporting?
21. Based on which object did you create the extended star schema?
22. What is a line item dimension? Explain briefly.
23. What is high cardinality? Explain briefly.
24. A process chain is running; I have to stop the process for 1 hour and then re-run it from where it stopped. How?
25. In a MultiProvider, can I use aggregates?
26. What are direct scheduling and a meta chain?
27. Which patch do you use presently? How can I know which patch it is?
28. How can we increase the data packet size?
29. Are hierarchies not available in BI? Why?
30. Is remodeling applied only to InfoCubes? Why not to DSO/ODS?
31. In jump queries, can we jump to any transaction, like RSA1, SM37, etc.? Is it possible or not?
32. Why does ODS activation fail? What types of failures are there? What are the steps to handle them?
33. A process chain is running and an InfoPackage gets an error; without processing that InfoPackage's error, can you run the dependent variants? Is it possible?

Give me any performance, loading, or support issues:
Reporting errors, loading errors, process chain errors?





SAP BI 7 Interview Questions!!!! With Real-Time approach....

By: Chandiraban singu Source: SDN

Interviewers and the question will not remain the same, but find the pattern.

Brief of your profile
Brief of what you done in the project
Your challenging and complex situations
Your regularly faced problem and what you did to avoid the same in permanent
interviewers complex situation , recent situation will be posted for your analysis.

Some one may add
Your system landscape
System archiectecture
Release management
Team size, org str,....

If your exp has production support tthen generally about your roles, authorization and commonlly faced errors.
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/b0c1d94f-b825-2c10-15ae-ccfc59acb291

About data source enhancement
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/00c1f726-1dc2-2c10-f891-ddfbffdb1a46

About data flow during delta
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f03da665-bb6f-2c10-7da7-9e8a6684f2f9


If your experience includes implementation, then:
Modules which you have implemented
Methodology adopted
https://weblogs.sdn.sap.com/pub/wlg/13745
Approach to implementation
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8917
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8920
Testing system
Business scenario
How did you do the data modelling: why a standard LO DataSource? Why a DSO? Why this many layers? ...
Documentation: what your functional spec and technical spec templates, content, and information look like.

Design a SAP NetWeaver - Based System Landscape
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50a9952d-15cc-2a10-84a9-fd9184f35366
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/8877
BI - Soft yet Hard Challenges
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/9068

Best Practice for a new BI project
https://www.sdn.sap.com/irj/sdn/thread?threadID=775458&tstart=0

Guidelines to Make Your BI Implementations Easier
http://www.affine.co.uk/files/Guidelines%20to%20Make%20Your%20BI%20Implementations%20Easier.pdf

Specific BW interview questions

https://www.sdn.sap.com/irj/scn/advancedsearch?query=SAP+BW+INTERVIEW+QUESTIONS&cat=sdn_all
200 BW Questions and Answers for INTERVIEWS
http://sapdocs.info/sap-overview/sap-interview-questions/
http://www.erpmastering.com/bwfaq.htm
http://www.allinterview.com/showanswers/33349.html
http://searchsap.techtarget.com/generic/0,295582,sid21_gci1182832,00.html
http://prasheelk.blogspot.com/2008_05_12_archive.html
http://bisharath.blogspot.com/

Best of luck for your interviews. Be clear about what you have done.
http://saptutions.com/SAPBW/BW_openhub.asp
http://www.scribd.com/doc/6343052/BW-EXAM





Types of Tickets in Production

By: Purushothama Reddy

What are the types of tickets and their importance?
This depends on the SLA. It can be like:
1. Critical
2. Urgent
3. High
4. Medium
5. Low
The response times and resolution times are again defined in the SLA, based on the client's requirements and the charges.
This is probably from the viewpoint of the criticality of the problem faced by the client, as defined by SAP.
1) First Level Ticketing:
Not severe problems; routine errors. Mostly handled by the company's service desk arrangement (if it has one).
Eg: a) A credit limit block is appearing on certain documents.
b) A pricing condition record is not found even though conditions are maintained.
c) Unable to print a delivery document or packing list.
PS: In the 4th phase of the ASAP Implementation Methodology (i.e., Final Preparation for go-live), SAP clearly specifies that a service desk needs to be arranged for any implementation, for better handling of production errors.
The service desk lies within the client.
2) Second Level Ticketing:
Somewhat serious problems that could not be solved by the service desk. These should be referred to the service company (or the company prescribed in the SLA).
Eg: a) Credit exposure (especially open values) doesn't update correctly to the KNKK table.
b) Intercompany billing is taking a wrong bill value.
c) A new order type is needed to handle the reservation process.
d) A new product has been added to our selling range and needs to be included in SAP (material masters, division attachments, stock handling, etc.).
3) Third Level Ticketing:
Problems that could not be solved by either of the above are referred to SAP's own Online Service Support (OSS). SAP tries to solve the problem, sometimes by providing the OSS Note that fits the error, and rarely by logging into our servers (via remote logon) to post-mortem the problem. (The client connections, login IDs, and passwords are to be provided to SAP whenever they need them, or at the time of opening the OSS message.)
There are lots of OSS Notes on each issue, SAP Top Notes, and Notes explaining the process of raising an OSS message.
Sometimes SAP charges the client/service company, depending on the agreement made at the time of buying the license from SAP.
Eg: 1) Business transaction for the currency 'EUR' is not possible; check the OSS Note. This comes up at the time of billing.
2) Transaction MMPI: periods cannot be opened; see the OSS Note.
There are many other examples of this kind.
There are many other examples on the issue.
4) Fourth Level Ticketing:
Problems rarely reach this level.
These problems may need re-engineering of the business process due to a change in business strategy, or an upgrade to a new version. More or less, this leads to the end of the existing SAP implementation.

How to Test BW Cube Data with PSA Data (Data Reconciliation)

Objective
You would like to compare the contents of your InfoCube with data loaded from your source system into the PSA. In our example, we will check data that has been loaded for a specific customer.


Step 1
In the PSA tree of the Administrator Workbench, generate the Export DataSource on the PSA you want to check against via the context menu.


Step 2
In the source system tree, display the DataSource overview of the MYSELF BW system and locate your PSA Export DataSource. The technical name is the original DataSource name (to which the PSA belongs) prefixed with a “7”.


Step 3
From the context menu of the DataSource, choose “Assign InfoSource”.

Step 4
Enter a name for your new InfoSource and press the "Create" button.

Step 5
Create an InfoSource for flexible update. This should be the default, so just press the green arrow to continue.

Step 6
Maintain the communication structure and transfer rules. Make sure that "REQUEST" is mapped to 0TCTREQUID (don't change the default). The transfer method will automatically be set to "IDOC". Activate your InfoSource.

Step 7
Create a RemoteCube to access data in the PSA.


Step 8
Choose a name and a description for the InfoCube. As InfoCube type choose “SAP RemoteCube” and enter the name of the InfoSource you have created in steps 4-6. Tick the “Clear Source System” checkbox since the BW System itself is the only source system of the RemoteCube.


Step 9
Select characteristics and key figures, create dimensions and assign the characteristics to the dimensions (no special modeling guidelines have to be followed for a RemoteCube). Activate your RemoteCube.


Step 10
From the context menu of the RemoteCube, choose “Assign Source Systems”

Step 11
Select the source system (the BW system itself) and save the assignment.

Step 12
Create a MultiProvider.

Step 13
Choose a name and a description for the new MultiProvider and press the "Create" button.

Step 14
Select the InfoCube you want to check and the RemoteCube you have just created as the InfoProviders involved in the MultiProvider.

Step 15
Select the characteristics you need for your check reports. Make sure 0TCTREQUID is included.

Step 16
Identify characteristics and choose the key figures (from both InfoCubes) you would like to compare. Activate your MultiProvider.

Step 17
Create a query and include the characteristics you want to use, the Request ID from the DataPackage dimension and the Data Request (GUID) for the PSA request number. It is useful to include variables for your characteristics (otherwise you may get far too much data).

Step 18
Create selections for the Key Figures you want to check.

Step 19
In the first selection, use your Key Figures and the InfoProvider characteristic 0INFOPROV restricted to the InfoCube (0INFOPROV is automatically available in every MultiProvider).

Step 20
In the second selection, restrict the InfoProvider characteristic to the RemoteCube.

Step 21
Run the query. In our example, we are checking data for one customer.

Step 22
The result shows the data from PSA with corresponding Data Request (GUID) and the sum of the data from the InfoCube.

Step 23
If you drill down on Request ID, you get the data from the InfoCube displayed with the Request ID in the InfoCube (which you can see in the InfoCube Management).

Step 24
Optional: If you only want to check the data for the last request, you can add the variable 0LSTRQID to your InfoCube selection and…

Step 25
the variable 0MAPRQID to your RemoteCube selection. The latter is filled automatically with the PSA Request ID corresponding to the InfoCube Request ID filtered by the first variable.

Step 26
The result will only show data loaded with the last request.

SAP BW / BI Support Issues Cont....

How to suppress messages generated by BW queries
Standard Solution :
You might be aware of a standard solution. In transaction RSRT, select your query and click on the "message" button. Now you can determine which messages for the chosen query are not to be shown to the user in the front-end.

Custom Solution:
Only selected messages can be suppressed using the standard solution. However, there's a clever way to implement your own solution, and you don't need to modify the system for it! All messages are collected using function RRMS_MESSAGE_HANDLING, so all you have to do is implement an enhancement at the start of this function module. Now it's easy: code your own logic to check the input parameters, such as the message class and number, and skip the remainder of the processing logic if you don't want a message to show up in the front end.

FUNCTION rrms_message_handling.

* Start of enhancement
ENHANCEMENT 1 z_check_bia.
* Filter the BIA message (class RSD_TREX, type W, number 136)
  IF i_class = 'RSD_TREX' AND i_type = 'W' AND i_number = '136'.
    EXIT.
  ENDIF.
ENDENHANCEMENT.
* End of enhancement

* ... the standard IMPORTING parameters and EXCEPTIONS of the
* function module continue unchanged ...

How can I display attributes for the characteristic in the input help?
Attributes for the characteristic can be displayed in the respective filter dialogs in the BEx Java Web or in the BEx Tools using the settings dialogs for the characteristic. Refer to the related application documentation for more details. In addition, you can determine the initial visibility and the display sequence of the attributes in InfoObject maintenance on the tab page "Attributes" -> "Detail" -> column "Sequence F4". Attributes marked with "0" are not displayed initially in the input help.

Why do the settings for the input help from the BEx Query Designer and from the InfoProvider-specific characteristic settings not take effect on the variable screen?
On the variable screen, you use input helps for selecting characteristic values for variables that are based on characteristics. Since variables from different queries and from potentially different InfoProviders can be merged on the variable screen, you cannot clearly determine which settings should be used from the different queries or InfoProviders. For this reason, you can use only the settings on the variable screen that were made in InfoObject maintenance.

Why do the read mode settings for the characteristic and the provider-specific read mode settings not take effect during the execution of a query in the BEx Analyzer?

The query read mode settings always take effect in the BEx Analyzer during the execution of a query. If no setting was made in the BEx Query Designer, then default read mode Q (query) is used.

How can I change settings for the input help on the variable screen in the BEx Java Web?

In the BEx Java Web, at present, you can make settings for the input help only using InfoObject maintenance. You can no longer change these settings subsequently on the variable screen.

Selective Deletion in Process Chain
The standard procedure:
Use program RSDRD_DELETE_FACTS.
1. Create a variant (stored in table RSDRBATCHPARA) with the selection to be deleted from a data target.
2. Execute the generated program.
Observations:
The generated program deletes the data from the data target based on the given selections. It also removes the variant created for this selective deletion from the RSDRBATCHPARA table, so the generated program won't delete anything on a second execution.

If we want to schedule this program in a process chain, we can comment out the step where the program deletes the generated variant.

Eg:
REPORT zsel_delete_qm_c10.

TYPE-POOLS: rsdrd, rsdq, rssg.

DATA:
  l_uid     TYPE rssg_uni_idc25,
  l_t_msg   TYPE rs_t_msg,
  l_thx_sel TYPE rsdrd_thx_sel.

l_uid = 'D2OP7A6385IJRCKQCQP6W4CCW'.

IMPORT i_thx_sel TO l_thx_sel
  FROM DATABASE rsdrbatchpara(de) ID l_uid.

* Commented out so the variant survives and the program can be re-run:
* DELETE FROM DATABASE rsdrbatchpara(de) ID l_uid.

CALL FUNCTION 'RSDRD_SEL_DELETION'
  EXPORTING
    i_datatarget            = '0QM_C10'
    i_thx_sel               = l_thx_sel
    i_authority_check       = 'X'
    i_threshold             = '1.0000E-01'
    i_mode                  = 'C'
    i_no_logging            = ''
    i_parallel_degree       = 1
    i_no_commit             = ''
    i_work_on_partitions    = ''
    i_rebuild_bia           = ''
    i_write_application_log = 'X'
  CHANGING
    c_t_msg                 = l_t_msg.

EXPORT l_t_msg TO MEMORY ID sy-repid.

UPDATE rsdrbatchrep
  SET deleteable = 'X'
  WHERE repid = 'ZSEL_DELETE_QM_C10'.


ABAP program to find prev request in cube and delete
There will be cases when we cannot use the SAP built-in settings to delete the previous request. The logic to determine the previous request may be highly customised. In such cases you can write an ABAP program which calculates the previous request based on your own logic. The following tables are used: RSICCONT (list of all requests in any particular cube) and RSSELDONE (request number, source, target, selection InfoObject, selections, etc.). Following is one example; the logic is to select the request based on the selection conditions used in the InfoPackage:
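The example code itself did not survive in the original post. As a rough, language-agnostic sketch of the described logic (the record layout below is invented for illustration; a real implementation would be an ABAP program reading RSICCONT and RSSELDONE), the idea is:

```python
def find_previous_request(requests, selections, current_rnr):
    """Return the most recent request, other than the current one, that was
    loaded with the same InfoPackage selections; None if there is none.

    `requests` is a list of dicts sketching RSSELDONE-style rows:
    {"rnr": ..., "timestamp": ..., "selections": {...}}.
    """
    candidates = [
        r for r in requests
        if r["rnr"] != current_rnr and r["selections"] == selections
    ]
    if not candidates:
        return None
    # The "previous" request is the newest matching one.
    return max(candidates, key=lambda r: r["timestamp"])["rnr"]

requests = [
    {"rnr": "REQ1", "timestamp": 1, "selections": {"0CALDAY": "20240101"}},
    {"rnr": "REQ2", "timestamp": 2, "selections": {"0CALDAY": "20240102"}},
    {"rnr": "REQ3", "timestamp": 3, "selections": {"0CALDAY": "20240102"}},
]
print(find_previous_request(requests, {"0CALDAY": "20240102"}, "REQ3"))  # REQ2
```

The request number returned by such a lookup would then be deleted from the cube before the new load is posted.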


TCURF, TCURR and TCURX
TCURF is always used in reference to the exchange rate (in the case of currency translation). For example, say we want to convert figures from a FROM currency to a TO currency at the daily average rate (M), and we have an exchange rate of 2,642.34. The factors for this currency combination for M in TCURF are, say, 100,000:1. The effective exchange rate then becomes 0.02642.
Question (taken from SDN): Can't we simply maintain an exchange rate of 0.02642 and not use the factors from the TCURF table at all? I suppose we still have to maintain factors of 1:1 in TCURF if we use an exchange rate of 0.02642. Am I right? But why is this so? Can't I get rid of TCURF? What is the use of TCURF coexisting with TCURR?
Answer: Normally it's used to allow greater precision in calculations, i.e., 0.00011 with no factors gives a different result to 0.00111 with a factor of 10:1. So, based on the above answer, TCURF allows greater precision in calculations; its factor should be considered before applying the exchange rate.
-------------------------------------------------------------------------------------
TCURR
The TCURR table is generally used when we create currency conversion types. The conversion types refer to the entries in TCURR defined against each currency (with time reference) and get the exchange rate factor from the source currency to the target currency.
-------------------------------------------------------------------------------------
TCURX
The TCURX table is used to define the correct number of decimal places for any currency. Its effect shows up in the BEx report output.
-------------------------------------------------------------------------------------
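The factor arithmetic above can be sketched as follows. This is a minimal illustration of the numbers in the text; the function name is invented, and real conversions go through SAP's currency conversion services:

```python
def effective_rate(tcurr_rate, from_factor=1, to_factor=1):
    """Effective exchange rate implied by a TCURR rate and TCURF factors:
    rate * to_factor / from_factor (a sketch of the relationship)."""
    return tcurr_rate * to_factor / from_factor

# The example from the text: rate 2,642.34 with factors 100,000:1
# yields an effective rate of about 0.0264234.
rate = effective_rate(2642.34, from_factor=100000)

# The precision point: 0.00011 with no factors is not the same as
# 0.00111 with a factor of 10:1 (about 0.000111).
a = effective_rate(0.00011)
b = effective_rate(0.00111, from_factor=10)
```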

How to define F4 Order Help for infoobject for reporting
Open the Attributes tab of the InfoObject definition. There you will see a column for F4 order help against each attribute of that InfoObject, like below:
This field defines whether and where the attribute should appear in the value help. Valid values:
• 00: The attribute does not appear in the value help.
• 01: The attribute appears at the first position (to the left) in the value help.
• 02: The attribute appears at the second position in the value help.
• 03: ...
• Altogether, only 40 fields are permitted in the input help. In addition to the attributes, the characteristic itself, its texts, and the compounded characteristics are also generated in the input help. The total number of these fields cannot exceed 40.
The InfoObjects are changed accordingly. For example, for InfoObject 0VENDOR, if 0COUNTRY (an attribute of 0VENDOR) is not to be shown in the F4 help of 0VENDOR, mark 0 against the attribute 0COUNTRY in the InfoObject definition of 0VENDOR.

Dimension Size Vs Fact Size
The current size of all dimensions can be monitored in relation to the fact table by running report SAP_INFOCUBE_DESIGNS in transaction SE38. We can also test the InfoCube design with RSRV tests, which report the dimension-to-fact ratio.

The size of a dimension should be less than 10% of the fact table. In the report, a dimension table looks like /BI[C/0]/D[xxx]
and a fact table looks like /BI[C/0]/[E/F][xxx].
Use transaction LISTSCHEMA to show the different tables associated with a cube.

When a dimension grows very large in relation to the fact table, the database optimizer cannot choose an efficient access path to the data, because the guideline that each dimension should hold less than 10 percent of the fact table's records has been violated.

A dimension with such large data growth is called a degenerate dimension. To fix this, move the characteristics to different dimensions; this can only be done when there is no data in the InfoCube.

Note: If you have a requirement to include item-level details in the cube, the dimension-to-fact ratio will obviously be higher, and you cannot help that. In that case you can make the item characteristic a line item dimension, i.e., a dimension containing only one characteristic. Since there is only one characteristic in the dimension, the fact table entry can link directly to the SID of the characteristic without using any DIM ID (the DIM ID in the dimension table usually connects the SID of the characteristic with the fact table). Since the link bypasses the dimension table, query performance is faster.
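The 10% guideline can be sketched as a simple check. The row counts below are invented for illustration; in a real system they come from SAP_INFOCUBE_DESIGNS or from counting the dimension and fact tables:

```python
def degenerate_dimensions(fact_rows, dimension_rows, threshold=0.10):
    """Return the dimensions whose row count exceeds `threshold` of the
    fact table's row count (the 10% guideline from the text)."""
    if fact_rows == 0:
        return {}
    return {
        name: rows / fact_rows
        for name, rows in dimension_rows.items()
        if rows / fact_rows > threshold
    }

# Hypothetical row counts for a cube with two dimensions:
ratios = degenerate_dimensions(
    fact_rows=1_000_000,
    dimension_rows={"D_CUSTOMER": 50_000, "D_DOCUMENT": 250_000},
)
print(ratios)  # {'D_DOCUMENT': 0.25}
```

Here D_DOCUMENT holds 25% of the fact table's records and would be a candidate for a line item dimension or a redesign.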

BW Main tables
Extractor related tables:
ROOSOURCE - on the source system (R/3), filter by OBJVERS = 'A'. Holds the DataSource, DS type, delta type, extraction method (table or function module), etc.
RODELTAM - Delta type lookup table.
ROIDOCPRMS - Control parameters for data transfer from the source system; the result of "SBIW - General settings - Maintain Control Parameters for Data Transfer" on the OLTP system.
MAXSIZE: Maximum size of a data packet in kilobytes
STATFRQU: Frequency with which status IDocs are sent
MAXPROCS: Maximum number of parallel processes for data transfer
MAXLINES: Maximum number of lines in a data packet
MAXDPAKS: Maximum number of data packages in a delta request
SLOGSYS: Source system
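The interplay of these parameters can be sketched roughly as follows. This is a simplified illustration (the helper name is invented), assuming the packet size implied by MAXSIZE is the size in kilobytes divided by the transfer-structure row width, capped by MAXLINES:

```python
def rows_per_packet(maxsize_kb, row_width_bytes, maxlines):
    """Approximate rows per data packet: the row count implied by
    MAXSIZE (in KB), capped by MAXLINES. A simplified sketch."""
    rows_by_size = (maxsize_kb * 1000) // row_width_bytes
    return min(rows_by_size, maxlines)

# Hypothetical values: 20,000 KB packets, 100-byte rows, MAXLINES 100,000.
print(rows_per_packet(20000, 100, 100000))  # 100000
```

With these hypothetical numbers, MAXLINES is the binding limit; with wider rows, MAXSIZE would cap the packet first.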

Query related tables:
RSZELTDIR: filter by OBJVERS = 'A'; DEFTP: REP = query, CKF = calculated key figure. Reporting component elements: queries, variables, structures, formulas, etc.
RSZELTTXT: similar to RSZELTDIR; texts of reporting component elements.
To get a list of query elements built on a cube - RSZELTXREF: filter by OBJVERS = 'A', INFOCUBE = [cubename].
To get all queries of a cube - RSRREPDIR: filter by OBJVERS = 'A', INFOCUBE = [cubename].
To get the query change status (version, last changed by, owner) of a cube - RSZCOMPDIR: OBJVERS = 'A'.

Workbooks related tables:
RSRWBINDEX List of binary large objects (Excel workbooks)
RSRWBINDEXT Titles of binary objects (Excel workbooks)
RSRWBSTORE Storage for binary large objects (Excel workbooks)
RSRWBTEMPLATE Assignment of Excel workbooks as personal templates
RSRWORKBOOK 'Where-used list' for reports in workbooks

Web templates tables:
RSZWOBJ Storage of the Web Objects
RSZWOBJTXT Texts for Templates/Items/Views
RSZWOBJXREF Structure of the BW objects in a template
RSZWTEMPLATE Header table for BW HTML templates

Data target loading/status tables:
rsreqdone, " Request-Data
rsseldone, " Selection for current Request
rsiccont, " Request posted to which InfoCube
rsdcube, " Directory of InfoCubes / InfoProvider
rsdcubet, " Texts for the InfoCubes
rsmonfact, " Fact table monitor
rsdodso, " Directory of all ODS Objects
rsdodsot, " Texts of ODS Objects
sscrfields. " Fields on selection screens

Tables holding characteristics:
RSDCHABAS: fields
OBJVERS -> A = active; M = modified; D = delivered
(business content characteristics that have only a D version and no A version are not yet activated)
TXTTABFL -> = X -> has texts
ATTRIBFL -> = X -> has attributes
RODCHABAS: with fields TXTSHFL, TXTMDFL, TXTLGFL, ATTRIBFL
RSREQICODS: requests in ODS
RSMONICTAB: all requests

Transfer structures live in PSAPODSD
/BIC/B0000174000 Transfer structure
Master Data lives in PSAPSTABD
/BIC/HXXXXXXX Hierarchy:XXXXXXXX
/BIC/IXXXXXXX SID Structure of hierarchies:
/BIC/JXXXXXXX Hierarchy intervals
/BIC/KXXXXXXX Conversion of hierarchy nodes - SID:
/BIC/PXXXXXXX Master data (time-independent):
/BIC/SXXXXXXX Master data IDs:
/BIC/TXXXXXXX Texts of the characteristic:
/BIC/XXXXXXXX Attribute SID table:

Master Data views
/BIC/MXXXXXXX master data tables:
/BIC/RXXXXXXX View SIDs and values:
/BIC/ZXXXXXXX View hierarchy SIDs and nodes:

InfoCube names in PSAPDIMD:
/BIC/Dcube_name1 Dimension 1
...
/BIC/Dcube_nameA Dimension 10
/BIC/Dcube_nameB Dimension 11
/BIC/Dcube_nameC Dimension 12
/BIC/Dcube_nameD Dimension 13
/BIC/Dcube_nameP Data Packet
/BIC/Dcube_nameT Time
/BIC/Dcube_nameU Unit
PSAPFACTD:
/BIC/Ecube_name Fact table (inactive)
/BIC/Fcube_name Fact table (active)

ODS Table names (PSAPODSD)
BW 3.5:
/BIC/AXXXXXXX00 ODS object XXXXXXX: Active records
/BIC/AXXXXXXX40 ODS object XXXXXXX : New records
/BIC/AXXXXXXX50 ODS object XXXXXXX : Change log

Previously:
/BIC/AXXXXXXX00 ODS object XXXXXXX: Active records
/BIC/AXXXXXXX10 ODS object XXXXXXX : New records

T-code tables:
tstc - table of transaction codes, texts, and program names
tstct - t-code texts

1. What are tickets? Give an example.
The typical tickets in a production Support work could be:
1. Loading any of the missing master data attributes/texts.
2. Create ADHOC hierarchies.
3. Validating the data in Cubes/ODS.
4. If any of the loads runs into errors then resolve it.
5. Add/remove fields in any of the master data/ODS/Cube.
6. Data source Enhancement.
7. Create ADHOC reports.
1. Loading any of the missing master data attributes/texts - This would be done by scheduling the info packages for the attributes/texts mentioned by the client.
2. Create ADHOC hierarchies. - Create hierarchies in RSA1 for the info-object.
3. Validating the data in Cubes/ODS. - By using the Validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors then resolve it. - Analyze the error and take suitable action.
5. Add/remove fields in any of the master data/ODS/Cube. - Depends upon the requirement
6. Data source Enhancement.
7. Create ADHOC reports. - Create some new reports based on the requirement of client.
Tickets are the tracking tool by which the user tracks the work we do. They can be change requests, data loads, or anything else. They will be of types critical or moderate. Critical tickets may need to be solved within a day or half a day, depending on the client. After solving, the ticket is closed by informing the client that the issue is resolved. Tickets are raised during a support project and may concern any issues or problems. If the support person faces an issue, he will request the operator to raise a ticket; the operator raises the ticket and assigns it to the respective person. Critical means the most complicated issues; how this is measured depends on the client. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client are considered based on priority, like high priority, low priority, and so on. If a ticket is of high priority it has to be resolved ASAP. If a ticket is of low priority it is considered only after attending to the high priority tickets.
Checklists for a support project of BPS - To start the checklist:
1) Info Cubes / ODS / data targets 2) planning areas 3) planning levels 4) planning packages 5) planning functions 6) planning layouts 7) global planning sequences 8) profiles 9) list of reports 10) process chains 11) enhancements in update routines 12) any ABAP programs to be run and their logic 13) major bps dev issues 14) major bps production support issues and resolution .

2 What are the tools to download tickets from client? Are there any standard tools or it depends upon company or client...?
Yes, there are some tools for that. We use HP OpenView; it depends on the client what they use. You are right: there are many tools available, and as you said, some clients develop their own tools using Java, ASP, and other software. Some clients use just Lotus Notes. Generally, 'Vantive' is used for tracking user requests and tickets.
It has a vantive ticket ID, field for description of problem, severity for the business, priority for the user, group assigned etc.
Different technical groups will have different group ID's.
User talks to Level 1 helpdesk and they raise ticket.
If they can solve the issue, fine; otherwise the helpdesk assigns the ticket to the Level 2 technical group.
Ticket status keeps changing from open, working, resolved, on hold, back from hold, closed, etc. The way we handle tickets varies depending on the client. Some companies use SAP CS to handle tickets; we have been using Vantive to handle them. The ticket is handled with a change request; when you get the ticket you will have the priority level with which it is to be handled. It comes with a ticket ID and all. It's totally a client-specific tool. The common features here can be: a ticket ID, priority, consultant ID/name, user ID/name, date of posting, resolution time, etc.
There ideally is also a knowledge repository to search for a similar problem and solutions given if it had occurred earlier. You can also have training manuals (with screen shots) for simple transactions like viewing a query, saving a workbook etc so that such queried can be addressed by using them.
When the problem is logged on to you as a consultant, you need to analyze the problem, check if you have a similar problem occurred earlier and use ready solutions, find out the exact server on which this has occurred etc.
You have to solve the problem (assuming you will have access to the dev system) and post the solution and ask the user to test after the preliminary testing from your side. Get it transported to production once tested and posts it as closed i.e. the ticket has to be closed.

3. What is user authorization in SAP BW?
Authorizations are very important; for example, you don't want the important financial report to be available to all users. You can have authorization at the object level: if you want to keep the authorization for a specific InfoObject, check the object as authorization-relevant in the RSD1 and RSSM transactions. Similarly, you set up authorizations for certain users by giving those users certain authorizations in transaction PFCG. Likewise, you create a role, include the transaction codes, BEx reports, etc. in the role, and assign this role to the user ID.