Here we go: SAP Basis Academy material for an older version; most of it is still valid. It is based on SAP NetWeaver 7.0, which I believe is still pervasively used in many companies. The current SAP NetWeaver technology certification is based on newer versions such as SAP NW 7.31. Even so, you can still learn a lot from this. Enjoy! :)
I had an issue with fast-growing LOBSEGMENTS in SAP NetWeaver PI on Oracle Database 11g. Below is the procedure to check LOB tables:
- Transaction DB02
- Tables and indexes
- Detailed analysis
- Tablespace = SAP<SID> (ABAP DB Schema) or SAP<SID>DB (JAVA DB Schema)
- Sort by KBytes column descending to view largest tables in DB schema.
- Use the securefile conversion on the following PI tables that use LOB data.
- See SAP Note 1426979 Oracle database 11g: Integration in SAP environment.
- The PI tables are: SXMSCLUP, SXMSCLUP2, SXMSCLUR, SXMSCLUR2, XI_AF_MSG (<7.1), BC_MSG (≥7.1).
- SQL command: select table_name, segment_name, securefile from dba_lobs where table_name = 'SWPCMPCONT';
- Table reorg using brtools -> table reorg (action lob2lob)
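As a quick check of which LOB segments are largest and whether they already use SecureFiles, a query along these lines can help (a sketch joining DBA_LOBS and DBA_SEGMENTS; the SECUREFILE column exists as of Oracle 11g):

```sql
-- List the largest LOB segments and whether they are already SecureFiles
SELECT l.table_name, l.segment_name, l.securefile,
       ROUND(s.bytes / 1024 / 1024) size_mb
  FROM dba_lobs l, dba_segments s
 WHERE l.owner = s.owner
   AND l.segment_name = s.segment_name
 ORDER BY s.bytes DESC;
```

Segments that are large and show SECUREFILE = NO are candidates for the lob2lob reorganization described above.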
You can display the consumption of the various SAP memory types by choosing Goto -> Memory and then selecting the user(s) whose usage you want to display. If you do not select any users, the memory overview displays the usage for all users.
The following table explains the meaning of each column.
| Column | Meaning |
|---|---|
| User | User logged on to the server (SAP user name) |
| Transaction | Executed SAP transaction (transaction code) |
| Roll | The roll area holds the user environment information needed by work processes when executing dialog steps. |
| Page | SAP Paging is only used for a limited number of ABAP commands. |
| Mem(Total) | Specifies how much SAP extended memory was requested by this user. |
| Mem(Priv.) | Private memory (heap memory) requested by the user. |
The order in which the various memory classes are assigned to the dialog work process is described under Allocating Memory for User Contexts.
For further information on SAP memory management, please refer to the documentation on BC – Memory Management.
FAQ, frequently asked questions
Analyzing database performance is a complex subject. This FAQ does not claim to provide the solution for all performance problems. Instead, it offers a general overview and references to more detailed notes and documentation that could help to solve the problem.
The Oracle parameter TIMED_STATISTICS should be set to TRUE so that Oracle logs information about time consumption. If a restart is not possible, the parameter can also be activated dynamically using:
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
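To verify that the setting is in effect, a minimal check against V$PARAMETER can be used:

```sql
-- Confirm the current value of TIMED_STATISTICS
SELECT name, value, isdefault
  FROM v$parameter
 WHERE name = 'timed_statistics';
```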
To carry out a global performance analysis, it also helps if the system has already been up and running for some time. The "Reads" (that is, buffer gets or logical reads) in the data buffer (see transaction ST04) should have reached at least 20,000,000; otherwise, you can only make limited assertions about buffer quality and file access times.
You can use transaction ST03 or ST03N to determine the extent to which accesses to the Oracle database are involved in the overall response time. This portion is determined by using the “Database time” column and should not exceed 40% of the overall response time.
Furthermore, the value for “Time/User call” can be determined in transaction ST04. Experience indicates that values of more than 10ms show room for improvement. In individual cases, however, the value can be much higher due to the inclusion of special idle events, and this limits the relevance of this data.
The ratio of "Busy wait time" to "CPU time" in ST04 is also an indicator of optimization potential. A ratio of 60:40 generally indicates a well-tuned system. Significantly higher values (such as 80:20) indicate that system performance can be improved using "wait event tuning". If the CPU share is significantly higher than 40%, a CPU bottleneck may be responsible. In this case, check the CPU utilization of the database server as described below.
Note, in certain cases, the displayed “Busy wait time” is not correct, because sometimes idle events are included as well (-> a more detailed look at the wait events in accordance with Note 619188 is required).
As of Oracle 10g, the relevant relationship between Busy Waits and CPU time for all user processes can be easily determined from V$SYS_TIME_MODEL.
SELECT
  ROUND((STM1.VALUE - STM2.VALUE) / 1000000) "BUSY WAIT TIME (S)",
  ROUND(STM2.VALUE / 1000000) "CPU TIME (S)",
  ROUND((STM1.VALUE - STM2.VALUE) / STM1.VALUE * 100) || ' : ' ||
  ROUND(STM2.VALUE / STM1.VALUE * 100) RATIO
FROM V$SYS_TIME_MODEL STM1, V$SYS_TIME_MODEL STM2
WHERE STM1.STAT_NAME = 'DB time' AND STM2.STAT_NAME = 'DB CPU';
You can derive the database time of a transaction step from the respective statistical data record in Transaction STAD (formerly: STAT).
Note that for extremely long-running programs, overflows may occur in the components of the database time (for values > 2^32 microseconds = 71 minutes), and the excess time is then wrongly recorded as processing time.
To determine which database accesses are primarily responsible for the database response time, you can set the R/3 parameter stat/tabrec (see Note 579462) to a value greater than 0 (-> number of tables to be logged).
If this parameter is used in connection with batch jobs, the additional statistical data is not saved into the statistical data records of the actual job, but instead into a statistical data record of the sequentially-logged RSBTCRTE report in Release 6.40 and lower releases.
Oracle allows access to performance data through special V$ views, including:
- V$SYSTEM_EVENT: Totals of all wait events
- V$SYSSTAT: Statistics values for the entire system
- V$SESSION_EVENT: Totals of the wait events per session
- V$SESSION_WAIT: Current wait events per session
- V$SESSTAT: Statistics values per session
- V$FILESTAT: File system accesses
- V$LATCH: Latch information
- V$WAITSTAT: Information about Buffer Busy Waits
Oracle also provides a collection of scripts in the form of statspacks to collect and evaluate data relevant to performance. See Note 717484, which contains more precise details about using statspacks.
As of Oracle 10g, you can use AWR and ADDM to gain an overview of the database performance and possible areas for tuning. For more information, see Note 853576.
In R/3, you can use transaction “ST04 -> Detail Analysis Menu -> Display V$ Values” or the RSORAVDV report to directly access the Oracle V$ Views. In addition, other evaluation options are available within ST04 or special reports (such as RSORAVSE for the history of V$SYSTEM_EVENT).
If the new transaction ST04N is already available (Notes 705608, 716000), you can use the corresponding RSORADJV report to issue any SQL statement on DBA and V$ views (including joins and functions). Note, however, that an alias of your choice must be added to all functions used, for example:
SELECT MAX(BYTES) MY_ALIAS FROM DBA_SEGMENTS;
There are various problems with the statistical data from the V$ views, meaning that you should always investigate any values that stand out, for example:
- Various time specifications in V$ views, such as V$FILESTAT or V$SYSTEM_EVENT, can be significantly too high due to errors in measurement (for example, Oracle bug 3094493). To be on the safe side, you can reset the statistic by using the RESET button in the V$ view display in ST04. Since incorrect measurement values only occur sporadically, you are very likely to receive correct values immediately after a RESET.
- The write times from V$FILESTAT are generally unsuitable for an analysis.
- In many cases, direct path operations are not recorded or their entry is restricted, meaning that “direct path read” and “direct path write” are often not analyzed sufficiently (see Oracle bugs 4217134 and 4217055, and metalink documents 50415.1 and 50416.1).
- The write activities of the DBWR process (“db file parallel write”) are also not recorded sufficiently.
- The times measured for write activities of the LGWR process (“log file parallel write”) are often incorrect with Oracle 9i (Oracle bug 2669566).
- To avoid incorrect time measurements on AMD x64 systems, the driver described in Note 1036194 must be implemented. See also Note 1047734.
- Time specifications in V$SYSSTAT such as "DB time" or "CPU used by this session" may be incorrect. The corresponding values ("DB time", "DB CPU") from V$SYS_TIME_MODEL (Oracle 10g or higher) are more reliable.
Oracle usually stores times internally in 1/100 seconds (that is, in hundredths of seconds). In R/3 transactions such as ST04, on the other hand, times are generally specified in 1/1000 seconds. In more recent Oracle releases, more and more times are stated in 1/1,000,000 seconds (that is, microseconds).
- Oracle parameter setting
Using Note 124361 (for Oracle 9i and lower), Note 830576 (for Oracle 10g) or Note 1431798 (for Oracle 11g) and the notes to which they refer, check whether the parameters correspond to the recommendations, and correct any that do not. For R/3 3.x, see also Note 160063. For I/O configuration, see Note 793113. The following parameters are particularly relevant to performance: DB_FILE_MULTIBLOCK_READ_COUNT, OPTIMIZER_MODE, OPTIMIZER_INDEX_COST_ADJ, OPTIMIZER_INDEX_CACHING, HASH_JOIN_ENABLED, STAR_TRANSFORMATION_ENABLED, DISK_ASYNCH_IO, EVENT, FILESYSTEMIO_OPTIONS, _FIX_CONTROL, and other underscore parameters.
Also check if special parameters are set that were used for a temporary solution or analysis of a previous problem, but are now no longer required. These include underscore and event parameters that do not comply with SAP standard recommendations. Since some of these settings lead to severe performance problems, remove these settings when you no longer need them (for example after a related bug fix has been implemented or an Oracle upgrade has been performed).
- R/3 parameter setting
Make sure that the parameters rsdb/max_blocking_factor and rsdb/max_in_blocking_factor are set in accordance with the recommendations from Note 881083 (if in doubt, use the SAP standard recommendations).
To ensure that the database interface proposes useful hints, do not set the parameter dbs/ora/use_hints explicitly, so that the program uses the default values. Refer to Note 135048, which describes how to adjust the parameter manually in earlier releases.
The parameter dbs/ora/array_buf_size defines the size of the buffer used to send and receive SQL data. You should not normally change the default value of 65 KB. The maximum possible value is 650 KB; in newer releases, it is 1 MB (Note 1164043). Note that increasing this parameter can significantly increase the SAP system's memory requirements, since this area may be allocated several times per work process, depending on the number of currently open SQL statements (15 or fewer). In the BW environment, however, operations such as array INSERTs can be accelerated by increasing the parameter, since less database communication is then required (see Note 192658).
- Statistics (R/3 4.0B or higher)
Use Note 132861 to check whether statistical runs are scheduled on a regular basis and whether their parameters are set correctly.
Use Note 588668 to ensure that no index or column statistics are missing and that all statistics were created with sufficient accuracy.
Ensure that the statistics exceptions table DBSTATC only contains customer-specific entries that are actually still necessary. See also Note 122718, which contains the SAP standard exceptions. Avoid entries with OPTIO values of P<number> or R<number>, since these may cause statistics to be created that are not sufficiently accurate. The DBSTATC entries suggested as standard by SAP can be taken from the dbstatc.sql script (for Oracle 9i or lower) or dbstatc10.sql (Oracle 10g or higher), which are attached to Note 403704 and can be executed to initialize the exceptions table.
Only create histograms when you have fulfilled the prerequisites for correct histogram usage that are described in Note 797629.
Apart from the exceptions described in Notes 138639 and 558746 (BW systems, R3SZCHK runs, and Oracle 10g or higher), no statistics should be created for the Oracle DDIC objects. You can use the following statement to determine whether there are Oracle DDIC tables for which statistics exist:
SELECT TABLE_NAME FROM DBA_TABLES
WHERE LAST_ANALYZED IS NOT NULL AND OWNER = 'SYS';
If tables are returned and there is no valid reason for the statistics to exist, delete the statistics (refer to Note 558746 below).
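A sketch of how such statistics could be deleted using the standard DBMS_STATS package; <table_name> is a placeholder for a table returned by the query above, and Note 558746 remains the authoritative procedure:

```sql
-- Delete CBO statistics for a SYS-owned table (placeholder table name)
EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SYS', tabname => '<table_name>');
```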
As of Oracle 10g, you must ensure that the new statistic types described in Note 838725 have been created and are regularly updated.
As of Oracle 10g, you should also implement the statistics that SAP delivered in Note 1020260.
As of Oracle 10g, you must ensure that the standard Oracle statistics job GATHER_STATS_JOB is inactive. Use DBA_SCHEDULER_JOBS to check that ENABLED=FALSE is set for the entry GATHER_STATS_JOB. If the job is still active, it must be deactivated.
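A minimal sketch of the check and the deactivation using the standard DBMS_SCHEDULER package (run as a user with the required scheduler privileges):

```sql
-- Check whether the standard statistics job is still enabled
SELECT job_name, enabled
  FROM dba_scheduler_jobs
 WHERE job_name = 'GATHER_STATS_JOB';

-- Disable it so that statistics are maintained by BRCONNECT only
EXEC DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
```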
Ensure that you no longer use the report SAP_ANALYZE_ALL_INFOCUBES in the BW environment and that, instead, you use BRCONNECT 6.10 or higher to create statistics (Note 421795).
- Oracle patch set
Some older Oracle patches still contain bugs that can cause performance problems. Therefore, check whether there is a newer version of the Oracle patch set than the one you are using (Note 362060 for 8.1.7, Note 539921 for 9.2).
Under Oracle 10g, you must also ensure that the obligatory bug fixes from Note 871096 (10.2.0.2) or 1137346 have been implemented.
- Disk access times
Use Transaction ST04 -> Detail Analysis Menu -> File System Requests to check whether the average read time ("Avg(ms)" for "Blk Reads") is too high (>> 10 ms) for individual data files or overall (the Total row of the column). If so, check whether the problem can be solved by improving the data distribution (for example, through reorganization) or whether it is an I/O problem caused by hardware. Contact your hardware partner if necessary. Also check the Oracle I/O configuration as described in Note 793113.
You can use transaction ST06/OS07 (-> Detail Analysis Menu -> Disk) to determine the load on the hard disks of the database server. The load of a hard disk should not exceed 80%. If higher values appear, there may be a "hot spot" that can be removed by relocating or better distributing the data.
Note that access times for data files with few read accesses may sometimes be significantly higher than 10 ms. What matters are the times for frequently accessed data files. Note also that the displayed data may be incorrect if the system has been running for a long time. In this case, SAP recommends that you clear the data using "reset" and then check the new values after a while using "Since reset".
Increased I/O times can also be the result of a high load. In addition to SQL optimization (Note 766349), make sure that no more than one log switch per minute occurs during peak times; it may be necessary to increase the size of the online redo logs. Too many log switches lead to too many checkpoints, which in turn cause a high write load on the I/O subsystem.
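To see how often log switches actually occur, the switch history in V$LOG_HISTORY can be aggregated per hour (a sketch; FIRST_TIME marks the start of each log):

```sql
-- Log switches per hour; much more than ~60 per hour at peak is suspicious
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') hour,
       COUNT(*) log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;
```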
A high workload on the redo logs can be caused by individual data files being continually in active backup mode, since the system then always has to write complete blocks to the redo logs rather than just the areas within the blocks that actually changed. Therefore, ensure that no data files are in backup mode unless an online backup is actually running (Note 4162).
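Whether any data files are currently in active backup mode can be checked via V$BACKUP (a sketch that joins in the file names):

```sql
-- Data files currently in active online backup mode
SELECT d.file_name, b.status
  FROM v$backup b, dba_data_files d
 WHERE b.file# = d.file_id
   AND b.status = 'ACTIVE';
```

If rows are returned outside an online backup window, end backup mode for the affected files.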
- Hardware Bottlenecks
Use transaction ST06 or OS07 to check whether there are sufficient CPU and memory resources on the database server.
The CPU idle time should usually average at least 30% per hour. Note, however, that in certain cases an optimal CPU load is deliberately sought for certain tasks, resulting in significantly lower idle times without this being critical.
Paging should be minimal or nonexistent. While page-out rates are the relevant metric on UNIX, on Windows you must give priority to checking page-in.
Page-in on Windows also includes accesses to normal files (for example, during backups that do not use fread/fwrite; see Note 689818). This means that increased page-in values cannot automatically be rated as critical here.
However, in certain circumstances, increased page-out on Windows may lead to problems (for example, due to the problems regarding Windows 2003 that are described in Note 1009297).
As of Oracle 10g, you can also read important operating system information (CPU number, CPU load, physical memory, paging) from V$OSSTAT. You can take details about the CPU (CPUs, CPU cores, CPU sockets) from DBA_CPU_USAGE_STATISTICS.
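For example, a query along these lines reads the most relevant V$OSSTAT values (stat names as documented for Oracle 10g or higher):

```sql
-- CPU and memory figures as seen by the database
SELECT stat_name, value
  FROM v$osstat
 WHERE stat_name IN ('NUM_CPUS', 'NUM_CPU_CORES',
                     'PHYSICAL_MEMORY_BYTES', 'LOAD');
```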
- Size of the buffer pool
Use transaction ST04 to check the hit ratio (“Quality”) of the data buffer. This should be at least 94% for normal operation. If this is not the case, check whether the hit ratio is low because the specified buffer data size is too small, because there are a lot of Direct Path operations or because there are unfavorable SQL statements. If necessary, you can consider enlarging the buffer pool. Refer to Note 789011 for more information.
Note that the hit ratio is only a rough decision criterion. A wait event analysis, in accordance with Note 619188, is more important for the evaluation of the buffer pool that is currently configured, and for the influence of the buffer pool on database performance.
- Size of the shared pool
Use transaction ST04 to check whether the conditions
- DD-cache quality is greater than 80%
- pinratio is 95 or higher
- reloads/pin is 0.04 or lower
- User/recursive calls are 2 or higher
are fulfilled in normal operation. If not, the value set for the shared pool may be too low. In this case, it may help to increase the Oracle parameter shared_pool_size.
Activities such as creating statistics or structure validation can lead to a significant deterioration in the values. You should therefore check (for example using the Reset function in transaction ST04) whether the values meet the approximations in normal operation without such actions. If so, the shared pool does not need to be enlarged.
If there is always a large amount of free space in the Shared Pool, you can reduce the size of the Shared Pool. The current free space can be determined as follows:
SELECT BYTES FROM V$SGASTAT
WHERE POOL = 'shared pool' AND NAME = 'free memory';
Since the free space is unused memory, you can reduce the Shared Pool and memory usage in other places if significant free space values (for example 50 MB or more) are reported, and you do not go below the minimum amount mentioned in Note 690241.
See Note 789011 for further information.
- Expensive SQL statements
As described in Note 766349, check which SQL statements are responsible for most Buffer Gets or Disk Reads and whether they can be optimized.
The “Reads/User Calls” value in ST04 is an indication of expensive SQL statements. If this value is above 15, the SQL statements should be checked in detail.
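The ST04 figure can be approximated directly from V$SYSSTAT (a sketch; it assumes "session logical reads" and "user calls" as the underlying statistics):

```sql
-- Approximate "Reads/User Calls"; values above ~15 suggest expensive statements
SELECT ROUND(
         (SELECT value FROM v$sysstat WHERE name = 'session logical reads')
       / (SELECT value FROM v$sysstat WHERE name = 'user calls'), 1)
       reads_per_user_call
  FROM dual;
```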
- Fragmented tables
Check whether there are tables that are greatly fragmented, as described in Note 821687 (question 5). If, over a period of time, the amount of data in a table remains significantly smaller than the allocated space, you should consider a reorganization.
- Fragmented indexes
Check regularly whether fragmented indexes exist and reconstruct these indexes (see Note 771929).
- Missing indexes
Using “DB02 -> Missing indexes”, you can determine whether indexes that should exist from the R/3 Dictionary view are missing at database level. If indexes are listed here, determine the cause and create the indexes where necessary.
- Network problems
Network problems between R/3 instances and the database server can result in seemingly long database response times even though there are no obvious problems on Oracle. The NIPING tool is a good way of locating problems in the network area. See Note 500235 for details and start a NIPING server on the database server and NIPING clients on the R/3 instances. Check whether the resulting bandwidth meets your expectations. If not (or if it causes errors), consult your hardware partner.
If you are using Oracle Server or Oracle Client 9i or lower, check (in accordance with Note 562403) whether TCP.NODELAY is set correctly, since otherwise delays can occur when transferring TCP packages.
It also makes sense to set the maximum SDU size in tnsnames.ora and listener.ora and, if possible, to define the maximum DEFAULT_SDU_SIZE in sqlnet.ora (refer to Note 562403).
- Table buffering
Use transaction ST10 to check the following points in all instances (“all tables”/”since startup”/”this server”):
Tables to be buffered that cannot be loaded and have the status "error" are critical. If such tables exist in your system, syslog entries can help you determine the cause of the error. An entry such as
BB8 Buffer overrun when loading table <table>
BB8 Buffer overflow loading table <table>
indicates that there was no more space available to accept the table in the table buffer. If the table buffering is important for system performance, you should enlarge the buffer accordingly. Otherwise you may wish to consider deactivating the table buffering. You can find more precise information about table buffering in transaction ST02. See Note 703035 for more details.
- Sort the list in accordance with the “Rows affected” column, which is a measure of the database load generated by accesses to the table. If buffered tables are located at the top (“Buffer state” column is not empty), check their buffering. Large tables and tables with frequent changes should not be buffered. You must also check whether the WHERE conditions from the application match the type of table buffering.
- Sort the list by the “Invalidations” or “Changes” columns and check whether tables with a high number of changes or invalidations should be removed from buffering.
- Sort the list by the “Buffer size” column and check whether buffering is advisable for tables with the largest buffer size.
For more information about table buffering, see Note 47239. You can find detailed information on monitoring and optimization of table buffering in the “SAP R/3 Performance Optimization” book from the SAP PRESS range.
- Chained Rows
A chained row occurs when a data record no longer fits completely into one database block and must be split across several blocks. If such a data record is accessed, multiple blocks must be read instead of one, which can impair performance. A high number of chained rows is most likely if the average length of data records is relatively high (every data record over 8 KB must be split across several 8 KB blocks) or if new columns with values other than NULL are added to a table.
Use Note 1269911 to check how much sense it makes to take the measures required to avoid or reduce chained rows.
- Large Segments
Use DB02 or the following SELECT to check which segments currently take up the most space in the database.
SELECT * FROM
(SELECT SUBSTR(OWNER, 1, 20) OWNER,
SUBSTR(SEGMENT_NAME, 1, 20) SEGMENT, BYTES
FROM DBA_SEGMENTS ORDER BY BYTES DESC)
WHERE ROWNUM <=20;
In many cases, the size of tables and indexes can be limited by application-level or Basis-level archiving. To create an application-level archiving concept and limit data growth, SAP offers the service "SAP Data Volume Management".
You can find further details at http://service.sap.com/safeguarding.
To reduce data in Basis tables, follow Note 706478.
- Operating system-dependent optimization
- Client 8.1.7: Optimize the access to oraus.msb as described in Note 868590.
- Oracle 9.2: Follow Note 750205 to reduce the memory consumption of the Oracle processes.
- Oracle 9.2, JFS2: Follow Note 857847 to avoid freezes in the database during high workloads.
- JFS2, concurrent I/O or direct I/O: Use a file system block size of 512 bytes for the online redo logs (see Note 793113).
- Oracle 10g, HP-UX 11.23 IA64, or HP-UX 11.31 IA64: Ensure that you have applied the Scheduler Cumulative Patches PHKL_37809 (11.23) or PHKL_38397 (11.31) to avoid mutex problems (see Note 964344).
- Activated Traces
Unnecessarily activated traces place an additional load on the database. Therefore, make sure that the following critical traces are only activated when actually needed:
- SQL trace (transaction ST05)
- ABAP trace (transaction SE30)
- Oracle Net traces (Note 562403)
- ORADEBUG traces (Note 613872)
- Workflow event trace (transaction SWELS)
- Parallel Query
If parallel query is activated, this can have a significant effect on cost calculation and database accesses. Therefore, as described in Note 651060, check whether there are segments for which parallel processing was accidentally activated, and deactivate it.
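A sketch of how such segments could be found: DEGREE in DBA_TABLES is a blank-padded string, so it is trimmed before comparison, and the SAP% owner filter is an assumption about the schema naming in your system:

```sql
-- Tables with a parallel degree other than 1 (including DEFAULT) in SAP schemas
SELECT owner, table_name, degree
  FROM dba_tables
 WHERE TRIM(degree) NOT IN ('0', '1')
   AND owner LIKE 'SAP%';
```

An analogous check can be run against DBA_INDEXES for indexes.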
- UNUSABLE Indexes
As of Oracle 10g, indexes that have the status UNUSABLE can be ignored by the CBO due to SKIP_UNUSABLE_INDEXES = TRUE (previously, a termination occurred with ORA-01502). Therefore, these indexes may result in performance problems. Use the following statements to check whether there are UNUSABLE indexes in the database:
SELECT OWNER, INDEX_NAME
FROM DBA_INDEXES
WHERE STATUS = 'UNUSABLE';

SELECT INDEX_OWNER, INDEX_NAME, PARTITION_NAME
FROM DBA_IND_PARTITIONS
WHERE STATUS = 'UNUSABLE';
Determine and resolve the cause of the UNUSABLE indexes and rebuild the affected indexes in accordance with Note 332677.
- Further preventative measures
To avoid time-consuming locks after network problems, you can set SQLNET.EXPIRE_TIME in accordance with Note 20071.
To avoid incorrect CBO decisions due to missing column statistics, you can use BRCONNECT to generate statistics for all columns as described in note 723894. Up to Release 6.20, BRCONNECT only creates statistics for indexed columns by default. As of BRCONNECT 6.40 (35), statistics are generated for all columns – for partitioned tables or for tables with 30 columns or more, no histograms are created for non-indexed columns. As of BRCONNECT 7.00, you can use the “achist” force option (see Note 892296) to activate the creation of histograms also for columns that are not indexed.
Check whether there are segments in dictionary managed tablespaces that include more than 1000 extents. If so, see Note 787533 and construct the segments with fewer extents when you have the opportunity.
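These segments can be listed directly from DBA_SEGMENTS (a sketch; it does not distinguish dictionary managed from locally managed tablespaces, so cross-check the tablespace type):

```sql
-- Segments with more than 1000 extents
SELECT owner, segment_name, segment_type, extents
  FROM dba_segments
 WHERE extents > 1000
 ORDER BY extents DESC;
```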
Make sure that PSAPTEMP is not defined as DMTS/P with an extent size of less than 2 MB, since this may lead to ST enqueue problems. Instead, use LMTS/T or at least an extent size of 2 MB or higher (see Note 659946).
Ensure that there are no dictionary managed tablespaces that have small NEXT values (less than 1MB) or restricted MAXEXTENTS. Since the tablespace defaults are used implicitly at several points (for example, for LOBs, or during an online reorganization), incorrect settings may cause ST enqueues and terminations.
Use the following statement to check whether both CATALOG and CATPROC in the DBA_REGISTRY have the current patchset status, and are VALID:
SELECT COMP_ID, VERSION, STATUS
FROM DBA_REGISTRY
WHERE COMP_ID IN ('CATALOG', 'CATPROC');
If the displayed patch level is older than the patch set used, the catalog scripts have not run correctly (see Note 539921). You can use Note 582427 to correct this problem. If objects are INVALID, validate them as described in Note 648203. In certain cases, serious performance problems may occur when the catalog information is obsolete.
On UNIX, make sure that the Oracle software was correctly relinked. The file $ORACLE_HOME/install/make.log must not report any serious errors.
To optimize accesses to DBA views such as DBA_SEGMENTS, refer to Note 871455.
First of all, determine the Oracle session that belongs to the work process. To do this, note the PID from SM50 or SM66. In transaction ST04, choose "Detail analysis menu -> Oracle session" and look for the row containing the PID from SM50/SM66 in the "Client PID" column. Caution: If the work process has built secondary connections to the same database, several Oracle sessions may exist; you can assign them to the work process using the "Client PID".
If most of the runtime is used by Oracle (Note 805934), this information is the basis for a wait event analysis (Note 619188) and an SQL statement analysis (Note 766349).
See Note 354080, which deals with performance problems due to Oracle bugs and configuration problems.
See Note 1013912.
Note 521264 contains causes and analysis options in the event of Oracle hanging.
In many cases, significant differences in runtime can be explained by one of the following factors:
- The execution of a statement for the first time often takes longer than subsequent executions, because numerous blocks have to be read from the hard disk. In subsequent accesses, the blocks are already present in the memory in the SGA, which reduces the runtime considerably.
- If Bind Value Peeking is active because _OPTIM_PEEK_USER_BINDS is not set to FALSE as recommended by SAP (in Note 124361), this feature may cause the database to behave unpredictably. Depending on the access path, the runtime may differ greatly as a result.
- Changes to the CBO statistics (for example, as part of a BRCONNECT statistics run) may also lead to varying runtimes.
More precise information can only be obtained on the basis of a detailed SQL analysis.
While SELECT, UPDATE, and DELETE statements are executed on the basis of execution plans, INSERT statements work according to a completely different mechanism based on free lists. Therefore, you cannot improve INSERT performance through SQL tuning. Instead, you should analyze in detail the wait events that occur.
Among other things, the following factors can trigger poor performance for INSERT statements:
- If extents that are too small are used in Dictionary Managed Tablespaces for the object in question, performance problems may occur due to space transactions and ST enqueues. See also Note 745639.
Caution: Even without ST enqueue wait situations, INSERT performance can be seriously affected by space transactions, because it is very time-intensive not only to wait for the ST enqueue but also to carry out the space transaction itself, which in certain cases can last longer than one second. In this case, a number of LMODE=6 entries of type 'ST' can be found in V$LOCK, and the "recursive cpu usage" (STATISTIC#=8) for the session in V$SESSTAT is relatively high.
- If parallel INSERTs are waiting for a long time for “buffer busy waits”, increasing the free lists may help (see the “buffer busy waits” section in Note 619188).
- Bulk-INSERTs (“INSERT … FROM TABLE” in ABAP) may insert a large number of data records for each INSERT, depending on the size of the internal table, and this can cause longer runtimes. You can use an SQL trace to determine how many records are inserted for each INSERT. However, note that if the program displays return code 1 (ORA-00001: unique constraint violated) several data records have been processed unsuccessfully because an identical key combination already exists. If most INSERTS fail with ORA-00001, then the INSERT runtime may be very long, even though only a few data records are actually inserted.
- A large number of indexes or index fragmentation can also cause the performance to deteriorate because more blocks have to be read and changed in the SGA.
- If you use CBO statistics for the Oracle DDIC, incorrect CBO decisions relating to recursive SQL statements can extend the INSERT runtime considerably. In this case, refer to Note 138639.
- INSERT operations using the traditional PGA administration can sometimes show high CPU consumption together with a high PGA allocation. For more information, refer to Note 712624 (18).
- On Oracle 126.96.36.199 and 188.8.131.52, the bug described in Note 912918 may be responsible for long INSERT runtimes when you use ASSM.
- AUTOEXTEND extensions of data files during INSERT operations can also increase runtime (refer to the section “Data file init write” in Note 619188).
- On Oracle 10.2.0.3 or lower, the bug described in Note 1028099 may be responsible for long INSERT runtimes when you use a large Oracle buffer pool.
- If you use ASSM tablespaces together with parallel DML operations and a low COMMIT frequency, the behavior described in Note 1037755 may lead to a significant increase in buffer gets, increased CPU usage, and significant performance problems.
- Very poor INSERT performance may also be caused by NOCACHE-LOBs and the direct disk accesses that are connected to this. Therefore, in accordance with Note 500340, check whether the table contains LOB columns that have not been cached and, if required, activate caching.
- If you insert a large number of data records, the sorting of this source data may have a significant influence on the INSERT performance. The better the data is sorted with regard to indexes or partitions, the faster the INSERT can run. For this reason, using a HASH GROUP BY instead of a SORT GROUP BY in the statement that determines the datasets to be inserted can result in poorer INSERT performance, because its output is not sorted.
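Several of the checks mentioned in the list above can be performed directly on the database. The following queries are a sketch only; they assume SELECT privileges on the Oracle dictionary and V$ views, and the table name used in the last query is merely an example:

```sql
-- Space-transaction (ST) enqueue holders/waiters (see the caution above):
SELECT sid, type, lmode, request
FROM v$lock
WHERE type = 'ST';

-- Recursive CPU usage per session (high values can indicate space transactions):
SELECT st.sid, st.value AS recursive_cpu
FROM v$sesstat st, v$statname n
WHERE st.statistic# = n.statistic#
  AND n.name = 'recursive cpu usage';

-- LOB columns that are not cached (candidates for caching as per Note 500340);
-- the table name here is only an example:
SELECT table_name, column_name, cache
FROM dba_lobs
WHERE table_name = 'SXMSCLUP';
```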
Increased COMMIT times usually occur in direct connection with increased wait times on the Oracle wait event “log file sync”. To optimize this, you must improve the write I/O to the online redo logs. In this context, refer to Notes 619188 (section “log file sync”) and 793113.
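To see whether “log file sync” waits are in fact significant on your system, you can query the cumulated wait statistics; a sketch (TIME_WAITED is reported in centiseconds):

```sql
SELECT event, total_waits, time_waited / 100 AS seconds_waited, average_wait
FROM v$system_event
WHERE event = 'log file sync';
```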
“Direct read” refers to accesses in which a maximum of one row is returned by the database. This includes fully qualified primary key accesses and aggregation queries such as SELECT COUNT(*).
A “Sequential Read” refers to all other read database accesses, in which there may be more than one line returned. The term “Sequential read” is deceptive since it implies that blocks are always read sequentially (as in a full table scan). In reality, however, the index is accessed in most cases.
“Direct read” and “Sequential read” in SM50 are generally worthless for a performance analysis. The only information provided is that the processes are working on processing one of the queries directed at the database interface. You cannot determine from the analysis whether the wait time is triggered by a physical I/O, database locking, network, Oracle client, DBSL, DBI or for other reasons. For a more precise analysis, you can either use a buffer trace in ST05, create a work process trace of DBI and DBSL (SM50), or carry out an analysis in accordance with Note 805934.
Data Buffer Quality (or “hit ratio”) can be determined in various ways:
a) (1 – “physical reads” / “session logical reads”) * 100
b) (1 – (“physical reads” – “physical reads direct” – “physical reads direct (lob)”) / (“session logical reads” – “physical reads direct” – “physical reads direct (lob)”)) * 100
c) (1 – PHYSICAL_READS / (DB_BLOCK_GETS + CONSISTENT_GETS)) * 100
Transaction ST04 uses method a) while transaction ST04N uses method b). When there are many direct reads (parallel query, lobs, PSAPTEMP access, …), the hit ratio issued by ST04N can be significantly better than the ST04 hit ratio.
Method c) delivers results similar to method b), although the hit ratio determined on the basis of V$BUFFER_POOL_STATISTICS can be lower, since in many situations fewer logical reads are logged there than in V$SYSSTAT.
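The formulas above can also be evaluated directly against V$SYSSTAT. The following is a sketch of methods a) and b); the statistic names are those used in Oracle 10g/11g:

```sql
-- Method a) (as in ST04):
SELECT ROUND((1 - pr.value / lr.value) * 100, 2) AS hit_ratio_a
FROM v$sysstat pr, v$sysstat lr
WHERE pr.name = 'physical reads'
  AND lr.name = 'session logical reads';

-- Method b) (as in ST04N), corrected for direct reads:
SELECT ROUND((1 - (pr.value - prd.value - prdl.value)
                / (lr.value - prd.value - prdl.value)) * 100, 2) AS hit_ratio_b
FROM v$sysstat pr, v$sysstat lr, v$sysstat prd, v$sysstat prdl
WHERE pr.name   = 'physical reads'
  AND lr.name   = 'session logical reads'
  AND prd.name  = 'physical reads direct'
  AND prdl.name = 'physical reads direct (lob)';
```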
See Note 806554 for information about improving such I/O-intensive database operations.
See Note 842240.
In many cases, long-running actions are difficult to monitor, or can only be monitored using individual methods. For certain actions, such as full table scans or sorts, Oracle stores progress information in V$SESSION_LONGOPS. Using the following query, you can determine information about long-running actions that are currently active. The column REMAINING gives the estimated remaining time in seconds:
SELECT
  SUBSTR(OPNAME, 1, 15) OPERATION,
  SUBSTR(TARGET, 1, 20) OBJECT,
  TIME_REMAINING REMAINING
FROM V$SESSION_LONGOPS
WHERE
  SOFAR < TOTALWORK AND
  LAST_UPDATE_TIME > SYSDATE - 1 / 24 / 60;
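As a variant, the same view can also show how far each long-running operation has progressed; the following is a sketch, with column aliases chosen here purely for illustration:

```sql
SELECT sid,
       SUBSTR(opname, 1, 15) AS operation,
       SUBSTR(target, 1, 20) AS object,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining AS remaining_seconds
FROM v$session_longops
WHERE totalwork > 0
  AND sofar < totalwork;
```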
The following SAP courses cover a number of subjects, including database performance:
- ADM315: Workload Analysis
- BC490: Performance Tuning
- ADM505: Database Administration Oracle 1
- ADM506: Database Administration Oracle 2
- TASP01: EarlyWatch Health Check Empowering Workshop
- USCP10: Business Process Performance Optimization
- TEWB10: SQL Cache Analysis for Oracle
The optimization of system performance is supported by the SMO service “SAP Technical Performance Optimization” (previously known as “SQL Statement Optimization”). Depending on the individual agreement, it can focus on specific areas, such as database performance optimization. For more information, see SAP Service Marketplace under the alias TPO.
If you have a local performance problem that only occurs on a certain table, you can search the SAP Notes using search terms such as “<table_name>” and “Performance” for known problems.
The following books, which cover optimizing database performance among other things, are available from SAP Press:
Title: SAP R/3 Performance Optimization
Author: Thomas Schneider
Publisher: Galileo Press
Title: ABAP Performance Tuning
Author: Hermann Gahm
Publisher: Galileo Press
Within the Oracle online documentation, relevant information is available in the book:
Performance Tuning Guide and Reference
The use of Java EE in SAP technology has the following advantages for SAP NetWeaver Application Server:
- The open integration architecture SAP NetWeaver integrates perfectly into the openness of Java EE.
- Java EE is further proof of the strategy of platform-independence pursued by SAP.
- The Java EE Connector architecture allows standardized integration of external applications.
- Web Services technologies are supported directly by Java.
- The quickly-growing Java community provides simple access to experienced developers.
If you are new to SAP Web AS Java, these are the common terms that are used.
Java is an object-oriented and platform-independent programming language that has spread across many areas.
The Java Runtime Environment (JRE) is a platform-independent environment for running Java programs.
Java programs come in two forms: applications and applets. Java applications are computer programs with the full functional range also provided by other programming languages. The term applet essentially means “little application”; it usually refers to a Java applet, a small computer program written in the Java programming language that runs in a Web browser.
The Java virtual machine is a model of a processor (which often does not exist as real hardware) and the associated system architecture; it is a computer program that emulates a processor. See the graphic below.
A Java Development Kit (JDK) is required to write platform-independent Java programs. The Java Development Kit includes the software required to create and test Java applications and applets, the packages with the Java classes that are part of the basic configuration, and the online documentation.
Java Standard Edition (Java SE). The standard edition is the normal software environment and defines the Java™ SE Development Kit (JDK). This includes the Java SE Runtime Environment (JRE).
Java Enterprise Edition (Java EE). The enterprise edition builds on the JDK and integrates additional packages such as Enterprise JavaBeans (EJB), servlets, JavaServer Pages (JSP), Java Mail API, and Java Transaction Service (JTS). The Java EE SDK (Software Development Kit) includes the Sun Java System Application Server and requires a JDK.
Java Micro Edition (Java ME). The micro edition is a small runtime environment for small devices such as Personal Digital Assistants (PDAs) or telephones.
Enterprise JavaBeans (EJB) are packaged application logic components in accordance with the Java EE specification.
The EJB container is the part of the Java EE application server in which Enterprise JavaBeans are executed.
The Web container is the part of the Java EE application server in which JavaServer Pages and Java servlets are executed.
JavaServer Pages (JSP) is a technology developed by Sun that essentially allows the simple dynamic generation of HTML and XML output for a Web server.
An enhanced self-diagnosis tool has been delivered with SAP Solution Manager 7.0, Support Package 12. It should be used for troubleshooting, not regular self-diagnosis.
The self-diagnosis checks the system requirements for operating SAP Solution Manager. The self-diagnosis tool is highly configurable and can perform solution-specific or cross-solution self-diagnosis. With this function, you can analyze your SAP Solution Manager system and/or a solution to identify and solve problems. You can also customize SAP Solution Manager self-monitoring to avoid red alerts: the self-monitoring functionality is in transaction SOLUTION_MANAGER → Settings → Self-Diagnosis, where you can select in the Alert Type area which types should generate an alert and inform you.
The self-diagnosis checks the RFC connections, batch jobs, and performance values. By default, the self-diagnosis analyses run daily in the background. You can perform a new analysis at any time, e.g. to check whether the problems shown by alerts have been resolved. You can display the self-diagnosis results at various levels of detail:
- For the SAP Solution Manager (cross-solution), e.g. to identify SAP Solution Manager system performance problems.
- For the SAP Solution Manager single solution, e.g. to check that the master data of a solution is consistent.
New self diagnosis in SAP Solution Manager: a quick guide
SAP Note 1073382
Ramp-up knowledge transfer
http://service.sap.com/rkt-solman → Overview and Setup of SAP Solution Manager
CA Wily Introscope
How can I check that CA Wily Introscope Enterprise Manager is working correctly?
1) Connect to your CA Wily Introscope WebView (part of CA Wily Introscope Enterprise Manager) via web browser – usually http://hostname:8081/webview/jsp/login.jsp. If the logon screen of your WebView does not appear, CA Wily Introscope Enterprise Manager may not be active.
2) Go to operating system level and verify whether the Enterprise Manager system process is available
a. Windows: Check Services
b. Unix: Check with command ps
How can I check that Wily Introscope Agent is working correctly?
1) In SMD → Diagnostics Setup → Managed Systems → Introscope Agent, verify that the Introscope Agent is running for the system in question.
Note: Ensure that the version of your agent is the same as the version of the source on your SAP Solution Manager. The equal sign icon indicates that the version is up to date.
2) Check that CA Wily Introscope Enterprise Manager receives information from CA Wily Introscope Agent.
SAP EarlyWatch Alert (EWA)
SAP EarlyWatch Alert is a good indicator for the functioning of the RFC connections. If there is a small red flag in the solution overview of transaction SOLUTION_MANAGER, something is wrong with the EWA data collection.
In this case, check in the solution which system in the landscape is affected, and check the RFC connections and SDCC. For further information, see SAP Note 216952 “Service Data Control Center (SDCC) – FAQ”. If the EWA is grey in the overview, there is also a problem.
For more information, see SAP Note 762696 “Grey rating for EarlyWatch Alert”.
System Landscape Directory (SLD)
How can I ensure that SLD is working correctly in connection to SAP Solution Manager?
- Call transaction SMSY on your SAP Solution Manager, and check that the system information is sent from your SLD to SMSY. Verify the data sources of your landscape components (Hosts, Databases, Systems, and System Components).
- Call transaction SMSY_SETUP, and check that the “Automatic Data Transfer” comes from your SLD (usually the local SLD, which in the standard case is supplied by the central SLD).
Compare the last refresh dates in SMSY with the last data transfer date in SMSY_SETUP. Different dates could indicate that the landscape information is not correctly supplied by your SLD.