Subject: Streams Performance Recommendations
Doc ID: 335516.1 Type: BULLETIN
Last Revised: 09-APR-2009 Status: PUBLISHED
In this Document
Purpose
Scope and Application
Streams Performance Recommendations
General Operational Considerations
General Recommendations
Logminer related Configuration
Capture Related Recommendations
Propagation Recommendations
Apply Recommendations
Additional Apply Performance Tips
References
Applies to:
Oracle Server - Enterprise Edition
Information in this document applies to any platform.
Purpose
Last modification date: April 10, 2009
Oracle Streams enables the sharing of data and events in a data stream either within a database or from
one database to another. This article is intended to assist Replication DBAs in improving the performance
of Oracle Streams Replication.
Scope and Application
To be used by Oracle support analysts and replication DBAs to improve the performance of replication
using Streams in Oracle 9.2 or higher. Below are some of the key recommendations for a
successful implementation of Streams in Oracle database release 9.2 and above.
Streams Performance Recommendations
Customers should be on the latest patchset, and it is recommended to apply the relevant
patches as detailed in: Note 437838.1 Streams Specific Patches.
These patches address most of the commonly encountered performance and
stability issues connected with Streams.
A complete list of 10.2 recommendations is detailed in : Note 418755.1 - 10gR2 Streams
Recommended Configuration (it is anticipated that various related notes will be merged
at some point).
If there are performance issues, the following notes help gather relevant information
and isolate the cause:
Note 746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control
and Note 730036.1 Overview for Troubleshooting Streams Performance Issues.
General Operational Considerations
The following main areas of operation can be affected by the configuration details which follow.
Streams uses buffered queues with the aim that all LCRs are processed in memory.
Spill of information from memory to disk can affect performance and can occur in 3 functional
areas; these spill activities are termed :
- Logminer spill (source capture) , Queue spill (capture and apply) and Apply Spill (apply).
All Oracle versions have Queue spill. Queue spill is typically associated with LCRs not being
processed in a timely fashion from the buffered queue; rather than allowing old LCRs to remain in
memory for an extended period of time, they are spilled out to disk to the table aq$_<queue_table>_p.
Queue spill may also be associated with memory (Streams pool) space; if there is not enough
memory to accommodate LCRs, they are again spilled to disk.
Spill of this nature may be associated with a variety of causes; it is therefore worth considering
the parameter settings below.
Oracle versions >= 10gR2 additionally introduce Apply Spill. Apply spill is primarily connected with
the processing of large transactions although it does take account of long running transactions.
It is worth noting that Apply spill can operate on queue spill.
The Capture side leaves the management of these two types of transaction to the Apply side.
These types of transaction are written to a partitioned table at the Apply side database; the partitioned
table is far easier to manipulate than the queue spill table.
The spill threshold can be configured.
Logminer spill writes cached logminer information out to logminer spill tables. It may be necessary to
reconfigure the amount of space for cached logminer information. Typically the default allocation
is sufficient for most applications.
Where there is Queue spill, there may be a need to periodically deallocate unused space from, or
shrink, the related aq$_<queue_table>_p table.
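As a sketch of such periodic maintenance (the queue table name STREAMS_QT and owner STRMADMIN are hypothetical; verify the operation is appropriate for your queue table's storage before running it):

```sql
-- Release space left behind by spilled LCRs in the _p spill table.
ALTER TABLE strmadmin.aq$_streams_qt_p DEALLOCATE UNUSED;
```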
Streams (Capture) has to checkpoint as it goes along in order to ensure that it can restart from a
point in the past which does not require too many redo logs to be reprocessed. Care should
be taken to understand when checkpoints will take place as well as the length of time checkpoint
information is retained. Checkpoint information is stored in a single unpartitioned table. Therefore
consideration should be given to the related parameters below especially in the event of there being
multiple Capture processes. Checkpoint frequency and purge interval can be configured.
It is inefficient to checkpoint too frequently; similarly, it is inadvisable to hold on to checkpoint
information too long - both can affect performance.
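A sketch of configuring both settings (the capture name STRM_CAPTURE is hypothetical; _checkpoint_frequency is a hidden Capture parameter, so confirm its use with Oracle Support for your release):

```sql
-- Keep checkpoint metadata for 7 days, and checkpoint roughly once per
-- 1000 MB of redo mined.
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'STRM_CAPTURE',
    checkpoint_retention_time => 7);
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'STRM_CAPTURE',
    parameter    => '_checkpoint_frequency',
    value        => '1000');
END;
/
```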
The Capture process in 10gR1 and above now uses an inbuilt flow control mechanism. Flow control
aims to ensure that the source of LCRs (the Capture) does not flood the buffer queue of the Apply
process before the Apply process has had a chance to apply these.
In certain cases it may be relevant to amend the flow control related parameters.
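Whether flow control is actually throttling the Capture can be checked from the dynamic view:

```sql
-- A Capture held back by flow control reports the state
-- 'PAUSED FOR FLOW CONTROL' in this view.
SELECT capture_name, state
FROM   v$streams_capture;
```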
General Recommendations
* Streams Pool Allocation
11g: set STREAMS_POOL_SIZE to be greater or equal to 100 MB;
10gR2: set SGA_TARGET > 0 and STREAMS_POOL_SIZE=0 to enable autotuning of the
Streams pool;
10gR1: use the STREAMS_POOL_SIZE init.ora parameter to configure the Streams memory
allocation;
9.2: increase the SHARED_POOL size to minimize spill-over from buffer queue to disk
As well as addressing how memory is to be allocated for Streams, in version 9.2
the Streams/logminer dictionary tables should be located outside the SYSTEM tablespace as
follows:
create tablespace &tspace_name datafile '&db_file_directory/&db_file_name'
  size 25m reuse autoextend on maxsize unlimited;
execute dbms_logmnr_d.set_tablespace('&tspace_name');
Note: in 10gR1 and above, the Streams data dictionary tables are created
in the SYSAUX tablespace by default; there is no need to move them.
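The 10gR2 and 11g recommendations above translate to parameter changes along these lines (the sizes shown are illustrative, not prescriptive):

```sql
-- 11g: fixed Streams pool; 100 MB is the recommended minimum.
ALTER SYSTEM SET streams_pool_size = 256M SCOPE=BOTH;

-- 10gR2: let Automatic Shared Memory Management size the Streams pool.
ALTER SYSTEM SET sga_target = 1G SCOPE=SPFILE;
ALTER SYSTEM SET streams_pool_size = 0 SCOPE=SPFILE;
```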
* Initialization file hidden parameters
10gR2
_job_queue_interval=1
_spin_count=5000
10gR1
_job_queue_interval=1
_spin_count=5000
9.2
_first_spare_parameter=50
_kghdsidx_count=1
_job_queue_interval=1
_spin_count=5000
Note: in 9.2.0.8, _first_spare_parameter configures the percentage of the shared pool to be
used for the Streams pool. A _kghdsidx_count setting of 1 avoids the use of subpools within the
shared pool; in 9.2, multiple subpools have been associated with performance issues.
* Queue Monitor / AQ_TM_PROCESSES
In 10gR1 onwards, remove the AQ_TM_PROCESSES parameter from your database initialization
parameter file. This will allow the queue management background activity to be managed
automatically.
Note: You should not set the AQ_TM_PROCESSES parameter explicitly to 0. Doing so would
disable background queue management activity, including the cleanout of processed messages,
and would break AQ functionality relating to message delay, expiration and
retention.
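Rather than setting the parameter to 0, an explicit setting can be removed from the spfile so the automatic behaviour takes over at the next restart:

```sql
-- Remove any explicit AQ_TM_PROCESSES setting (10g and above).
ALTER SYSTEM RESET aq_tm_processes SCOPE=SPFILE SID='*';
```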
* Queue to Queue Propagation
When upgrading to Oracle Database Release 10.2 and above, from releases 9.2 or 10.1, drop any
propagations and recreate them specifying the queue_to_queue parameter as TRUE.
This is especially relevant where a RAC destination is in place as this allows propagation to
fail over to an existing instance which now owns the Streams buffered queue.
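A sketch of recreating a propagation with the queue_to_queue parameter (all names and the database link are hypothetical):

```sql
-- Queue-to-queue propagation allows failover to the RAC instance that
-- owns the buffered queue.
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'STRM_PROP',
    source_queue       => 'strmadmin.capture_q',
    destination_queue  => 'strmadmin.apply_q',
    destination_dblink => 'DEST.WORLD',
    queue_to_queue     => TRUE);
END;
/
```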
* Multiple Separate Queues for Capture and Apply
In all Streams versions up to and including 11g, where bi-directional replication is in place,
configure separate Capture and Apply queues to minimize the effect of Queue spill to disk. Do not
have both these processes sharing the same queue unless Capture and Apply reside in same database
and there is no propagation taking place. Both processes will see LCRs for both activities; if the
Capture queue has spilled (Queue spill) this can affect Apply operation and slow it down.
* Avoid Complex Rules
Make sure that rules are kept simple so that fast rule evaluation can be used. Fast rule evaluation
applies to simple rules that reference object_owner, object_name and source_database_name in
simple equality statements.
In 9.2, it was necessary to avoid 'LIKE' , 'NOT' and != operators as much as possible as these
operators disallowed rule evaluation optimizations. It is better (i.e., faster to evaluate) to have
multiple simple rules than 1 complex rule.
In 10g and above, the same can be achieved by using a negative rule set to eliminate tables or objects from Streams processing.
Examples of simple rules:
(((:dml.get_object_owner() = 'FSADBO1' and
:dml.get_object_name() = 'STREAMS_TEST')) and
:dml.is_null_tag() = 'Y' and
:dml.get_source_database_name() = 'FSABCD.WORLD'
)
(:dml.get_object_owner() = 'POSIT' and :dml.is_null_tag() = 'Y'
and :dml.get_object_name() IN
('TB_M21_1','TB_M21_2','TB_M21_3','TB_M21_40','TB_M21_10','PN_M211B010'))
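The negative rule set approach mentioned above for 10g can be sketched with DBMS_STREAMS_ADM (the table, capture and queue names are hypothetical):

```sql
-- inclusion_rule => FALSE places the rule in the negative rule set,
-- excluding the table from Streams processing.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'fsadbo1.audit_log',
    streams_type   => 'capture',
    streams_name   => 'strm_capture',
    queue_name     => 'strmadmin.capture_q',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => FALSE);
END;
/
```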