Posted by sevenchina on 2008-01-04 15:01

LogMiner Installation and Usage


Subject: The LogMiner Utility
Doc ID: Note:62508.1
Type: BULLETIN
Last Revision Date: 02-MAY-2006
Status: PUBLISHED
PURPOSE
   This paper details the mechanics of what LogMiner does, as well as detailing
   the commands and environment it uses.
SCOPE & APPLICATION
   For DBAs requiring further information about LogMiner.
   The ability to provide a readable interface to the redo logs has long been
   requested by customers. The ALTER SYSTEM DUMP LOGFILE interface has been
   around for a long time, though its usefulness outside Support is limited.
   A number of third-party products, e.g. BMC's PATROL DB-Logmaster
   (formerly SQL*Trax), provide some functionality in this area. With Oracle
   release 8.1 there is a facility in the Oracle kernel to do the same.
   LogMiner allows the DBA to audit changes to data and perform analysis on
   the redo to determine trends, aid in capacity planning, Point-in-Time
   Recovery, etc.
   
RELATED DOCUMENTS

Note 117580.1   ORA-356, ORA-353, & ORA-334 Errors When Mining Logs with
                Different DB_BLOCK_SIZE
Oracle8i - 8.1 LogMiner:
=========================

1. WHAT DOES LOGMINER DO?
=========================
   LogMiner can be used against online or archived logs from either the
   'current' database or a 'foreign' database. The reason for this is that it
   uses an external dictionary file to access meta-data, rather than the
   'current' data dictionary.
   It is important that this dictionary file is kept in step with the database
   being analyzed. If the dictionary is out of step with the redo, analysis
   will be considerably more difficult. Building the external dictionary is
   discussed in detail in section 3.

   LogMiner scans the log/logs it is interested in, and generates, using the
   dictionary file meta-data, a set of SQL statements which would have the same
   effect on the database as applying the corresponding redo record.
   LogMiner prints out the 'Final' SQL that would have gone against the
   database. For example:
       Insert into Table x Values ( 5 );
       Update Table x set COLUMN=newvalue WHERE ROWID=''
       Delete from Table x WHERE ROWID='' AND COLUMN=value AND COLUMN=VALUE
   We do not actually see the SQL that was issued, rather an executable SQL
   statement that would have the same EFFECT. Since it is also stored in the
   same redo record, we also generate the undo column which would be necessary
   to roll this change out.
   For SQL which rolls back, no undo SQL is generated, and the rollback flag is
   set. An insert followed by a rollback therefore looks like:
        REDO                              UNDO            ROLLBACK
        insert sql                        delete sql      0
        delete sql                                        1
   Because it operates against the physical redo records, multirow operations
   are not recorded in the same manner. For example, DELETE FROM EMP WHERE
   DEPTNO=30 might delete 100 rows in the SALES department in a single
   statement; the corresponding LogMiner output would show one row of output
   per row deleted in the database.
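   As a sketch (schema and values hypothetical), the per-row expansion can be
   seen by querying V$LOGMNR_CONTENTS after mining the log that covers the
   statement:

   ```sql
   -- Hypothetical illustration: a single multirow statement such as
   --   DELETE FROM EMP WHERE DEPTNO = 30;
   -- surfaces as one DELETE row per affected database row.
   SELECT sql_redo
     FROM v$logmnr_contents
    WHERE seg_owner = 'SCOTT'       -- hypothetical owner
      AND seg_name  = 'EMP'
      AND operation = 'DELETE';
   ```

   Each returned SQL_REDO is a single-row DELETE qualified by ROWID.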
2. WHAT IT DOES NOT DO
======================
   1. 'Trace' Application SQL - use SQL_Trace/10046
      Since LogMiner only generates low-level SQL, not what was issued, you
      cannot use LogMiner to see exactly what was being done based on the SQL.
      What you can see, is what user changed what data at what time.
   2. 'Replicate' an application
      LogMiner does not cover everything. In particular, DDL is not supported:
      the inserts into tab$ etc. are shown, but the CREATE TABLE statement
      itself is not.
   3. Access data dictionary SQL in a visible form
      Especially UPDATE USER$ SET PASSWORD=.
   Other Known Current Limitations
   ===============================
   LogMiner cannot cope with Objects.
   LogMiner cannot cope with Chained/Migrated Rows.
   LogMiner produces fairly unreadable output if there is no record of the
   table in the dictionary file. See below for output.
   
   The database where the analysis is being performed must have a block size
   at least equal to that of the originating database. See Note 117580.1.
   
3. FUNCTIONALITY
================
   The LogMiner feature is made up of three procedures in the LogMiner
   (dbms_logmnr) package, and one in the Dictionary (dbms_logmnr_d).
   These are built by the following scripts: (Run by catproc)

       $ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql
       $ORACLE_HOME/rdbms/admin/dbmslogmnr.sql
       $ORACLE_HOME/rdbms/admin/prvtlogmnr.plb
   since 8.1.6:

       $ORACLE_HOME/rdbms/admin/dbmslmd.sql
       $ORACLE_HOME/rdbms/admin/dbmslm.sql
       $ORACLE_HOME/rdbms/admin/prvtlm.plb
   1. dbms_logmnr_d.build
      This procedure builds the dictionary file used by the main LogMiner
      package to resolve object names, and column datatypes. It should be
      generated relatively frequently, since otherwise newer objects will not
      be recorded.
      It is possible to generate a dictionary file from an 8.0 database and
      use it to analyze Oracle 8.0 redo logs. To do this, run
      "dbmslogmnrd.sql" against the 8.0 database, then follow the procedure
      below. All analysis of the logfiles must take place while connected to
      an 8.1 database, since dbms_logmnr uses trusted callouts and therefore
      cannot operate against Oracle 8.0.
      Any redo relating to tables which are not included in the dictionary
      file is dumped RAW. Example: if LogMiner cannot resolve the table and
      column references, the following is output (insert statement):
          insert into UNKNOWN.objn:XXXX(Col,....) VALUES
             ( HEXTORAW('xxxxxx'), HEXTORAW('xxxxx')......)
      PARAMETERS
      ==========
      1. The name of the dictionary file you want to produce.
      2. The name of the directory where you want the file produced.
      The Directory must be writeable by the server i.e. included in
      UTL_FILE_DIR path.
   
      EXAMPLE
      =======
       BEGIN
          dbms_logmnr_d.build(
             dictionary_filename => 'miner_dictionary.dic',
             dictionary_location => '/export/home/sme81/aholland/testcases/logminer'
          );
       END;
       /
   The dbms_logmnr package actually performs the redo analysis.
   2. dbms_logmnr.add_logfile
      This procedure registers the logfiles to be analyzed in this session. It
      must be called once for each logfile. This populates the fixed table
      X$logmnr_logs (v$logmnr_logs) with a row corresponding to the logfile.
      Parameters
      ===========
      1. The logfile to be analyzed.
       2. Option
          DBMS_LOGMNR.NEW (SESSION)
             First file to be put into PGA memory; this initialises the
             v$logmnr_logs table.
          DBMS_LOGMNR.ADDFILE
             Adds another logfile to the v$logmnr_logs PGA memory. Has the
             same effect as NEW if there are no rows there presently.
          DBMS_LOGMNR.REMOVEFILE
             Removes a row from v$logmnr_logs.
      Example
      =======
       Include all my online logs for analysis:
      BEGIN
         dbms_logmnr.add_logfile(
            '/export/home/sme81/aholland/database/files/redo03.log',
                               DBMS_LOGMNR.NEW );
         dbms_logmnr.add_logfile(
            '/export/home/sme81/aholland/database/files/redo02.log',
                               DBMS_LOGMNR.ADDFILE );
         dbms_logmnr.add_logfile(
            '/export/home/sme81/aholland/database/files/redo01.log',
                               DBMS_LOGMNR.ADDFILE );
      END;
      /
       A full path should be given, though an environment variable is
       accepted. The variable is NOT expanded in V$LOGMNR_LOGS.
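       To check which files are currently registered (a quick sanity query,
       not part of the original note), v$logmnr_logs can be inspected from
       the same session:

       ```sql
       -- Lists the files added so far; filenames appear exactly as entered,
       -- environment variables unexpanded.
       SELECT log_id, filename
         FROM v$logmnr_logs;
       ```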
    3. dbms_logmnr.start_logmnr
       NOTE: When you start LogMiner, you should specify the dictionary
       option rather than taking the default. If you do not specify the
       dictionary option, you will see 'unknown' in the object names.
       From the 9.2 Administrator's Guide:
          Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.
          It is recommended that you specify a dictionary option. If you do
          not, LogMiner cannot translate internal object identifiers and
          datatypes to object names and external data formats, and would
          therefore return internal object IDs and present data as hex
          bytes. Additionally, the MINE_VALUE and COLUMN_PRESENT functions
          cannot be used without a dictionary.
         If you are specifying the name of a flat file dictionary, you must
         supply a fully qualified filename for the dictionary file. For
         example, to start LogMiner using /oracle/database/dictionary.ora,
         issue the following command:
         SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
         2 DICTFILENAME =>'/oracle/database/dictionary.ora');
      This package populates V$logmnr_dictionary, v$logmnr_parameters,
      and v$logmnr_contents.
       Parameters
       ==========
       1. StartScn       Default 0
       2. EndScn         Default 0
       3. StartTime      Default '01-jan-1988'
       4. EndTime        Default '01-jan-2988'
       5. DictFileName   Default ''
       6. Options        Default 0 (debug flag - uninvestigated as yet)
       A point to note here is that comparisons are made between the SCNs,
       the times entered, and the range of values in the file. If the SCN
       range OR the start/end time range is not wholly contained in this
       log, the start_logmnr command will fail with the general error:
           ORA-01280 Fatal LogMiner Error.
   4. dbms_logmnr.end_logmnr;
      This is called with no parameters.
      /* THIS IS VERY IMPORTANT FOR SUPPORT */
      This procedure MUST be called prior to exiting the session that was
      performing the analysis. This is because of the way the PGA is used to
      store the dictionary definitions from the dictionary file, and the
      V$LOGMNR_CONTENTS output.
       If you do not call end_logmnr, you will silently get an ORA-00600
       [723] on logoff. This OERI is triggered because the PGA is bigger at
       logoff than it was at logon, which is considered a space leak. The
       main problem from a support perspective is that it is silent, i.e.
       not signalled back to the user screen, because by then the user has
       logged off. The way to spot LogMiner leaks is that the trace file
       produced by the OERI 723 will have a PGA heap dumped with many chunks
       of type 'Freeable' with a description of "KRVD:alh".
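       Whatever the outcome of the analysis, the session should therefore
       always be closed out explicitly before logoff:

       ```sql
       -- Releases the PGA memory holding the dictionary definitions and the
       -- V$LOGMNR_CONTENTS output, avoiding the ORA-600 [723] space leak.
       EXECUTE DBMS_LOGMNR.END_LOGMNR;
       ```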
4. OUTPUT
=========
   Effectively, the output from LogMiner is the contents of V$logmnr_contents.
   The output is only visible during the life of the session which runs
   start_logmnr. This is because all the LogMiner memory is PGA memory, so it
   is neither visible to other sessions, nor is it persistent. As the session
   logs off, either dbms_logmnr.end_logmnr is run to clear out the PGA, or an
   OERI 723 is signalled as described above.
   Typically users are going to want to output sql_redo based on queries by
   timestamp, segment_name or rowid.
   v$logmnr_contents
   Name                            Null?    Type
   ------------------------------- -------- ----
   SCN                                    NUMBER
   TIMESTAMP                              DATE
   THREAD#                                  NUMBER
   LOG_ID                                 NUMBER
   XIDUSN                                 NUMBER
   XIDSLT                                 NUMBER
   XIDSQN                                 NUMBER
   RBASQN                                 NUMBER
   RBABLK                                 NUMBER
   RBABYTE                                  NUMBER
   UBAFIL                                 NUMBER
   UBABLK                                 NUMBER
   UBAREC                                 NUMBER
   UBASQN                                 NUMBER
   ABS_FILE#                              NUMBER
   REL_FILE#                              NUMBER
   DATA_BLK#                              NUMBER
   DATA_OBJ#                              NUMBER
   DATA_OBJD#                               NUMBER
   SEG_OWNER                              VARCHAR2(32)
   SEG_NAME                                 VARCHAR2(32)
   SEG_TYPE                                 VARCHAR2(32)
   TABLE_SPACE                              VARCHAR2(32)
   ROW_ID                                 VARCHAR2(19)
   SESSION#                                 NUMBER
   SERIAL#                                  NUMBER
   USERNAME                                 VARCHAR2(32)
   ROLLBACK                                 NUMBER
   OPERATION                              VARCHAR2(32)
   SQL_REDO                                 VARCHAR2(4000)
   SQL_UNDO                                 VARCHAR2(4000)
   RS_ID                                    VARCHAR2(32)
   SSN                                    NUMBER
   CSF                                    NUMBER
   INFO                                     VARCHAR2(32)
   STATUS                                 NUMBER
   PH1_NAME                                 VARCHAR2(32)
   PH1_REDO                                 VARCHAR2(4000)
   PH1_UNDO                                 VARCHAR2(4000)
   PH2_NAME                                 VARCHAR2(32)
   PH2_REDO                                 VARCHAR2(4000)
   PH2_UNDO                                 VARCHAR2(4000)
   PH3_NAME                                 VARCHAR2(32)
   PH3_REDO                                 VARCHAR2(4000)
   PH3_UNDO                                 VARCHAR2(4000)
   PH4_NAME                                 VARCHAR2(32)
   PH4_REDO                                 VARCHAR2(4000)
   PH4_UNDO                                 VARCHAR2(4000)
   PH5_NAME                                 VARCHAR2(32)
   PH5_REDO                                 VARCHAR2(4000)
   PH5_UNDO                                 VARCHAR2(4000)
    SQL> set heading off
    SQL> select scn, username, sql_redo from v$logmnr_contents
          where seg_name = 'EMP';
    12134756      scott         insert (...) into emp;
    12156488      scott         delete from emp where empno = ...
    12849455      scott         update emp set mgr =
    This returns the results of the SQL statement without column headings.
    The columns you will most often query are "sql_undo" and "sql_redo",
    because they give the transaction details and syntax.
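    The timestamp and row_id columns support the typical filters mentioned
    earlier. A sketch (table name, time window and ROWID are hypothetical):

    ```sql
    -- Changes to a table within a time window:
    SELECT timestamp, username, sql_redo
      FROM v$logmnr_contents
     WHERE seg_name = 'EMP'
       AND timestamp BETWEEN
           TO_DATE('01-JAN-1998 08:30:00', 'DD-MON-YYYY HH24:MI:SS')
       AND TO_DATE('01-JAN-1998 08:45:00', 'DD-MON-YYYY HH24:MI:SS');

    -- Undo needed to back out the change to one specific row:
    SELECT sql_undo
      FROM v$logmnr_contents
     WHERE row_id = 'AAAHfVAAFAAAAEtAAB';   -- hypothetical ROWID
    ```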
5. PLACEHOLDERS
===============
   In order to allow users to be able to query directly on specific data
   values, there are up to five PLACEHOLDERs included at the end of
   v$logmnr_contents. When enabled, a user can query on the specific BEFORE and
   AFTER values of a specific field, rather than a %LIKE% query against the
   SQL_UNDO/REDO fields. This is implemented via an external file called
   "logmnr.opt". (See the Supplied Packages manual entry on dbms_logmnr for
   further details.) The file must exist in the same directory as the
   dictionary file used, and contains the prototype mappings of the PHx fields
   to the fields in the table being analyzed.
      Example entry
      =============
      colmap =SCOTT EMP ( EMPNO, 1, ENAME, 2, SAL, 3 );
   In the above example, when a redo record is encountered for the SCOTT.EMP
   table, the full statement redo and undo information populates the SQL_REDO
   and SQL_UNDO columns respectively; however, the PH3_NAME, PH3_REDO and
   PH3_UNDO columns will also be populated with 'SAL', the new value and the
   prior value respectively, which means that the analyst can query in the
   form:
       SELECT * FROM V$LOGMNR_CONTENTS
       WHERE SEG_NAME ='EMP'
       AND PH3_NAME='SAL'
       AND PH3_REDO=1000000;
   The returned PH3_UNDO column would return the value prior to the update.
   This enables much more efficient queries to be run against the
   V$LOGMNR_CONTENTS view, and if, for instance, a CTAS is issued to store a
   physical copy, the column can be indexed.
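   A minimal sketch of that approach (table and index names hypothetical);
   the CTAS must run in the same session that ran start_logmnr, since the
   view's contents live in that session's PGA:

   ```sql
   -- Persist the mined rows so they survive the session.
   CREATE TABLE logmnr_results AS
     SELECT * FROM v$logmnr_contents;

   -- Index the placeholder columns for efficient before/after value queries.
   CREATE INDEX logmnr_results_ph3 ON logmnr_results (ph3_name, ph3_redo);
   ```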
Search Words:
=============
Log Miner
Subject: How to Setup LogMiner
Doc ID: Note:111886.1
Type: BULLETIN
Last Revision Date: 11-JUL-2004
Status: PUBLISHED
PURPOSE
To provide the steps for setting up LogMiner on the database.

SCOPE & APPLICATION
This is intended to help users set up LogMiner.
RELATED DOCUMENTS
Oracle8i Administrator's Guide, Release 8.1.5
Oracle8i Administrator's Guide, Release 8.1.6
Introduction:
=============
LogMiner runs in an Oracle instance with the database either mounted or
unmounted. LogMiner uses a dictionary file, which is a special file that
indicates the database that created it as well as the time the file was
created. The dictionary file is not required, but is recommended. Without a
dictionary file, the equivalent SQL statements will use Oracle internal object
IDs for the object name and present column values as hex data.
For example, instead of the SQL statement:
    INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);
LogMiner will display:
    insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
    hextoraw('c306'));
Create a dictionary file by mounting a database and then extracting dictionary
information into an external file.

You must create the dictionary file from the same database that generated the
log files you want to analyze. Once created, you can use the dictionary file
to analyze redo logs.
When creating the dictionary, specify the following:
    * DICTIONARY_FILENAME to name the dictionary file.
    * DICTIONARY_LOCATION to specify the location of the file.
LogMiner analyzes redo log files from any version 8.0.x and later Oracle
database that uses the same database character set and is running on the same
hardware as the analyzing instance.
Note: The LogMiner packages are owned by the SYS schema. Therefore, if you
      are not connected as user SYS, you must include SYS in your call. For
      example:
          EXECUTE SYS.DBMS_LOGMNR_D.BUILD
To Create a Dictionary File on an Oracle8i Database:
====================================================
1. Make sure to specify an existing directory that the PL/SQL procedure can
   write to, by setting the initialization parameter UTL_FILE_DIR in the
   init.ora.
   For example, set the following to use /oracle/logs:
   UTL_FILE_DIR = /oracle/logs
   Be sure to shut down and restart the instance after adding UTL_FILE_DIR
   to the init.ora.

   If this parameter is not set, the procedure will fail.
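   A quick check after the restart (a SQL*Plus verification step, not from
   the original note):

   ```sql
   -- Confirm the parameter is in effect before running the build.
   SHOW PARAMETER utl_file_dir
   ```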
2. Use SQL*Plus to mount and then open the database whose files you want to
   analyze. For example, enter:
   STARTUP

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
   for the dictionary and a directory pathname for the file. This procedure
   creates the dictionary file, which you should use to analyze log files.
   For example, enter the following to create file dictionary.ora in
   /oracle/logs:
   (REMEMBER TO INCLUDE THE DASH '-' CONTINUATION CHARACTER AT THE END OF
   EACH LINE WHEN ENTERING A MULTI-LINE PL/SQL COMMAND IN SQL*PLUS)
   EXECUTE DBMS_LOGMNR_D.BUILD( -
   DICTIONARY_FILENAME =>'dictionary.ora', -
   DICTIONARY_LOCATION => '/oracle/logs');
To Create a Dictionary File on an Oracle8 Database:
===================================================
Although LogMiner only runs on databases of release 8.1 or higher, you can
use it to analyze redo logs from release 8.0 databases.
1. Use an O/S command to copy the dbmslmd.sql script, which is contained in the
   $ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same
   directory in the Oracle8 database.
   For example, enter:
   % cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql
   Note: In 8.1.5 the script is dbmslogmnrd.sql.
         In 8.1.6 the script is dbmslmd.sql.
2. Use SQL*Plus to mount and then open the database whose files you want to
   analyze. For example, enter:
   STARTUP

3. Execute the copied dbmslmd.sql script on the 8.0 database to create the
   DBMS_LOGMNR_D package.
   For example, enter: @dbmslmd.sql

4. Make sure to specify an existing directory that the PL/SQL procedure can
   write to, by setting the initialization parameter UTL_FILE_DIR in the
   init.ora.
   For example, set the following to use /8.0/oracle/logs:
   UTL_FILE_DIR = /8.0/oracle/logs
   Be sure to shut down and restart the instance after adding UTL_FILE_DIR
   to the init.ora.

   If this parameter is not set, the procedure will fail.
5. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
   for the dictionary and a directory pathname for the file. This procedure
   creates the dictionary file, which you should use to analyze log files.
   For example, enter the following to create file dictionary.ora in
   /8.0/oracle/logs:
   (REMEMBER TO INCLUDE THE DASH '-' CONTINUATION CHARACTER AT THE END OF
   EACH LINE WHEN ENTERING A MULTI-LINE PL/SQL COMMAND IN SQL*PLUS)
   EXECUTE DBMS_LOGMNR_D.BUILD(-
   DICTIONARY_FILENAME =>'dictionary.ora',-
   DICTIONARY_LOCATION => '/8.0/oracle/logs');
   After creating the dictionary file on the Oracle 8.0.x instance, the
   dictionary file and any archived logs to be mined must be moved to the
   server running the 8.1.x database on which LogMiner will be run, if that
   is different from the server which generated the archived logs.
Specifying Redo Logs for Analysis
=================================
Once you have created a dictionary file, you can begin analyzing redo logs.
Your first step is to specify the log files that you want to analyze using
the ADD_LOGFILE procedure. Use the following constants:
    * NEW to create a new list.
    * ADDFILE to add redo logs to a list.
    * REMOVEFILE to remove redo logs from the list.
To Use LogMiner:
1. Use SQL*Plus to start an Oracle instance, with the database either mounted
   or unmounted.
   For example, enter:
   STARTUP

2. Create a list of logs by specifying the NEW option when executing the
   DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to
   specify /oracle/logs/log1.f:
   (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
   EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/oracle/logs/log1.f', -
   OPTIONS => dbms_logmnr.NEW);

3. If desired, add more logs by specifying the ADDFILE option.
   For example, enter the following to add /oracle/logs/log2.f:
   (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
   EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/oracle/logs/log2.f', -
   OPTIONS => dbms_logmnr.ADDFILE);

4. If desired, remove logs by specifying the REMOVEFILE option.
   For example, enter the following to remove /oracle/logs/log2.f:
   (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
   EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/oracle/logs/log2.f', -
   OPTIONS => dbms_logmnr.REMOVEFILE);
Using LogMiner:
===============
Once you have created a dictionary file and specified which logs to analyze,
you can start LogMiner and begin your analysis. Use the following options to
narrow the range of your search at start time:
This option     Specifies
===========     =========
STARTSCN        The beginning of an SCN range.
ENDSCN          The termination of an SCN range.
STARTTIME       The beginning of a time interval.
ENDTIME         The end of a time interval.
DICTFILENAME    The name of the dictionary file.
Once you have started LogMiner, you can make use of the following data
dictionary views for analysis:
This view             Displays information about
===================   ==================================================
V$LOGMNR_DICTIONARY   The dictionary file in use.
V$LOGMNR_PARAMETERS   Current parameter settings for LogMiner.
V$LOGMNR_LOGS         Which redo log files are being analyzed.
V$LOGMNR_CONTENTS     The contents of the redo log files being analyzed.
To Use LogMiner:
================
1. Issue the DBMS_LOGMNR.START_LOGMNR procedure to start the LogMiner utility.
   For example, to start LogMiner using /oracle/dictionary.ora, issue:
   (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
   EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   DICTFILENAME =>'/oracle/dictionary.ora');

   Optionally, set the STARTTIME and ENDTIME parameters to filter data by time.
   Note that the procedure expects date values: use the TO_DATE function to
   specify date and time, as in this example:
   (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
   EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   DICTFILENAME => '/oracle/dictionary.ora', -
   STARTTIME => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
   ENDTIME => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));

   Use the STARTSCN and ENDSCN parameters to filter data by SCN, as in this
   example:
   (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
   EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   DICTFILENAME => '/oracle/dictionary.ora', -
   STARTSCN => 100, -
   ENDSCN => 150);

2. View the output via the V$LOGMNR_CONTENTS view. LogMiner returns all rows
   in SCN order, which is the same order applied in media recovery.

   For example, the following query lists information about operations:
   SELECT operation, sql_redo FROM v$logmnr_contents;
   OPERATION SQL_REDO
   --------- ----------------------------------------------------------
   INTERNAL
   INTERNAL
   START     set transaction read write;
   UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
   COMMIT    commit;
   START     set transaction read write;
   UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
   COMMIT    commit;
   START     set transaction read write;
   UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
   COMMIT    commit;
   11 rows selected.
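   Beyond listing individual statements, the same view supports simple trend
   summaries, e.g. an operation count per segment (a sketch; output depends
   on the logs mined):

   ```sql
   SELECT seg_owner, seg_name, operation, COUNT(*) AS changes
     FROM v$logmnr_contents
    GROUP BY seg_owner, seg_name, operation
    ORDER BY changes DESC;
   ```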
Analyzing Archived Redo Log Files from Other Databases:
=======================================================
You can run LogMiner on an instance of a database while analyzing redo log
files from a different database. To analyze archived redo log files from
other databases, LogMiner must:
* Access a dictionary file that is both created from the same database as
  the redo log files and created with the same database character set.
* Run on the same hardware platform that generated the log files, although
  it does not need to be on the same system.
* Use redo log files that can be applied for recovery from Oracle version
  8.0 and later.
REFERENCES:
==========
Note 148616.1 - Oracle9i LogMiner New Features


This article is from the ChinaUnix blog; for the original, see: http://blog.chinaunix.net/u1/46451/showart_457635.html