PURPOSE
=======
This article provides details about the restrictions of older Oracle
Database utilities that prevent them from working with files larger than 2 GB.
The article also describes the available workarounds, such as the usage of
a named pipe or exporting to tape.
SCOPE & APPLICATION
====================
The article is primarily intended for Database Administrators of Oracle7
and Oracle8 databases, but it can also be used for Oracle8i, Oracle9i,
and Oracle10g databases.
The steps provided in this article do *NOT* apply to export and import
with the new Export and Import DataPump utilities (expdp and impdp). These
utilities do not support the usage of named pipes. For details, see also:
Note 276521.1 "Oracle10g Export/Import DataPump Does Not Work with Tapes
or UNIX Named Pipes".
LARGE FILE ISSUES (2GB+) WHEN USING EXPORT (EXP-2 EXP-15),
IMPORT (IMP-2 IMP-21), or SQL*Loader
==========================================================
Introduction.
-------------
1. Prior to Oracle8i Release 1 (8.1.5), as in Oracle8 and Oracle7, the
maximum size of the export dump file was 2 GB. On 32-bit operating
systems without large file support enabled, 2 GB is still the maximum size.
Starting with Oracle8i Release 2 (8.1.6), the export dump file is no longer
limited to 2 GB on 32-bit operating systems with the large file option
enabled.
Starting with Oracle8i Release 1 (8.1.5), the export dump file is no longer
limited to 2 GB on 64-bit operating systems.
2. In the past, there has been some confusion over the >2GB patch released
by Oracle, which allows database datafiles to be larger than 2 GB. This
patch and the larger file size apply only to the RDBMS itself, not to its
utilities. The restrictions mentioned above for the older database
utilities are still in place.
For details about other "2GB" issues, see the following document:
Note 62427.1 "2Gb or Not 2Gb - File limits in Oracle"
3. The maximum size of a file depends on the operating system and the
file system.
4. This document provides the following information:
- Possible errors related to large file issues.
- Available workarounds to overcome large file issues.
- How to calculate the size of an export dump file.
- How to test whether the operating system supports 2GB+ large files.
- How to load large datafiles with SQL*Loader.
Possible errors.
----------------
1. When the export file grows beyond 2GB, the export may fail with the
following errors in the export logfile:
ORA-01114: IO error writing block to file %s (block # %s)
ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
or:
EXP-00015: error on row <???> of table <???>, column <???>, datatype <??>
EXP-00222: System error message <???>
EXP-00000: Export terminated unsuccessfully
or:
EXP-00015: error on row <???> of table <???>, column <???>, datatype <??>
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
If you examine the size of the export dump file, it will be approximately
2147 MB, or 2.1 GB; a quick way to check this is shown after this list.
2. When a 2GB+ export dump file is transferred to an operating system that
supports large files, but the import is done with an import utility
that does not support large files, the following errors may occur:
IMP-00002 failed to open expdat.dmp for read
IMP-00021 operating system error - error code (dec 79, hex 0x4f)
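To check whether an export stopped at the 2 GB boundary, a directory
listing of the dump file is usually sufficient (the file name below is
only an example):
% ls -l exp_full.dmp
A size at or just below 2147483647 bytes (2 GB minus one byte) indicates
that the 2 GB file size limit was hit.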
Available workarounds to overcome large file issues.
----------------------------------------------------
1. Upgrade to Oracle8i, Oracle9i, or Oracle10g.
--------------------------------------------
Starting with Oracle8i, we advise using the FILESIZE parameter
to create multiple export dump files, rather than one single (and
usually very large) export dump file.
For more information, see:
Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple
Export Files"
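For example, a full database export that starts a new dump file every
1900 MB could look as follows (the dump file names and the FILESIZE value
are only illustrative):
% exp system/manager full=y filesize=1900M log=exp_full.log \
      file=full1.dmp,full2.dmp,full3.dmp
The corresponding import can then read the same list of dump files through
its FILE parameter.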
2. Run multiple export sessions.
-----------------------------
Investigate whether there is a way to split up the export at schema
level. Perhaps you can export the schema with the highest number
of objects in a separate export in order to fit under the 2GB limit.
Also, investigate whether certain large tables can be exported
separately.
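For example, the export could be broken up per schema and per large table
as follows (the schema, table, and file names are only illustrative), as
long as each resulting dump file stays below 2 GB:
% exp system/manager owner=SCOTT file=exp_scott.dmp log=exp_scott.log
% exp system/manager owner=HR file=exp_hr.dmp log=exp_hr.log
% exp system/manager tables=SCOTT.BIG_TAB \
      file=exp_big_tab.dmp log=exp_big_tab.log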
3. Export and Import with a compressed dump file.
----------------------------------------------
Example of a full database export and import with a compressed export
dump file. The steps involved are: make the pipe, compress or uncompress
in the background, and export to or import from the pipe.
% mknod /tmp/exp_pipe p
% compress < /tmp/exp_pipe > exp_full.dmp.Z &
-or-
% cat /tmp/exp_pipe | compress > exp_full.dmp.Z &
-or-
% cat /tmp/exp_pipe > exp_full.dmp &
% exp system/manager file=/tmp/exp_pipe log=exp_full.log full=y
% rm /tmp/exp_pipe
% mknod /tmp/imp_pipe p
% uncompress < exp_full.dmp.Z > /tmp/imp_pipe &
-or-
% cat exp_full.dmp > /tmp/imp_pipe &
% imp system/manager file=/tmp/imp_pipe log=imp_full.log full=y
% rm /tmp/imp_pipe
Remarks:
--------
1. WARNING: Some versions of 'compress' and 'uncompress' can have very
poor performance.
2. Instead of compress and uncompress, you can also use dd or cat
to create and read a large export dump file if the operating system
supports large files but the export/import utility does not (pre-8.1.6);
a dd-based variant is sketched after these remarks.
3. The 'compress', 'cat', or 'dd' command must support 64-bit read
and write offsets.
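A minimal sketch of the dd-based variant mentioned in remark 2, assuming
the file system holding exp_full.dmp supports files larger than 2 GB:
% mknod /tmp/exp_pipe p
% dd if=/tmp/exp_pipe of=exp_full.dmp bs=32768 &
% exp system/manager file=/tmp/exp_pipe log=exp_full.log full=y
% rm /tmp/exp_pipe
% mknod /tmp/imp_pipe p
% dd if=exp_full.dmp of=/tmp/imp_pipe bs=32768 &
% imp system/manager file=/tmp/imp_pipe log=imp_full.log full=y
% rm /tmp/imp_pipe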
4. Export and Import to multiple files.
------------------------------------
Example of a full database export and import with multiple files.
The steps involved are: make the pipe, split or combine the 2Gb- chunks,
and export to or import from the pipe.
% mknod /tmp/exp_pipe p
% cd /<file_system_with_enough_free_space>
% split -b2047m < /tmp/exp_pipe &
% exp system/manager file=/tmp/exp_pipe log=exp_full.log full=y
% rm /tmp/exp_pipe
% mknod /tmp/imp_pipe p
% cd /<file_system_with_enough_free_space>
% cat xaa xab xac > /tmp/imp_pipe &
% imp system/manager file=/tmp/imp_pipe log=imp_full.log full=y
% rm /tmp/imp_pipe
Remarks:
--------
1. This option can be used if the export dump file or the compressed
export dump file would be larger than 2 Gb and the Operating System
does not support large files, and/or the export/import utility is
pre-8.1.6.
2. The split command will split the export into several files called
'xaa', 'xab', 'xac', etc., each up to 2047 MB in size.
3. The 'man' page for split shows options to generate more meaningful
file names.
4. WARNING: Some versions of 'split' can have very poor performance.
5. Not all platforms support the split "-b" option used here.
6. Examples for the KORN SHELL (KSH) only:
Use the UNIX pipe and split commands. E.g. for export:
echo|exp file=>(split -b 1024m - expdmp-) scott/tiger tables=X
Note: You can add any other "exp" parameters. This works only in ksh
and has been tested on Sun Solaris 5.5.1.
And for import:
echo|imp file=<(cat expdmp-*) scott/tiger tables=X
Or you can split and compress at the same time:
Export command:
echo|exp file=>(compress|split -b 1024m - expdmp-) scott/tiger tables=X
Import command:
echo|imp file=<(cat expdmp-*|zcat) scott/tiger tables=X
7. Starting with Oracle8i, we advise using the FILESIZE parameter
instead. For more information, see:
Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple
Export Files"
5. Export and import directly to and from tape.
--------------------------------------------
Example of a full database export to a QIC 150 tape (150 Mb capacity).
% exp system/manager file=/dev/rmt/0m log=exp_full.log full=y volsize=145M
After Export has written 145 Mb (VOLSIZE) to the tape, it will prompt
for the next tape:
Please mount the next volume and hit <ret> when you are done.
And for import:
% imp system/manager file=/dev/rmt/0m log=imp_full.log full=y volsize=145M
Remarks:
--------
1. The value for VOLSIZE should be smaller than the tape capacity.
2. For other examples, like exporting to and importing from a tape
via a named pipe, see the following document:
Note 30428.1 "Exporting to Tape on Unix System"
How to calculate the size of an export dump file.
-------------------------------------------------
% mknod /tmp/exp_pipe p
% dd if=/tmp/exp_pipe of=/dev/null bs=1024 &
% exp system/manager file=/tmp/exp_pipe <other options>
% rm /tmp/exp_pipe
Remarks:
--------
The output of dd shows the number of 1K blocks that the export dump file
would occupy if it were written to disk. The output format is:
<no. of 1K blocks>+0 records in
<no. of 1K blocks>+0 records out
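As an illustration (the numbers are made up), output such as:
2500000+0 records in
2500000+0 records out
means that the dump file would be about 2,500,000 KB (roughly 2.5 GB), so
it would exceed the 2 GB limit.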
How to test whether the operating system supports 2GB+ large files.
--------------------------------------------------------------------
Example of creating a 2GB+ large test file.
% dd if=/dev/zero of=/db_bkup/ddexp.dmp bs=32768 count=65600
Remarks:
--------
Ensure there is enough free space in the directory specified by the 'of'
parameter.
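In this example, dd writes 65600 blocks of 32768 bytes, i.e. 2,149,580,800
bytes, which is just over the 2 GB (2,147,483,648 bytes) boundary. If the
command completes without errors, you can verify the resulting file size
with, for example:
% ls -l /db_bkup/ddexp.dmp
Remove the test file afterwards to free up the disk space.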
How to load large datafiles with SQL*Loader.
--------------------------------------------
% mknod /tmp/ldr_pipe p
% dd of=/tmp/ldr_pipe if=<large_file_or_tape_device> &
% sqlldr system/manager control=... data=/tmp/ldr_pipe log=...
% rm /tmp/ldr_pipe
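As a concrete sketch (the data file, control file, and log file names are
only illustrative), a compressed input file could be loaded through the
pipe like this:
% mknod /tmp/ldr_pipe p
% uncompress < big_table.dat.Z > /tmp/ldr_pipe &
% sqlldr system/manager control=big_table.ctl data=/tmp/ldr_pipe \
         log=big_table.log
% rm /tmp/ldr_pipe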