Question and Answer

Tom Kyte

Thanks for the question, Doug.

Asked: December 15, 2004 - 7:53 pm UTC

Last updated: December 16, 2004 - 8:52 am UTC

Version: 9204

You Asked

Tom - I have a regularly occurring job where I have to copy two tables from one database to another. One is large (10 million rows), so I use exp and imp with commit=Y to avoid blowing out the rollback segments. Is there another way to do it? SQL*Plus COPY?

and Tom said...

Have you ever actually measured this "blow out"?

Take the ALL_OBJECTS view (it's pretty wide), copy it out to 10,000,000 rows, then export and import with commit=n.

If 40MB of undo is "blowing out" -- well, I'd disagree. Most of the "we have to commit frequently to avoid blowing out" thinking is overrated. Inserts generate the least undo of any DML (the undo for an insert is just "delete this rowid")...
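
You can put a number on it yourself before committing. A minimal sketch, assuming a test table named T that is a copy of ALL_OBJECTS (an illustrative name, not from the original); USED_UBLK in v$transaction is the undo block count for your current transaction:

-- Load the test table, then check the undo the open transaction
-- has consumed so far.
insert into t select * from all_objects;

select used_ublk
  from v$transaction
 where addr = (select taddr
                 from v$session
                where sid = (select sid from v$mystat where rownum = 1));

rollback;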

I'd just import with commit=N. Or insert /*+ append */ into local_table select * from remote_table@dblink over a database link (no undo for the table data!)...
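
A sketch of that direct-path load, assuming a database link named remote_db and a pre-created, empty local table (the names are illustrative):

-- Direct path: undo is skipped for the table data itself.
insert /*+ append */ into local_table
select * from remote_table@remote_db;

-- After a direct-path insert, the session must commit before it
-- can query the table again.
commit;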

If you have indexes, get rid of them and rebuild them afterwards; they are the undo hogs and would measurably slow down such an operation. Something along these lines (index and column names are illustrative):
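
-- Drop the index before the load...
drop index local_table_idx;

-- ... run the import or direct-path insert here ...

-- ...then rebuild it. NOLOGGING also skips redo generation for
-- the build itself (take a backup afterwards if you rely on it).
create index local_table_idx on local_table(object_id) nologging;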


big_table@ORA9IR2> select segment_name, blocks from dba_segments where tablespace_name = 'TEST_BLOWOUT_THEORY';

SEGMENT_NAME                       BLOCKS
------------------------------ ----------
_SYSSMU11$                             15
_SYSSMU12$                             15
_SYSSMU13$                             15
_SYSSMU14$                             23
_SYSSMU15$                             15
_SYSSMU16$                             15
_SYSSMU17$                             15
_SYSSMU18$                             15
_SYSSMU19$                             15
_SYSSMU20$                             31

10 rows selected.
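
(Not shown above: the scratch undo tablespace would have been created beforehand with something like the following; the datafile path and size are placeholders, not from the demo.)

create undo tablespace test_blowout_theory
  datafile '/tmp/test_blowout_theory.dbf' size 512m autoextend on;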

big_table@ORA9IR2> alter system set undo_tablespace = 'TEST_BLOWOUT_THEORY';

System altered.

big_table@ORA9IR2> !imp userid=/ full=y commit=n buffer=52428800

Import: Release 9.2.0.5.0 - Production on Thu Dec 16 08:08:55 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path

Warning: the objects were exported by BIG_TABLE, not by you

import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing BIG_TABLE's objects into OPS$TKYTE
. . importing table "BIG_TABLE" 10000000 rows imported
Import terminated successfully without warnings.

big_table@ORA9IR2> select segment_name, blocks from dba_segments where tablespace_name = 'TEST_BLOWOUT_THEORY';

SEGMENT_NAME                       BLOCKS
------------------------------ ----------
_SYSSMU11$                           4855
_SYSSMU12$                             23
_SYSSMU13$                             15
_SYSSMU14$                             23
_SYSSMU15$                             15
_SYSSMU16$                             15
_SYSSMU17$                             15
_SYSSMU18$                             15
_SYSSMU19$                             23
_SYSSMU20$                             31

10 rows selected.

big_table@ORA9IR2> select 4855*8/1024 from dual;

4855*8/1024
-----------
37.9296875
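
The hard-coded 8 in that calculation is this database's 8KB block size. A variant that reads the block size from the instance instead of hard-coding it (the tablespace name is from the demo above):

-- Undo used, in megabytes, without hard-coding the block size.
select sum(s.blocks) * p.bs / 1024 / 1024 as undo_mb
  from dba_segments s,
       (select to_number(value) bs
          from v$parameter
         where name = 'db_block_size') p
 where s.tablespace_name = 'TEST_BLOWOUT_THEORY'
 group by p.bs;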



38MB of undo isn't "a blowout".

