Question and Answer

Connor McDonald

Thanks for the question, Peter.

Asked: August 23, 2001 - 10:27 am UTC

Last updated: April 12, 2017 - 5:13 am UTC

Version: 8.X

Viewed 50K+ times

You Asked

Hello,

1) I'm looking for hints and techniques for speeding up exports and imports of large tables. We have an application that has a few small tables and a large table (a small system may have 30M rows, large 100M+).

Often we need to move databases to a faster machine or a different platform (e.g., AIX -> Solaris). Some larger DBs take 20+ hours to load the data (not including the index build).

We currently take advantage of:
Read/Write to a named pipe with compression
Export with Direct option, set large buffers
Use indexfile parameter to break import process into pieces
(table create, data load, index build)
Build indexes with multiple create processes
Build indexes with NOLOGGING PARALLEL options

Multiple export or import processes aren't much of a gain, because the difference in table sizes is extreme.

2) Are there any good references for Exp/Imp? Most reference books only give one or two pages of basic information.



and Tom said...

1) Have you looked at using the sqlplus copy command over the network? In this fashion you can avoid putting stuff to disk and reading it back in from disk. Instead of exporting that really big table, copy it.
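For illustration, a minimal sketch of such a copy (connect strings, table name and settings are illustrative, not from the original answer); arraysize and copycommit control how many rows move per batch:

set arraysize 1000
set copycommit 100
copy from scott/tiger@source_db -
     to scott/tiger@target_db -
     create big_table -
     using select * from big_table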

short of that, I might use some pro*c to "flatten out" the data to a flat file and sqlldr with direct path mode to reload it. That'll by far be the fastest load (but the overhead of having to write it and read it might make it similar to a sqlplus copy).

for copy:
http://docs.oracle.com/cd/A81042_01/DOC/sqlplus.816/a75664/ch63.htm#1003261

for pro*c to flatten the data
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:459020243348

2) I have a 50 page chapter in my book on EXP/IMP
http://www.amazon.com/exec/obidos/ASIN/1861004826/002-0082367-4217640

the outline is:

Why you might use them
How they work
the options
exp
imp
large exports
using the filesize parameter
export smaller pieces
export to a device that does not support seeking
subsetting data
transporting data
getting the ddl
as part of backup/recovery plan
importing into different structures
direct path exports
Caveats and errors
cloning
across versions
where did my indexes go?
named vs default named constraints
NLS issues
Multi-tablespace CREATES
Summary




Comments

It does not answer the question...

Harm ten Napel, August 24, 2001 - 7:32 am UTC

A real way of improving the speed of exp/imp is to link the
executables single-task; of course you don't recommend
that because single-task executables can potentially
corrupt your SGA.

Tom Kyte
August 24, 2001 - 7:42 am UTC

Well, in my opinion it did answer the question (which was how to speed up moving the data) -- but anyway....

Single-task linking is not generally supported with 8.x and, for 8i, is not supported for imp/exp/sqlldr at all. The 8i documentation needed to be updated in that regard.
That is why I did not mention it.

Compiling Pro*C

A reader, August 24, 2001 - 7:36 am UTC

Tom,

I would like to make use of your Pro*C program. I am a novice at Pro*C. How do I compile and use it?

- Thanks

Tom Kyte
August 24, 2001 - 9:32 am UTC

See
http://asktom.oracle.com/~tkyte/proc_makefile/

for a generic unix makefile that can compile this easily. Take the source code from the first link and put it into a directory.

Then take the makefile from the above link and put it into that directory in a file called "Makefile"

Set the TARGET and SOURCE as described and type "make" -- that should do it.

Some more hints & tips

Connor, August 24, 2001 - 9:10 am UTC

A little out of date now, but http://www.oradba.freeserve.co.uk/tips/import_speed.htm should be of assistance.

Couldn't compile pro*c...

A reader, August 24, 2001 - 11:10 am UTC

Tom,
Thanks for your prompt response. However, I still couldn't compile the pro*c.
This is what I did.
1) created file flatten.pc with a copy of your source code
2) created the Makefile from the contents you provided
3) set up the env. variables as follows:
% setenv SOURCE flatten.pc
% setenv TARGET flatten
% setenv ORACLE_PROC_MAKEFILE $ORACLE_HOME/precomp/demo/proc/demo_proc.mk
% make
Make: /disk01/app/oracle/product/8.1.7/precomp/lib/env_precomp.mk: Must be a
separator on line 7. Stop.

I have installed Pro*C on the server on Tru64. Database version is 8.1.7.2.

Your help in resolving this would be much appreciated as I would like to try
this out asap. (I have a very urgent need to copy tables regularly).

Rgds, Chris




Tom Kyte
August 24, 2001 - 11:53 am UTC

Sorry -- I use makefiles so much I sometimes forget to mention the less than obvious rules regarding them.

You need tabs:

$(TARGET): $(SOURCE) $(SOURCE:.pc=.c) $(SOURCE:.pc=.o)
<TAB> $(CC) $(LDFLAGS) -t -o $(TARGET) \
$(SOURCE:.pc=.o) -L$(ORACLE_HOME)/lib $(PROLDLIBS)

include $(ORACLE_PROC_MAKEFILE)

PROCFLAGS= ireclen=255 lines=yes $(PROC_ENV_FLAGS) \
include=$(ORACLE_HOME)/proc/lib
PROFLAGS=$(PROCFLAGS)

CFLAGS=-I. -g $(CC_ENV_FLAGS)

in there. I see the error is being flagged in env_precomp.mk however which is strange. You can email that to me directly as an attachment and I'll have a look (the env_precomp that is being referenced). That should not be.


Your pro*c file compiled - Thanks

A reader, August 24, 2001 - 12:11 pm UTC

Thanks Tom, the .pc program compiled when a <tab> was inserted as you suggested. Also I had to exclude the -t flag from the Makefile (maybe -t is not recognised under Tru64).

One final question: how do I exclude the column headings/feedback from the output?

Tom Kyte
August 24, 2001 - 1:59 pm UTC

The headings and other stuff are written to stderr. the data is written to stdout. Just use redirection to separate them:


$ ./t userid=scott/tiger 'sqlstmt=select * from emp' arraysize=1000 > output.dat

Connected to ORACLE as user: scott/tiger

Unloading 'select * from emp'
Array size = 1000
EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO
14 rows extracted

$ cat output.dat

7369,SMITH,CLERK,7902,17-DEC-1980 00:00:00,800,(null),20
7499,ALLEN,SALESMAN,7698,20-FEB-1981 00:00:00,1600,300,30
7521,WARD,SALESMAN,7698,22-FEB-1981 00:00:00,1250,500,30
7566,JONES,MANAGER,7839,02-APR-1981 00:00:00,2975,(null),20
7654,MARTIN,SALESMAN,7698,28-SEP-1981 00:00:00,1250,1400,30
7698,BLAKE,MANAGER,7839,01-MAY-1981 00:00:00,2850,(null),30
7782,CLARK,MANAGER,7839,09-JUN-1981 00:00:00,2450,(null),10
7788,SCOTT,ANALYST,7566,09-DEC-1982 00:00:00,3000,(null),20
7839,KING,PRESIDENT,(null),17-NOV-1981 00:00:00,5000,(null),10
7844,TURNER,SALESMAN,7698,08-SEP-1981 00:00:00,1500,0,30
7876,ADAMS,CLERK,7788,12-JAN-1983 00:00:00,1100,(null),20
7900,JAMES,CLERK,7698,03-DEC-1981 00:00:00,950,(null),30
7902,FORD,ANALYST,7566,03-DEC-1981 00:00:00,3000,(null),20
7934,miller,CLERK,7782,23-JAN-1982 00:00:00,1300,(null),10

Notice how the output does not have the headers/etc. You can comment out the fprintf(stderr, .... ) lines in the C code to get rid of them as well (just leave the printf calls in there).

cannot compile

David Le, September 23, 2002 - 10:26 pm UTC

Hi Tom,

I used your source code & your Makefile but I got this error.

make: Fatal error in reader: Makefile, line 4: Unexpected end of line seen

How can I fix it?

Thanks.

Tom Kyte
September 24, 2002 - 7:37 am UTC

well, post your makefile, let's see if it is the same or different -- and search this page for TAB, make sure you have TABs

do you know makefiles? do you have someone that works with you that does? this is a pretty simple one!

spool vs PRO*C hostarray

A reader, September 24, 2002 - 3:09 pm UTC

hi

how fast is the sqlplus spool command compared with Pro*C host-array fetching?

we have data extraction from our oltp databases to a dwh. we use spool but it takes pretty long (dealing with 25 ~ 40 million rows) -- would Pro*C with host arrays overcome this problem?

Tom Kyte
September 24, 2002 - 3:48 pm UTC

Most likely it would be faster, and it can be made even faster with a little tuning of the C code; Pro*C would best sqlplus.

Pro*C = specific, does one thing
SQL*Plus = generic, big tool for everyone, does a bit of everything

Disappearing features...

Connor McDonald, September 25, 2002 - 9:51 am UTC

9.2 docs:

"The COPY command will be obsoleted in future releases of SQL*Plus"

:-(

Mike, January 19, 2003 - 10:31 pm UTC

Is the sqlplus COPY the same (performance-wise) as
create table tab as select * from tab@remote?

Tom Kyte
January 20, 2003 - 10:29 am UTC

Nope.

Try this -- dial up from your PC to your servers. On the network there are machineA and machineB, which you can now reach.

Do a sqlplus copy from A to B. What happens is that every row on A will come down the phone line to your PC and be sent back up the phone line to B.

Do a create table as select when connected to B - the data now goes from A to B directly -- not down your phone line.


The sqlplus client can be a bottleneck in a copy.

Also, copy does inserts -- generates undo/redo. CTAS won't do undo and may not do redo (nologging or noarchivelog mode databases, for example).

but copy can be very fast/efficient when used machine to machine over a good network.
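For illustration, a minimal sketch of the CTAS alternative (link and table names are illustrative) -- run it while connected to B, so the rows flow from A to B directly and never through the sqlplus client:

create database link site_a connect to scott identified by tiger using 'A';

create table emp_copy nologging
as
select * from emp@site_a;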

What is the future for Pro*C

Siva, January 20, 2003 - 10:47 am UTC

I used to code extensively in Pro*C. That was the only option for batch programs. I still tend to write Pro*C, as I feel it gives me flexibility. The new breed of people prefer PL/SQL batch programming. I am also OK with that, because PL/SQL developers are easier to find than Pro*C developers.

Is there still an advantage to hanging on to Pro*C?

Thanks
Siva

Tom Kyte
January 20, 2003 - 10:59 am UTC

sure, I still use it from time to time -- just used it last friday to solve a real world problem (a C based extproc; I find coding Pro*C to be about 1,000,000 times more efficient than OCI/jdbc/odbc/whatever API here)



import to raw device

Robert Xuequn Xu, January 20, 2003 - 2:14 pm UTC

I found that import to a database residing on raw device is extremely fast. Also, setting the "_disable_logging=true" instance parameter for the duration of import can give about 20-30% time saving.

Between exp and imp, it is the imp that needs the big improvement in speed, because it is usually several times slower than exp, especially when there is a huge amount of LOB data.

Tom Kyte
January 20, 2003 - 2:26 pm UTC

The _disable_logging=true setting could be the proverbial double-edged sword.

Say you have a 10 hour import. You are 6 hours into it. You have an hour to go (since you are saving 30%). Crash. Bamm. Boom. You just lost six hours due to a software/power/whatever failure. As the band Pink Floyd used to sing, "careful with that axe, Eugene"...

As for raw, I found that most unix file systems mounted in directio (or equivalent thereof) perform about the same.


The key, in my opinion -- it is all about parallel. Don't export a database with full=y. Export it with owner=username and do N exports in parallel and N imports in parallel. If possible, do it over a pipe so the data never even hits a disk. But -- do it in parallel. Take your biggest objects and consider doing them "special" (parallel index creates, nologging operations)....
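As a rough sketch of that "in parallel, over a pipe" idea (schema, user and file names are illustrative; repeat the three commands once per schema, all running in the background):

$ mknod exp_hr.pipe p
$ gzip < exp_hr.pipe > exp_hr.dmp.gz &
$ exp userid=system/manager owner=hr file=exp_hr.pipe &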



Trevor, January 20, 2003 - 5:24 pm UTC


Tom said: "Also, copy does inserts -- generates undo/redo. CTAS won't do undo and may not do redo (nologging or noarchivelog mode databases, for example)"

Tom, I was surprised to read CTAS might not create redo.
I understand the no undo plus no redo for nologging,
but why if not in archivelog mode?
I thought redo was still generated (except when nologging is
specified) and noarchivelog mode simply controlled
whether the redo logs were archived or not?



Tom Kyte
January 20, 2003 - 6:01 pm UTC

It is an optimization -- CTAS will do direct writes to the datafiles -- bypassing the need to generate REDO in noarchivelog mode.

ops$tkyte@ORA817DEV> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

ops$tkyte@ORA817DEV> @mystat "redo size"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo size')||'%'

NAME                                VALUE
------------------------------ ----------
redo size                            1000

ops$tkyte@ORA817DEV> create table t as select * from all_objects;
Table created.

ops$tkyte@ORA817DEV> @mystat "redo size"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo size')||'%'

NAME                                VALUE
------------------------------ ----------
redo size                           44920


so, 43,920 bytes of redo...

ops$tkyte@ORA9I> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

ops$tkyte@ORA9I> @mystat "redo size"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo size')||'%'

NAME                                VALUE
------------------------------ ----------
redo size                           33744

ops$tkyte@ORA9I> create table t as select * from all_objects;

Table created.

ops$tkyte@ORA9I> @mystat "redo size"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo size')||'%'

NAME                                VALUE
------------------------------ ----------
redo size                         4822944


versus lots more than that.... 
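For reference, the @mystat script used above is not shown here; a minimal sketch of such a script (an assumption, not necessarily the exact one) simply joins v$statname to v$mystat and filters on the statistic name passed in:

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and lower(a.name) like '%' || lower('&1') || '%'
/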

A reader, June 18, 2003 - 7:26 pm UTC

Tom,

"Do a sqlplus copy from A to B. What happens is that every row on A will come down the phone line to your PC and be sent back up the phone line to B."

Let's say I create a shell script that uses the copy command, store it on the server, and schedule it through crontab. Will this be a better way of doing it?

Thanks.


Tom Kyte
June 19, 2003 - 7:54 am UTC

the data flows through SQLPlus -- where sqlplus runs, the data will come to it and go from it. If sqlplus is run on a server -- the data will go through it.

Great program! When I tried to compile, I got the following error

gs, November 11, 2003 - 9:03 pm UTC

Tom, I have Oracle 9.2.0.3 and when I tried to compile the program I got the following error. Are my ORACLE_PROC_MAKEFILE settings incorrect for Oracle 9i?

[gs@dev13 gs]$ export ORACLE_PROC_MAKEFILE=$ORACLE_HOME/precomp/demo/proc/demo_proc.mk
[gs@dev13 gs]$ export TARGET=dump_tab
[gs@dev13 gs]$ export SOURCE=dump_tab.pc
[gs@dev13 gs]$ make
make -f /oracle/product/9.2.0/precomp/demo/proc/demo_proc.mk PROCFLAGS="ireclen=255 lines=yes include=/oracle/product/9.2.0/proc/lib" PCCSRC=dump_tab I_SYM=include= pc1
make[1]: Entering directory `/home/gs'
proc ireclen=255 lines=yes include=/oracle/product/9.2.0/proc/lib iname=dump_tab include=. include=/oracle/product/9.2.0/precomp/public include=/oracle/product/9.2.0/rdbms/public include=/oracle/product/9.2.0/rdbms/demo include=/oracle/product/9.2.0/plsql/public include=/oracle/product/9.2.0/network/public

Pro*C/C++: Release 9.2.0.3.0 - Production on Tue Nov 11 17:54:06 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

System default option values taken from: /oracle/product/9.2.0/precomp/admin/pcscfg.cfg

make[1]: Leaving directory `/home/gs'
/usr/bin/gcc -I. -g -c dump_tab.c
/usr/bin/gcc -o dump_tab -L/oracle/product/9.2.0/precomp/lib/ -L/oracle/product/9.2.0/lib/ -L/oracle/product/9.2.0/lib/stubs/ -t -o dump_tab \
dump_tab.o -L/oracle/product/9.2.0/lib -lclntsh `cat /oracle/product/9.2.0/lib/ldflags` `cat /oracle/product/9.2.0/lib/sysliblist` -ldl -lm
dump_tab.o: In function `sqlerror_hard':
/home/gs/dump_tab.pc:87: undefined reference to `sqlca'
/home/gs/dump_tab.pc:89: undefined reference to `sqlca'
dump_tab.o: In function `process_1':
/home/gs/dump_tab.pc:110: undefined reference to `sqlca'
/home/gs/dump_tab.pc:110: undefined reference to `sqlca'
/home/gs/dump_tab.pc:117: undefined reference to `sqlca'
dump_tab.o:/home/gs/dump_tab.pc:117: more undefined references to `sqlca' follow
/usr/bin/ld: link errors found, deleting executable `dump_tab'
/usr/bin/ld: mode elf_i386
/usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../crt1.o
/usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../crti.o
/usr/lib/gcc-lib/i386-redhat-linux/2.96/crtbegin.o
dump_tab.o
-lclntsh (/oracle/product/9.2.0/lib//libclntsh.so)
-ldl (/oracle/product/9.2.0/lib/stubs//libdl.so)
-lm (/oracle/product/9.2.0/lib/stubs//libm.so)
-lpthread (/oracle/product/9.2.0/lib/stubs//libpthread.so)
-lnsl (/oracle/product/9.2.0/lib/stubs//libnsl.so)
-ldl (/oracle/product/9.2.0/lib/stubs//libdl.so)
-lm (/oracle/product/9.2.0/lib/stubs//libm.so)
/oracle/product/9.2.0/lib/stubs/libc.so.6
/usr/lib/gcc-lib/i386-redhat-linux/2.96/crtend.o
/usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../crtn.o
collect2: ld returned 1 exit status
make: *** [dump_tab] Error 1

Tom Kyte
November 12, 2003 - 7:17 am UTC

can you successfully compile and link the demo programs?

Import Performance in 10g

Orca, November 12, 2003 - 9:10 am UTC

Hi,
i have read that the performance of importing data has increased 5x - 10x with Oracle 10g.
can you explain how this is possible?
Orca


Tom Kyte
November 12, 2003 - 10:44 am UTC

direct path parallel imports using the 10g data pump

(and it's estimated at higher than 10x, even)
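A rough sketch of what that looks like with the 10g data pump (directory object, schema and file names are illustrative); %U lets the parallel workers write to, and read from, multiple files:

$ expdp system/manager schemas=hr directory=dpump_dir dumpfile=hr_%U.dmp parallel=4
$ impdp system/manager schemas=hr directory=dpump_dir dumpfile=hr_%U.dmp parallel=4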

I could compile the demo programs

gs, November 12, 2003 - 2:20 pm UTC

Thanks Tom for your response,

I can compile the demo programs without any error, if I am in the same directory where the demo programs are. Which library provides this sqlca?

[oracle@dev13 proc]$ make -f demo_proc.mk cpdemo1
make -f /oracle/product/9.2.0/precomp/demo/proc/demo_proc.mk
OBJS=cpdemo1.o EXE=cpdemo1 build
make[1]: Entering directory `/u01/oracle/product/9.2.0/precomp/demo/proc'
proc cpool=yes threads=yes iname=cpdemo1

Pro*C/C++: Release 9.2.0.3.0 - Production on Wed Nov 12 11:05:28 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

System default option values taken from: /oracle/product/9.2.0/precomp/admin/pcscfg.cfg

/usr/bin/gcc -O3 -trigraphs -fPIC -DPRECOMP -I. -I/oracle/product/9.2.0/precomp/public -I/oracle/product/9.2.0/rdbms/public -I/oracle/product/9.2.0/rdbms/demo -I/oracle/product/9.2.0/plsql/public -I/oracle/product/9.2.0/network/public -DLINUX -D_GNU_SOURCE -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -DSLTS_ENABLE -DSLMXMX_ENABLE -D_REENTRANT -DNS_THREADS -c cpdemo1.c
/usr/bin/gcc -o cpdemo1 cpdemo1.o -L/oracle/product/9.2.0/lib/ -lclntsh `cat /oracle/product/9.2.0/lib/ldflags` `cat /oracle/product/9.2.0/lib/sysliblist` -ldl -lm
make[1]: Leaving directory `/u01/oracle/product/9.2.0/precomp/demo/proc'

============================================
But, when I use the Makefile, I get the following error, even though I am in the same directory where the demo programs are.

[oracle@dev13 proc]$ pwd
/oracle/product/9.2.0/precomp/demo/proc
[oracle@dev13 proc]$ export SOURCE=cpdemo1.pc
[oracle@dev13 proc]$ export TARGET=cpdemo1
[oracle@dev13 proc]$ export ORACLE_PROC_MAKEFILE=demo_proc.mk

[oracle@dev13 proc]$ make
demo_proc.mk:34: warning: overriding commands for target `cpdemo1'
Makefile:2: warning: ignoring old commands for target `cpdemo1'
make -f /oracle/product/9.2.0/precomp/demo/proc/demo_proc.mk PROCFLAGS="ireclen=255 lines=yes include=/oracle/product/9.2.0/proc/lib" PCCSRC=cpdemo1 I_SYM=include= pc1
make[1]: Entering directory `/u01/oracle/product/9.2.0/precomp/demo/proc'
proc ireclen=255 lines=yes include=/oracle/product/9.2.0/proc/lib iname=cpdemo1 include=. include=/oracle/product/9.2.0/precomp/public include=/oracle/product/9.2.0/rdbms/public include=/oracle/product/9.2.0/rdbms/demo include=/oracle/product/9.2.0/plsql/public include=/oracle/product/9.2.0/network/public

Pro*C/C++: Release 9.2.0.3.0 - Production on Wed Nov 12 11:17:08 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

System default option values taken from: /oracle/product/9.2.0/precomp/admin/pcscfg.cfg

make[1]: Leaving directory `/u01/oracle/product/9.2.0/precomp/demo/proc'
proc cpool=yes threads=yes iname=cpdemo1

Pro*C/C++: Release 9.2.0.3.0 - Production on Wed Nov 12 11:17:09 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

System default option values taken from: /oracle/product/9.2.0/precomp/admin/pcscfg.cfg

/usr/bin/gcc -I. -g -c cpdemo1.c
make -f /oracle/product/9.2.0/precomp/demo/proc/demo_proc.mk OBJS=cpdemo1.o EXE=cpdemo1 build
make[1]: Entering directory `/u01/oracle/product/9.2.0/precomp/demo/proc'
/usr/bin/gcc -o cpdemo1 cpdemo1.o -L/oracle/product/9.2.0/lib/ -lclntsh `cat /oracle/product/9.2.0/lib/ldflags` `cat /oracle/product/9.2.0/lib/sysliblist` -ldl -lm
cpdemo1.o: In function `main':
/oracle/product/9.2.0/precomp/demo/proc/cpdemo1.c:223: undefined reference to `sqlca'
/oracle/product/9.2.0/precomp/demo/proc/cpdemo1.c:255: undefined reference to `sqlca'
/oracle/product/9.2.0/precomp/demo/proc/cpdemo1.c:276: undefined reference to `sqlca'
/oracle/product/9.2.0/precomp/demo/proc/cpdemo1.c:276: undefined reference to `sqlca'
/oracle/product/9.2.0/precomp/demo/proc/cpdemo1.c:326: undefined reference to `sqlca'
cpdemo1.o:/oracle/product/9.2.0/precomp/demo/proc/cpdemo1.c:347: more undefined references to `sqlca' follow
collect2: ld returned 1 exit status
make[1]: *** [build] Error 1
make[1]: Leaving directory `/u01/oracle/product/9.2.0/precomp/demo/proc'
make: *** [cpdemo1] Error 2


Tom Kyte
November 12, 2003 - 4:53 pm UTC

sqlca is just a data structure.

did you modify my code?

No! I did not do any modification.

gs, November 12, 2003 - 5:59 pm UTC

No Tom,

I just copied and pasted the source. Also, the demo program was giving the same error when I used the Makefile.

This is the Makefile I have.

% cat Makefile
$(TARGET): $(SOURCE) $(SOURCE:.pc=.c) $(SOURCE:.pc=.o)
$(CC) $(LDFLAGS) -t -o $(TARGET) \
$(SOURCE:.pc=.o) -L$(ORACLE_HOME)/lib $(PROLDLIBS)

include $(ORACLE_PROC_MAKEFILE)

PROCFLAGS= ireclen=255 lines=yes $(PROC_ENV_FLAGS) \
include=$(ORACLE_HOME)/proc/lib
PROFLAGS=$(PROCFLAGS)

CFLAGS=-I. -g $(CC_ENV_FLAGS)

Note: I exported the SOURCE, TARGET and ORACLE_PROC_MAKEFILE as shell variables.

Thanks,

Sorry, the previous statement was not 100% true

gs, November 12, 2003 - 6:08 pm UTC

I combined certain broken lines into one.

Now I just copied and pasted your source as-is and compiled the program. Absolutely no changes. Still I get the following error.

[gs@dev13 gs]$ export SOURCE=dump_tab.pc
[gs@dev13 gs]$ export TARGET=dump_tab
[gs@dev13 gs]$ export ORACLE_PROC_MAKEFILE=/oracle/product/9.2.0/precomp/demo/proc/demo_proc.mk
[gs@dev13 gs]$ make
make -f /oracle/product/9.2.0/precomp/demo/proc/demo_proc.mk PROCFLAGS="ireclen=255 lines=yes include=/oracle/product/9.2.0/proc/lib" PCCSRC=dump_tab I_SYM=include= pc1
make[1]: Entering directory `/home/gs'
proc ireclen=255 lines=yes include=/oracle/product/9.2.0/proc/lib iname=dump_tab include=. include=/oracle/product/9.2.0/precomp/public include=/oracle/product/9.2.0/rdbms/public include=/oracle/product/9.2.0/rdbms/demo include=/oracle/product/9.2.0/plsql/public include=/oracle/product/9.2.0/network/public

Pro*C/C++: Release 9.2.0.3.0 - Production on Wed Nov 12 15:04:18 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

System default option values taken from: /oracle/product/9.2.0/precomp/admin/pcscfg.cfg

make[1]: Leaving directory `/home/gs'
/usr/bin/gcc -I. -g -c dump_tab.c
/usr/bin/gcc -o dump_tab -L/oracle/product/9.2.0/precomp/lib/ -L/oracle/product/9.2.0/lib/ -L/oracle/product/9.2.0/lib/stubs/ -t -o dump_tab \
dump_tab.o -L/oracle/product/9.2.0/lib -lclntsh `cat /oracle/product/9.2.0/lib/ldflags` `cat /oracle/product/9.2.0/lib/sysliblist` -ldl -lm
dump_tab.o: In function `sqlerror_hard':
/home/gs/dump_tab.pc:87: undefined reference to `sqlca'
/home/gs/dump_tab.pc:89: undefined reference to `sqlca'
dump_tab.o: In function `process_1':
/home/gs/dump_tab.pc:110: undefined reference to `sqlca'
/home/gs/dump_tab.pc:110: undefined reference to `sqlca'
/home/gs/dump_tab.pc:118: undefined reference to `sqlca'
dump_tab.o:/home/gs/dump_tab.pc:118: more undefined references to `sqlca' follow
/usr/bin/ld: link errors found, deleting executable `dump_tab'
/usr/bin/ld: mode elf_i386
/usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../crt1.o
/usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../crti.o
/usr/lib/gcc-lib/i386-redhat-linux/2.96/crtbegin.o
dump_tab.o
-lclntsh (/oracle/product/9.2.0/lib//libclntsh.so)
-ldl (/oracle/product/9.2.0/lib/stubs//libdl.so)
-lm (/oracle/product/9.2.0/lib/stubs//libm.so)
-lpthread (/oracle/product/9.2.0/lib/stubs//libpthread.so)
-lnsl (/oracle/product/9.2.0/lib/stubs//libnsl.so)
-ldl (/oracle/product/9.2.0/lib/stubs//libdl.so)
-lm (/oracle/product/9.2.0/lib/stubs//libm.so)
/oracle/product/9.2.0/lib/stubs/libc.so.6
/usr/lib/gcc-lib/i386-redhat-linux/2.96/crtend.o
/usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../crtn.o
collect2: ld returned 1 exit status
make: *** [dump_tab] Error 1


with explicit make it is working

gs, November 13, 2003 - 4:13 pm UTC


I copied the file to the demo directory and tried the following. It worked. Now I will try changing the Makefile and see whether it works.

% cd /oracle/product/9.2.0/precomp/demo/proc/
% cp -p ~gs/dump_tab.pc .
% make -f demo_proc.mk build EXE=dump_tab OBJS=dump_tab.o

Also, do you have any program that will dump LOBs as well?

thanks,


Tom Kyte
November 13, 2003 - 9:20 pm UTC

no, i don't have a lob program right now.

Also, how would I delimit the columns?

gs, November 13, 2003 - 4:20 pm UTC

Tom,

I have a need to enclose the char/varchar columns in double quotes. I was thinking, while sending the query input, to make these columns '"' || replace(charcol,'"','""') || '"'

i.e., select intcol, '"' || replace( charcol,'"','""') || '"', .. from table?

Can it be done easily in the Pro*C program rather than using the REPLACE function (which may be slow) when I have to dump 100Ks of rows?

Thanks,



Tom Kyte
November 13, 2003 - 9:20 pm UTC

replace is going to be pretty darn fast -- given that YOU would have to implement replace all by yourself!!! just use replace.

Thanks Tom! I have a few more questions

gs, November 14, 2003 - 3:06 pm UTC

You are using:

static int lengths[] =
{ -1, 0, 45, 0, 0, 0, 0, 0, 2000, 0, 0,
18, 25, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 512, 2000 };

Is the data type mapping stored somewhere in the database as a table or view, i.e., 1 -> VARCHAR, 2 -> NUMBER, 12 -> DATE, 23 -> RAW, etc.? (I got the above from the dba_tab_cols view definition.) I don't see type codes such as 23, 24. Are you setting the length of LONG variables to 2000? What would be the maximum value I can set for a LONG?

Which sqlda variable has the datatype code or data type name? Is it 'T'?

Thanks,


Tom Kyte
November 15, 2003 - 8:48 am UTC

all this and more is in the pro*c manual (what's in the sqlda)

but yes, you can use user|all|dba_tab_columns to see the type codes as well

i wrote this in version 7; i used 2000 as the default length of a long in that release. you can adjust up to at least 4000 in SQL.

Direct I/O improved imp from 50 hours to 2 hours

Clement Charbonnet, November 14, 2003 - 8:51 pm UTC

I was using exp/imp to upgrade a 21 GB database from 8.1.7.4 to 9.2 on Solaris. The import was taking 50 hours to complete. The database has a table that is 20 GB and contains a blob column. I had our system administrators turn direct I/O on for the data file system and reran the import. It was able to complete in less than two hours.

There is something about importing lobs that is extremely slow when direct I/O is turned off. We have imported other large databases that do not contain lobs with no performance problems.

We normally keep direct I/O turned on for the redo log file systems, but not the data file systems.

Tom Kyte
November 15, 2003 - 9:05 am UTC



i would have transported that stuff -- it would be as fast as a copy.

your lobs were probably "nocache", you are caching them now.


RECORDLENGTH parameter

Sudhir, November 15, 2003 - 3:22 pm UTC

Tom,

On this page I didn't see any reference made to the RECORDLENGTH parameter for the exp command. I found the export is close to 8 times faster when I set it to about 65000 (the exact number is slightly smaller and exp adjusts it automatically). Is this your experience? Of course, this is for direct path only.

Thanks

directio - good and bad

Harrison Picot, January 16, 2004 - 12:46 pm UTC

Tom, you say "As for raw, I found that most unix file systems mounted in directio (or equivalent thereof) perform about the same.", and in another place recommend directio for logs. Solaris always reads and writes in 8k chunks which can make direct small IO inefficient, which is why file systems cache in the first place. For Oracle IO, Sun recommends:

http://www.sun.com/solutions/blueprints/browsesubject.html
"Solaris Volume Manager Performance With UFS File Systems
and Oracle" page 9 November 2003.

"Use an 8-Kbyte database block size in conjunction with an 8-Kbyte fragment size for the file system. This block size ensures alignment of database blocks with underlying storage. If blocks are not properly aligned or are too small, write performance can suffer. Consider the example of a database using 2-Kbyte blocks on a file system. The file system has a block size of 8 Kbytes, so 4 database blocks will fit in one FS block. When the database issues a write on a 2-Kbyte block, the file system must write all 8 Kbytes at once. This means that the remaining 75% of the block will have to be read before it can be written. If you use a 2-Kbyte database block size on file systems, you can prevent the read-modify-write phenomenon by using the forcedirectio mount option to bypass the file system on I/O operations."

"Caution If you are currently using UFS file systems and are not mounting them with the forcedirectio option, be careful to analyze your application before enabling direct I/O. It is quite possible that some of the objects in your database are benefiting from the UFS buffer cache. Turning on direct I/O can increase the amount of physical I/O and reduce the transaction rate as a result of bypassing the FS cache."

The message seems pretty clear, use an 8k database block size, or a multiple, and forget directio for data going into the database, although logs and exports might benefit.

Previous tests of raw volumes at Oracle showed that raw is faster for some operations and ufs is faster for others.

It seems to me that tips like bind variables and 97% of the thousands of other things that Tom has offered to make Oracle run faster, are worth more than directio, unless you are bored and want to make changes to see how fast your phone can start ringing;-)

The Sun Blueprint referenced above has a lot of good information about using Sun's (free with the OS) volume management, which performs as well as Veritas, at least in the tests they show. If you are working on Sun (and you are wasting a lot of time if not), it is well worth reading.

One note for the unwary: for reasons of backwardness, Solaris newfs can make file systems with a 4k block size (in case you are tempted to make the OS match a 4k datablock size), but once you have formatted the drive, Solaris 7, 8, 9 cannot read it.

Tom Kyte
January 16, 2004 - 12:52 pm UTC

it is called benchmarking

and on my systems, under my load -- using directio eliminated all "log file sync waits"

as with everything, evaluate it.

FAST EXP/IMP

Prasad, January 22, 2004 - 5:05 am UTC

Hi Tom,

I've a scenario where I have to exp 3 BIG tables (around 1000 million rows in total, non-partitioned) from one DB & import them into another DB (here the 3 tables are partitioned).

What hints do I need to give to improve the performance of this exp/imp process?

Thanks in advance,

Tom Kyte
January 22, 2004 - 7:09 am UTC

transporting them (taking the datafiles) would be "the fastest"

exporting individual partitions in parallel and importing them in parallel would be another (break the problem into many small problems)

Source not Partitioned

Prasad, January 22, 2004 - 7:37 am UTC

Tom,

The source tables are not partitioned.
They are huge non-partitioned tables of
650M, 250M and 150M rows each.

But the destination tables will be partitioned.

How do I go about it?
What hints/tips can I give for exp/imp??

many thanks,
Prasad

Tom Kyte
January 22, 2004 - 8:40 am UTC

you can use the query= parameter with either primary key or rowid ranges to split it up then.


see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:10498431232211

for a script to generate rowid ranges that splits a table in N non-overlapping pieces that you could use.

or, just use a query= predicate that pulls out the data for "partition 1" and then "partition 2" and so on using the definitions you will be using in the partitioned table itself and run those in parallel.
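As a sketch of that last suggestion (table name, column and date ranges are illustrative; note query= works only with conventional path exports, not direct=y): one parameter file per "future partition", with the exports run in parallel:

$ cat p1.par
userid=scott/tiger
tables=big_tab
query="where created < to_date('01-jan-2003','dd-mon-yyyy')"
file=p1.dmp

$ exp parfile=p1.par &
$ exp parfile=p2.par &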

data pump

A reader, May 18, 2004 - 8:26 am UTC

hi

can data pump import dump files from an ordinary export?

i want to migrate 8i to 10g in my testing environment and think this might speed up my import process?

Tom Kyte
May 18, 2004 - 4:27 pm UTC

What Is Data Pump Import?

Note: Data Pump Import (invoked with the impdp command) is a new utility as of Oracle Database 10g. Although its functionality and its parameters are similar to those of the original Import utility (imp), they are completely separate utilities and their files are not compatible. See Chapter 20, "Original Export and Import" for a description of the original Import utility.






A cause is PostgreSQL.

Katochin, May 26, 2004 - 3:12 am UTC

Try the following:

#find / -name sqlca.h

/usr/include/sqlca.h
/oracle/9.0.?/...../sqlca.h

-----

PostgreSQL includes a C precompiler, and its sqlca.h lands in /usr/include -- such a great place -- where it shadows Oracle's.

The troublesome header file should be renamed:

#mv /usr/include/sqlca.h /usr/include/byebye_pgsql_sqlca.h

The PostgreSQL header only declares

extern DLLIMPORT struct sqlca sqlca;

that is, it refers to an sqlca defined somewhere outside, whereas the authentic Oracle sqlca.h has

struct sqlca sqlca;

that is, an ordinary global variable that is actually defined.

I'm sorry if this does not fit your case.


Tom Kyte
May 26, 2004 - 8:10 am UTC

yeah, i noticed that too on RHAS3.0

real nice of them...

Exp and importing but indexes seem to be there

Doug, June 24, 2004 - 2:13 pm UTC

Just wanted to check on a behavior.
If I take an export of a schema, then disable all primary and foreign key constraints in that schema, then truncate the tables and import with indexes=N. After I enable the constraints again (and even before that step), foreign key indexes seem to be there. This is because they are built on import because they are already defined? In other words, the usual wisdom, export, import in parallel with no indexes and rebuild the indexes only works if tables are DROPPED, not truncated.. is this correct? And when I say "work".. all I mean is there will be no indexes to rebuild otherwise.

Tom Kyte
June 24, 2004 - 3:17 pm UTC

there is no such thing as a "foreign key index"

what do you mean? we use indexes for the enforcement of primary keys, but indexes are never generated automagically for foreign keys.

follow up foreign key indexes

Doug, June 25, 2004 - 7:02 pm UTC

Foreign key indexes: there's no automagic foreign key index that I know of, but I've gotten into the habit of putting them in sometimes, using your "unindexed foreign keys" script. I guess what I meant was: let's say I create a foreign key index for the usual reasons (locking, etc.), and then I truncate the table but import with indexes=N -- the index appears to be rebuilt anyhow. Make sense?

Tom Kyte
June 26, 2004 - 6:17 pm UTC

did you drop the index?

otherwise, that sounds normal. You truncated the table, indexes stay. you imported the data back in. the indexes were populated. sounds normal to me.

Import one big table

Sean, August 19, 2004 - 5:19 pm UTC

I have a 10g dump file containing about 8M rows (one audit table). I have to import it into another db (using 902 and Solaris 9). It is running and I am not sure it will finish by next morning. Here are two questions:

(1) I didn't find a parallel=y parameter for import. Here is a screenshot:

---------------------------------------------------------------------------

/exports/scripts: imp userid=sean/heyu12 file=/export/home/oracle/exp_test1 ignore=y parallel=y fromuser=test1 touser=cdm_temp
LRM-00101: unknown parameter name 'parallel'

IMP-00022: failed to process parameters, type 'IMP HELP=Y' for help
IMP-00000: Import terminated unsuccessfully
----------------------------------------------------------------------------
(2) Is the copy command faster than export and import, as you mentioned at the beginning of this thread?


Tom Kyte
August 19, 2004 - 8:10 pm UTC

1) there is not one.

2) you can run many copies in parallel, but you can export lots of data in parallel too (think schema level exports) and import them in parallel (run imp many times)

Export Size

Jayesh, October 26, 2004 - 3:30 am UTC

Is there any formula for calculating the approximate size of the export dump file and the time duration based on the database size?

Tom Kyte
October 26, 2004 - 7:49 am UTC

in 10g, yes -- the export data pump does both.

in 9ir2 and before -- only past experience (empirical observation of prior exports of that database) will be accurate. You could look at the size of the table/lob segments in the database you are exporting from to get an idea of the size as they are the biggest things in the dmp file.
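For example, a rough ballpark query (schema name is illustrative; it will over-estimate somewhat, since segments include free space and the dmp file holds DDL plus row data only):

select sum(bytes)/1024/1024 approx_mb
  from dba_segments
 where owner = 'SCOTT'
   and segment_type in ( 'TABLE', 'TABLE PARTITION', 'LOBSEGMENT' );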

Resource Privilege sufficient for Exp and Imp

LK, October 28, 2004 - 11:19 am UTC

Hello,

I have the following questions (pretty basic):

What is the minimum privilege you need to exp and imp? I know DBAs can do this. Is Resource privilege sufficient?

Can we just import the data if the table is already there? How do we do that?

Thanks



Tom Kyte
October 28, 2004 - 7:01 pm UTC

create session is all you need

ops$tkyte@ORA9IR2> create user a identified by a default tablespace users quota unlimited
  2  on users;
 
User created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> grant connect, create table to a;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect a/a
ops$tkyte@ORA9IR2> set termout off
a@ORA9IR2> set termout on
a@ORA9IR2> create table t ( x int );
 
Table created.
 
a@ORA9IR2> insert into t values ( 1 );
 
1 row created.
 
a@ORA9IR2> commit;
 
Commit complete.
 
a@ORA9IR2>
a@ORA9IR2> !exp userid=a/a owner=a
 
Export: Release 9.2.0.5.0 - Production on Thu Oct 28 18:36:19 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user A
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user A
About to export A's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export A's tables via Conventional Path ...
. . exporting table                              T          1 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

yes, you can, just use ignore=y and grants=n, constraints=n, etc=n (where etc is everything you don't want, just type imp help=y to see them all)
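For example, something along these lines (file and user names are illustrative):

$ imp userid=a/a file=expdat.dmp full=y ignore=y grants=n indexes=n constraints=n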

Use of network

A reader, October 29, 2004 - 8:41 pm UTC

Everything on Unix.

I have a compressed full database export on machine A. I want to import a subset of this export to a database on machine B. If I do something like the following on machine A

imp user/password@machineB file=<(zcat full_exp.Z) fromuser=fromuser touser=touser tables='(a,b,c)'

exactly what is sent over the network?

The zcat will consume CPU cycles on machine A; will imp suck it all in and send the whole thing over the wire, or only what I asked for (fromuser/touser)?

Thanks

Tom Kyte
October 30, 2004 - 2:03 pm UTC

exactly what is needed is sent over the network: all of the ddl and dml needed to rebuild the tables a, b, c

A reader, October 30, 2004 - 2:13 pm UTC

"exactly what is needed is sent over the network: all of the ddl and dml needed to rebuild the tables a, b, c"

OK so if my export file is 5GB and the ddl/dml needed to rebuild tables a,b,c is 100MB, only 100MB will be sent over the network?

So this is more efficient than 'rcp'ing the 5GB over the network and doing a local import? Really?

Tom Kyte
October 30, 2004 - 2:18 pm UTC

correct, 100mb will be sent.

will it be more efficient? answers include:

a) yes
b) no
c) maybe sometimes


depends on many factors (network latency being perhaps the biggest one; me -- I'm on a satellite broadband connection right now, latency is very high -- for me, ftp would be the best -- because each client/server message takes about 1.5 seconds to "turn around". On a LAN, imp over the network would probably be best, unless the table included a long/clob, in which case IMP does it row by row and the chattiness of the imp might *(stress might)* be a concern)



A reader, October 30, 2004 - 2:26 pm UTC

Great, thanks. One more thing...

If my full export file contains 100 schemas and I want to import just schemas A and B, I noticed that after doing that, 'imp' just sits around reading and discarding the rest of the export file (in my imp file=<(zcat ...)). Heck, even with a local import (imp file=local.Z), imp sits around until it reads the whole file.

Sometimes, I just Ctrl-C or kill the process after I see in the logfile that my desired schemas are done.

Is this safe? Is there a way to make 'imp' cleanly terminate after the requested work is done and not wait around reading the rest of the file?

Thanks

Tom Kyte
October 30, 2004 - 3:08 pm UTC

it reads the entire file looking for anything "relevant". if you are *sure* everything you want is in, ctrl-c is "safe" as imp commits after each major operation.

10g data pump and 9i

Josh C, December 08, 2004 - 12:43 pm UTC

For clarification. Can I use 10g datapump on a 9i database?

Tom Kyte
December 09, 2004 - 12:42 pm UTC

big pieces of datapump are in the database itself and 9i didn't have them.


exp/imp are for 9i and before.

Buffers Vs Recordlength

Radhakrishnan K, January 12, 2005 - 2:12 am UTC

Hi Tom,

As we know, a conventional export will not take the BUFFER parameter and works with the RECORDLENGTH parameter instead. I have a doubt about RECORDLENGTH.

Is the RECORDLENGTH buffer inside the SGA or outside it? If you can explain how a direct mode export works with this parameter with respect to the SGA, it would be useful for me.

My basic doubt is: is the RECORDLENGTH buffer inside or outside the SGA? If it is inside the SGA, how does my SGA put the data into a different block size (e.g., 8 KB is my db block size)?


Thanks & Regards,
Radhakrishnan

Tom Kyte
January 12, 2005 - 8:26 am UTC

why do we "know that" -- actually I don't :) because it is not true:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm#1005572

you need to reread about those parameters, because you have a misconception about that.

Further question on speeding up import

Bernice, April 07, 2005 - 2:30 pm UTC

Hi Tom,

I have a 3rd party application that needs to be migrated from windows to linux. It has a 80 GB database (oracle 9.2.0.2 on windows2000) and I want to migrate the data to linux (oracle 9.2.0.5 - 10g not supported by vendor yet).

The vendor provided me with a plain-o' exp and imp command (schema based). This doesn't seem logical because it will take ages to load this data.

I read thru your chapter on exp/imp in your book and here is what I did...

I poked around the database and found 3 candidate tables that can be exported separately (sizes - 56G (table has a blob column), 11G, 4G). I found that these don't have any foreign key constraints with any other tables.

But my question is how best to handle the other tables in this application schema? Since I am unfamiliar with the schema and the relationships between the remainder of the
tables, I was planning to run an exp command with rows=n and then use imp show=y to see the order in which the tables are built. Based on that, I can run the exp/imp of the data (rows=y) with the tables in the same order so that all the foreign key constraints are satisified (such that I don't get Ora-2201 (integrity constraint) errors. Or is there an better way to get this information?

So, my overall approach:
exp userid=appowner/pass direct=y compress=n rows=n file=full.dmp
--> I will look at this with imp show=y to get list of tables to build (in order)
--> I can also use this to build the tables and all the other objects for this owner (and also permissions)

exp userid=appowner/pass direct=y compress=n indexes=n table=t1,t2,t3...
--> t1, t2... are the small tables
exp userid =appowner/pass direct=y compress=n indexes=n table=bigt1
... (repeat 2 more times to get the other 2 big tables)

Then, import the full.dmp with indexes=n to build the objects (sequences, views, procedures, permissions, etc).
Then, fire all 4 table-level imports simultaneously.
Then, run imp with indexfile parameter to build sql to recreate the indexes.

Does this sound logical?

thank you,
Bernice

Tom Kyte
April 07, 2005 - 4:30 pm UTC

doing the piecewise imp/exp takes a while to get everything setup.

for a vendor application, it is even harder (woe to you if you miss something!)

80 gig isn't very much these days and if you factor out the one table of 56g it is really pretty small.

What I might do is set up FGAC (fine grained access control) on the really big table. Make it so that it'll export zero rows for everyone but one person.

Run the vendor "move the schema's" scripts "as is" (you'll get your big table created but not filled)

(optionally disable indexes on big table in new database)
Then, export in parallel (use query= maybe) that big table
Import it in parallel -- realizing that lobs will make this a row by row import for all (no array processing)

Followup to exp/imp speedup

Bernice Lam, April 07, 2005 - 8:32 pm UTC

Hi Tom,

Thanks for the suggestion! I will spend some time learning and implementing this.

I just want to clarify one thing -
This database does not have end-user accounts (those are controlled by the application layer). So, should I create a db user (blam) who will have full access to the big_table and create a FGAC rule that allows only "blam" to view the contents of big_table (and not the table_owner)? Then run the exp as the schema_owner -- this will exp all the objects and all the data except for big_table's data.
As "blam", run the exp on the big_table with query= to break the table up into manageable chunks.

thanks!
Bernice


Tom Kyte
April 07, 2005 - 8:40 pm UTC

(or do not allow blam to see it either, so the big export will see ZERO rows; then drop the policy and export just that table...)

we just want to hide the rows for a bit -- while we do the big export.
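A minimal sketch of that kind of policy (schema, table, user and policy names are illustrative -- the shape of it, not a tested script):

create or replace function hide_big_table( p_schema in varchar2,
                                            p_object in varchar2 )
return varchar2
as
begin
    if ( user = 'BLAM' )
    then
        return '';      -- no predicate: this user sees every row
    else
        return '1=0';   -- everyone else (including the big export) sees zero rows
    end if;
end;
/

begin
    dbms_rls.add_policy
    ( object_schema   => 'APPOWNER',
      object_name     => 'BIG_TABLE',
      policy_name     => 'HIDE_FOR_EXPORT',
      function_schema => 'APPOWNER',
      policy_function => 'HIDE_BIG_TABLE',
      statement_types => 'select' );
end;
/

When the big export is done, drop the policy with dbms_rls.drop_policy and export just that table.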

What is import doing?

Michael, April 08, 2005 - 4:01 pm UTC

When importing, the process sits here for a very long time.

. . importing table "MESSAGE_LOG"
............... 5991707 rows imported

How can I find out what the process is waiting for and what can I do to speed things up?

Thanks

Tom Kyte
April 08, 2005 - 4:12 pm UTC

looks like a table that might have a long/clob right?

that would be imported a single row at a time if so, slow by slow.

What is import doing?

Michael, April 08, 2005 - 5:08 pm UTC

The table only contains number, varchar, and date data types.

I have commit=n during the import. Is the import committing at this point now that all the rows are imported?

Tom Kyte
April 08, 2005 - 5:36 pm UTC

correct, it'll commit when it finishes importing the table (which is much faster than commit=y)

so bump up buffer, let it array insert larger stuff.

you could use the v$ tables to see what the session has been waiting on while it is running.

any "checkpoint not completes" in your alert log?
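For example, something like this against the import session (get the sid from v$session for the imp process; the substitution variable is illustrative):

select event, total_waits, time_waited
  from v$session_event
 where sid = &imp_sid
 order by time_waited desc;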

makefile in PRO*C

Ranjana, July 15, 2005 - 2:20 am UTC

I have a makefile and .pc, .c, and .h files on my Unix system, which I have copied to my Windows system, and I want to build them in an Oracle 9i Pro*C environment. But there are lots of differences, as Oracle 6 (on the SCO Unix 3.0 platform) is too old. I have compiled and run the forms from Oracle 6 on Oracle 9i; the remaining part is the reports. Please tell me the easiest and most time-saving way to compile my own generated makefile, C and Pro*C files.



Tom Kyte
July 15, 2005 - 7:39 am UTC

I'm not even sure what platform you are on now?

fast Oracle data export import

Leon, August 02, 2005 - 2:05 pm UTC

From our experience, we found the fastest way to export/import large tables was to extract data into csv flat files and then import with sqlldr. Our company uses FastReader from wisdomforce www.wisdomforce.com to extract data. For instance, a 50GB table was unloaded and loaded with FastReader in less than an hour. We used the compress-on-the-fly feature in FastReader to reduce the network bottleneck.


Export to an OS Pipe

Khalid, August 14, 2005 - 11:57 am UTC

Hi Tom,
In your book you mentioned using an OS pipe in Unix to stream the output from export to another executable like gzip. In the explanation you wrote that since EXP does not support seeking, it can write to the pipe. I wanted to know if IMP is like EXP, i.e., it does not support seeking. And if IMP does not support seeking, can it be placed at the receiving end of the pipe to do a simultaneous import?

Pardon me if my question sounds a little stupid. I am not well versed in Unix.


Thanks

Tom Kyte
August 14, 2005 - 12:04 pm UTC

that is not quite what was said.

<quote>
When EXP is used to write to a device that supports 'seeking' as a normal file does, it is limited in the size of the file it can generate. EXP uses the normal OS file APIs, which, on a 32bit OS, limit the size of the file to two gigabytes. There are four solutions to this issue and we'll take a look at each of these, although there are probably other solutions as well.
</quote>

It is not that EXP does not support seeking, it is when you use EXP to a file that does support "seeking", random IO (a regular file), it is limited by the size of the file the OS lets it create. In the past, this used to be 2 gig on most all platforms.


If IMP is used against a device that does not support seeking, IMP can read a theoretically infinite amount of data, just as EXP can provide.


Yes, you could

mknod data.dmp p
exp userid=/ file=data.dmp &
imp userid=/ file=data.dmp

but why? If you wanted a copy of a database, it would be much more efficient to just back up and restore it.

and if you have 10g, use impdp and expdp - it works naturally like this.

Export to an OS pipe

Khalid, August 14, 2005 - 5:34 pm UTC

Thanks Tom,
As the subject of the thread goes, I was just trying to figure out a way to increase the speed of export and import. It's good that 10g has come up with data pump to reduce the time consumption.

Since you mentioned that it is feasible to put IMP on the receiving end of a pipe, wouldn't it resolve the issue of time? IMP would be ready to process the data as soon as it is available in the pipe and not wait for EXP to terminate in order to begin its operation. Wouldn't this be very fast if we need to copy a set of tables from one database to another? I think I will have to give it a shot.

Thanks


Tom Kyte
August 14, 2005 - 9:02 pm UTC

on a pipe, it would NOT wait (this is unix, not WINDOWS, pipes are for -- well -- pipelined data -- no waiting)

typically IMP would wait for EXP to generate a few kilobytes of data (8/16k perhaps) and then start working on it immediately.


but pipes have the mandatory need to be on the same machine and as we all know, the best number of instances on a machine is -- one -- so perhaps you need to ask "why >1 instance" in this case


sqlplus copy command works pretty well, assuming it can handle all of your types.

Reading from a Socket

Khalid, August 16, 2005 - 12:01 am UTC

Is it possible for EXP to write to a socket and IMP to read from a socket? If that can be done, wouldn't that be great?

Thanks,
Khalid

Tom Kyte
August 16, 2005 - 10:47 am UTC

it cannot. expdp and impdp with 10g CAN, however - they can write directly to/from each other.
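For example (link, user and table names are illustrative), a 10g network-mode import pulls the rows straight from the source database over a database link, with no dump file in the middle:

$ impdp scott/tiger network_link=source_db tables=big_tab nologfile=y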

Import Problem

David - Chennai, September 15, 2005 - 1:13 am UTC

Hi Tom,

I am encountering the below error while importing schema objects into our database. Both databases have UTF8 as the character set. Is there something I need to set up to have our database with an NCHAR character set? Please advise on how I can go about solving this issue.

Thanks and regards,
David.

Tom Kyte
September 15, 2005 - 7:44 am UTC

what error?

Db links?

Kamal, December 30, 2005 - 10:32 am UTC

Aren't db links better than the sqlplus copy command for moving data?

Tom Kyte
December 30, 2005 - 11:12 am UTC

not necessarily, they are all tools that may be useful in different circumstances.

db links or sqlplus copy

A reader, December 30, 2005 - 4:44 pm UTC

In addition, the 10gR2 docs still state:
- LONG and LONG RAW columns cannot be used in distributed SQL statements and cannot be replicated.
So that is one example where a db link is not an alternative.
The SQL*Plus doc says for SET LONG n,
the maximum value of n is 2,000,000,000 bytes.
Hopefully long enough for most cases.

Export data pump sys schema

jack, January 05, 2006 - 5:00 pm UTC

hi tom

I have two tables in the SYS schema and I want to export them with data pump, but a problem exists:

* SYS as sysdba could not do a data pump export
* When I export with SYS as sysoper, it could not see my tables for the data pump export!
* When I want to use another user for the export, it could not see the SYS schema!

Please help me to data pump export my 2 tables!

Tom Kyte
January 05, 2006 - 6:44 pm UTC

what two tables ?

are they true sys tables - or something you created?

Export data pump sys schema

Jack, January 07, 2006 - 2:43 am UTC

I have created 2 tables in the SYS schema.

Tom Kyte
January 07, 2006 - 10:21 am UTC

copy them into another schema (create table your_own_table as select * from sys.that_table_you_created)

drop the two sys tables you created

export the copies

and never use sys again! (except to start/stop the instance, upgrades and the like)


Or, you can use an external table unload to create the equivalent of datapump files that can be attached to another database:

ops$tkyte@ORA10GR2> create or replace directory tmp as '/tmp'
  2  /

Directory created.


ops$tkyte@ORA10GR2> create table all_objects_unload
  2  organization external
  3  ( type oracle_datapump
  4    default directory TMP
  5    location( 'allobjects.dat' )
  6  )
  7  as
  8  select * from all_objects
  9  /

Table created.


and later, in another database - once you move the file over


ops$tkyte@ORA10GR2> create table t
  2  ( OWNER            VARCHAR2(30),
  3    OBJECT_NAME      VARCHAR2(30),
  4    SUBOBJECT_NAME   VARCHAR2(30),
  5    OBJECT_ID        NUMBER,
  6    DATA_OBJECT_ID   NUMBER,
  7    OBJECT_TYPE      VARCHAR2(19),
  8    CREATED          DATE,
  9    LAST_DDL_TIME    DATE,
 10    TIMESTAMP        VARCHAR2(19),
 11    STATUS           VARCHAR2(7),
 12    TEMPORARY        VARCHAR2(1),
 13    GENERATED        VARCHAR2(1),
 14    SECONDARY        VARCHAR2(1)
 15  )
 16  organization external
 17  ( type oracle_datapump
 18    default directory TMP
 19    location( 'allobjects.dat' )
 20  )
 21  /

Table created.

and then just "select * from t" - to load back into the target database. 

Export data pump SYS schema

jack, January 07, 2006 - 11:43 pm UTC

Thanks Tom!

Can I use:

select * from all_objects

instead of:

select * from my_sys_table1

and I move only my tables?


Export data pump SYS schema

jack, January 07, 2006 - 11:49 pm UTC

Excuse me Tom!

I made a mistake! I wanted to say:

Can I use:

select * from my_sys_table1

instead of:

select * from all_objects

and I move only my tables?

Tom Kyte
January 08, 2006 - 11:38 am UTC

yes, I was simply using all_objects as an EXAMPLE

you would HAVE to use your tables, else you would not get your data!

Export data pump SYS schema

Jack, January 08, 2006 - 11:38 pm UTC

Thank you very, very much, Tom!

Manjunath, January 26, 2006 - 4:56 am UTC

Single task linking is not supported in 8i.
Is it supported by oracle in 9i?

Tom Kyte
January 26, 2006 - 10:19 am UTC

no, it is no longer a 'feature'

Manjunath, February 15, 2006 - 8:03 am UTC

Looking at ways to speed up the exp/imp process. Is this the right approach?

1) Export metadata from the source with rows=n, indexes=n, full=y
2) Export individual schemas separately (in parallel) with rows=y and indexes=n
3) Create an indexfile on the source during export
4) Import metadata on the target using the export dump from (1), with indexes=n
5) Disable all foreign key constraints on the target
6) Drop all materialized views (this is done to take care of a bug during import which does not import MVs if the MVs have already been created once, even if ignore=y is used)
7) Import all schemas in parallel using the dumps from (2), with indexes=n, compile=n, analyze=n
8) Enable all foreign key constraints
9) Create indexes on the target using the indexfile from (3)
10) Compile all invalid objects on the target

We tried this on an Oracle Apps database and we faced some errors during the import:

#1) A parent table had a trigger and the rows in that table didn't get imported (due to some errors in the trigger), but the rows for the child table got imported. Because of this, we couldn't do step 8 successfully for some tables.

Errors:

=================================================
alter table HR.PER_POS_STRUCTURE_ELEMENTS enable constraint PER_POS_STRUCTURE_ELEMENTS_FK2
                                                            *
ERROR at line 1:
ORA-02298: cannot validate (HR.PER_POS_STRUCTURE_ELEMENTS_FK2) - parent keys
not found
==============================
Corresponding Import error:

. importing HR's objects into HR
. . importing table   "PER_POS_STRUCTURE_VERSIONS"
IMP-00058: ORACLE error 4068 encountered
ORA-04068: existing state of packages has been discarded
ORA-04063: package body "APPS.HR_API" has errors
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at "APPS.PER_PSV_OVN", line 16
ORA-04088: error during execution of trigger 'APPS.PER_PSV_OVN'

===================================
data imported:
Parent:
SQL> select count(*) from PER_POS_STRUCTURE_VERSIONS;
 
  COUNT(*)
----------
         0
 
Child:
SQL> select count(*) from PER_POS_STRUCTURE_ELEMENTS;
 
  COUNT(*)
----------
         4

==========================================================
#2)Second type of error that we faced:

IMP-00017: following statement failed with ORACLE error 1445:
 "CREATE FORCE VIEW "APPS"."PER_COMMON_LOOKUP_TYPES_V"                       "
 "     ("ROW_ID","APPLICATION_ID","CUSTOMIZATION_LEVEL","D_CUSTOMIZATION_LEVE"
 "L","LAST_UPDATE_DATE","LAST_UPDATED_BY","LOOKUP_TYPE","LOOKUP_TYPE_MEANING""
 ","LAST_UPDATE_LOGIN","CREATED_BY","CREATION_DATE") AS "
 "SELECT FCLT.ROWID , FCLT.APPLICATION_ID , FCLT.CUSTOMIZATION_LEVEL , FL.MEA"
 "NING , FCLT.LAST_UPDATE_DATE , FCLT.LAST_UPDATED_BY , FCLT.LOOKUP_TYPE , FC"
 "LT.LOOKUP_TYPE_MEANING , FCLT.LAST_UPDATE_LOGIN , FCLT.CREATED_BY , FCLT.CR"
 "EATION_DATE FROM FND_COMMON_LOOKUP_TYPES FCLT, FND_LOOKUPS FL WHERE FCLT.CU"
 "STOMIZATION_LEVEL = FL.LOOKUP_CODE AND FL.LOOKUP_TYPE = 'CUSTOMIZATION_LEVE"
 "L'"
IMP-00003: ORACLE error 1445 encountered
ORA-01445: cannot select ROWID from a join view without a key-preserved table



Questions:
===========
1) Is the export/import method logically OK? Can there be any more improvements?

2) Can any light be thrown on the 2 error types above? We are seeing a lot of those. The above errors do not occur for a normal full database export/import.


Facts:
======
Source= 9206
Target=9206
OS Linux


 

Tom Kyte
February 15, 2006 - 9:42 am UTC

what is the goal here, why are you doing a full export/import?


If my source was 9206 and target was 9206 and OS = Linux, I would

a) backup source
b) restore to target

and be done with it. very simple.

Manjunath, February 16, 2006 - 6:29 am UTC

Sorry for not mentioning the full details:
Source is Solaris
Target is Linux

Tom Kyte
February 16, 2006 - 11:58 am UTC

export import will work in general, but APPS is APPS, I would suggest you work with support/consulting to come up with the best approach to migrate an Oracle applications database - lots of "special" setup may well be necessary.

imp performance

A reader, March 25, 2006 - 6:39 am UTC

I am using 9.2 on AIX. I exported one schema into a .dmp file and then dropped that schema (I have no backup). Now I am importing from that .dmp file into another database, but it is taking too long; there are tables with 150 million rows and 50 million rows in that .dmp file. The imp process is stuck on one of these tables and has now been importing it for two days (not finished yet).

Can you suggest how I can get my multi-million row tables back into a database from that .dmp file? Is there a way to read that .dmp file, make a .txt file from it and then load that into the database? Any other way I can get this task completed?

Thanks


Tom Kyte
March 25, 2006 - 7:40 am UTC

does the table contain a long or a lob column? if so, the import is having to go slow by slow (row by row)


You could use feedback=10000 or so to get a dot printed so you can see progress being made (or if you used commit=y, the slow way, you could query the table itself to see progress - but not recommended).




not using any lob

A reader, March 26, 2006 - 5:25 am UTC

The table has only CHAR, NUMBER, VARCHAR2 and DATE data types; no LONG or LOB types.

Tom Kyte
March 26, 2006 - 8:28 am UTC

Then I would "guess" (must guess, not much else to go on)

a) you might have checkpoint not complete messages in your alert (undersized redo)
b) you might have lots of free buffer waits (insufficient buffer cache, dbwr cannot keep up)
c) you might have slow=true set by saying commit=y

but you can monitor the process using feedback or by monitoring the v$ tables to see what your imp session is doing.
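
For example, a minimal sketch of the kind of v$ query that shows what the imp session is waiting on (it assumes the imp client appears as 'imp...' in v$session.program):

-- what is the import session doing right now?
select s.sid, s.serial#, s.status, w.event, w.seconds_in_wait
  from v$session s, v$session_wait w
 where s.sid = w.sid
   and s.program like 'imp%';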

more info

A reader, March 27, 2006 - 6:23 am UTC

I have now increased the redo log groups by 4, and each is 1 gigabyte.
My log buffer is 100MB.

You are saying that commit=y is slow, but the Oracle documentation says it improves performance. Here is a copy/paste from the 9.2 documentation:

Specifying COMMIT=y prevents rollback segments from growing inordinately large and improves the performance of large imports. Specifying COMMIT=y is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported are rejected with a recoverable error.



Tom Kyte
March 27, 2006 - 10:03 am UTC

the documentation is entirely INCORRECT and I've already filed a bug report about that one a while ago.


commit=y slows down import.

inserts into unindexed tables like imp does generate the least amount of undo and hence will not cause the rollback to get hugely large.

commit=y would not be advisable for uniqueness issues - you either want the table loaded OR NOT - I would not want a partial table. And "any rows that have already been imported..." would be rejected with a recoverable error WITH OR WITHOUT commit=y.
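
For the record, a minimal sketch of an imp command line using commit=n and a larger buffer (the user, file and table names here are made up for illustration):

imp userid=scott/tiger file=big_table.dmp log=big_table.log \
    tables=big_table ignore=y commit=n buffer=10485760 feedback=100000

With commit=n the whole table is loaded as a single transaction, so the table ends up either fully loaded or not loaded at all.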




export dump larger than the actual size

Manjunath, April 07, 2006 - 10:05 am UTC

Hi Tom,

Every month we take an export backup of a partitioned table, and this time the export dump is actually larger than the partitioned segment itself. The Feb06 partition is 14gb but the export dump turned out to be 19gb.
How could this be possible, and is there any way to avoid it?

Thanks in advance
Manjunath

Tom Kyte
April 08, 2006 - 9:00 am UTC

there is more in a dump than just the table data itself.


Why are you using export on such large things, what is the goal here, what is the dmp file used for.

Exort dump larger than actual size

Manjunath, April 10, 2006 - 12:09 am UTC

Hi Tom,

Our database version is Oracle 9.2.0.6.0 running on Solaris 2.9. The purpose of taking the partition backup (range partition) is that we store only 6 months of data on the production database and back the partitions up to tape, then restore them to another database for MIS reporting.
The export command is quite simple: exp username/passwd file=abc.dmp log=abc.log tables=abc:abc_feb grants=n statistics=none indexes=n.
If my export command is capturing more data than required, then how can we avoid this?

Thanks in advance
Manjunath

Tom Kyte
April 10, 2006 - 5:31 am UTC

why not

a) turn it into a table (exchange partition)
b) TRANSPORT the tablespace to the other database
c) if you want, on the other database, exchange partition again to make the table into a partition again


that way, you need not unload and reload a single byte of data. Just move datafiles.
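
A minimal sketch of that flow for one monthly partition (the table ABC, partition ABC_FEB, tablespace FEB_TS, password and file names are all assumptions; it also assumes the partition's segments are self-contained in that tablespace):

-- source: turn the partition into a standalone table (just a dictionary swap)
create table abc_feb_stage tablespace feb_ts as select * from abc where 1=0;
alter table abc exchange partition abc_feb with table abc_feb_stage;

-- source: freeze the tablespace and export only its metadata
alter tablespace feb_ts read only;

$ exp \'sys/manager as sysdba\' transport_tablespace=y tablespaces=feb_ts file=feb_ts.dmp

-- copy feb_ts.dmp plus the FEB_TS datafile(s) to the target, then plug them in:

$ imp \'sys/manager as sysdba\' transport_tablespace=y file=feb_ts.dmp datafiles=/u01/oradata/feb_ts01.dbf

-- target (optional): exchange the plugged-in table back into a partition of the reporting table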

export dump larger than actual segment size

Manjunath, April 11, 2006 - 6:54 am UTC


Hi Tom,

I really appreciate your immediate replies, thanks.
But my main concern is not moving the data from one database to another. We archive these partitions to tape for further reference, if required. But I'm very curious as to how this export dump is larger than the actual segment size. The production database is on raw devices.

Thanks again
Manjunath

Tom Kyte
April 11, 2006 - 2:37 pm UTC

it seems unusual, but perhaps you are exporting compressed data.

and it would seem you changed your mind. Last week you said:

...
The purpose of
taking the partition backup (range partition) is that we store only 6 months of
data on the production database and back the partitions up to tape, then restore them
to another database for MIS reporting.
.......


hmm, ok.

Refreshing test database

A reader, April 12, 2006 - 10:59 am UTC

Hello Tom,

Thanks for your all help.

I have been asked to refresh my test database for just 2 schemas.

Requirement is:
I have to refresh 2 schemas in the test database from production, and have to retain the rows for a few objects (20 tables) in the test database for one of the two schemas.

Following is my action plan

(1) Export the required schemas from production (of course with compress=n) with rows, grants, indexes, etc.

As far as I know, I also need to take care of the jobs that were scheduled on the production database (which I want on the test database as well!), so I will back them up too, as shown below:

spool database_jobs.lst
select job,log_user,schema_user,next_date,interval,what from dba_jobs
where log_user in ('<those 2 users which we are refreshing>')
/

spool off

(2) Export the required 20 tables from the test database (I have the list for these tables, and of course with compress=n) with rows, grants, indexes, etc.
(3) Create a SQL script for creating these 2 users from the test database with grants (I got it from TOAD, thanks to TOAD),
though I will also create the following tables in the SYSTEM schema of the test database, just for backup.

create table grants_tab_privs_b4refresh
as select * from dba_tab_privs;

create table grants_sys_privs_b4refresh
as select * from dba_sys_privs;

create table grants_role_privs
as select * from dba_role_privs;

(Extremely defensive!)

(4) take cold back up of test database (Just to save my self in case something would go wrong!, though it is not required! Once again extremely defensive!)
(5) Once steps 1 to 4 are done, drop 2 schemas in test database
(6) Recreate users in test database (As per saved SQL with grants as per step 3)
(7) Import only the data from the production export dump file into the test database (with indexfile=index_rebuild.sql; that's going to work like the init.ora parameter fast=true ;))
(8) Disable all PKs as well as FKs on the tables on which we need to preserve data, using the following query:

spool disable_constraint
select 'alter table '||owner||'.'||table_name||' disable constraint '|| constraint_name||';'
from dba_constraints where constraint_type in ('P','R')
and table_name in ('<those 20 tables for which I need to preserve data>');
spool off;

and before disabling these PKs and FKs, I will spool a file for enabling them as well, as shown below:

spool enable_constraint.lst
select 'alter table '||owner||'.'||table_name||' enable constraint '|| constraint_name||';'
from dba_constraints where constraint_type in ('P','R')
/

spool off

(9) Truncate those 20 tables in the test database.
(I would create the script for this as well; it's easy.)
spool truncate_table_afterrefresh.lst

select 'truncate table '||owner||'.'||table_name||';' from dba_tables where owner in ('<owner of those 20 tables>');
spool off

(Those 20 tables belong to only one user)

(10) Once I have truncated those 20 tables, I would import only the data, no indexes, from the export.dmp file I took in step 2 (yeah, I still know I took an export of these 20 tables in step 2!). Once again I would also specify indexes=n, as I don't want the indexes; I already have them from step 7.

Please let me know how you feel about my action plan.

Any suggestion is appreciated

Cheers,

Tom Kyte
April 12, 2006 - 11:23 am UTC

export gets jobs for you.

can you move the 20 odd tables to another schema, that would make this list "very trivial". Else, preserving them mucks it all up.

do you have 10g?

Refreshing test database

A reader, April 12, 2006 - 11:07 am UTC

Continued from the last post (just forgot to include this!):
And yeah, I will rebuild the indexes on those 2 schemas as well (I have the DDL for creating them!)

Refreshing test database

A reader, April 12, 2006 - 11:43 am UTC

Tom,

Thanks for your reply
Unfortunately I am on 9i (I know what you meant - I too miss Data Pump here!)

any other suggestion in plan?

Thanks,


Tom Kyte
April 12, 2006 - 7:29 pm UTC

can you move the 20 objects into another schema - they do seem to be something "special" right?

Refreshing test database

A reader, April 13, 2006 - 9:12 am UTC

Tom,

Unfortunately we do not have space in test database for storing those 20 objects as they are very big.

any other suggestions?

Thanks for your reply.

You are the best.

Cheers,

Tom Kyte
April 14, 2006 - 11:35 am UTC

ahh, we have changed slightly the goal.


how about you use fine grained access control (FGAC) - when you go to export, you do so using an account whereby the FGAC policy returns "where 1=0" (no rows exported from these tables). Everyone else gets "NULL" (you can add the policy, do the export, remove the policy if you don't want it there all of the time).

then just import with "ignore=y" to ignore the object create errors that will happen and zero rows would be imported into these 20 tables since zero rows were exported.
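
A minimal sketch of that FGAC setup (the schema, table and account names below are assumptions; dbms_rls.add_policy is the real API):

create or replace function hide_rows_from_exp( p_schema in varchar2, p_object in varchar2 )
return varchar2
as
begin
    -- EXPORT_USER is the (assumed) account that will run the export
    if user = 'EXPORT_USER' then
        return '1=0';      -- the export account sees no rows
    end if;
    return null;           -- everyone else sees every row
end;
/

begin
    dbms_rls.add_policy
    ( object_schema   => 'APP_OWNER',        -- assumed owner of the 20 big tables
      object_name     => 'BIG_TABLE_01',     -- repeat for each of the 20 tables
      policy_name     => 'no_rows_for_exp',
      function_schema => 'APP_OWNER',
      policy_function => 'hide_rows_from_exp',
      statement_types => 'select' );
end;
/

-- run the export as EXPORT_USER, then call dbms_rls.drop_policy if you do not want the policy to stay.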

Alexander, April 21, 2006 - 3:53 pm UTC

Tom, could you tell me the command to run an import using parallel and the disable logging option as mentioned above? I'm having trouble finding the syntax. I'm on 9.2.0.7.

Thanks.

Tom Kyte
April 21, 2006 - 4:29 pm UTC

that is because.... there is no such syntax. parallel and direct pathing is a new feature of impdp - the data pump - in 10g.

Which is faster?

Davis, April 24, 2006 - 2:41 pm UTC

Hi Tom,

We will transport about two hundred million rows of data from one database to another; both are Oracle9iR2. The data must be processed before it is stored in the new database.

There are two ways to transport the data:
1) using a database link between the two databases and operating on the data directly.
2) exporting the data from the first database and importing it into a temp schema in the second database, then converting the data from the temp schema into the target one.

Which do you think is faster? Or do you have another, better method?

Thank you in advance.

Tom Kyte
April 24, 2006 - 2:53 pm UTC

why not transport the data ? eg: just moving data files basically....

old posting

just nobody, April 25, 2006 - 4:01 am UTC

What is import doing? April 08, 2005
Reviewer: Michael from San Diego

When importing, the process sits here for a very long time.

. . importing table "MESSAGE_LOG"
............... 5991707 rows imported

How can I find out what the process is waiting for and what can I do to speed
things up?

Thanks
---------------------------
There was this previous posting.


Maybe the point here is that at that stage import is creating the indexes, and that may take a lot of time.

And what can be done to make index creation faster? Some suggestions:
- Before importing, adjust database parameters so that the whole imported table would fit in memory, so that there are no extra reads. Also set the checkpoint interval to some large value; log_buffer may require adjusting.
- Adjust database parameters so that there is enough sort area (sort_area_size or pga_aggregate_target). And remember that if pga_aggregate_target is used, it can temporarily be set quite high, because one session uses at most about 5% of pga_aggregate_target (Oracle 9i?). This presumes there is no other activity on the database and there will be no other sessions trying to utilize the large pga_aggregate_target.
- Before importing, check the indexes, especially the pctfree values. E.g. if the index value is taken from some kind of sequence and it is decreasing/increasing, set pctfree to 0. Then the index will be smaller and there is less work to be done. Also check possible index compression. But setting this beforehand will require rebuilding (not online!) the index in the source database.
- After the import, set database parameters back to values appropriate for future use. (I presumed that this was a one-time exercise.)

While compiling Linux code on Solaris I am getting the error: line 75: Unexpected end of line seen

ravi, May 17, 2006 - 2:19 am UTC

Please could you tell me the answer; I am putting my mkhdr.gnu below:
# Make API header file: GNU Make version #
########################################################################

#ifeq ($(DEVDIR),)
#_error_::
# @echo "!!! makhdr ERROR: undefined variable: DEVDIR"
#endif

#ifeq ($(ETCDIR),)
#_error_::
# @echo "!!! makhdr ERROR: undefined variable: ETCDIR"
#endif

#ifneq ($(MAKTYP),gnu)
#_error_::
# @echo "!!! makhdr ERROR: invalid variable: MAKTYP"
#endif
#ifeq ($(EXTLIBDIR3),)
#EXTLIBDIR3 = $(LIBDIR)
#endif

########################################################################
# C/C++ compiler stuff #
#ifeq ($(EXTLIBDIR3),)
#EXTLIBDIR3 = $(LIBDIR)
#endif

########################################################################
# C/C++ compiler stuff #
#ifeq ($(EXTLIBDIR3),)
#EXTLIBDIR3 = $(LIBDIR)
#endif

########################################################################
#ifeq ($(EXTLIBDIR3),)
#EXTLIBDIR3 = $(LIBDIR)
#endif

########################################################################
# C/C++ compiler stuff #
########################################################################

#ifeq ($(EXTLIBDIR3),)
#EXTLIBDIR3 = $(LIBDIR)
#endif

########################################################################
# C/C++ compiler stuff #
src_install:: _error_ $(SRCDIR)
ifeq ($(SRCDIR),)
@echo "!!! makhdr ERROR: undefined variable: SRCDIR"
exit
endif

_error_::



Exp/Imp data

A reader, June 28, 2006 - 11:21 am UTC

Tom,

I have to exp/imp data from a very busy prod database (9.2) into our test system. The export was done without warnings, but at the end of the import I got a lot of ORA-02298 *** parent key not found errors.

Did this happen because some tables were updated during the export? How does export work - I mean, should I only ever do an export on an idle database because of potential RI issues, or is it okay to do the export online? I know a lot of people who do export instead of backup and think that if the export completed without warnings they can import the data in any case.

Thanks

Olaf

Tom Kyte
June 28, 2006 - 11:40 am UTC

did you use consistent=y

the default is "no" and every table would be exported consistently with respect to ITSELF (as of a point in time) but each table would be at a DIFFERENT point in time.

(and be aware that consistent=y is not an option for "as sysdba" exports!)
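
For example (user and file names are just placeholders):

exp scott/tiger owner=scott consistent=y file=scott.dmp log=scott.log

With consistent=y the whole export is done as of a single point in time, so parent and child tables stay in sync - at the cost of more undo being needed while the export runs.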

Murali, July 21, 2006 - 6:23 am UTC

Tom:

Currently we are using Oracle 10g R1 on Solaris.
I need your help on moving table data from one database to another database.

We have 1557 tables in Schema_1 under Database_1.
Hardware: 4 CPUs, 1.5TB SAN disk and 64GB RAM. Currently there are 460 big tables which on average have 3 million records each, taking around 500GB.

Now we need to move the data of those 460 tables into another database, Database_2, Schema_2.

I need your help and advice on which method is suitable; I am most worried about how many days it will take.

Please advise.

1) COPY command (can be very fast/efficient when used machine to machine over a good network).
2) Import and export (traditional).
3) Data Pump (direct path parallel imports using the 10g Data Pump, with no logging).
4) Create a db link between the two databases and execute: create table t1_big as select * from schema_1.big@dblink
5) SQL*Loader (i.e. generate flat files and load them into the target).
6) insert /*+ append */ into target_table select /*+ parallel */ * from source.table@dblink (with parallel DML enabled)
7) Transport the tablespaces/datafiles holding the 460 tables to the target (I am not sure this works).

Please advise on a good method to move such a big volume from one schema to another schema.





Tom Kyte
July 22, 2006 - 7:23 pm UTC

if #7 works - that gets my vote - however that'll want to use the SAME schema............

why? the move should be towards consolidation - not distribution...

Can expdp & impdp be used with 9205

Manish Sharma, September 15, 2006 - 4:04 pm UTC

Hi Tom,
Can we run the expdp utility from 10g against a 9205 database using user/pwd@9205, and the same for impdp? And if not, why not?


Thanks
Manish

Tom Kyte
September 16, 2006 - 2:28 pm UTC

nope

because expdp and impdp are NEW features of the NEW software and take advantage of (rely on) things the NEW software (10g) does.

the datapump code - it is NOT in impdp and expdp - it is in the database itself, 9i doesn't have this code.

Using 10gR2 expdp & impdp

Manish Sharma, September 18, 2006 - 11:23 am UTC

Hi Tom,
In some other thread you suggested using export & import for migrating the SYSTEM and other tablespaces from DMT to LMT. I did this by pre-creating all the tablespaces in LMT mode and then using expdp & impdp in full mode.

After importing into the new 10gR2 (10.2.0.2), I am missing lots of grants from SYS to different users, mostly on SYS objects (like various SELECT grants on SYS tables). The exporting database is also the same version and the same OS.

I don't see any errors in the import log.

Can you advise ???

Thanks
Manish Sharma

Tom Kyte
September 18, 2006 - 1:40 pm UTC

sys stuff is not exported.

How to get the sys grants

Manish Sharma, September 18, 2006 - 3:27 pm UTC

Hi Tom,
What is the way of getting these grants or privileges for these users, as some objects within these users rely on these privileges and are now invalid?
Or
what is the workaround?

Thanks

Tom Kyte
September 19, 2006 - 2:27 am UTC

you can query the dictionary to get them of course.

(i've always wondered why people don't have their install scripts for their applications, always perplexes me that this stuff isn't in source code control)

Maintaining oracle backend work in source control

Manish Sharma, September 19, 2006 - 8:48 am UTC

Hi Tom, Thanks for your reply.

Well, until now I haven't come across a place where all these user creations and their grants are maintained in source control. Yes, many times we have the scripts, or we have to generate them from dev and then push the same to QA & prod.

But in this case this is the dev/test database.

Can you please explain what you mean by maintaining all these grants and other Oracle back-end application stuff in source control software.

Thanks
Manish

Tom Kyte
September 19, 2006 - 2:40 pm UTC

I mean - you use source code control for things like "create", "grant", "alter", etc. Meaning - you know what privileges your special accounts have because you have a handle on it, it is in source code control.

Saved my weekend

Doug Brown, October 02, 2006 - 4:26 pm UTC

Just a huge THANK YOU for your expert advice. I had a table importing and after 8.5 hours it had only imported 200M out of 1.5G total. This was at 3am on a Sunday morning when I realized the import may not finish in time for Monday morning. I started to go to metalink and thought better check your experiences first. Finding the advice concerning longs ( this table had one ), I ftp'd the dmp file to a test server, started the import again using commit=n and a larger buffer. I too had always subscribed to the commit=y from prior readings in the Oracle docs. Needless to say it now finished in 1.5 hours and the one running on prod was still at 200M. Killed the one on prod and restarted and finished in time for football at noon.

COPY command - Ideal value for array fetch size

Dheeraj, February 21, 2008 - 4:02 am UTC

Hi Tom,

In the example below, I need to copy 5 million records of a table, from one Oracle DB to another. In your first and foremost response, you have also suggested COPY command. 

My queries are:
 
1) If I set arraysize = 5000, what can we say abt. the performance of the following statement?

SQL> copy from abc/123@xyz create dummy_test using select * from table where ROWNUM < 5000000;

Above would lead to 1000 batches.

2) I guess, we should be committing only when COPY is completed, hence copycommit should be set to 0

Should we follow the same approach if no. of records to be transported is 50 million or use:
- Flat file spooling --> SQLLDR/EXTERNAL TABLE
- Transportable tablespaces.

Any other alternative will be highly appreciated.

Thanks,

Dheeraj

Tom Kyte
February 21, 2008 - 7:39 am UTC

1) don't do 5000, that would be rather large. A couple hundred is good.

abt?

what could we say abOUt performance? not much. There are diminishing returns for larger and larger arraysizes (server has to get and package up that many rows - that is a lot of data, memory. network has to get slammed with that much data. client has to receive and unpack - that is a lot of data, memory). So, it could well run slower than small arraysizes would.


2) correct, commit when your transaction is done.




transportable tablespace would be perhaps the fastest approach of course, think about it - just move the datafiles - no reloading
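
For the copy route, a minimal sketch of the SQL*Plus settings and command (the connect string and table names are placeholders taken from the question):

SQL> set arraysize 200
SQL> set copycommit 0
SQL> set long 2000000000
SQL> copy from abc/123@xyz -
>      create dummy_test -
>      using select * from big_table;

With copycommit set to 0 the commit happens only once, when the copy completes.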

To: Dheeraj

A reader, February 21, 2008 - 8:58 am UTC

If it's only 5 million rows, and if it's 10g, we can use expdp/impdp.

Another solution would be:
1. On the source, create an external table from the data in your database using ORACLE_DATAPUMP in PARALLEL (believe me, it's really very fast).
2. Copy the dmp files from the source server to the target.
3. On the target, create an external table pointing to the above dump files.
4. On the target, create a heap-organized table as select * from the external table.
5. Build the indexes in parallel.
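
A minimal sketch of step 1, the parallel ORACLE_DATAPUMP unload (the directory, table and file names are assumptions):

create or replace directory dump_dir as '/tmp';

create table big_table_unload
organization external
( type oracle_datapump
  default directory dump_dir
  location ( 'bigtab_1.dmp', 'bigtab_2.dmp', 'bigtab_3.dmp', 'bigtab_4.dmp' )
)
parallel 4
as
select * from big_table;

-- copy the four .dmp files to the target, create an external table there with the same
-- column list pointing at them, and load it with a simple CREATE TABLE AS SELECT.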

HDH

Lynn Sattler, March 10, 2008 - 2:41 pm UTC

Thanks Tom for this thread, because I was able to help the following import project by using commit=n.

To provide some information I've learned recently and also much over the years regarding exp/imp here is some real life data.

I recently ran a full import using a two step process.
This was done on a new (1000mhz) HP unix box with san attached emc disk with oracle ver 9208.  I had the machine and the database to myself.  We are working on upgrading to a new machine and a new release of oracle with JD Edwards advanced in code levels.  I want to exp / imp to get the new tablespace features of oracle 9i.

import 1 did: rows=y, grants=y
import 2 did: indexes=y, constraints=y

total imported database size was 289 gb (including system and undo)

gigabytes imported in import 1 was:  206 gb
gigabytes imported in import 2 was:   82 gb  
import 1 took: 14:46  (14 hrs 46 min)  ( 896 minutes)  ran from Thu Mar  6 04:15:07 to Thu Mar  6 19:01:12
import 2 took:  31:10  (31 hrs 10 min)  (1870 minutes) ran from Thu Mar  6 19:13:40 to Sat Mar  8 02:33:00
over all took   45:56  (45 hrs 56 min)  (2756 minutes)

I used these init parms to help with the speed.
log_checkpoint_interval = 50000000
log_buffer = 65536000
sort_area_size = 5120000000
db_block_buffers = 300000
no archive logging

import 1's parmfile
file=filename.dmp
buffer=327680000
commit=n
ignore=y
full=y
userid=system/pw
rows=y
indexes=n
grants=y
constraints=n
show=n
FEEDBACK=100000
STATISTICS=ALWAYS

import 2's parmfile
file=filename.dmp
buffer=327680000
commit=n
ignore=y
full=y
userid=system/pw
rows=n
indexes=y
grants=n
constraints=y
show=n
FEEDBACK=100000
STATISTICS=ALWAYS

some misc notes

_disable_logging=true
This parameter did change the I/O slightly (on an earlier run), but with modern EMC disk it did not help much. I ran one import with this parm and one without it. The actual effect was that with the parm set, Oracle never wrote I/O to the actual redo files. However, it "went through the motions" of building the redo logs internally; they just did not get written to the redo files. That is, the alert log showed redo logs getting built, with messages like "Thread 1 advanced to log sequence 72", but the unix sar command showed no activity to the redo log files. Thus I gave up on this parameter doing much good. The total import logging:
import 1 cut 2125  100 meg logs    approx 212 gb
import 2 cut  709  100 meg logs    approx  70 gb
This may seem like a lot of avoided I/O, but import 1 seemed mostly CPU bound, while import 2 seemed mostly I/O bound, using very little CPU. Note we are talking about EMC disk that provides DASD fast writes. Yes, I would have gotten a bit more speed with this parm set. I did not use it because I was changing things to see the effects; this parm got left this way in the process.

STATISTICS=ALWAYS
This parameter says it will keep the stats from the export file/db. That is a misstatement by Oracle. I queried my newly imported system and found that after the import there were statistics on most tables. The last_analyzed date showed the exact time the table/index was imported. The numeric num_rows always matched the count shown in the import log. My hunch is (I don't have the source db to prove it) that Oracle re-calculates statistics on any table and index that had statistics on it at the time of the export.

STATISTICS=recalculate
I am not sure what one of my earlier runs proved here. I believe it did the same thing as ALWAYS.
I was trying to speed up the imports by not having to recalculate stats, but could not accomplish that.

I have used the indexfile parameter in the past and will now plan to use it and parallelize the index creation for the real import that will be needed at cutover.
Then I will likely use 3 steps:
step 1  import with rows=y, grants=y
step 2  index creation using the indexfile as input, 5 - 8 index creation runs simultaneously
step 3  import constraints=y

exclude=table_statistics

Ebrahim, April 21, 2008 - 1:21 am UTC

In efforts to find ways to expedite expdp/impdp, I was trying to find in manuals if table/index statistics can be avoided. Then I saw someone using exclude=table_statistics,index_statistics.

Is this an undocumented feature? Are there any risks associated with it? I would like to use this in production environment if it is safe.

thanks, Tom

export using expdp,

A reader, May 27, 2008 - 8:37 pm UTC

I want to export my entire test database (10gR2) using expdp. However, I don't need the dump file.

Is there a way to perform the expdp and send the dump file to /dev/null (or something similar to that)?

Thanks,

Tom Kyte
May 28, 2008 - 8:32 am UTC

why??????

just do this:

$ [this space left intentionally blank]


I don't get it.

dmp file not required,

A reader, May 28, 2008 - 1:09 pm UTC

Following up on your question: the reason we don't need the dump file is that we are trying to perform an export to find out the time it takes for a 10TB database, and we also need to do block checking.

We are not going to import it anywhere at this time.

Thanks,

Tom Kyte
May 28, 2008 - 3:52 pm UTC

it would take most of your natural life for export to do that.

And, it would not tell you anything - since, you didn't do the real part of export - the writing of the dump file.

Why do you need to find out the time to do something that is utterly impractical to do - ever? This is not anything you would ever actually do.



RMAN - just backup your database and you'll have done the block checking.

follow up,

A reader, May 28, 2008 - 9:14 pm UTC

Thanks for your input.
If I use RMAN, then I need space to store the backup files. In our case, we don't have that much space to store them on the server or anywhere else. The size of the DB is more than 10TB.

Using RMAN, is there a way to delete each file as it is created, to make room for the next file?

Thanks,

Tom Kyte
May 29, 2008 - 7:35 am UTC

if you have a 10tb database and no backup.... please - think about this. Are you seriously telling me you DO NOT HAVE ANY BACKUPS?????????????


Block Checking with RMAN

Mark A. Williams, May 29, 2008 - 9:47 am UTC

A Reader,

You can use RMAN to do block checking without performing an actual backup. See MetaLink Note:283053.1 (How To Use RMAN To Check For Logical & Physical Database Corruption).

Also, you say, "...we are trying to perform export
to find out the time it takes...", but, sending the bits off to neverland like /dev/null would not give you a good representation of that. It just might take longer to write the physical file (I believe this is part of Tom's point) than to send the bits to /dev/null.

- Mark
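
For reference, a minimal sketch of the RMAN validation Mark describes (10g syntax; no backup pieces are written, and the view name is the standard one RMAN populates):

RMAN> backup validate check logical database;

SQL> select * from v$database_block_corruption;

The VALIDATE pass reads every block and records any corrupt ones in v$database_block_corruption.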

thanks,

A reader, May 29, 2008 - 12:37 pm UTC

Tom and Mark: Thanks a lot for your input.

About not backing up a 10TB DB: yes, we don't have a backup system here. Instead we have 6-8 different standby databases, where the lag time of each standby varies from 30 minutes to 1 day.

Thanks,

Tom Kyte
May 29, 2008 - 1:26 pm UTC

so, well, why do you want to do this again? what is the point in scanning 10tb of data if you have 6 to 8 copies of it from a standby point of view. What would you gain from this - given that you couldn't really fix it, you would have to just fail over to another copy. Meaning - you don't need to look for them, they'll find themselves and then you switch over.

follow up,

A reader, May 29, 2008 - 1:38 pm UTC

The scanning of 10TB of data was recommended by the migration team, who use Xenobridge to migrate the data to the new server. The database has quite a few LOBs. Also, there is GoldenGate replicating from the old server to the new server.

Thanks,

Tom Kyte
May 29, 2008 - 9:25 pm UTC

so? if you have data guard, you have transmitted redo to 6 to 8 other sites. Any block corruption would not be repeated (that comes from hardware failures). The redo stream is checked.

Why would you want to do this, what is the logical reasoning, what would you hope to accomplish?


There is no replacement for a backup by the way, data guard - that is a good failover, but I'm not sure I'd say "that is my backup"

I'd want the ability to block level recover in a 10tb database - without a backup.... Well...


Backups are backups, there aren't any real replacements for them.

problem with speed of import

Antonio Jose Cantillo, June 17, 2008 - 1:38 pm UTC

hi,

the problem:
as time passes, my import's number of inserts per minute drops, and my "buffer cache hit ratio" drops as well.
example:
in the first hour 400,000 rows/minute -- hit ratio 98%
in the second hour 300,000 rows/minute -- hit ratio 96%
....
24 hours later 10,000 rows/minute -- hit ratio 86%

If I run a second import in parallel, the same thing happens.

this is my select:

select s.program, s.process,
2 s.status,
3 SUBSTR(sql_text, INSTR(sql_text,'"', 1, 1)+1,INSTR(sql_text,'"',1,2)-INSTR(sql_text,'"',1,1)-1) tablename,
4 a.rows_processed,
5 last_call_et,
6 round((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes,
7 trunc(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_min
8 from v$session s, v$sqlarea a
9 where s.sql_address (+) = a.address
10 and sql_text like 'INSERT %INTO "%'
11 and command_type = 2
12 and open_versions > 0
13 and s.program like 'imp%'
14 order by s.username, s.sql_address, s.sid;

PROGRAM |PROCESS |STATUS |TABLENAME |ROWS_PROCESSED|LAST_CALL_ET| MINUTES|ROWS_PER_MIN
------------------------------|------------|--------|------------------------------|--------------|------------|----------|------------
imp@xxx.com.co (TNS V1|7689 |ACTIVE |FACTURA | 13516091| 0| 54.3| 248762
-V3) | | | | | | |

imp@xxx.com.co (TNS V1|16239 |ACTIVE |GST_ACUMFACT | 25019722| 24| 2569.5| 9737
-V3) | | | | | | |


What does LAST_CALL_ET mean? And why does LAST_CALL_ET get larger more often over time?


my top waits:
db file sequential read
buffer busy waits
rdbms ipc reply

--
nohup imp user/xxxx file=filexxx.dmp log=imxxx.log COMMIT=Y IGNORE=Y analyze=n BUFFER=1048576 fromuser=user touser=user tables=FACTURA recordlength=65535 STATISTICS=NONE &
--

My buffer cache is 400M,
redo buffer 8M,
2 DBWR's,
6 groups of redo log files, 32M each
Oracle 9.2.0.8
Linux Centos 4.6

thanks a lot, and sorry for my english
Tom Kyte
June 17, 2008 - 2:58 pm UTC

last call et is the time elapsed since the last call was started, it shows how long that session has been in the status (active, inactive) it is currently in. It is a timer.


insufficient data to comment here - are you loading into indexed tables - or is import creating the tables, loading them and then indexing.

Antonio Jose Cantillo, June 17, 2008 - 3:46 pm UTC

select IOT_NAME,PCT_FREE,PCT_USED,INI_TRANS,PCT_INCREASE,FREELISTS from dba_tables where table_name='FACTURA';

IOT_NAME | PCT_FREE| PCT_USED| INI_TRANS|PCT_INCREASE| FREELISTS
------------------------------|----------|----------|----------|------------|----------
| 5| 90| 5| 0| 8

I have only normal tables, without LOBs or partitions.

Everything is already created (tables, indexes, triggers, packages, etc); I am just loading rows. I have disabled the constraints, primary keys and triggers.

---
select INDEX_NAME,STATUS from dba_indexes where TABLE_NAME='FACTURA';

INDEX_NAME |STATUS
------------------------------|--------
IX_FACTURA04 |VALID
IX_FACT_SUSC |VALID
IX_FACT_FEGE |VALID

--
select CONSTRAINT_NAME,CONSTRAINT_TYPE, STATUS from user_constraints where TABLE_NAME='FACTURA';

CONSTRAINT_NAME |C|STATUS
------------------------------|-|--------
FK_FACT_CLIE_CODI |R|DISABLED
FK_FACT_LOCA_DEPA |R|DISABLED
FK_FACT_PEFA_CODI |R|DISABLED
FK_FACT_SUSC_CODI |R|DISABLED
NN_FACTCODI |C|DISABLED
NN_FACTCLIE |C|DISABLED
NN_FACTSUSC |C|DISABLED
NN_FACTPEFA |C|DISABLED
NN_FACTCICL |C|DISABLED
NN_FACTANO |C|DISABLED
NN_FACTMES |C|DISABLED
NN_FACTDEPA |C|DISABLED
NN_FACTLOCA |C|DISABLED
NN_FACTSAAN |C|DISABLED
NN_FACTSAFA |C|DISABLED
NN_FACTSAPE |C|DISABLED
NN_FACTVAAP |C|DISABLED
NN_FACTVARE |C|DISABLED
NN_FACTVAIV |C|DISABLED
NN_FACTVAAB |C|DISABLED
NN_FACTREFU |C|DISABLED
NN_FACTNUCO |C|DISABLED
NN_FACTVALR |C|DISABLED
NN_FACTVATO |C|DISABLED
NN_FACTFEGE |C|DISABLED
NN_FACTTERM |C|DISABLED
NN_FACTUSUA |C|DISABLED
PK_FACTURA |P|DISABLED

more information:
the tables have approximately between 50,000,000 and 100,000,000 rows, and I have all of this in a file of about 50GB created with export using direct=no. The original database is Oracle 8i and the exp version is 8i too.

thanks....
Tom Kyte
June 18, 2008 - 12:08 pm UTC

well, your indexes are the reason. why load (using conventional path with import) into indexed tables. You are causing the subsequent slow down and physical IO's and buffer busy waits.


let's say an index on NAME exists. In the beginning, the index is tiny and 100% of it 'fits' in the cache. Over time, that index gets large. Now, you encounter someone with the name starting with 'A', you need the left most block of the index, it isn't in the cache - so we read it in and you modify it. Later, you get a name 'Z', we need the right hand side - well, to load the left hand side we had to flush the right hand side to disk to make room (that was a buffer busy wait for you and the subsequent IO was a physical IO wait). So, we have to read in the 'Z' (buffer busy wait - we need to flush out an 'M' block to make room, physical IO wait to read in 'Z'). Now you get a record with 'M' - but we just wrote that out, so buffer busy wait to free space (write the 'A' block out) and physical IO. But wait, the next record is 'A'

and so on


suggestion, mass data load - disable all indexes, load, and then index (in parallel, unrecoverable - whatever you want)
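
A minimal sketch of that pattern for the table above (the index names come from your listing; the <original column list> placeholder must come from your own DDL):

-- 1) drop (or mark unusable) the secondary indexes before the load
drop index ix_factura04;
drop index ix_fact_susc;
drop index ix_fact_fege;

-- 2) run imp with indexes=n so it only loads rows

-- 3) recreate each index in parallel, nologging, then put the attributes back
create index ix_factura04 on factura ( <original column list> ) parallel 4 nologging;
alter index ix_factura04 noparallel logging;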

Export of table with CLOB column taking long...

Harschil Kaparwan, November 11, 2008 - 10:38 pm UTC

Hi Tom

Many thanks in advance.

- We have a table T with the following definition.


CREATE TABLE T
(
OBJID NUMBER,
DEV NUMBER,
NAME VARCHAR2(20 BYTE),
VALUE CLOB,
CONTEXT_INST2GROUP_INST NUMBER,
CONTEXT_INST2PROC_INST NUMBER
)
TABLESPACE TS001
PCTUSED 40
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 10M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
LOGGING
LOB (VALUE) STORE AS
( TABLESPACE TS001
ENABLE STORAGE IN ROW
CHUNK 8192
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 64K
NEXT 10M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
)
NOCACHE
NOPARALLEL
MONITORING;


CREATE INDEX T_IND1 ON T
(CONTEXT_INST2GROUP_INST)
LOGGING
TABLESPACE INDEX06
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 10M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
NOPARALLEL;


CREATE INDEX T_IND2 ON T
(CONTEXT_INST2PROC_INST)
LOGGING
TABLESPACE INDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 10M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
NOPARALLEL;


CREATE PUBLIC SYNONYM T FOR T;


ALTER TABLE T ADD (
PRIMARY KEY (OBJID)
USING INDEX
TABLESPACE INDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 10M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
));

- This table is the only object in tablespace TS001 and size of diffrent segments of table T :

select segment_name,segment_type,bytes/1024/1024 Size_MB, extents,initial_extent,next_extent,freelists
from dba_segments where tablespace_name=upper('TS001');

SEGMENT_NAME               SEGMENT_TYPE  SIZE_MB  EXTENTS  INITIAL_EXTENT  NEXT_EXTENT
T                          TABLE             310       31           65536     10485760
SYS_LOB0000034216C00004$$  LOBSEGMENT      16200     1620           65536     10485760
SYS_IL0000034216C00004$$   LOBINDEX           10        1        10485760     10485760


- Tablespace extent management/allocation type :
select initial_extent,next_extent,extent_management,allocation_type
from dba_tablespaces where tablespace_name=upper('ts001')


INITIAL_EXTENT NEXT_EXTENT EXTENT_MANAGEMENT ALLOCATION_TYPE
10485760 10485760 LOCAL UNIFORM

- select count(*) from T;
COUNT(*)
1267796

- It takes 2 hr and 30mins to export this table while the entire database schema takes 5 hr.

- Database version 9.2.0.6

Questions :

a) We have tried increasing the buffer parameter while taking a conventional path export, but it still takes 2 hrs 20 mins.

b) Direct path export will not help as the table has a CLOB column. So how can we speed up the export of this table T?

c) Your inputs on reorganising the table?

d) We initiate the export backup from the application server, which is in a different subnet than the database server, and it writes the dmp file to NAS. I read somewhere that it takes forever if the export dump is written to NAS. Would you recommend writing the exp dmp file to the database server's SAN disks, thus reducing the network latency?

Kind Regards.

Tom Kyte
November 12, 2008 - 10:32 am UTC

if the goal is "reorg"

then the answer will NOT include "export/import"


Just use alter table t move and then alter index i rebuild. The alter table t move can include lob clauses to move them as well.

IF AND ONLY IF YOU ARE SO SURE, SO SO SO VERY VERY SURE that a reorganization is 100% mandatory, desirable and the right thing to so. Which is doubtful if you ask me - it might be true, but probably isn't.




You have 16gb of lobs, you are pulling that over the network row by row (that is how lobs are handled with export, a row at a time, since they can possibly be HUGE)

Why are you exporting - what is the goal? What is the purpose behind the export (it'll be really fast to NOT do the export and probably - export is not the right tool for whatever you are doing)

Export of table with CLOB column taking long...

Harschil Kaparwan, November 12, 2008 - 9:11 am UTC

Hi Tom,

The chain count of the above table T is:

15559

Export of table with CLOB column taking long...

Harschil Kaparwan, November 12, 2008 - 1:13 pm UTC

Hi Tom,

....Why are you exporting - what is the goal? What is the purpose behind the export (it'll be really fast to NOT do the export and probably - export is not the right tool for whatever you are doing)

I totally agree with you. Let me summarize *why* we are taking the export backup.

1. All bug fixes to the application come as patches from the development team, and they are deployed in production at regular intervals (every 2 weeks).

2. The above (1) may include database schema changes as well.

3. In case (1) above fails for some reason, we need to roll back the changes, i.e. bring the application and database back to the state they were in before we started (1).

4. We take an application backup (cp all application files and directories to a backup location) and a database *export* backup prior to patch deployment, for the rollback in (3) above.

5. In order to take (4) and do (1) above, we bring the application down, i.e. an outage.


*export* was a good option in (4) above when the database size was small, but now, as you have seen, a few tables have grown big, with 16GB of LOBs. It is not a viable option any more.

So, what next ...

a) Speed up the 'export' .... Not possible, as an export involving LOBs will take its own time.
<Quote>...it'll be really fast to NOT do the export and probably - export is not the right tool for whatever you are doing </Quote>. That's rightly said by you.

b) As the goal is to roll back the changes caused by the patch deployment in (1) above, get a rollback script for the database changes from the development team, who develop the patches ... but they say they can't as they DON'T / CAN'T backup the database stuffs prior to the change (add a few columns, add a new table, etc.) as part of the patch deployment.

c) If not (b) above, then rely on the hot backup to roll back the changes - roll back the *entire* database. In such a scenario the application outage is reduced as there is *no* export backup, and the hot backup + archive logs taken prior to the deployment will be used for the rollback.

d) Assuming (c) above is what we follow, is there any other option which can further reduce the application outage during patch deployment? The scenario here is like changing the wheel of a car while AT THE SAME TIME driving the car - I mean deploying the bug fixes and at the same time keeping the application up for users!

Your expert comments on above please.

Further,

*) Same table T, another goal - 'reorg'. As the chain count is huge, above 15k, can we solve that with a reorg .. alter table T ...?


Kind Regards.

Tom Kyte
November 13, 2008 - 4:52 pm UTC

flashback database - 10g.

in 9i, your backup and then restore is valid (logically same as flashback, but more work)
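
For the 10g route, a minimal sketch (the restore point name is made up; guaranteed restore points need 10gR2):

-- before applying the patch
create restore point before_patch guarantee flashback database;

-- if the patch has to be backed out
shutdown immediate
startup mount
flashback database to restore point before_patch;
alter database open resetlogs;

-- if the patch is fine, release the flashback logs
drop restore point before_patch;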



... .but they say they can't as they DON'T / CAN'T backup
the database stuffs prior to the change ...

"they" are totally mistaken or making stuff up, period. Of course they "don't" but of course they "CAN" - there is absolutely no reason why they cannot. You are doing this super inefficient export (which do you think kills the database more a) export, b) backup)


and you know, I'll betcha that a restore+rollforward is a ton faster than a full database import.



15,000 is tiny, why do you say it is huge. Have you identified it as a 'problem'? and yes, alter table move would put most of them right again, but 15,000 is a small number in the year 2008
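
If the reorg really is warranted, a minimal sketch for table T (the tablespace name reuses the one from your DDL above; run it in a maintenance window because MOVE locks the table and leaves the indexes unusable):

alter table t move
  lob (value) store as ( tablespace ts001 );

-- every index on T is now unusable; generate the rebuilds
select 'alter index '||index_name||' rebuild nologging;'
  from user_indexes
 where table_name = 'T';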

Export of table with CLOB column taking long...

Harschil Kaparwan, November 13, 2008 - 8:06 pm UTC

Many Thanks Tom, for enlightening me.

Further,

....15,000 is tiny, why do you say it is huge....

I got the answer.

Kind Regards

kishore, June 29, 2012 - 6:16 am UTC

hi Tom,
expdp is taking 5 hrs and impdp is taking 11 hrs, and this is for metadata only. How can we reduce these timelines?

Improving Import Speed

Bob, November 21, 2015 - 11:57 am UTC

I have a Datapump export from the database for four schemas. The export was created like this:

expdp \" as sysdba \"
compression=all
parallel=4
dumpfile=my_dir:schemas%u.dmp
logfile=my_dir:schemas.log
schemas=a,b,c,d

I then scp the dump files over the network and import them like this:

impdp system@db
directory=my_dir
parallel=10
exclude=index
exclude=constraint
exclude=ref_constraint
exclude=index_statistics
exclude=table_statistics
content=all
dumpfile=a.dmp,b.dmp,c.dmp,d.dmp
logfile=my_dir:a.log

I am importing approximately 5.5TB and it takes over 48 hours. I'd like to speed the import up. I have noticed that even though I have excluded constraints and indexes, the default behaviour is to include NOT NULL constraints (from the documentation) - and this, it seems, includes primary keys. So as soon as I kick the import off, if I do this:

select index_name from dba_indexes where owner in ('A','B','C','D')

I notice straight away that the primary key indexes have been created.

If I exclude the tables with primary keys (having first identified them from the source) and then do the import ... I start getting a whole load of referential constraint failures.

Any ideas what I can do to get rid of the primary keys and their associated indexes?
Connor McDonald
November 22, 2015 - 4:44 am UTC

Hmm.... I'm not seeing that when I do similar

expdp compression=all dumpfile=DATA_PUMP_DIR:schemas.dmp logfile=DATA_PUMP_DIR:schemas.log schemas=scott

impdp directory=DATA_PUMP_DIR exclude=index exclude=constraint exclude=ref_constraint exclude=index_statistics exclude=table_statistics content=all dumpfile=schemas.dmp logfile=schemas.implog remap_schema=scott:demo1

SQL> select constraint_type, count(*)
  2  from dba_constraints
  3  where owner = 'SCOTT'
  4  group by constraint_type
  5  /

C   COUNT(*)
- ----------
R          1
P          2
C          1


expdp compression=all dumpfile=DATA_PUMP_DIR:schemas.dmp logfile=DATA_PUMP_DIR:schemas.log schemas=scott

Export: Release 12.1.0.2.0 - Production on Sun Nov 22 12:41:02 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  sys/******** AS SYSDBA compression=all dumpfile=DATA_PUMP_DIR:schemas.dmp logfile=DATA_PUMP_DIR:schemas.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "SCOTT"."ABC"                               4.867 KB       1 rows
. . exported "SCOTT"."DEPT"                              5.007 KB       4 rows
. . exported "SCOTT"."EMP"                               5.664 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          4.914 KB       5 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
  C:\ORACLE\ADMIN\NP12\DPDUMP\SCHEMAS.DMP
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Nov 22 12:41:30 2015 elapsed 0 00:00:25


impdp directory=DATA_PUMP_DIR exclude=index exclude=constraint exclude=ref_constraint exclude=index_statistics exclude=table_statistics content=all dumpfile=schemas.dmp logfile=schemas.implog remap_schema=scott:demo1

Import: Release 12.1.0.2.0 - Production on Sun Nov 22 12:42:16 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01":  sys/******** AS SYSDBA directory=DATA_PUMP_DIR exclude=index exclude=constraint exclude=ref_constraint exclude=index_statistics exclude=table_statistics content=all dumpfile=schemas.dmp logfile=schemas.implog remap_schema=scott:demo1
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "DEMO1"."ABC"                               4.867 KB       1 rows
. . imported "DEMO1"."DEPT"                              5.007 KB       4 rows
. . imported "DEMO1"."EMP"                               5.664 KB      14 rows
. . imported "DEMO1"."SALGRADE"                          4.914 KB       5 rows
. . imported "DEMO1"."BONUS"                                 0 KB       0 rows
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Sun Nov 22 12:42:21 2015 elapsed 0 00:00:02


SQL>  select constraint_type, count(*)
  2      from dba_constraints
  3      where owner = 'DEMO1'
  4      group by constraint_type
  5      /

C   COUNT(*)
- ----------
C          1




Import speed performance

Bob, November 22, 2015 - 1:35 pm UTC

Hey Tom, thanks for the prompt reply. Please can you copy/paste the DDL for your tables (in both of your scenarios) for your export? I think if the primary key is created as part of the table, then it will be imported (even with the excludes).

e.g. create table A
(a number not null enable,
b number,
constraint a_pk primary key(a) using index
... < storage characteristics>
... < tablespace xyz> enable


Connor McDonald
November 23, 2015 - 12:58 am UTC

SQL> drop user demo1 cascade;

User dropped.

SQL>
SQL> grant create session, create table to demo1 identified by demo1;

Grant succeeded.

SQL>
SQL> alter user demo1 quota unlimited on users;

User altered.

SQL>
SQL> create table demo1.t1 ( x int, y int, constraint pk primary key ( x ) ) ;

Table created.

SQL>
SQL> create table demo1.t2 ( x int, y int) ;

Table created.

SQL>
SQL> alter table demo1.t2 add constraint pk2 primary key ( x );

Table altered.

SQL>
SQL> insert into demo1.t1 values (1,1);

1 row created.

SQL> insert into demo1.t1 values (2,2);

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> select constraint_type, count(*)
  2  from dba_constraints
  3  where owner = 'DEMO1'
  4  group by constraint_type
  5  /

C   COUNT(*)
- ----------
P          2

expdp compression=all dumpfile=DATA_PUMP_DIR:demo1.dmp logfile=DATA_PUMP_DIR:demo1.log schemas=demo1

Export: Release 12.1.0.2.0 - Production on Mon Nov 23 08:51:44 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  sys/******** AS SYSDBA compression=all dumpfile=DATA_PUMP_DIR:demo1.dmp logfile=DATA_PUMP_DIR:demo1.
log schemas=demo1
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "DEMO1"."T1"                                4.804 KB       2 rows
. . exported "DEMO1"."T2"                                    0 KB       0 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
  C:\ORACLE\ADMIN\NP12\DPDUMP\DEMO1.DMP
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Nov 23 08:52:10 2015 elapsed 0 00:00:23



impdp directory=DATA_PUMP_DIR exclude=index exclude=constraint exclude=index_statistics exclude=table_statistics content=all dumpfile=demo1.dmp logfile=demo1.implog remap_schema=demo1:demo2

Import: Release 12.1.0.2.0 - Production on Mon Nov 23 08:53:41 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01":  sys/******** AS SYSDBA directory=DATA_PUMP_DIR exclude=index exclude=constraint exclude=index_statisti
cs exclude=table_statistics content=all dumpfile=demo1.dmp logfile=demo1.implog remap_schema=demo1:demo2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "DEMO2"."T1"                                4.804 KB       2 rows
. . imported "DEMO2"."T2"                                    0 KB       0 rows
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Mon Nov 23 08:53:46 2015 elapsed 0 00:00:01


SQL> select constraint_type, count(*)
  2  from dba_constraints
  3  where owner = 'DEMO2'
  4  group by constraint_type
  5  /

no rows selected



Oracle import performance

Bob, November 22, 2015 - 2:08 pm UTC

One other thing I noticed is that these developers are using reference-partitioned tables. That basically makes it impossible for me to disable the foreign keys, disable the referenced primary keys, and then drop the unique indexes associated with the primary keys. So I will disable the developers' triggers instead and try the import again.

So my approach was now to (sketched as commands below):

1) do an export (metadata only) from production
2) import the skeleton - excluding index, constraint, ref_constraint, index_statistics, table_statistics
3) disable triggers
4) do an import data only

..let's see what the timings are like this time ...
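
A sketch of those four steps as Data Pump / SQL*Plus commands. The schema name (SCOTT) and the dump/log file names are hypothetical, and step 4 assumes a separate data-only export was taken alongside the metadata-only one:

rem 1) metadata-only export from production (plus a data-only export for step 4)
expdp schemas=scott content=metadata_only dumpfile=DATA_PUMP_DIR:prod_meta.dmp logfile=DATA_PUMP_DIR:prod_meta.log
expdp schemas=scott content=data_only dumpfile=DATA_PUMP_DIR:prod_data.dmp logfile=DATA_PUMP_DIR:prod_data.log

rem 2) import the skeleton, excluding indexes, constraints and statistics
impdp directory=DATA_PUMP_DIR dumpfile=prod_meta.dmp logfile=prod_meta.implog exclude=index exclude=constraint exclude=ref_constraint exclude=index_statistics exclude=table_statistics

rem 3) disable the triggers in the target schema (run the statements this generates)
SQL> select 'alter trigger '||owner||'.'||trigger_name||' disable;'
  2  from dba_triggers
  3  where owner = 'SCOTT';

rem 4) data-only import
impdp directory=DATA_PUMP_DIR dumpfile=prod_data.dmp logfile=prod_data.implog content=data_only
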
Connor McDonald
November 23, 2015 - 12:13 am UTC

Ah, reference partitioning will definitely have an impact (because obviously we need those PKs/FKs in place in order to load the data).

But the philosophy of "try to disable anything that isn't purely loading data into a plain table" (ie, indexes, constraints, triggers, etc) is a valid one.
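
A possible follow-on, sketched with the same hypothetical file and schema names as above: once the data-only load has finished, the excluded indexes and constraints still have to be put back and the triggers re-enabled. One option is to have impdp write the DDL into a script via SQLFILE (the script is written to the DATA_PUMP_DIR directory) so it can be reviewed before being run:

rem write the index/constraint DDL from the metadata dump to a script instead of executing it
impdp directory=DATA_PUMP_DIR dumpfile=prod_meta.dmp include=index include=constraint sqlfile=post_load_ddl.sql

rem then run the reviewed script and re-enable the triggers (run the statements this generates)
SQL> @post_load_ddl.sql
SQL> select 'alter trigger '||owner||'.'||trigger_name||' enable;'
  2  from dba_triggers
  3  where owner = 'SCOTT';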


import export

vamsi, April 11, 2017 - 11:09 am UTC

Hello sir, are export and import fast?
Connor McDonald
April 12, 2017 - 5:13 am UTC

yes