
Question and Answer

Tom Kyte

Thanks for the question.

Asked: July 12, 2001 - 11:27 am UTC

Last updated: May 21, 2012 - 8:27 am UTC

Version: 8.1.7.0

Viewed 10K+ times

You Asked

Hi Tom

When I say direct=y in the export, the processing is much faster.
What are the scenarios where I should not use direct=y?

Regards



and Tom said...

A direct path export should not be considered the converse of a direct path load using SQLLDR (for those of you familiar with that option). A direct path export does not read directly from the datafiles and write to the DMP file. SQLLDR does write directly to the datafiles from the DAT files. Export in direct path mode simply bypasses the SQL evaluation buffer (where clause processing, column formatting and such). 90% of the path is the same, EXP is still reading buffers into the buffer cache, doing the same consistent read processing and so on.

The speedup of the direct path export can be large however. The 10% of the processing that it cuts out, accounts for a much larger percentage of the run-time. For example, I have just exported about 100meg of data, and 1.2 million records. The direct path export took about one minute. The conventional path export on the other hand took three minutes. There is unfortunately no corresponding 'direct path import'. Import uses conventional SQL to insert the data back into tables. SQLLDR is still the way to go for high performance data loads.

It should be noted that in direct path mode, you cannot use the QUERY= parameter to select a subset of rows. This makes sense given that DIRECT=Y is simply a way to bypass the SQL evaluation buffer and that is where the where clause processing would normally take place.
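The contrast can be sketched with two hedged command lines (scott/tiger and the emp table are placeholders, not from this thread; the commands are built as strings and echoed so the sketch runs anywhere):

```shell
# Direct path: bypasses the SQL evaluation buffer; QUERY= is not allowed.
direct_cmd='exp scott/tiger tables=emp file=emp.dmp direct=y'

# Conventional path: required whenever a QUERY= predicate is needed,
# because the where clause is processed in the SQL evaluation buffer.
conv_cmd='exp scott/tiger tables=emp file=emp_dept10.dmp query="where deptno=10"'

echo "$direct_cmd"
echo "$conv_cmd"
```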


followup to comment two below

Yes, you can move the dmp file -- the file format is the same regardless of the direct= setting.



Comments

Export Direct = y

Bibeka Nanda Patnaik, July 13, 2001 - 12:39 am UTC

Thanks Tom,
It's really very nice the way you explained it here, and it helped me a lot.

Cross Platform export import using Direct=y

Sudip, July 13, 2001 - 9:45 am UTC

Can export/import using direct=y be used
across platforms, i.e. export on NT and import on Unix?

Errors after importing a table from a dmp file created using direct=y option

Yogeeraj, November 01, 2002 - 8:55 am UTC

hi,

I exported my database using the following script:
===========================================================
#!/bin/sh
#
###############################################################
# Script Name: cmtdbexp_full.sh #
# Description: Export all tables in CMTDB Database #
# to DISK #
# By: YD-14/12/2001-created script #
# Date Modified: YD-14/12/2001-Created script
###############################################################
#
LOG_PATH=/BACKUP/export/logfiles
ORACLE_HOME=/d01/app/oracle/product/8.1.7
DMP_PATH1=/BACKUP-SLX1/export
ACC_PASS=system/xxxxxx
export dt=`date +%Y-%m%d`

export LOG_PATH DAT ACC_PASS DMP_PATH1 ORACLE_HOME
#
$ORACLE_HOME/bin/exp $ACC_PASS filesize=1024M file=\($DMP_PATH1/dbexp"$dt"FULLa.dmp,$DMP_PATH1/dbexp"$dt"FULLb.dmp, $DMP_PATH1/dbexp"$dt"FULLc.dmp, $DMP_PATH1/dbexp"$dt"FULLd.dmp,$DMP_PATH1/dbexp"$dt"FULLe.dmp\) buffer=409600 log=$LOG_PATH/dbexp"$dt"FULL.log full=Y grants=Y rows=Y compress=N direct=Y
#
# AUTHOR - YD
===========================================================

and tried to import one table from the exported dmp using the script below:
============================================================
#!/bin/sh
#
dt=2002-1031
DMP_PATH1=/u02/backup-l1000/export
imp system/xxxx file=\($DMP_PATH1/dbexp"$dt"FULLa.dmp,$DMP_PATH1/dbexp"$dt"FULLb.dmp,$DMP_PATH1/dbexp"$dt"FULLc.dmp,$DMP_PATH1/dbexp"$dt"FULLd.dmp,$DMP_PATH1/dbexp"$dt"FULLe.dmp\) constraints=n indexes=n buffer=409600 log=impxlarge32.log grants=n fromuser=swprod touser=swprod ignore=y tables='(FDT850)'
#
# AUTHOR - YD
============================================================

I am getting the following errors:
============================================================
Import: Release 8.1.7.4.0 - Production on Fri Nov 1 14:53:01 2002

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.4.0 - 64bit Production

Export file created by EXPORT:V08.01.07 via direct path
import done in US7ASCII character set and US7ASCII NCHAR character set
IMP-00046: using FILESIZE value from export file of 1073741824
. importing SWPROD's objects into SWPROD
. . importing table "FDT850" 5103464 rows imported
IMP-00051: Direct path exported dump file contains illegal column length
IMP-00008: unrecognized statement in the export file:
M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?
M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?^A
IMP-00008: unrecognized statement in the export file:
^KM-~M-^?^C
IMP-00008: unrecognized statement in the export file:
^_M-~M-^?M-~M-^?M-~M-^?M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?
^F
IMP-00008: unrecognized statement in the export file:
3M-~M-^?^B
IMP-00008: unrecognized statement in the export file:
^K^C
IMP-00008: unrecognized statement in the export file:
^F^C
IMP-00008: unrecognized statement in the export file:
^F^C
IMP-00008: unrecognized statement in the export file:
M-~M-^?M-~M-^?^B
IMP-00008: unrecognized statement in the export file:
^C
IMP-00008: unrecognized statement in the export file:
M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?
M-~M-^?^C
============================================================

Please guide me on how to troubleshoot this.

thank you in advance

Regards
Yogeeraj
PS. Can you please tell us about your forthcoming book and if possible about a tentative date...thanks

Tom Kyte
November 01, 2002 - 3:35 pm UTC

I'll have to refer you to support on that -- if the dmp file was transferred using binary ftp -- that would look like a "product issue".

book, in the spring -- performance/best practices/scalability related.

direct=yes

Sami, August 06, 2005 - 1:38 pm UTC

Dear Tom,

Thanks for all your help in the past.

<asktom>
Export in direct path mode simply bypasses the SQL evaluation buffer (where clause processing, column formatting and such).
</asktom>

Also you mentioned that "the file format is the same regardless of the direct= setting."

Since DIRECT=YES bypasses the column formatting, I assume the dump file format would be slightly different.
Please correct me and explain if I am wrong.

I am wondering why direct=yes is not the default option. Anyway, is there no downside to having direct=yes?




Tom Kyte
August 06, 2005 - 2:24 pm UTC

both are dmp files, the *format* is the same, the contents might not be identical, but the logical "outcome" will be.

defaults are left to historical values generally.

see
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm#1012890
for more details.

I have a different impression for DIRECT=Y in 10.2.0.1

Sushil Vyas, December 14, 2005 - 7:57 am UTC

Tom,

Based on your very first feedback, it gives the impression that export with DIRECT=Y uses the DB buffer cache.

"
Export in direct path mode
simply bypasses the SQL evaluation buffer (where clause processing, column
formatting and such). 90% of the path is the same, EXP is still reading buffers
into the buffer cache, doing the same consistent read processing and so on.

"

I did a test case.

1) Bounced the Instance.
2) Checked in X$BH for the table which was about to export.

select object_name , count(*) from sys.xbh , dba_objects
where obj = object_id and obj > 52000
group by obj, object_name order by 1

My test case is focusing on one table , therefore I have put a filter for objects with id > 52000.

3) Count = 0
4) Fired Export with Direct=Y , Count = 1
5) Fired Export with DIRECT=N , Count = 5550

Now, where am I going wrong, or is it that the behaviour has changed between versions?


Additionally, I noticed that another table of mine, which has table-level compression enabled, was pulled into the buffer cache even with direct=y. I can very much understand this behaviour: since the data is compressed at the block level, direct path also has to extract it and only then write the data to the export file.

Kindly suggest.

Thanks and Regards,
Sushil.




Tom Kyte
December 14, 2005 - 9:07 am UTC

I love cut and pastes - they show what actually took place. 

Here is mine, there be blocks in the buffer cache.  



ops$tkyte@ORA10GR2> create table t as select * from all_objects;
Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> connect / as sysdba
Connected.
sys@ORA10GR2> startup force;
ORACLE instance started.

Total System Global Area  608174080 bytes
Fixed Size                  1220844 bytes
Variable Size             243273492 bytes
Database Buffers          360710144 bytes
Redo Buffers                2969600 bytes
Database mounted.
Database opened.
sys@ORA10GR2> connect /
Connected.
ops$tkyte@ORA10GR2> set echo on
ops$tkyte@ORA10GR2> !exp userid=/ direct=y tables=t

Export: Release 10.2.0.1.0 - Production on Wed Dec 14 09:45:49 2005

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Direct Path ...
. . exporting table                              T      50560 rows exported
Export terminated successfully without warnings.

ops$tkyte@ORA10GR2> select objd, count(*)
  2    from v$bh
  3   where objd = (select data_object_id from user_objects where object_name = 'T' )
  4   group by objd;

      OBJD   COUNT(*)
---------- ----------
     58933        380

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> connect / as sysdba
Connected.
sys@ORA10GR2> startup force;
ORACLE instance started.

Total System Global Area  608174080 bytes
Fixed Size                  1220844 bytes
Variable Size             247467796 bytes
Database Buffers          356515840 bytes
Redo Buffers                2969600 bytes
Database mounted.
Database opened.
sys@ORA10GR2> connect /
Connected.
ops$tkyte@ORA10GR2> set echo on
ops$tkyte@ORA10GR2> !exp userid=/ direct=n tables=t

Export: Release 10.2.0.1.0 - Production on Wed Dec 14 09:46:21 2005

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                              T      50560 rows exported
Export terminated successfully without warnings.

ops$tkyte@ORA10GR2> select objd, count(*)
  2    from v$bh
  3   where objd = (select data_object_id from user_objects where object_name = 'T' )
  4   group by objd;

      OBJD   COUNT(*)
---------- ----------
     58933        698
 

My Test results - Different one

Sushil Vyas, December 15, 2005 - 3:47 am UTC

Tom,

Here is my spool, and as you can see, it speaks differently than your test results.

The only thing different in my query is that I am referring to sys.x$bh as sys.xbh; this is just to keep confusion away.

Regards
Sushil


---------------   Start   ------------------------
SQL> startup force
ORACLE instance started.

Total System Global Area  289406976 bytes
Fixed Size                  1248576 bytes
Variable Size             100664000 bytes
Database Buffers          180355072 bytes
Redo Buffers                7139328 bytes
Database mounted.
Database opened.

SQL> @tds1

no rows selected

SQL> l
  1  select  object_name , count(*)  from sys.xbh , dba_objects
  2  where obj = object_id and obj = 52544
  3* group by  obj, object_name order by 1
SQL> host
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

H:\>exp userid=system/oracle direct=y  tables=SCOTT.LARGE_ASSM

Export: Release 10.2.0.1.0 - Production on Thu Dec 15 13:42:59 2005

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Direct Path ...
Current user changed to SCOTT
. . exporting table                     LARGE_ASSM    2012632 rows exported
Export terminated successfully without warnings.

H:\>

H:\>exit

SQL> @tds1

OBJECT_NAME                                COUNT(*)
---------------------------------------- ----------
LARGE_ASSM                                        1


SQL> host
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

H:\>exp userid=system/oracle direct=n  tables=SCOTT.LARGE_ASSM

Export: Release 10.2.0.1.0 - Production on Thu Dec 15 13:47:23 2005

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
Current user changed to SCOTT

. . exporting table                     LARGE_ASSM    2012632 rows exported
Export terminated successfully without warnings.

H:\>

H:\>exit

SQL> @tds1

OBJECT_NAME                                COUNT(*)
---------------------------------------- ----------
LARGE_ASSM                                    14257


 

Tom Kyte
December 15, 2005 - 10:33 am UTC

why does that keep confusion away?


it uses the buffer cache, just like everything else.


run my example please. mine being "complete" soup to nuts. Yours being "not complete", I cannot run yours.





With statistics , it does not use the Buffer Cache.

Sushil Vyas, December 15, 2005 - 11:35 am UTC

Tom,

I ran your example and it behaved exactly as you found until I executed dbms_stats.gather_schema_stats.

If you can throw some technical light on why it does not fill the buffer cache if the stats are updated, it would be of help.

Thanks and Regards,
Sushil 


------------------------   Start    ------------------
H:\>exp userid=system/oracle  direct=y tables=scott.t

Export: Release 10.2.0.1.0 - Production on Thu Dec 15 21:54:21 2005

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Direct Path ...
Current user changed to SCOTT
. . exporting table                              T      49713 rows exported
Export terminated successfully without warnings.

H:\>exit

SQL> @tds3

      OBJD   COUNT(*)
---------- ----------
     52588        265

SQL> exec dbms_stats.gather_schema_stats('SCOTT',cascade => true) ;

PL/SQL procedure successfully completed.

SQL> startup force
ORACLE instance started.

Total System Global Area  289406976 bytes
Fixed Size                  1248576 bytes
Variable Size             100664000 bytes
Database Buffers          180355072 bytes
Redo Buffers                7139328 bytes
Database mounted.
Database opened.
SQL> @tds3

no rows selected

SQL> host
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

H:\>exp userid=system/oracle  direct=y tables=scott.t

Export: Release 10.2.0.1.0 - Production on Thu Dec 15 21:56:55 2005

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Direct Path ...
Current user changed to SCOTT
. . exporting table                              T      49713 rows exported
Export terminated successfully without warnings.

H:\>exit

SQL> l
  1  select objd, count(*)
  2        from v$bh
  3       where objd = (select data_object_id from dba_objects where object_name
 =
  4  'T' )
  5   group by objd
SQL> /

      OBJD   COUNT(*)
---------- ----------
     52588          1


 

Tom Kyte
December 15, 2005 - 12:05 pm UTC

neat, it dropped to a direct path read, much like parallel query - which can bypass the buffer cache:

WAIT #0: nam='enq: KO - fast object checkpoint' ela= 72 name|mode=1263468550 2=65551 0=1 obj#=-1 tim=1108075404820109
WAIT #0: nam='direct path read' ela= 14 file number=4 first dba=988 block cnt=5 obj#=51973 tim=1108075404820994
....


It flushed the buffer cache of blocks for this segment, and then just did the equivalent of a "parallel query" read on it - direct pathing the operation.


guess it should say "a direct path export does not necessarily read directly from the datafiles..." - it may, it may not. If it does not, it will checkpoint the affected objects to disk before doing so.

Thanks.

Use DIRECT=N

Sven Bleckwedel, December 15, 2005 - 12:58 pm UTC

Hi,

Regarding to this issue:

<quote>
...
export LOG_PATH DAT ACC_PASS DMP_PATH1 ORACLE_HOME
#
$ORACLE_HOME/bin/exp $ACC_PASS filesize=1024M
file=\($DMP_PATH1/dbexp"$dt"FULLa.dmp,$DMP_PATH1/dbexp"$dt"FULLb.dmp,
$DMP_PATH1/dbexp"$dt"FULLc.dmp,
$DMP_PATH1/dbexp"$dt"FULLd.dmp,$DMP_PATH1/dbexp"$dt"FULLe.dmp\) buffer=409600
log=$LOG_PATH/dbexp"$dt"FULL.log full=Y grants=Y rows=Y compress=N direct=Y
#
...
Export file created by EXPORT:V08.01.07 via direct path
import done in US7ASCII character set and US7ASCII NCHAR character set
IMP-00046: using FILESIZE value from export file of 1073741824
. importing SWPROD's objects into SWPROD
. . importing table "FDT850" 5103464 rows imported
IMP-00051: Direct path exported dump file contains illegal column length
IMP-00008: unrecognized statement in the export file:
M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?
M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?M-~M-^?^A
IMP-00008: unrecognized statement in the export file:
...
<quote>

Please use direct=n in the script parameters in this specific case and do a new dump export. Then use binary mode when transferring dump files through ftp. I experienced this problem in the recent past when exporting tables from a development instance: some tables had many columns marked as unused that were never dropped...

HTH,
Sven


Few links are still missing

Sushil Vyas, December 16, 2005 - 6:43 am UTC

Tom,

This is with reference to your last response.

1) Exp could do direct path operations only when the statistics were updated. What is the link between statistics and direct path reads?

2) In your statement below, I guess by "buffer cache of blocks" you are referring to dirty blocks for that object. If my interpretation is incorrect then I am sorry, but I could not understand the flushing part in your statement below.

<quote >

It flushed the buffer cache of blocks for this segment, and then just did the
equivalent of a "parallel query" read on it - direct pathing the operation.

</quote>

Thanks and Regards
Sushil.

Tom Kyte
December 16, 2005 - 8:48 am UTC

1) just like the optimizer will never ever use bitmap indexes without fully using the cbo.... must be something related to the optimizer path here.

2) I'm referring to any blocks in the buffer cache that might not yet have their images on disk, yes - the dirty blocks.


it checkpointed this segment.

My query 1 is cleared, so please ignore.

Sushil Vyas, December 16, 2005 - 8:27 am UTC

Tom,

I could figure out that the Oracle optimizer fires the query with a NOPARALLEL clause if the statistics are not present.

So this clears my doubt with respect to earlier point no 1).

If you can clarify query # 2 , it would be good.

Thanks and Regards
Sushil.


Tom Kyte
December 16, 2005 - 3:15 pm UTC

I have to revise this entirely - the statistics were a red herring after I looked at this in more detail.

The blocks you are seeing in v$bh? They came from dynamic sampling, when the table has no statistics in 10g - dynamic sampling will kick in.

The direct path export has the ability to do the checkpoint and direct path reads period. It does not have to use the buffer cache if it uses the direct path reads (other than to checkpoint that is). The blocks we saw in the cache during the direct=y - they got there from the *sampling*, not from export itself. They were a side effect of export parsing "select * from t".

The trace file shows this (if you set up a logon trigger to trace, to capture the export session). I did further testing in other releases where the dynamic sample did not kick in. There, analyzed or not, it direct pathed the data out.

How do I validate dynamic sampling by myself?

Sushil Vyas, December 19, 2005 - 2:47 am UTC

Tom,

I appreciate your support.

But with respect to dynamic sampling, I am not able to understand it correctly.
Following are my observations.

1) dynamic_sampling set to 0.
With stats : 1 block , status = xcur.
Without stats : 2 blocks , status = cr and xcur.

2) dynamic_sampling set to 2.
With stats : 1 block , status = xcur.
Without stats : 2 blocks , status = cr and xcur.

I hardly see any difference; kindly help me in understanding this better.

Thanks and Regards
Sushil.




Tom Kyte
December 19, 2005 - 7:32 am UTC

without statistics in 10g - we dynamically sampled - MY numbers are above (as are yours, you reproduced my findings).  We READ the table in order to PARSE the query.



remember this bit:

About to export specified tables via Direct Path ...
. . exporting table                              T      50560 rows exported
Export terminated successfully without warnings.

ops$tkyte@ORA10GR2> select objd, count(*)
  2    from v$bh
  3   where objd = (select data_object_id from user_objects where object_name = 
'T' )
  4   group by objd;

      OBJD   COUNT(*)
---------- ----------
     58933        380


That was unanalyzed, direct path. 

a question on "how to calculate Buffer Size"

A reader, October 02, 2006 - 12:13 pm UTC

Hi tom,
I am reading oracle documentation for Export.
==============================================
This section shows an example of how to calculate buffer size.
The following table is created:
CREATE TABLE sample (name varchar(30), weight number);

The maximum size of the name column is 30, plus 2 bytes for the indicator. The maximum size of the weight column is 22 (the size of the internal representation for Oracle numbers), plus 2 bytes for the indicator.
Therefore, the maximum row size is 56 (30+2+22+2).
To perform array operations for 100 rows, a buffer size of 5600 should be specified.
=================================================

Can you please tell me what does the "2 bytes for the indicator" means while calculating the size of any table.

Thanks

Tom Kyte
October 02, 2006 - 12:35 pm UTC

ok, just read it as "always add 2, we need 2"

indicators are null flags in general. Just add "2"
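The arithmetic from the documentation excerpt can be checked in a few lines of shell (the column sizes are the ones quoted in the excerpt above):

```shell
# Per the excerpt: varchar(30) costs 30 bytes + 2 for the null indicator,
# a NUMBER costs 22 bytes internally + 2 for the indicator.
name_bytes=$(( 30 + 2 ))
weight_bytes=$(( 22 + 2 ))
row_size=$(( name_bytes + weight_bytes ))   # 56, matching the doc
buffer=$(( row_size * 100 ))                # array operations for 100 rows
echo "$row_size $buffer"                    # prints: 56 5600
```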

to estimate the export dump file size

chitta, August 22, 2007 - 11:39 am UTC

Hi Tom,

I have a 200GB database on a Unix box. I want to take a FULL database export dump, but I don't have another 200GB of space on my server.
Could you please clarify how to calculate the export dump file size?

Thanks in Advance,
Chitta
Tom Kyte
August 22, 2007 - 2:23 pm UTC

expdp has the ability to estimate

exp does not, you would have to guess, it would be based on the raw unloaded size of your tables (not indexes)

it would be smaller than 200gb in most cases


and i doubt you want to do this, a full export is typically not useful for much, it is just a logical copy tool - it is not like it is a backup or anything

RE:

chitta, August 27, 2007 - 10:19 am UTC

Thanks Tom,
but in my case I want to refresh my test server with production. Both are on different platforms, so I guess cloning is not possible. For that reason I have to go with a full database export on prod and import it into the test server.


Thanks&Regards,
Chitta


airother, April 10, 2008 - 4:13 am UTC

I want to know what bypassing the SQL evaluation buffer
actually means?
Tom Kyte
April 10, 2008 - 11:09 am UTC

export stores information in the dmp file in binary format - dates are in their 7 byte internal format, not in strings; numbers are in their 0-22 byte internal format, not in strings.

So, normally, when you get something out of the database, we make it nice and readable - the sql evaluation buffer does that.

Since we just want the binary data anyway - we bypass that nice formatting, we don't need it.
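To make the "7 byte format" concrete, here is a sketch of Oracle's internal DATE encoding. The excess-100/excess-1 byte layout shown is the commonly documented internal format, an assumption not stated in the answer above:

```shell
# Encode 2005-12-14 09:45:49 into Oracle's 7-byte internal DATE layout:
# century+100, year-of-century+100, month, day, hour+1, minute+1, second+1
Y=2005 M=12 D=14 hh=9 mm=45 ss=49
b1=$(( Y / 100 + 100 ))    # century byte: 120
b2=$(( Y % 100 + 100 ))    # year byte:    105
b3=$M; b4=$D               # month, day stored as-is
b5=$(( hh + 1 )); b6=$(( mm + 1 )); b7=$(( ss + 1 ))
echo "$b1 $b2 $b3 $b4 $b5 $b6 $b7"   # prints: 120 105 12 14 10 46 50
```

Direct path export copies bytes like these straight into the dmp file instead of first formatting them into readable strings.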

Can we use direct=y for daily full export backups?

Zaffer Khan, July 15, 2008 - 4:50 am UTC

Hi Tom,

My client has a 9i Database which has export backup scheduled daily. Due to serious space constraints, RMAN has not been implemented, yet.

A Full backup takes around 3 hrs to complete. My question is, can we use direct=y for daily export backups, so as to speed up the backup performance?

Thanks in advance,
Zaffer Khan
Tom Kyte
July 15, 2008 - 9:46 am UTC

well, so basically, they have never done a backup. an export cannot really be called a backup, it is a logical copy of data that might be able to recreate your data - or not.

make sure you use consistent=y
make sure you do NOT use sys or sysdba to export with (it cannot do consistent)

yes, you can use direct, and it may or may not speed up anything.

disk is rather cheap, you can get a terabyte for a couple of hundred dollars, consider it. a true backup is the only backup.
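A hedged sketch of an export command with the settings recommended above (appowner/password and the file names are placeholders; the command is only echoed, not executed):

```shell
# consistent=y gives one point-in-time view across all tables;
# run it as an ordinary application owner, not sys/sysdba.
cmd='exp appowner/password full=y consistent=y direct=y file=full.dmp log=full.log'
echo "$cmd"
```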

views

A reader, July 28, 2008 - 4:52 pm UTC

Should views be exported and imported when using direct path export, any considerations there
Tom Kyte
July 29, 2008 - 3:42 pm UTC

using "direct" only affects the manner in which table data would be read - everything else happens the same way it would otherwise happen, they are not affected.

Export direct =y

Nishant Santhan, July 30, 2008 - 11:32 am UTC

Dear tom,

Why I am missing some objects after full exp and imp operation ?

I have exported my 8.1.5 database using following command

exp file=/dump_file_path direct=y statistics=none full=y

and I imported this dump in my 10.2.0.1 database using

imp file=/dump_file_path log=log_file_path full=y ignore=y

I am missing 4 indexes, 1 trigger and 1 (chained_rows) table in one schema.

I used one application user to export this dump, and that is the same schema I am having the problem with.

Is it because I used a direct path export?
Is there any other way to avoid this situation?

Thanks in advance
Nishant Santhan





Tom Kyte
August 02, 2008 - 5:37 pm UTC

when you reviewed your export and import logs, what did you see?

The indexes might have gone missing for two major reasons:

a) they were not deemed necessary, some indexes are created implicitly by constraints IF the constraint needs them. If the constraint creation the second time deems "there is already an index - we don't need it" it won't create it.
b) there was an error, it would have been in the log


and (b) would be the only things I can think of for triggers and a missing row.

direct PAth

MOHD KHUBEB, August 01, 2008 - 6:23 am UTC

Well Tom sir... it's really interesting.

Export direct =y

Nishant Santhan, August 03, 2008 - 11:08 am UTC

Thanks Mr.Tom

Nishant

speed up exp

Shankar, August 10, 2008 - 1:43 am UTC

From reading this thread, direct=y may not speed up the export. What other option in export would help to speed up exporting the whole database (300GB in size) when we have all of the tables in ONE schema? Our database version is 9.2.0.7.0 on a 64 bit AIX platform.

Thank You
Tom Kyte
August 12, 2008 - 4:40 am UTC

I would never even consider using export on a 300gb database.

you'll want to consider doing table level exports, in parallel (you have to parallelize them yourself here, by running export many times simultaneously)
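A minimal sketch of that do-it-yourself parallelism (the credentials and table names are placeholders; the commands are built and echoed rather than executed, so the sketch runs without a database):

```shell
nl='
'
# One exp session per table, each started in the background with &,
# followed by a single wait for all of them to finish.
cmds=""
for t in EMP DEPT BONUS; do
  cmds="${cmds}exp scott/tiger tables=${t} file=exp_${t}.dmp direct=y log=exp_${t}.log &${nl}"
done
cmds="${cmds}wait"
printf '%s\n' "$cmds"
```

In a real script you would run the commands instead of echoing them, and keep the number of simultaneous sessions in line with your I/O capacity.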

exp

Shankar, August 13, 2008 - 10:05 pm UTC

Thanks for your response.
Our plan is to upgrade to a 10.2.0.4 RAC database from our 9.2.0.7 RAC database and also to change the storage to ASM from the GPFS we currently use as the shared file system. We thought exporting the 9.2.0.7 database and importing into a 10.2.0.4 instance would be a straightforward solution after creating ASM disk groups.
Considering the above scenario, I have a follow-up question to your response above. Our tables have referential integrity constraints. If we export table by table in parallel using multiple exp sessions, how should the import into the 10.2.0.4 database be done? Table-by-table import may not work, as creation of constraints might fail. Should we import all of the tables that are related by referential constraints together in one imp session?
Thank You.
Tom Kyte
August 18, 2008 - 9:53 am UTC

I would suggest you upgrade from 9 to 10.

then create tablespaces that are ASM

then alter table t move/alter index i rebuild - into ASM from gpfs


direct export and IMP-00051 error.

khalid Al-mansour, November 08, 2009 - 12:29 am UTC

Dear all, for those who export their data using direct=y and get Oracle error IMP-00051 while importing the file: please export the data again with direct=y and RECORDLENGTH=64000 and try the import again. If it still fails, then a conventional export is the only way. I read a Metalink note and it suggests that. Regards, Khalid

where clause

srikanth, February 24, 2010 - 10:53 pm UTC

Please guide me on whether we can use a where clause when exporting multiple tables. We have one field common to all our tables;
with the help of that, can we use a where clause?

i m using the export as

exp uname@db file=dump_file_path Log=Log_File_Path tables=t1,t2 query=\please tell query syntax\ owner=uname statistics=none buffer=200000 CONSISTENT=Y
Tom Kyte
March 01, 2010 - 9:24 am UTC

the where clause would apply to all tables - if not, use export more than once (actually, use data pump - then you can specify a where clause for each table)

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref154

SQL loder Load data very slow

Harshit, May 08, 2012 - 1:22 am UTC

Hi TOM,

This is Harshit (DBA). I have been facing a problem for a long time: SQL*Loader takes a long time to load data.
It takes 2 minutes for 1 lakh records. After I rebooted the system it took only 7 seconds for 1 lakh records, but after 2 or 3 days it again takes 2 or 3 minutes to load the data into Oracle.

I share some my analysis and statistics--->

OS= Linux
Database = Oracle 10g R2
######### shared pool and Buffer cache size in MB ##########

SQL> select NAME,BYTES/1024/1024 from V$sgastat where rownum<3;

NAME                 BYTES/1024/1024
-------------------- ---------------
fixed_sga                 1.93634796
buffer_cache                    4880
SQL> select POOL,sum(BYTES/1024/1024) from V$sgastat group by POOL;

POOL         SUM(BYTES/1024/1024)
------------ --------------------
                       4911.99885
java pool                      16
shared pool            1116.44039
large pool                     16
######### No of processes and dbwr ##########

[oracle@webreporting2 ~]$ cat /proc/cpuinfo |grep pro
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7
[oracle@webreporting2 ~]$ ps -ef|grep dbw
oracle 12292 1 0 Apr27 ? 00:16:11 ora_dbw0_orcl
oracle 12294 1 0 Apr27 ? 00:16:07 ora_dbw1_orcl

######### Disk IO ##########

[root@webreporting2 ]# sar -u -o datafile 1 6
Linux 2.6.18-194.el5 (mca-webreporting2) 04/27/2012
04:54:17 PM CPU %user %nice %system %iowait %steal %idle
04:54:18 PM all 7.62 0.00 1.62 16.10 0.00 74.66
04:54:19 PM all 7.74 0.00 1.50 11.99 0.00 78.78
04:54:20 PM all 7.75 0.00 1.50 12.25 0.00 78.50
04:54:21 PM all 8.36 0.00 1.50 12.48 0.00 77.65
04:54:22 PM all 7.88 0.00 1.62 12.25 0.00 78.25
04:54:23 PM all 7.35 0.00 1.49 12.33 0.00 78.83
Average: all 7.78 0.00 1.54 12.90 0.00 77.78
######### Paging ##########

[root@webreporting2 ]# sadf -d /var/log/sa/sa27 -- -B
WS-DEL-DND1;600;2012-04-27 12:50:01 UTC;565.28;1949.63;7931.46;0.00
WS-DEL-DND1;600;2012-04-27 13:00:01 UTC;531.42;2000.87;7969.84;0.00
WS-DEL-DND1;600;2012-04-27 13:10:01 UTC;583.04;2024.20;8746.53;0.36
WS-DEL-DND1;600;2012-04-27 13:20:01 UTC;573.77;1673.77;9934.39;0.03
WS-DEL-DND1;600;2012-04-27 13:30:01 UTC;790.76;659.12;9019.46;0.30
WS-DEL-DND1;601;2012-04-27 13:40:01 UTC;453.54;475.25;8055.46;0.27
WS-DEL-DND1;599;2012-04-27 13:50:01 UTC;503.18;847.91;8785.42;0.03
WS-DEL-DND1;600;2012-04-27 14:00:01 UTC;656.59;2138.40;8340.97;0.07
Please help me understand why this is happening.
Regards
Tom Kyte
May 08, 2012 - 11:16 am UTC

what is a lakh?

how are you loading? Sounds a lot like a conventional path load. That writes into the buffer cache. If the cache is empty - you are loading into memory and not waiting for anything. If the cache is full (as it would be after a while of loading) - you have to wait for dbwr to flush things to disk - checkpoint basically.
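One way to check that theory (a sketch - it assumes you can query the V$ views) is to watch the buffer- and checkpoint-related wait events while a slow load is running:

```sql
-- If conventional path loads slow down once the cache fills up,
-- these waits tend to climb ('free buffer waits' = sessions
-- waiting on DBWR to flush dirty buffers to disk).
select event, total_waits, time_waited
from   v$system_event
where  event in ('free buffer waits',
                 'log file switch (checkpoint incomplete)');
```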

is direct path loading something you can do? you don't really tell us too much about what you are loading, how long you are loading, how much you are loading and so on.
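As a sketch of what that would look like (the control file, file names, and table are hypothetical), only the DIRECT=TRUE option changes on the command line - a direct path load formats blocks and writes them above the high-water mark instead of going through the buffer cache:

```shell
# load.ctl -- minimal control file (hypothetical table/columns):
#   LOAD DATA INFILE 'data.dat'
#   INTO TABLE t APPEND
#   FIELDS TERMINATED BY ',' (id, name)

# conventional path (inserts through the buffer cache):
sqlldr uname@db control=load.ctl log=load.log

# direct path (bypasses the buffer cache, so no DBWR waits):
sqlldr uname@db control=load.ctl log=load.log direct=true
```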

exp direct=y on 11gR2

A reader, May 20, 2012 - 7:38 am UTC

While this is a very old topic, it is 2012 now, and I am stuck with an export made with direct=y; apparently there is no way to import it with imp. I am just flabbergasted how a company with such broken tools has become a market leader.
Tom Kyte
May 21, 2012 - 8:27 am UTC

export is not supported - export is deprecated in 11gR2

I'm surprised a technical person would not be up to date? Would not be using the correct and current and supported tools?

and direct path works, perhaps you did something wrong? you don't give us any details, NONE WHATSOEVER, to work with here... Nothing.


[tkyte@localhost ~]$ exp userid=/ 'owner=ops$tkyte' direct=y 

Export: Release 11.2.0.3.0 - Production on Mon May 21 09:26:21 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$TKYTE 
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$TKYTE 
About to export OPS$TKYTE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$TKYTE's tables via Direct Path ...
. . exporting table                              T         50 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
[tkyte@localhost ~]$ plus

SQL*Plus: Release 11.2.0.3.0 Production on Mon May 21 09:26:30 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ops$tkyte%ORA11GR2> drop table t;

Table dropped.

ops$tkyte%ORA11GR2> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[tkyte@localhost ~]$ imp userid=/ full=y

Import: Release 11.2.0.3.0 - Production on Mon May 21 09:26:36 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via direct path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"         50 rows imported
Import terminated successfully without warnings.
[tkyte@localhost ~]$
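For reference, the Data Pump equivalent of the exp/imp session above would look something like this (a sketch - DP_DIR is an assumed directory object that must already exist, created with something like CREATE DIRECTORY dp_dir AS '/tmp'):

```shell
# export one schema with Data Pump (direct vs. conventional path
# is chosen automatically; there is no DIRECT= parameter):
expdp userid=/ schemas=ops\$tkyte directory=DP_DIR dumpfile=t.dmp

# import everything in the dump file back in:
impdp userid=/ full=y directory=DP_DIR dumpfile=t.dmp
```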