
Question and Answer

Tom Kyte

Thanks for the question, KISHORE.

Asked: May 09, 2011 - 3:42 pm UTC

Last updated: June 17, 2011 - 11:37 am UTC

Version: 10.2.0.4.0

Viewed 10K+ times

You Asked

Hi Tom,
I have been following your blog for three months. This is the first time I have had a chance to ask you a question.

What are the steps to speed up the export and import process?

What is the difference between importing into a database in noarchivelog mode and importing after setting _disable_logging=true?

I need to import around 500 GB of data into a database which is in archivelog mode. Can you suggest the best and fastest method to perform the import? I have come across several blogs that say setting the hidden parameter _disable_logging=true before importing is very dangerous.

Can you list all the steps for performing the import with the following two methods:

1. By taking the database into noarchivelog mode.

2. By setting the _disable_logging=true parameter.


It would be a great help if you could clear up these doubts with examples. Sorry to ask so much in my very first post.

Waiting for your reply.

and Tom said...

The first step is:

do not use export/import - they are legacy tools, and the original export utility is deprecated as of 11g. Data Pump itself is faster.

Step two:

do not use _disable_logging. It will corrupt your database, and at some point you will have to restore from a full backup and recover. Just do not even consider going there. The ONLY thing it will save you is some writing to the redo log, which in the grand scheme of things will be a trivial component of your import.


Logging will not be your problem. It will be

a) the fact that you are using import - it should be Data Pump.

b) the fact that if you use import, it does things serially (no parallel operations), and if you have LONGs/LOBs it does things row by row, which will be very slow.


The only real way to speed up an import is to implement "do it yourself" parallelism. You would export into MANY files (you would run export MANY times against the source data - perhaps at the schema level, creating a separate file per schema) so that you can import in parallel - yourself - and maximize the use of the machine.
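As a rough sketch of that do-it-yourself approach - the schema names (HR, OE, SH), file paths and accounts here are invented for illustration, not taken from this thread - the exports and imports can simply be launched concurrently from the shell:

# export each schema to its own file, all running at the same time
exp userid=system/****** owner=HR file=/backup/hr.dmp log=/backup/hr_exp.log &
exp userid=system/****** owner=OE file=/backup/oe.dmp log=/backup/oe_exp.log &
exp userid=system/****** owner=SH file=/backup/sh.dmp log=/backup/sh_exp.log &
wait

# later, one import per file, again all running concurrently
imp userid=system/****** fromuser=HR touser=HR file=/backup/hr.dmp log=/backup/hr_imp.log &
imp userid=system/****** fromuser=OE touser=OE file=/backup/oe.dmp log=/backup/oe_imp.log &
imp userid=system/****** fromuser=SH touser=SH file=/backup/sh.dmp log=/backup/sh_imp.log &
wait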


Or, just go to data pump which can parallelize the operations for you out of the box.
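For comparison, a minimal Data Pump sketch of the same idea (the schema list, directory object and degree of parallelism below are placeholders):

expdp system/****** schemas=HR,OE,SH directory=DATA_PUMP_DIR parallel=4 dumpfile=exp_%U.dmp logfile=exp.log
impdp system/****** schemas=HR,OE,SH directory=DATA_PUMP_DIR parallel=4 dumpfile=exp_%U.dmp logfile=imp.log

The %U in the dump file name lets each parallel worker write to - and later read from - its own piece of the dump file set.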


Comments

Kishore, May 10, 2011 - 11:38 am UTC

Thanks for the response
But Tom, my team is so dumb that they are still using a script from some guy who left the company, and he used export instead of Data Pump.

So I have no option but to use import, because they have already done the export, and they want me to use the hidden _disable_logging parameter.

How fast will it be if I keep the database in noarchivelog mode and do the import, compared with using the _disable_logging parameter?

And what is the exact difference between keeping the database in noarchivelog mode and using the _disable_logging parameter? I would like a detailed comparison of the two.

Thanks in advance.
Tom Kyte
May 10, 2011 - 12:00 pm UTC

_disable_logging will not make you noticeably faster - we do ALL OF THE WORK except for an IO - and most of the logging IO is done in the BACKGROUND (the import isn't waiting on it). It won't be a silver, "automagically life is good" bullet.


You have options, you and your team are purposely choosing to not explore them.


_disable_logging will corrupt your entire database if the instance fails for any reason whatsoever. It is beyond UNSAFE. Please stop talking about it, I will not discuss it any further.

That is as detailed a comparison as you'll get from me.

Datapump is the way to go!

AskTom Rocks, May 10, 2011 - 1:53 pm UTC

Kishore,
I agree with Tom... using Data Pump is the preferred way to refresh schemas from release 10g onwards, and don't even think about tinkering with system parameters. "Your team" or you can easily do a Data Pump export of the original data and use those files to do a Data Pump import. Unlike the deprecated export/import, Data Pump can export 500 GB of data in an hour or so... use the PARALLEL parameter in the expdp par file for better performance... check the help by typing "expdp help=y" or go through the documentation.
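A sketch of the kind of par file being described here - the schema, directory object and file names are invented for the example - say export.par:

schemas=SCOTT
directory=DATA_PUMP_DIR
parallel=4
dumpfile=scott_%U.dmp
logfile=scott_exp.log

which would be run as expdp system/****** parfile=export.par.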

Datapump is the way to go!

Oleksandr Alesinskyy, May 12, 2011 - 4:07 am UTC

100% true - unless the original poster has no chance to export his 500 GB with Data Pump. The question was "I need to import around 500 GB of data". That may mean the data already exists as an export file. It is not clear whether it can be exported once more (this time with Data Pump).

Is it the case?

export and import

suriya, May 12, 2011 - 4:36 am UTC

Hi Tom,

Please let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp).
Tom Kyte
May 12, 2011 - 7:52 am UTC

see the other place you asked this same exact thing.

A reader, May 12, 2011 - 9:12 am UTC

Thanks Tom....

Since my team insisted on performing the export with _disable_logging set to true, I thought of getting some safe steps to perform the import without running into the problem of the database crashing.

But I don't know why you are not at all willing to talk about it. Did I ask something I'm not supposed to ask, or am I missing something? I really don't know the reason for your anger.


Anyway, thanks a lot again - this blog is really awesome and I will keep following it. Good job, and I hope you continue to do this service.
Tom Kyte
May 12, 2011 - 9:31 am UTC

hah, they turned off writing to the redo logs for an export? Well, that sure was a waste of a database shutdown/startup. Exports don't really tend to generate redo.

_disable_logging is something you should not use, period. That is all I'm willing to talk about. I'm not angry, I'm just steadfast. I said repeatedly "do not use it, stop thinking about it, it won't do much if anything - other than corrupt your entire database perhaps".


Kishore, May 12, 2011 - 9:28 am UTC

@ Oleksandr Alesinskyy 

Exactly, Oleksandr... what you said is my present case.

I don't know how long it will take or whether it will be successful. I will keep you posted about the import results and the steps I followed to do the import with the _disable_logging parameter enabled.

Sorry, I forgot to include my name in the reply above. It's me who replied as "A reader".
Tom Kyte
May 12, 2011 - 9:36 am UTC

_disable_logging ain't going to do squat for you. All it does is disable the WRITE (not the generation, all of the code to generate redo is going to happen, just the WRITE by lgwr, a background process, is going to be skipped). And that write takes place IN THE BACKGROUND, without you waiting for it.

This has been a tremendous waste of time apparently.


As your import runs, you will be inserting rows into a table.

That will generate redo in the redo log buffer (even with this magic thing _disable_logging).

When lgwr notices the redo log buffer is filling - it will gather up the redo and bundle it all together and - just skip the write (it does everything else).


If you enabled logging, the only difference would be - lgwr would write it out. But - here is the kicker - you don't wait for this - lgwr does this in the background, you are NOT waiting for lgwr at this point, it is just streaming the data out of the redo log buffer to disk.

And if you are in noarchive log mode, which is a relatively safe mode for this import to be in, things like CREATE INDEX - they won't generate redo anyway.

So, all of this "magic" for naught. I hope your instance doesn't shutdown in an unexpected fashion. If it has to perform crash recovery - you are *toast*, you have lost the entire database.


???

A reader, May 12, 2011 - 9:49 am UTC

Kishore,
You mentioned your team as "dumb" in a post above... not sure why you are insisting on not following Tom's advice when he clearly says "do not use _disable_logging, it will corrupt your database"... what part of that statement don't you understand? You seem to follow your team's advice blindly rather than the advice of a senior technical architect at Oracle Corporation.

You can use the old export/import feature in release 10.2.0.4, and assuming you are importing data into a dev/qa environment it should be OK to put those databases in noarchivelog mode during the import.

Suriya, here is a link that explains the advantages of Data pump.
http://www.datadisk.co.uk/html_docs/oracle/data_pump.htm

Kishore, May 12, 2011 - 11:30 am UTC

I know that it is dangerous to change hidden parameters and that it can corrupt the database... but my team is not listening. So I lost my temper, and that's why I used the word "dumb".

My team says the last guy did it this way, but nobody can guarantee that the steps he used are going to work, and still my team is ready to take the risk.

So I thought of asking a senior Oracle architect like Tom whether he knows the exact order of steps to perform the task.

That's all I expected from Tom or anyone on this blog.

I may have been wrong to call my team dumb, for which I apologize, but I was so helpless that I lost my temper on this topic.

I would still be thankful to any senior architect like Tom who knows the exact steps to perform this task with that parameter.

I just expected there would be someone who has done the import successfully with that parameter without running into any database corruption.

Tom Kyte
May 12, 2011 - 3:11 pm UTC

I don't know what steps you could possibly be looking for?

You shutdown, change the parameter, start up and PRAY your database doesn't shutdown abnormally. If it does, database is DEAD. If you are lucky and that doesn't happen, you just import - and it will take *just as long* as if you didn't set this parameter. Then you shutdown very carefully - making sure to do a NORMAL shutdown, unset this parameter and start up again.

There are no "steps". Short of setting this parameter you are dead set on setting.


for Kishore

A reader, May 12, 2011 - 1:30 pm UTC

Either find a new team or a new job.
This forum is for purely technical problems, not political ones.
Sir Tom Kyte already gave you the correct way to do things.
Just follow it and set a good example for your team; then they may listen.
Change is the law of nature.

@Kishore

Oleksandr Alesinskyy, May 13, 2011 - 7:15 am UTC

You have used the word "dumb" in the most appropriate way.

Now, two questions:
1. Is there a possibility to re-export the data (using Data Pump)?
2. What is your position in the team (just a rank-and-file developer? DB specialist? Team lead?)

If the answer to 1 is yes, just go that way.
Else
if the answer to 2 is "team lead" or "DB specialist", then disregard what the team is saying and follow Tom's advice.
else relax.

Datapump import error

A reader, June 09, 2011 - 7:18 am UTC

Hi,

I am trying to use Data Pump for a data migration.
I am able to successfully export the data and create the dump files using the command below:
expdp system/manager@test schemas=scott directory=EXPORT_DIR parallel=4 dumpfile=export_file_%U.dmp logfile=export2.log

Now I am trying to import these dump files into another schema:
impdp system/manager@test1 schemas=A directory=IMPORT_DIR job_name=hr dumpfile parallel=4
But I am getting the following error:
ORA-39002: invalid operation
ORA-39165: Schema A was not found.

Don't we have something like remap_schema, as in transportable tablespaces, here? What am I doing wrong?

Tom Kyte
June 09, 2011 - 10:18 am UTC

well, yes there is

http://docs.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_import.htm#SUTIL927

what you are doing wrong is not remapping the schema. You are asking to import schema A from some arbitrary dump file - and there is NO schema A to import. It doesn't say to itself "hey, there is a scott schema in here, I guess they might want scott to become a" - you have to tell it that.
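A sketch of what the remapped import could look like, reusing the names from the question above (and assuming the dump file set created by that expdp command really is in IMPORT_DIR):

impdp system/manager@test1 directory=IMPORT_DIR dumpfile=export_file_%U.dmp parallel=4 remap_schema=scott:a logfile=import.log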

network_link

A reader, June 13, 2011 - 9:30 am UTC

Hello,

We have two servers sitting in two different data centers. Moving the database using IMPDP with NETWORK_LINK took almost 8 hours for a 25 GB database, whereas the same database with EXPDP to a file, SCP of the dump file, and IMPDP took less than an hour. That is an 8x difference between the two methods.

In both cases, the indexes, constraints and triggers were disabled before the import ran. What could cause such a big difference?

A brief note on my IMPDP using the network link option:
1. First imported the schema only (with a FLASHBACK_SCN value and the NETWORK_LINK value)
2. Disabled indexes, constraints and triggers
3. IMPDP of the table data (one table at a time) with PARALLEL=8 (but the FLASHBACK_SCN parameter wasn't used here)
4. Enabled indexes, constraints and triggers

The expdp and impdp par files are:

expdp par file:

directory=wcd
dumpfile=ocsp%U.dmp
filesize=5G
flashback_scn=10915100804148
logfile=ocsp.log
full=y
job_name=WCD
parallel=4
status=300

impdp par file:

directory=wcd
dumpfile=ocsp%U.dmp
exclude=statistics
exclude=SCHEMA:"='SYS'"
exclude=SCHEMA:"='SYSTEM'"
exclude=SCHEMA:"='OMNIBACKUP'"
logfile=ocsp.log
full=y
job_name=WCD
parallel=4
status=300


Thanks,

IMPDP Speed calculation

aliyar, June 13, 2011 - 12:10 pm UTC

Dear Tom,

I was recently involved in a migration activity. As a part of it, we took an export and imported some schemas into a new server.

While importing, we needed the following details:

1) At what speed is impdp currently importing - for example, how many GB per hour?
2) How much has been completed so far out of the total size?
3) How many hours are needed if it keeps running at the current speed?

We queried v$session_longops, but we feel it is not giving correct values.

Is there any other way to find the values for the above questions accurately?

I appreciate your help for the DBA world.

Thanks in advance
Tom Kyte
June 17, 2011 - 11:37 am UTC

http://docs.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_import.htm#i1007539

you can use the STATUS option to get a current status and percent complete.
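As a small illustration (the credentials and par file name are placeholders), the STATUS parameter can also go straight onto the command line to print the job status every N seconds:

impdp system/****** parfile=import.par status=60

The next comment below shows the other approach: attaching to a running job from a second session and typing STATUS interactively.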


Just adding an example for the guy above

Marcus Rangel, June 27, 2011 - 3:36 pm UTC

Aliyar, this data pump utility is a thing of beauty. I use it all the time, so let me show you an example. I used EXPDP but it works exactly the same way for IMPDP:

1. Start your data pump normally at a command shell, and get the job identifier (SYS_EXPORT_FULL_01 in this example):
[oracle@taurus ~]$ export ORACLE_SID=YOURDB
[oracle@taurus ~]$ expdp system/****** directory=BACKUP_DIR dumpfile=test.dmp full=y

Export: Release 10.2.0.5.0 - 64bit Production on Monday, 27 June, 2011 17:01:36

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** directory=BACKUP_DIR dumpfile=test.dmp full=y 
Estimate in progress using BLOCKS method...
(... lots of work here ...)


2. Go to another shell* and do this:
[oracle@taurus ~]$ export ORACLE_SID=YOURDB
[oracle@taurus ~]$ expdp system/****** attach=SYS_EXPORT_FULL_01


It will show you the status right away (see below). Leave it there, and just type "STATUS" whenever you want to see what is happening.
Export: Release 10.2.0.5.0 - 64bit Production on Monday, 27 June, 2011 17:03:11

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_01
  Owner: SYSTEM                         
  Operation: EXPORT                         
  Creator Privs: FALSE                          
  GUID: A6B805164FDE87B3E040A8C0D2010E2B
  Start Time: Monday, 27 June, 2011 17:01:44
  Mode: FULL                           
  Instance: TSTSUP03
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** directory=BACKUP_DIR dumpfile=test.dmp full=y 
  State: EXECUTING                      
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u03/backup/test.dmp
    bytes written: 4,096
  
Worker 1 Status:
  State: EXECUTING                      
  Object Schema: KIPREV
  Object Name: AF_SE_ENDERECOS
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE
  Completed Objects: 1,749
  Worker Parallelism: 1

Export> 

* It can be done in the same shell, but I am not man enough to hit CTRL+C after hours of running :-)

Export and Import with 10gR1

A reader, September 03, 2011 - 9:38 pm UTC

Hi Tom,

Well, this is a new question, however, it still is related to the topic.

My client has a 10g R1 (10.1.0.4.0) database on AIX 64-bit and needs their current database character set changed from US7ASCII to AL32UTF8. They have CLOBs and BLOBs. I plan to use CSALTER with a selective import. Now, for testing, what I initially did was to use exp/imp, and not Data Pump, as stated in Metalink Document ID 260192.1:
"Do NOT use Expdp/Impdp when going to (AL32)UTF8 or an other multibyte characterset on ALL 10g versions lower then 10.2.0.4 (including 10.1.0.5). Also 11.1.0.6 is affected.
It will provoke data corruption unless Patch 5874989 is applied on the Impdp side. Expdp is not affected, hence the data in the dump file is correct.Also the "old" exp/imp tools are not affected.
This problem is fixed in the 10.2.0.4 and 11.1.0.7 patch set"


My concern is, just the same as the others', how I can improve my exp/imp process. It took me 3 hours to export just the selected tables, and in theory the import would likely take 2x the export time. This is a 24x7 production database and the client wants to minimize the downtime (I gave them an estimate of 14 hours of downtime).

So far, these are the parameters I set:
EXP
DIRECT=Y

IMP
INDEXES=N


Since it contains LOBs, I did not set BUFFER. Also, since it will be exp/imp on the same database, I did not use RECORDLENGTH either. Kindly enlighten me as to what other parameters I can set to optimize my exp/imp given my current environment.
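Spelled out as full commands, the setup described above might look like the following sketch - the file names, the table list and IGNORE=Y are illustrative assumptions, not details from the post:

# direct-path export of the selected tables (table list and paths are placeholders)
exp userid=system/****** tables=scott.t1,scott.t2 direct=y file=/backup/us7ascii_tables.dmp log=/backup/exp.log

# import with index creation skipped; IGNORE=Y assumed because the tables will already exist
imp userid=system/****** full=y indexes=n ignore=y file=/backup/us7ascii_tables.dmp log=/backup/imp.log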

Thank you so much!
