Move Data Across Servers Using Oracle Data Pump


Tom Kyte

Thanks for the question, Shrikant.

Asked: May 27, 2008 - 4:32 pm UTC

Last updated: February 06, 2012 - 10:54 am UTC

Version: 10.0.2

Viewed 1000+ times

You Asked

Sir,

Putting this as new question now.

Suppose I have Server-1 and Server-2, with Oracle 10g installed on both; call the databases Oracle-1
and Oracle-2 respectively. I want to move data from Oracle-1 to Oracle-2.

Using the traditional export/import utilities, I can log in to Server-2 (Unix), take an export of
Oracle-1, and then import into Oracle-2 using the following commands:

Export: exp user1/pwd@oracle1   (run on server-2)
Import: imp user2/pwd@oracle2   (again on server-2)

With these two commands I transferred data from one server to another while logged in to just one
server, and without using FTP.
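
For concreteness, a fuller sketch of those two commands (the dump file name and the extra
parameters are illustrative, not from the original post):

# both commands run on server-2; the dump file lands on server-2's filesystem
exp user1/pwd@oracle1 file=expdat.dmp owner=user1 log=exp.log
imp user2/pwd@oracle2 file=expdat.dmp fromuser=user1 touser=user2 log=imp.log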

If I have to do the same thing using the new Data Pump utilities (expdp/impdp), I have to create a
directory on Server-1, grant access to the Oracle-1 user, and then run the export. To import into
Server-2, I again have to create a directory on Server-2, grant access, copy the dump file over to
Server-2, and then import.

This looks like a very long process. I know whatever I am assuming is probably wrong, but I don't
know what is correct either. Can you please help me understand a better flow for moving data from
one server to another?

regards
--Shrikant

and Tom said...

and with Data Pump, you can export from one database over the network directly into the other database without ever even creating a file

http://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_overview.htm#sthref22

The data should never even hit the disk.
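
A minimal sketch of that network mode (the link name src_link and the schema names are
illustrative; the importing user needs the appropriate import privileges):

-- in the target database (oracle2), create a link back to the source:
CREATE DATABASE LINK src_link
  CONNECT TO user1 IDENTIFIED BY pwd
  USING 'oracle1';

-- then run the import on server-2; rows stream over the link, no dump file is written:
impdp user2/pwd@oracle2 NETWORK_LINK=src_link SCHEMAS=user1 REMAP_SCHEMA=user1:user2

(A directory object is still used for the log file by default; NOLOGFILE=y suppresses even that.)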


Comments

data export

A reader, May 27, 2008 - 7:24 pm UTC

Tom:

For a similar situation where we have

Server 1 has Oracle 9i R2.
Server 2 has SQLite.

We need to take some data that is in SQLite and store it in a table on server 1 for reporting purposes.

What is the best way to accomplish this? Both servers can connect over the same network.
Tom Kyte
May 28, 2008 - 8:30 am UTC

how is this similar - it is about as different as you can get.

if sqlite has odbc drivers that work over the network, you might be able to use generic connectivity:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4406709207206#18830681837358

else, you might be dumping data out of sqlite using whatever toolset they might provide and then using sqlldr or an external table to load into Oracle
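
A rough sketch of that second route, assuming the SQLite data can be dumped to CSV (the database,
table, and column names here are all illustrative):

# on server 2, dump the table to CSV (exact invocation depends on the sqlite3 version):
sqlite3 mydb.db > emp.csv <<'EOF'
.mode csv
SELECT * FROM emp;
EOF

-- on the Oracle side, after copying emp.csv somewhere the database server can read:
CREATE OR REPLACE DIRECTORY load_dir AS '/u01/app/loads';

CREATE TABLE emp_ext (
  empno NUMBER,
  ename VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);

-- and load it into the reporting table:
INSERT INTO emp SELECT * FROM emp_ext;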

export

A reader, May 28, 2008 - 5:38 pm UTC

Tom:

Thanks, I will read the chapter on generic connectivity and ODBC drivers.

In terms of ease of development, would it rank like this:
1. ODBC (generic connectivity)
2. external table
3. sqlldr

I think the XML dump option and parsing it would be "not easy".

database link from Oracle to Sqlite

RC, March 28, 2009 - 11:23 am UTC

It is possible to create a database link from Oracle to Sqlite. See here: http://forums.oracle.com/forums/thread.jspa?threadID=846516&tstart=0
Tom Kyte
March 30, 2009 - 3:56 pm UTC

that link seems to, well, show how to do it?

Source data

David Montgomery, February 03, 2012 - 10:56 am UTC

Hi Tom:

If the data in the source database is truncated and reloaded nightly, what impact, if any, is there when Data Pump is used to move it from the source to the destination database? Does the destination need to be truncated also, or will Data Pump only import the new or changed data?

cheers,
David
Tom Kyte
February 03, 2012 - 11:50 am UTC

if you truncate and reload into source...

why not just do that into target as well??

why
a) read it and write it into source
b) read it from source and write it again to disk
c) read it and write it into target

when you can just

a) read it and write it into source
b) while at the same time read it and write it into target

??


At the very least - just transport it so you can

a) read it and write it into source
b) copy the data files over to the other database and attach them

I ask myself those questions every day...
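
For reference on the literal question: Data Pump import is not incremental; it reloads whatever the
export selected, and the TABLE_EXISTS_ACTION parameter controls what happens to a pre-existing
target table. A sketch, with illustrative link and table names:

-- truncate the target table and reload it in one step, straight over a database link:
impdp user2/pwd@target NETWORK_LINK=src_link TABLES=report_tab TABLE_EXISTS_ACTION=TRUNCATE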

David Montgomery, February 06, 2012 - 9:53 am UTC

Thanks for your response.

Essentially we do the last thing you describe, using transportable tablespaces coupled with the cloning feature of our underlying storage array. I was hoping to simplify it a little by using Data Pump exclusively, but I doubt it can match the array for speed.

I would like to use your response to lobby for a complete overhaul of the process, if I may.

Some clarification

David Montgomery, February 06, 2012 - 10:07 am UTC

Sorry, I should have added that we only move a subset of the data from the source database. After the truncate and load, some ETL is performed, and the data from that process populates a tablespace used for reporting. Only that tablespace is replicated.
Tom Kyte
February 06, 2012 - 10:54 am UTC

ok, transporting would be the way to go in my opinion, given what I know about your situation. Just move the tables/indexes instead of unloading and reloading them.

You may feel free to share my opinion.
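
A rough sketch of that transport flow for a single reporting tablespace (all names and paths are
illustrative, and it assumes compatible platforms/endianness on the two servers):

-- on the source: quiesce the tablespace and export its metadata
ALTER TABLESPACE reporting_ts READ ONLY;

expdp system/pwd DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=reporting_ts

-- copy tts.dmp and the tablespace's datafile(s) to the target server, then plug them in:
impdp system/pwd DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/reporting01.dbf'

-- on the source: back to read write for the next nightly load
ALTER TABLESPACE reporting_ts READ WRITE;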
