Table Copy From One DB to Another DB from dblink

Question and Answer

Connor McDonald

Thanks for the question, shekhar.

Asked: August 20, 2008 - 11:26 am UTC

Last updated: November 09, 2021 - 2:52 am UTC

Version: 10.2.0.3.0

Viewed 100K+ times!

You Asked

Dear Tom
Can you help me with how to copy a table from one DB to another DB over a dblink?
I searched a lot but I am not getting a satisfactory answer.
I don't want to create a table and then insert the data from one DB to the other through the dblink,
and I also don't want to create the table over the dblink from one DB to the other.
I don't want to use the export/import utility.
Please give me a different solution.

and Tom said...



insert into local_table select * from table@database_link;



that is all you need to do.
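Putting that one-liner in context, a minimal end-to-end sketch (the link name, table names, and credentials below are all placeholders):

```sql
-- Sketch only: scott/tiger, remote_db and the table names are placeholders.

-- 1) On the local database, create a link to the remote one:
create database link remote_db
  connect to scott identified by tiger
  using 'remote_tns_alias';

-- 2) Create a matching local table (structure only, no rows):
create table local_table as
  select * from remote_table@remote_db where 1 = 0;

-- 3) Pull the data across and commit:
insert into local_table
  select * from remote_table@remote_db;
commit;
```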

Rating

  (21 ratings)


Comments

... or save your time by generating this script ...

Renato, August 21, 2008 - 5:16 pm UTC

SELECT 'insert into ' || table_name
    || ' select * from ' || table_name
    || '@database_link;'
  FROM user_tables;

SQL*Plus COPY Command

Eric Yen, August 21, 2008 - 7:58 pm UTC

Take a look at the SQL*Plus COPY command; the table should be in a "quiet" state. Low DML activity during a maintenance window is the best time.

http://download.oracle.com/docs/cd/B28359_01/server.111/b31189/apb.htm#i641251
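For reference, a hedged sketch of the COPY syntax from that manual (connection details and names are placeholders; the trailing hyphens are SQL*Plus line continuations):

```sql
-- SQL*Plus settings that control batching for COPY:
SET ARRAYSIZE 1000
SET COPYCOMMIT 10

-- Copy a remote table into a new local table:
COPY FROM scott/tiger@remote_db -
CREATE local_table -
USING SELECT * FROM remote_table
```

Note that COPY is an older SQL*Plus feature and is flagged as deprecated in recent releases, though it remains handy for exactly this kind of ad hoc transfer.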

updates during data migration

Ben, August 22, 2008 - 11:53 am UTC

The idea of limiting DML (or DDL for that matter) during a migration is critical, and not limited to using the SQL*PLUS copy command. I hope the original poster knows this already, but it seems good to say that out loud anyway. :-)

Table Copy From One DB to Another DB (longraw/blob)

Shravan, August 22, 2008 - 6:53 pm UTC

Dear Tom,
We will face issues if the table contains a LONG RAW or BLOB column. The insert statement you provided gave me an error when I tried it with a table with a LONG RAW column. Kindly tell us how we can transfer LONG RAW/BLOB data from DB to DB.
Shravan
Tom Kyte
August 26, 2008 - 7:44 pm UTC

long raw, not going to happen over a database link

blob - no problem.


long raw - export/import (i don't care if you don't "want" to use it, you might have to), transport (using export/import again)
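For the LONG RAW case, a sketch of the file-based classic export/import Tom refers to (usernames, connect strings and file names are placeholders; Data Pump in NETWORK_LINK mode also cannot move LONG columns, which is why a dump file is used here):

```
# Classic export on the source side (handles LONG RAW):
exp scott/tiger@source_db tables=long_raw_table file=lrt.dmp log=exp_lrt.log

# Classic import on the target side:
imp scott/tiger@target_db file=lrt.dmp log=imp_lrt.log ignore=y
```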

need urgent help

MAYUR, August 27, 2008 - 8:24 am UTC

Hi Tom, can you please tell me how to create a database link between two systems which are on a LAN connection? Please specify the steps for that.
THANK YOU
Tom Kyte
August 28, 2008 - 8:54 am UTC

CREATE DATABASE LINK

read about it in the documentation, fully documented command and syntax is there.


you need to have setup the listener on at least one instance and from the machine you want to create the database link on (the machine - the server), you need to be able to "sqlplus user/password@remote_site", where remote_site is a tns connect string you configured in that machines tnsnames.ora

then create database link remote_database using 'remote_site', maybe with other options like identified by, connect to, and so on - you'll figure out what options you want to use after you study the documentation and see what is available to meet your needs.
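The steps above can be sketched as follows (remote_site, the host, the service name and the credentials are all placeholders):

```sql
-- 1) Entry in the local server's tnsnames.ora (shown here as a comment):
--    remote_site =
--      (DESCRIPTION =
--        (ADDRESS = (PROTOCOL = TCP)(HOST = remote-host)(PORT = 1521))
--        (CONNECT_DATA = (SERVICE_NAME = orcl)))

-- 2) Verify connectivity from the local server's command line:
--    sqlplus scott/tiger@remote_site

-- 3) Create the link; the TNS alias is the layer of indirection:
create database link remote_database
  connect to scott identified by tiger
  using 'remote_site';

-- 4) Test it:
select * from dual@remote_database;
```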

DB Links best syntax

Govind, August 28, 2008 - 8:19 pm UTC

Hi,

Here is a syntax I find very useful when it comes to creating DB links. This works even without the TNS entry on the server.
create database link <db link name>
connect to <username>
identified by <password>
using '(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =(PROTOCOL = TCP)
(HOST = <your host>)
(PORT = 1521)))
(CONNECT_DATA = (SID = <your sid> )))'

Regards, Govind
orafind.blogspot.com - Oracle Search with a difference
Tom Kyte
August 29, 2008 - 11:10 pm UTC

that works but in general is a bad idea except for testing.


think about it, what happens when you restore this to 'test' - you don't really want your test database link to connect to production right?

there is a reason we commonly use indirection in coding/software.


and this syntax is so easily documented and understood.... documentation is something to embrace, not reject.


so, I in some ways find your comment dangerous. It offers an "easy way" with lots and lots of unmentioned caveats.

need more explanation for the following example

venkat, February 17, 2010 - 6:47 am UTC

Hi Tom,
What if the table on the remote database has some 50M of data? The following query will take lot of time depending on the resources available.

insert into local_table select * from table@database_link;


But If i go with the following query its a bit faster,

insert /*+ append parallel(local_table,x) */ into local_table select /*+ parallel(a,x) */ * from table@database_link a;


but I want to improve that even more. Is it possible? I tried it using a bulk collect implementation as follows:

DECLARE
   TYPE tab_t IS TABLE OF local_table%ROWTYPE;
   tab tab_t;
   c1  SYS_REFCURSOR;
BEGIN
   OPEN c1 FOR SELECT * FROM table@database_link;
   LOOP
      FETCH c1 BULK COLLECT INTO tab LIMIT 2000;
      EXIT WHEN tab.COUNT = 0;
      FORALL idx IN 1 .. tab.COUNT
         INSERT INTO local_table VALUES tab(idx);
   END LOOP;
   CLOSE c1;
END;


Thanks
Venkat.
Tom Kyte
February 17, 2010 - 9:32 am UTC

50mb - that is tiny, I can type that in myself. 50mb is TINY, teeny tiny. Nothing.

The append is probably the only thing doing anything for you in that statement. The parallel full scan won't do much since the DBLINK will serialize the data anyway (so what if you have 10 readers, if they all feed a SINGLE process on the remote site that will write data on the dblink - the full scan won't benefit very much).


And you know what, I want to know exactly why you think procedural code like that would be better???
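For what it's worth, a sketch of the straight SQL alternative under discussion (table and link names are placeholders); the APPEND hint gives the direct-path load, and parallel DML has to be enabled explicitly if wanted:

```sql
alter session enable parallel dml;

-- Direct-path insert over the link; loads above the high-water mark:
insert /*+ append */ into local_table
  select * from remote_table@database_link;

-- After a direct-path insert the session must commit before it can
-- query the table again:
commit;
```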



mfz, February 17, 2010 - 6:08 pm UTC

Venkat -

I am assuming you are referring to M as million.
It is best not to use these abbreviations, as they cause confusion.

For 50 million records, by placing the hint you are

a) inserting the records above the HWM
b) using PQ slaves.

whereas the insert (without the hint) does not use PQ processes. Why don't you get the execution plan for these inserts and paste it in the question? That should answer your question.


Db Link solves a critical error

A reader, April 26, 2012 - 10:28 am UTC

Thanks very much.
Created a DB link and imported the table in no time!
Thanks a ton, guys.

copy partition table to another database

sanat Nayak, May 31, 2012 - 12:51 pm UTC

Hi Tom,
which command do I have to use to copy a single partition of a table to another database using a dblink?
Tom Kyte
May 31, 2012 - 1:02 pm UTC

you can either

a) create a view at the remote site that selects from this partition and use that
b) use a where clause that includes the right range to select the rows
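Both options might be sketched like this (table, partition, column and link names are placeholders; option (a) is needed because partition-extended table names cannot be used through a database link directly):

```sql
-- a) On the REMOTE (source) database, a view restricted to one partition:
create view sales_p2012 as
  select * from sales partition (p2012);
-- ...then, from the local database:
insert into local_sales
  select * from sales_p2012@database_link;

-- b) From the local database, a WHERE clause matching the partition's
--    key range:
insert into local_sales
  select * from sales@database_link
  where sale_date >= date '2012-01-01'
    and sale_date <  date '2013-01-01';
```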




How to push data of a table from one Oracle DB to another Oracle DB

amit, October 11, 2012 - 3:54 am UTC

Hi,

I need to push the data of one Oracle table to another.
The source table has real-time data.
This real-time data should flow into the other Oracle DB.

Earlier we used Oracle Streams between these 2 DBs.
Now we are facing some problems with this.

Are there other methods to replicate data from one DB to another, other than using a dblink and an MV?

Thanks,
amit

Auto Purge of MV Log Files

amit, October 13, 2012 - 2:05 am UTC

To purge the log files of a fast refresh MV we use the below syntax:
CREATE MATERIALIZED VIEW LOG on Amit
TABLESPACE Amit_tzt
WITH ROWID
(
EMP_NAME ,
ID ,
DOLLAR_VALUE ,
LOCAL_VALUE ,
UNITS
)
INCLUDING NEW VALUES
PURGE REPEAT INTERVAL '1' DAY;

But when I give PURGE REPEAT INTERVAL '1' MINUTE; it's not working.
Can anybody give more suggestions on it?

Thanks,
Amit
Tom Kyte
October 13, 2012 - 8:45 am UTC

why would you do this every single *minute*. The sole goal of using purge repeat would be to spread the purges out over many refreshes - instead of doing it every refresh. the goal of this is to reduce the number of purges, not make it an ongoing thing forever, constantly, continuously.

what is your thinking, your rationale behind this???

PURGE REPEAT INTERVAL

amit, October 15, 2012 - 1:23 am UTC

Yes, got it.

Can anybody explain to me how the purge works internally?
I can see that whenever a log is created, a table mlog$_<> is also created to note down the changed records. How does it get propagated to the target?
CREATE MATERIALIZED VIEW LOG on amit
TABLESPACE EXT_DATA_TS
WITH ROWID
(
store_id ,
measure_name ,
billing_dollar_value ,
billing_currency_value ,
billing_units
)
INCLUDING NEW VALUES
PURGE REPEAT INTERVAL '1' DAY;

select * from mlog$_amit;

Tom Kyte
October 15, 2012 - 10:32 am UTC

the purge repeat uses the scheduler to run a job that does the purge.

the log doesn't get propagated to the target, the target connects to the source, queries the log to find changes and pulls them.

Log Purge

amit, October 18, 2012 - 6:15 am UTC

Can I know the exact job name which purges it?
We have given the purging time as 1 hour, but sometimes it purges within 40 minutes.

If a job exists for it then we can monitor it.
Tom Kyte
October 18, 2012 - 8:29 am UTC

query the scheduler views. you'll see it.
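A hedged starting point - the exact job name varies by version, so the filter below is only a guess to narrow the listing:

```sql
select job_name, repeat_interval, last_start_date, next_run_date
  from dba_scheduler_jobs
 where job_name like '%PURGE%';
```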

Purging of Logs

amit, October 22, 2012 - 1:30 am UTC

Can u please explain me in detail.
about the scheduler View
Tom Kyte
October 23, 2012 - 12:11 pm UTC

U is not available, sorry. did you check out the documentation at all? all of the views are documented???

INSERT SELECT vs CREATE AS SELECT

Rustam Kafarov, June 06, 2013 - 4:35 pm UTC

Hi Tom.
I have table on remote DB (Oracle 11.1) which consists of 2 columns:
ID - varchar2(50)
XMLCOLUMN - XMLType stored as basicfile CLOB.
There are about 200,000 rows with a 5K average LOB size.

I need to create a local "snapshot" of this table on my local DB over a db_link. The local table has the following structure:
ID varchar2(50),
TAGVALUE varchar2(50)

I tried 3 methods:
method 1:

truncate table local_table;
insert /*+ append */ into local_table
select id, extractvalue(xmlcolumn, '/tag_xpath') from remote_table@db_link;
commit;

method 2:
drop table local_table;
create table local_table as select id, extractvalue(xmlcolumn, '/tag_xpath') tagvalue from remote_table@db_link;

method 3:
declare
  cursor cur is
    select id, extractvalue(xmlcolumn, '/tag_xpath')
      from remote_table@db_link;
  arr1 str_array;
  arr2 str_array;
begin
  open cur;
  loop
    fetch cur bulk collect into arr1, arr2 limit 1000;
    forall i in 1 .. arr1.count
      insert into local_table values (arr1(i), arr2(i));
    commit;
    exit when arr1.count = 0;
  end loop;
  close cur;
end;
/

Method 1 - 2500 seconds,
Method 2 - 160 seconds,
Method 3 - 180 seconds

Difference between 2 and 3 is expected and understood. But huge difference between 1 and 2 - I don't understand. Why create as select works faster (and significantly faster in my case) than direct path insert?

Thanks,
Rustam
Tom Kyte
June 06, 2013 - 6:06 pm UTC

I'd have to see each run a few times to understand that the insert append is always slower - could be that the insert append warmed things up.

or at the very least, a way to reproduce, creates, data generator, etc.

INSERT SELECT vs CREATE AS SELECT

Rustam Kafarov, June 07, 2013 - 9:11 am UTC

It is reproducible on all tables. I suspect there is a generic problem in the network; all my tests show that insert into ... select ... from @dblink simply doesn't work properly.
I'm confused because CTAS uses the same approach - direct path writes above the HWM - but works OK.
Tom Kyte
June 07, 2013 - 1:36 pm UTC

evidence please?

ops$tkyte%ORA11GR2> set timing on
ops$tkyte%ORA11GR2> insert /*+ append */ into t1 select * from big_table@ora11gr2@loopback;

1000000 rows created.

Elapsed: 00:00:04.02
ops$tkyte%ORA11GR2> create table t2 as select * from big_table@ora11gr2@loopback;

Table created.

Elapsed: 00:00:04.11
ops$tkyte%ORA11GR2> set timing off

Further tuning

Dheeraj Mehra, May 22, 2015 - 10:38 am UTC

Hi Tom,

Going back to the original query, how could we further tune the below SQL, in addition to the APPEND hint etc.?

We are using 12c

insert into local_table select * from table@database_link;

Thanks,

Dheeraj

What Oracle (error) codes to know when copying huge data from one DB to another DB

Mangesh, June 07, 2016 - 1:51 am UTC

What are the Oracle (error) codes to know about when copying huge data from one DB to another DB?
Connor McDonald
June 07, 2016 - 2:16 am UTC

Any error code generally isn't a good one.

Load Huge data From Sql Table to Oracle Table

Sachin, November 02, 2021 - 10:32 am UTC

I am loading data via WhereScape RED, which uses a database link to load data from SQL Server to Oracle, similar to below.

insert into OracleTable select * from SQLTable@database_link;

But it is taking a lot of time to load. I partitioned the SQL Server table but it didn't improve the performance; now I am planning to partition the Oracle table as well.

Is this approach correct to load data faster, or is there a better way to load huge data (billions of rows)?
Connor McDonald
November 09, 2021 - 2:52 am UTC

For billions of rows, the best way will be to unload the data to a flat file in SQL Server and load it via SQL*Loader or an external table in Oracle.
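A sketch of the external-table half of that approach, once the flat file has been unloaded from SQL Server (the directory path, file name and columns are placeholders):

```sql
create directory load_dir as '/u01/loads';

create table staging_ext (
  id      number,
  payload varchar2(4000)
)
organization external (
  type oracle_loader
  default directory load_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('sqlserver_unload.csv')
)
reject limit unlimited;

-- Direct-path load from the external table into the real one:
insert /*+ append */ into oracle_table
  select * from staging_ext;
commit;
```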