
Question and Answer

Tom Kyte

Thanks for the question, Anthony.

Asked: June 05, 2000 - 6:44 am UTC

Last updated: November 26, 2014 - 9:16 pm UTC

Version: Version 8.1.5

Viewed 10K+ times

You Asked

What is the easiest way of copying all the data from one table in a database to another table in a second database ? I have tried using copy but without any luck.

and Tom said...

Well, the sqlplus copy command is pretty easy -- you don't mention what "without any luck" means but if you can:

- sqlplus user/password into the local database AND
- sqlplus user/password@REMOTE into the remote database...

You can sqlplus copy the data from T1( x, y, z ) (located on REMOTE) into T2(x,y,z) (local) using:

SQL> copy from user/password@REMOTE insert t2(x,y,z) using select x,y,z from t1;

If you have a specific error you are getting with the COPY command -- please add that ( a cut and paste of your attempt and the error would be great )

Another option is

o export the table from REMOTE using exp.
o import the table into local using imp.


Another option is to use a database link and simply:

insert into t2 select * from t1@remote_db_link;
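For reference, a sketch of what creating such a link might look like (the link name, credentials and connect string here are illustrative, not from the question):

```sql
-- REMOTE_DB is assumed to be a tnsnames.ora alias; scott/tiger are placeholders
create database link remote_db_link
  connect to scott identified by tiger
  using 'REMOTE_DB';

-- then the copy is a single statement
insert into t2 ( x, y, z )
select x, y, z from t1@remote_db_link;
commit;
```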







Rating (27 ratings)


Comments

copying data...

ASP, September 28, 2001 - 9:35 am UTC

But what if the remote database table has around 20,000 rows? And what about exception trapping while inserting?

lob columns?

Mikito Harakiri, October 25, 2001 - 8:49 pm UTC

With imp/exp I hit a problem: on the remote database the users tablespace is called 'users', while on the local one it's 'users_data'. Now I have to go to the documentation to figure out if those stupid switches would save the day...

Also with schlobs the elegant
insert into t2 select * from t1@remote_db_link;
doesn't work.

I wonder why export/import is not plain sqlplus statements where I can just specify the right 'where' clause...


Tom Kyte
October 26, 2001 - 7:45 am UTC

Yes, when you deal with multi-segment objects (tables with LOBs, partitioned tables, IOTs with overflows, for example), using EXP/IMP is complicated if the target database doesn't have the same tablespace structure. That is because the CREATE statement contains many tablespaces and IMP will only "rewrite" the first TABLESPACE in it (it will not put multi-tablespace objects into a single tablespace; the object creation will fail if the tablespaces needed by that CREATE do not exist).

I dealt with this issue in my book, in there, I recommend you do an:

imp .... full=y indexfile=temp.sql

In temp.sql, you will have all of the DDL for indexes and tables. Simply delete all index creates and uncomment any table creates you want. Then, you can specify the tablespaces for the various components -- precreate the objects and run imp with ignore=y. The objects will now be populated.
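The whole round trip might look like this sketch (file names and credentials are illustrative):

```sql
-- from the shell:
--   exp userid=scott/tiger tables=t file=t.dmp
--   imp userid=scott/tiger file=t.dmp full=y indexfile=temp.sql
--
-- edit temp.sql: delete the index creates, uncomment the table
-- creates, point the TABLESPACE clauses at tablespaces that exist,
-- run it to precreate the objects, then reload the data:
--   imp userid=scott/tiger file=t.dmp full=y ignore=y
```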


You are incorrect with the "schlobs" comment (both in spelling and in conclusion).

scott@ORA815.US.ORACLE.COM> create table t ( a int, b blob );

Table created.

scott@ORA815.US.ORACLE.COM> desc t
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 A                                            NUMBER(38)
 B                                            BLOB

scott@ORA815.US.ORACLE.COM> select a, dbms_lob.getlength(b) from t;

no rows selected

scott@ORA815.US.ORACLE.COM> insert into t select x, y from t@ora8i.world;

1 row created.

scott@ORA815.US.ORACLE.COM> select a, dbms_lob.getlength(b) from t;

         A DBMS_LOB.GETLENGTH(B)
---------- ---------------------
         1               1000011

So, the "elegant insert into select * from" does work.

imp/exp output can be plain SQL statements -- use the INDEXFILE=<filename> option (if you get my book, I use this over and over in various places to get the DDL). In 9i, there is a simple stored procedure interface as well.
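The 9i stored procedure interface alluded to here is presumably DBMS_METADATA; a one-line sketch (table and schema names are illustrative):

```sql
-- returns the CREATE TABLE statement as a CLOB (9i and later)
select dbms_metadata.get_ddl( 'TABLE', 'T', 'SCOTT' ) from dual;
```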

complicated data copying

nk, October 26, 2001 - 12:42 pm UTC

tom,
that is a good answer.
I have a situation where the data transfer should be done after some manipulations, which I cannot do with a single SQL statement, so I go for a PL/SQL procedure.
In the procedure I used PL/SQL tables for intermediate storage.
Now I want to sort the data present in the PL/SQL tables. Is that possible?

Tom Kyte
October 26, 2001 - 3:10 pm UTC

No, there is no built-in facility to sort PL/SQL tables. You would need to "sort it" yourself.

Best to use a global temporary table perhaps instead of plsql arrays.
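A minimal sketch of the global temporary table approach (the table and column names here are made up for illustration):

```sql
-- rows are private to the session and vanish on commit
create global temporary table stage_t
( x int, y varchar2(128) )
on commit delete rows;

-- load the manipulated rows, then let plain SQL do the sorting
insert into stage_t select rownum, username from all_users;

select * from stage_t order by y;
```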

if oracle would write web browser

Mikito Harakiri, October 26, 2001 - 6:02 pm UTC

OK, I see how I can handle tablespaces.

With "create from select" I actually hit a "process is out of memory" error on a big table, but you are right, this is not a logical error. Still, it doesn't work for objects and nested tables, right? And I'm not happy with LOBs, because there are so many limitations; for example the innocent:
FOR lobrec In (select lob_col from tbl@dblink) LOOP
doesn't work.

One last remark about oracle storage model. Are people happy with necessity to micromanage it? Could I simply tell oracle what the path to data files is and forget about tablespaces/extents/datafiles altogether? I always thought that machines are better than humans in such routine tasks as memory allocation. If oracle was in web browser business, before launching the browser I would have to allocate memory for images, then, separately, for cached pages, etc...


Tom Kyte
October 26, 2001 - 6:23 pm UTC

Sorry you are not happy. I guess I won't be able to fix that.

Yes, I believe people are "happy" with the storage model and all databases have them.

I would say there are darn FEW limitations on LOBs, given they are 4 gig things potentially and are really implemented as pointers. There are few in 8i and fewer in 9i.

In 9i, if you like, you can setup the database to manage all of the files. You can just issue:

create tablespace foo;

we'll create the file, name it, size it, etc.

Oracle actually was in the web browser business. It was called Power browser (search google for "power browser oracle"). You'll be disappointed to find out that no, you did not have to allocate your own memory for images and then separately for cached pages, etc....

Is this a restriction ...

Dinesh Sivasankaran, October 24, 2002 - 12:06 pm UTC

Hi Tom,

Desc t1: (Table has only 2 columns)
ITEM NUMBER(8)
QA_STATUS VARCHAR2(1)

Desc t2:
item number(8),
qa_status varchar2(1),
.
. ( This has lot of columns)

When I try to do this,

set echo on
set arraysize 15
set copycommit 0
set long 5000

copy from admin/admin@dev -
insert t1 ( item ) -
using -
select ITEM -
from t2 ;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 5000. (long is 5000)

CPY0007: Select list has fewer columns than destination table

Is this a restriction in copy command? If the table t1 has only 1 column (item) then it works.

Please advise.

Thanks,
dinesh ...

Tom Kyte
October 24, 2002 - 2:45 pm UTC

It is a limitation -- here is a workaround to the limitation:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t1 ( x int, y int );

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> drop table t2 cascade constraints;

Table dropped.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t2 ( x int, y int, z int );

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t2 select rownum, rownum, rownum from all_users;

46 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> copy from ops$tkyte/x@ora817dev.us.oracle.com -
> insert t1 ( x ) using select x from t2;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 5000. (long is 5000)

CPY0007: Select list has fewer columns than destination table

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace view v1 as select x from t1;

View created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> copy from ops$tkyte/x@ora817dev.us.oracle.com -
> insert v1 ( x ) using select x from t2;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 5000. (long is 5000)
   46 rows selected from ops$tkyte@ora817dev.us.oracle.com.
   46 rows inserted into V1.
   46 rows committed into V1 at DEFAULT HOST connection.

ops$tkyte@ORA817DEV.US.ORACLE.COM>



A view to hide the other columns. 

Unsuccessful copy

Sasa, October 25, 2002 - 3:46 am UTC

Hi Tom,

I tried to copy from remote to local DB and made a following test:

-- remote -
CREATE TABLE test_copy AS SELECT owner, object_name, object_type FROM all_objects;


-- local --
CREATE TABLE test_copy AS SELECT owner, object_name, object_type FROM all_objects WHERE 1 = 2;


---

copy FROM test/test@remote INSERT test_copy(owner, object_name,object_type) USING SELECT owner, object_name,object_type FROM test_copy;

and got message:

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 5000. (long is 5000)

ERROR:
ORA--3930258: Message -3930258 not found; product=RDBMS; facility=ORA


Thanks,

Saša


Tom Kyte
October 25, 2002 - 6:14 am UTC

try doing it the other way -- instead of copy FROM, connect to the other database and copy TO. push it, don't pull it.

Unsuccessful again

Saša, October 25, 2002 - 9:17 am UTC

Hi Tom,

I did what you had recommended, but something was wrong again (connect to the other DB and copy TO -- "push it, don't pull it" as you said):

copy to test/test@localDB INSERT into TEST_COPY(owner, object_name,object_type) using SELECT owner, object_name,object_type FROM TEST_COPY;

but got error :

SP2-0498: missing parenthetical column list or USING keyword

I was not aware of SP2 error messages till now and couldn't find any documentation on this one.

Regards,

Saša


Tom Kyte
October 26, 2002 - 11:42 am UTC

lose the INTO in "insert into test_copy"

it doesn't belong there.
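With that fix, the command from the comment above would read (credentials unchanged, line continuations per SQL*Plus):

```sql
copy to test/test@localDB -
insert TEST_COPY (owner, object_name, object_type) -
using select owner, object_name, object_type from TEST_COPY;
```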

How is this efficient over database link

Atul, December 11, 2003 - 12:11 am UTC

Tom,
Which is the FASTEST way to migrate data from one database to another across the network?
A while back, I read in some discussion forum that the SQL*Plus COPY command is more efficient than a database link. Can you please provide more insight on this?
Regards,
Atul

Tom Kyte
December 11, 2003 - 5:47 am UTC

dblinks are faster
sqlplus copy is faster

both are true (sometimes)
both are false (sometimes)

my answer is, has been and will be -- benchmark it with some of your data in your environment and see what works best for you.


sqlplus copy -- array fetches, incremental commits (which can be very very bad), conventional path writes, always logged and always generates undo. Just like writing a program that reads N rows from db1, inserts N rows into db2 and commits every M fetches (in fact, that's exactly what it is). Can work with longs (but not long raws).


dblinks -- single row stream, no incremental commits, can direct path (using APPEND or CREATE TABLE AS SELECT), can be done nologging, can bypass undo.

which works best depends on various factors (network latency, location of the sqlplus client, etc)

How is this efficient over database link

Atul, December 12, 2003 - 2:14 am UTC

Tom,
Thanks for the reply.
You said:
-- SQLPLUS Copy: ..... always generates undo

How about if I have table created with nologging option and I am inserting into it using copy command.
Will this still generate undo?
Regards,
Atul

Tom Kyte
December 13, 2003 - 10:48 am UTC

yes.

it just uses "insert into values" that always generates undo.

you would have to use insert /*+ APPEND */ to bypass undo -- that is a dblink option, not a sqlplus copy option.

(and undo is not related at all to NOLOGGING -- nologging is all about redo, not undo. you could skip redo generation using the direct path insert above -- with the append -- as well as the undo generation if you use nologging).
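A sketch combining the two, assuming the table and link names from earlier in the thread:

```sql
-- direct path bypasses undo for the inserted data; with NOLOGGING
-- on the table it can skip most redo generation as well
alter table t2 nologging;

insert /*+ APPEND */ into t2
select * from t1@remote_db_link;

-- a direct-path loaded table must be committed before it can be queried again
commit;
```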

Generate insert script for data

A reader, January 13, 2004 - 5:08 pm UTC

Hi Tom,

Assume all table columns are simple scalar type, such as varchar2, number, and date. Is there a script that I can use to generate the SQL insert script for all data?

For example, if a table contains:

TAB1
COL1 COL2 COL3
1 AB 13-JAN-04

Then I want to generate a insert statement for it:

INSERT INTO TAB1 (COL1, COL2, COL3)
VALUES (1, 'AB', TO_DATE('13-JAN-04', 'DD-MON-YY'));


Thanks.


Tom Kyte
January 13, 2004 - 5:57 pm UTC

you'll have to write it yourself as I'm 1000% against such an idea for a whole lot of reasons.

if you'd like a comma delimited file that tools like sqlldr or bcp for sqlserver or whatever can use -- I've got those. insert statements -- nope.

1000% against!

Paul, January 14, 2004 - 6:27 am UTC

I guess that's "very much against" then? ;-)

I'm curious about this because I've written a script to generate insert statements just as requested, and I use it a lot (almost exclusively when migrating data from development/test to production, where the implementation procedures in my company specify that a script is required).

Could you let me know what are the main objections you have, please?

Tom Kyte
January 14, 2004 - 3:42 pm UTC

(it was not a typo, 1000% is what I meant....)

bind variables...............

parsing.............

sqlldr avoids both issues...

INSERTS via an anonymous PL/SQL block

Gary, January 14, 2004 - 6:11 pm UTC

Presumably a script which generates a file like :

DECLARE
procedure INS (p_col1 IN VARCHAR2, p_col2 IN ...) IS
BEGIN
INSERT INTO table_a (col1, col2...)
VALUES (p_col1,
to_date(p_col2,'DD-MON-YYYY HH24:MI:SS'... );
END;
BEGIN
INS('ABC','11-MAR-2004 12:13:14',...);
INS('XYZ','11-MAR-2004 12:13:14',...);
...
END;

would be an improvement over a series of INSERTs.
I've used something similar to this lots of times (when we are talking about dozens of rows rather than thousands/millions).
It's also easier to add exception handling (constraint violations), or to merge data if you need to.



Tom Kyte
January 14, 2004 - 6:34 pm UTC

NO NO NO NO....

no BIND variables!!!!!!!!!!!!!!

1000% against!

Paul, January 15, 2004 - 4:26 am UTC

I didn't think it _was_ a typo! It just goes against my mathematical upbringing! It reminds me of the football (soccer) cliche we have here in England when the player says the team "tried 110%". Hmmm ...

Anyway, personally I would be 95% against using this method (obviously I agree 100% with your comments on bind variable and parsing). I use my little script when transferring tiny (< 200 rows) amounts of lookup data from development to production when the users/managers/dba/whoever want to check the data before implementation.


Tom Kyte
January 15, 2004 - 8:43 am UTC

I use this:

[tkyte@localhost tkyte]$ sqlldr_exp scott/tiger dept
LOAD DATA
INFILE *
INTO TABLE dept
REPLACE
FIELDS TERMINATED BY '|'
(
deptno
,dname
,loc
)
BEGINDATA
10|Accounting|NEW YORK
20|RESEARCH|DALLAS
30|SALES|CHICAGO
40|OPERATIONS|BOSTON

(with redirection of course). see

</code> http://asktom.oracle.com/~tkyte/flat/index.html <code>

if you are interested..... (and in 10g, it'll look like this!)



scott@ORA10G> !ls -l /tmp/dept_unload.dat
ls: /tmp/dept_unload.dat: No such file or directory

scott@ORA10G>
scott@ORA10G> create table dept_unload
2 organization external
3 ( type oracle_datapump
4 default directory TMP
5 location( 'dept_unload.dat' )
6 )
7 as
8 select * from dept
9 /

Table created.

scott@ORA10G>
scott@ORA10G> !ls -l /tmp/dept_unload.dat
-rw-r--r-- 1 ora10g ora10g 1520 Jan 15 14:45 /tmp/dept_unload.dat


Export table procedure - performance

A reader, January 22, 2004 - 4:24 pm UTC

Hi Tom,

I created a procedure for exporting the data of any simple table into a .sql script. It works fine, but seems a little slow. Is there anything you can tell that can be easily tuned? Thanks.

create or replace procedure export_table (
p_table varchar2,
p_file utl_file.file_type
)
as
no_such_table EXCEPTION;
PRAGMA EXCEPTION_INIT(no_such_table, -942);
l_col_list varchar2(1000);
l_val_list varchar2(32767);
l_rc1 sys_refcursor;
l_rc2 sys_refcursor;
l_rowid ROWID;
l_value varchar2(2000);
l_col_count pls_integer;
begin
select stringAgg(column_name), count(column_name)
into l_col_list, l_col_count
from user_tab_columns
where table_name = p_table
order by column_id;

open l_rc1 for 'select rowid rrowid from ' || p_table;

utl_file.put_line(p_file, '-- Insert data into ' || p_table);
loop

fetch l_rc1 into l_rowid;
exit when l_rc1%notfound;

utl_file.put(p_file, 'INSERT INTO ' || p_table || '(' || l_col_list || ')' || chr(10));
utl_file.put(p_file, 'VALUES(');

for d in (select * from user_tab_columns where table_name = p_table order by column_id) loop
open l_rc2 for 'select ' || d.column_name || ' val from ' || p_table || ' where rowid = :1 ' using l_rowid;
loop
fetch l_rc2 into l_value;
exit when l_rc2%notfound;
if l_value is null then
utl_file.put(p_file, 'NULL');
else
case d.data_type
when 'NUMBER' then
utl_file.put(p_file, l_value);
when 'CHAR' then
utl_file.put(p_file, '''' || replace(l_value, '''', '''''') || '''');
when 'VARCHAR2' then
utl_file.put(p_file, '''' || replace(l_value, '''', '''''') || '''');
when 'LONG' then
utl_file.put(p_file, '''' || replace(l_value, '''', '''''') || '''');
when 'LONG RAW' then
utl_file.put(p_file, '''' || l_value || '''');
when 'ROWID' then
utl_file.put(p_file, '''' || l_value || '''');
when 'DATE' then
utl_file.put(p_file, '''' || l_value || '''');
else
utl_file.put(p_file, '''' || l_value || '''');
end case;
end if;
if d.column_id < l_col_count then
utl_file.put(p_file, ',');
else
utl_file.put(p_file, ')'||CHR(10)||'go'||CHR(10));
end if;
end loop;
utl_file.fflush(p_file);
close l_rc2;
end loop;
end loop;
close l_rc1;
exception
when no_such_table then
if l_rc1%isopen then close l_rc1; end if;
if l_rc2%isopen then close l_rc2; end if;
when others then
if l_rc1%isopen then close l_rc1; end if;
if l_rc2%isopen then close l_rc2; end if;
raise;
end;
/

Tom Kyte
January 22, 2004 - 8:28 pm UTC

ugh, why why why would you build insert statements???????

just unload to a flat file:

</code> http://asktom.oracle.com/~tkyte/flat/index.html <code>

no inserts, just no inserts.
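In the flat-file spirit, a minimal UTL_FILE sketch of a delimited unload (the directory object MY_DIR and the DEPT columns are assumptions for illustration):

```sql
declare
    -- MY_DIR must be a directory object the session can write to
    l_out utl_file.file_type := utl_file.fopen( 'MY_DIR', 'dept.dat', 'w' );
begin
    for r in ( select deptno, dname, loc from dept )
    loop
        -- one pipe-delimited line per row, sqlldr-friendly
        utl_file.put_line( l_out, r.deptno || '|' || r.dname || '|' || r.loc );
    end loop;
    utl_file.fclose( l_out );
end;
/
```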

Timestamps?

A reader, February 10, 2004 - 12:04 pm UTC

It seems that SQL*Plus copy doesnt support timestamps:

--copy command of table with timestamp datatypes.

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)

CPY0012: Object datatypes cannot be copied

SQL> 

Remote LONGs/LOBs .. not so easy

Paul Gallagher, February 25, 2004 - 4:15 am UTC

Mikito Harakiri posted that the elegant "insert into t2 select * from t1@remote_db_link;" doesn't work with LOBs, but you responded with a "proof" to blow away this objection:

scott@ORA815.US.ORACLE.COM> insert into t select x, y from t@ora8i.world;

But I think your "proof" is deceptive. The 9i SQL Reference A96540-01 is pretty explicit about the limitations on remote LONGs and LOBs, and same applies in 10.1. As I understand it, a single SQL statement cannot contain multiple distributed references to LONGs/LOBs. i.e. "insert into t select x, y from t@ora8i.world;" will NOT work if this is between 2 remote systems (you will get an "ORA-00997: illegal use of LONG datatype" or similar error)

I suspect "ORA815.US.ORACLE.COM" and "ora8i.world" are not truly "remote"? In which case this statement will work ;-)

These restrictions on LONGs and LOBs in distributed scenarios have gotta go! When will they be treated like other built-in types? As a db user I don't want or need to care about internal issues of representation etc .. just that "if I can do it with an INT or VARCHAR, why not a LONG/LOB?". Increasingly we are dealing with large data objects - images, XML documents, video etc. But at the same time we are also getting more "distributed" - and these systems are not necessary all Oracle (may be an HS dblink). Already I see many people "solving" this problem outside the database - writing custom import/export/loader programs to deal with "non-simple" data types and structures etc

Got any insights on when we might see these limitations fall, or ways to work around them elegantly?

Tom Kyte
February 25, 2004 - 9:19 am UTC

they were two different databases -- I was logged into 815, I was selecting from 8i (another database, another machine).


they were "truly remote". Not sure what you mean there. They were two separate instances.

If you mean:

I want to log into A and copy LOB data from B to C, then you are correct, you cannot insert into t@b select lob from t@c however -- i would also say that you wouldn't really want to do that -- you really want to pull to A the lob data from B to squirt it up to C?

But anyway, yes, that limit is there -- but the "cannot use them at all" is not there.

RE: Remote LONGs/LOBs .. not so easy

Paul, February 26, 2004 - 3:11 am UTC

Thanks for the extra input, but no the "A to do B to C LOB transfer" is not what I had in mind. Actually, it is the issue of HS remote connections and the use of LONG types that is my immediate concern.

I've tabulated my findings below.
(o) "ora-ora DBLINK" means an Oracle to Oracle database link
(o) "ora-ora via HSODBC" means an Oracle to Oracle database link via Heterogeneous Services and the Oracle ODBC driver
(o) the "select" test is a simple select statement on the remote table object
(o) the "insert/select" test is a single SQL command combining local and remote objects, like "insert into t0 select a,b from t0@remote1;"

Looking at the findings, a clear conclusion is that it's the LONGs that are the problem in a remote connection scenario. This is exacerbated by the fact that when using Heterogeneous Services most large object types are translated as LONGs, not LOBs. So from that point on you are pretty much stuffed.

+--------------------+---------------------+
| ora-ora DBLINK | ora-ora via HSODBC |
+------+-------------+-------+-------------+
Type |select|insert/select|select |insert/select|
--------+------+-------------+-------+-------------+
int etc | ok | ok | ok | ok |
--------+------+-------------+-------+-------------+
LOBs | ok** | ok | ok*** | ORA-997* |
--------+------+-------------+-------+-------------+
LONG | ok | ORA-997* | ok*** | ORA-997* |
--------+------+-------------+-------+-------------+
* ORA-00997: illegal use of LONG datatype
** using dbms_lob procedures can result in error ORA-22992: cannot use LOB locators selected from remote tables
*** note that LOBs are translated to LONG/LONG RAW via HS ODBC and you are very limited in what you can do before running into either
ORA-22992: cannot use LOB locators selected from remote tables,
ORA-02070: database REMOTE2 does not support operator PLSFUN in this context, or
ORA-00997: illegal use of LONG datatype

Tom Kyte
February 26, 2004 - 10:04 am UTC

insert as select LONG doesn't even work on a single instance, never has.

ops$tkyte@ORA920PC> create table t1 ( x long );
Table created.
 
ops$tkyte@ORA920PC> create table t2 ( x long );
Table created.
 
ops$tkyte@ORA920PC> insert into t1 values ( 'hello world' );
1 row created.
 
ops$tkyte@ORA920PC> insert into t2 select * from t1;
insert into t2 select * from t1
                      *
ERROR at line 1:
ORA-00997: illegal use of LONG datatype



 

A reader, March 15, 2004 - 11:05 am UTC

Tom,

We are planning to move our couple of databases (9.2.0.4) from Unix(Sol) to Linux(Red Hat).

Database sizes are 800Gb and 1.5TB.

Since the operating systems are different we cannot use TTS; what do you suggest would be the best approach to move the data?

Thanks.

Tom Kyte
March 15, 2004 - 12:54 pm UTC

any possibility of 10g in the near future? then you can use TTS to move the data. Or at least high speed data dumps (new export/import processes)

Otherwise, you'll be doing a bit of manual work to migrate -- typically using table or schema level exports in parallel, reloading likewise on the back end with import in parallel, building the indexes "by hand" in parallel with NOLOGGING, and adding constraints with enable/rely (to make them go on "real fast", since you know the data is clean).
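The index and constraint part of that might be sketched as (table, column and constraint names are illustrative):

```sql
-- build the index fast and quiet
create index t_pk_idx on t(id) nologging parallel 8;

-- put the constraint on without re-checking the (known clean) data
alter table t
  add constraint t_pk primary key (id)
  rely using index t_pk_idx
  enable novalidate;
```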

Copy records in PL SQL

Prasad Gunaratne, October 15, 2004 - 1:54 am UTC

Table t's records should be moved to table t_history once the status = 'final' and payment is made.

create table t
(
id number,
status varchar2(10),
fee number,
is_fee_paid smallint,
/* another 70 more columns*/
)

-- history table of t. exact same column definitions
create table t_history
(
id number,
status varchar2(10),
fee number,
is_fee_paid smallint
/* another 70 more columns */
)


-- archive process scheduled by dbms_job
create or replace procedure archive_t as
begin

for t_rec in (select * from t where t.status='final'
and t.is_fee_paid= 1)
loop
insert into t_history values(t_rec.id,
t_rec.status,
t_rec.fee,
t_rec.is_fee_paid
/* all other 70 columns*/);
delete from t where id= t_rec.id;
commit;
-- need to copy 'child' tables of 't' before deleting parent 't' record.
-- one of them has 60 columns as well.
end loop;
end;

Is there any better way to write the "archive_t" procedure and make it more readable and short? I am looking for a way to copy the entire record at once without referring to individual column names.

Thank you.



Tom Kyte
October 15, 2004 - 11:22 am UTC

begin
insert into t_history
select * from t where ......;

delete from t
where id in (select id from t_history);
end;


assuming t_history is structurally the same as t of course.


do not even commit each record, ugh.

Copy records in PL SQL

Prasad Gunaratne, October 16, 2004 - 12:44 am UTC


Executing a bat file is not working with the 'copy' command

j.register, December 01, 2004 - 3:09 pm UTC

Trying to execute a 'copy' command sql that does run in native sqlplus. Why will the same .sql file not run in windows 2000 server from a .bat file? I am getting 'parsing' errors - sometimes cannot find the 'select' statement, other times if I break up the command into different lines I get an error that the 'INTO' command is missing. For COPY it is not needed and not in my sql:
truncate table new_table;
COPY FROM userid/password@db1 to userid/password@db2
insert new_table using select * from db1.old_table;
bat file:
set ORACLE_SID=db2
set ORACLE_HOME=c:\oracle\ora81
sqlplus userid/password@db2 @copy_table.sql

thanks for any help you may provide!
jacqueline.register@us.ngrid.com

Tom Kyte
December 01, 2004 - 7:43 pm UTC

does running the script itself manually work?

from the command line?

or just not from the bat file.

External Table, Oracle 10g Data Pump

Berton Coutinho, June 23, 2005 - 10:20 am UTC

Tom,

I am trying to create a flat file by extracting data from the app_allocation table using an external table. On executing the script below, Oracle creates the file as requested. However, I am unable to access this file on unix. Your help is highly appreciated.

Thanks

Berton


CREATE OR REPLACE DIRECTORY APP_DATA_DIR
AS '/USR2/TEST/APP/DATA';

CREATE table allocation_OUT
organization external
( type oracle_datapump
default directory app_data_dir
location ('test1.csv')
)
parallel 1
as
select *
from app_allocation
where app_brand = '03';

exit;

{} ls -l test1.csv
-rw-rw---- 1 oracle dba 1032192 Jun 23 09:44 test1.csv

{} vi test1.csv
"test1.csv" Permission denied



Tom Kyte
June 23, 2005 - 6:51 pm UTC

time to talk to your dba.... they will need to help you gain access to this file.

idea -- java stored procedure to safely chmod it


Bug in copy_table

VA, August 02, 2005 - 10:15 am UTC

FYI

The sqlplus COPY command seems to have a bug even in 9iR2 regarding number columns.

When I do

copy from user/pass@remote -
create t -
using -
select * from t;

And T has NUMBER columns, the COPY command silently converts it to NUMBER(38,0) so when the data is copied over, all my pennies are lost!

This has bitten me big time as I assumed that the copy would do a perfect copy. I looked everywhere else to find why my numbers were not matching.

Thanks

INSERT INTO .. SELECT

Asim Naveed, December 22, 2005 - 6:41 am UTC

Given that t2 and t1@remote_db_link have exactly
same columns, COLUMNA,COLUMNB with same datatypes and lengths.

Now consider the follwoing two statements
INSERT INTO t2(columnA, columnB)
SELECT columnA, columnB
FROM t1@remote_db_link;
---------------------------------
INSERT INTO t2 SELECT * FROM t1@remote_db_link;

Will the above two INSERT statements produce the same results?

I have a slight doubt about how the second INSERT statement
knows that t2.columnA = t1.columnA and t2.columnB = t1.columnB. Does it compare column names?
Does it compare column names?

Also, if t1 has columns c1 varchar2(10), c2 varchar2(10) and t2 has columns c3 varchar2(10), c4 varchar2(10), and I then do INSERT INTO t1 SELECT * FROM t2, what will happen?

Thanks




Tom Kyte
December 22, 2005 - 10:52 am UTC

by luck, yes they will - if

desc t2
desc t1

return the columns in the "same default order" (the order specified in the create table statements), it'll work by luck.

I would prefer you to be "explicit" however.
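To make the positional matching concrete, a small sketch using the c1..c4 tables from the question above:

```sql
create table t1 ( c1 varchar2(10), c2 varchar2(10) );
create table t2 ( c3 varchar2(10), c4 varchar2(10) );

insert into t2 values ( 'a', 'b' );

-- columns are matched strictly by position, never by name:
-- t2.c3 feeds t1.c1 and t2.c4 feeds t1.c2
insert into t1 select * from t2;
```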

having variable in USING query?

MJ, March 27, 2014 - 11:53 am UTC

Is there a way to use a SQL Plus variable in the query for "COPY FROM ... CREATE ... USING <query>"?

I'm thinking about a code like this:

variable vv_user varchar2(20);

exec :vv_user := 'SYS'

copy from a/a@db:1521/rac -
create yb_20140327_users (username) -
using select username from all_users where username=:vv_user;


unfortunately it returns

SP2-0502: select username from all_users where username=:vv_user
SP2-0503: *
SP2-0501: Error in SELECT statement: ORA-01008: Not all variables bound

Is there a way to overcome this?
Tom Kyte
March 28, 2014 - 6:43 am UTC

you can use a sqlplus substitution variable:

define foo=SYS

copy from scott/tiger@ora11gr2 -
create t (username) -
using select username from all_users where username='&foo';



insert into table

Samby, May 02, 2014 - 4:34 pm UTC


Hi Tom,

How do I handle exceptions in the below scenario?

insert into t2 select * from all_users;

My question is: suppose I have a column which needs a unique value, say object_name.

What I want is that there should be only one entry for that value, with the offending rows going to my error logging table, and it should still process all records.

Using SAVE EXCEPTIONS with SQL%BULK_EXCEPTIONS we can do that. Is there any other alternative?

Thanks a lot
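One alternative to SAVE EXCEPTIONS (available in 10g and up) is DML error logging: the single INSERT keeps going and the failing rows land in an error table. A sketch, with t2 as the target table as in the comment (the tag 'load 1' is illustrative):

```sql
-- one-time setup: creates the error table, named ERR$_T2 by default
exec dbms_errlog.create_error_log( dml_table_name => 'T2' )

insert into t2
select * from all_users
log errors into err$_t2 ( 'load 1' ) reject limit unlimited;
```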

Is COPY Command faster than INSERT APPEND /PARALLEL in SAME DB

Naveen, November 25, 2014 - 6:50 pm UTC

Is the COPY command any faster than direct path loads (the APPEND or PARALLEL hints) if we are inserting data within the same DB?
Tom Kyte
November 26, 2014 - 9:16 pm UTC

depends on the volume of data.

The copy command will perform logic much like:

begin
    open c for select * from t;
    loop
        fetch c bulk collect into <arrays> limit N;
        forall i in 1 .. <arrays>.count
            insert into t2 values ( ... );
        exit when c%notfound;
    end loop;
    close c;
end;




typically - just

insert into t2 select * from t;

will outperform that.

and insert /*+ APPEND */ into t2 select * from t

MIGHT outperform that as it will bypass the buffer cache, will skip a lot of undo generation and can also bypass redo log generation if desired.


If you have a smallish amount of data that fits into the buffer cache nicely - insert into t2 select * from t; might be a better fit than /*+APPEND*/. If you have a large amount of data - insert /*+APPEND*/ and possibly parallel query and maybe skipping redo generation would be more optimal