How to move tables from one schema to another?

Question and Answer

Connor McDonald

Thanks for the question, Prakash.

Asked: June 05, 2001 - 8:45 am UTC

Last updated: May 23, 2023 - 4:56 am UTC

Version: 8.1.5/8.1.6

Viewed 100K+ times!
You Asked

Tom,

My basic requirement is this: I need to take a dump of a schema, but I don't want to export certain tables that are really huge and that I don't need in the dump. Since there is no way to exclude tables in the export command, I created a new schema, moved these unneeded tables into it, and then took the export of the original schema. It works fine, except for the time it takes.

Problem: Each of these unneeded tables is really huge, with 50-80 million records. So, in order to move a table from one schema to another, I have (as far as I know) just two options:

1. Create table schema2.t1 as select * from schema1.t1;

2. Export these tables from schema 1 and use import with fromuser=schema1 and touser=schema2.

Currently I am using method 2, but because of these huge tables it takes a really long time, and also a huge amount of disk space to store the dump files temporarily (though I am not too worried about the disk space).

So, what I would like to know is: is there a way, in Oracle 8i and above, to just move a table from one schema to another? I am looking for some command like "rename t1 to t2", but across schemas.

Please advise.

Thanks
Prakash.

and Tom said...

you could just:

create table t1 unrecoverable as select * from other_schema.t1;

for each table you want (you could write a query to generate all of these statements, then delete the ones for tables you DON'T want to copy)
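
A minimal sketch of such a generator (OTHER_SCHEMA is illustrative; run it from the new schema, spool the output, delete the statements you don't want, then run the rest):

select 'create table ' || table_name ||
       ' unrecoverable as select * from other_schema.' ||
       table_name || ';'
  from all_tables
 where owner = 'OTHER_SCHEMA';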

Then:

exp userid=other_schema owner=other_schema ROWS=N
imp userid=new_schema fromuser=other_schema touser=new_schema IGNORE=Y

that'll get all of the indexes, constraints, triggers, packages, etc -- but no data.

Follow-up to comment:

No, there is not. If the volume of data is large and you want it to appear as if the table is in the other schema, you should use a view. The actual ownership of the table would remain with USER1, but USER2 would appear to "own" the table as well.
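
A minimal sketch of that approach (user and table names are illustrative; USER2 needs the CREATE VIEW privilege):

-- as user1:
grant select on t1 to user2;

-- as user2:
create view t1 as select * from user1.t1;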




Comments

Creating the tables does take a long time....

Prakash, June 05, 2001 - 11:00 pm UTC

create table t1 unrecoverable as select * from other_schema.t1;

The above statement takes quite a lot of time, because the volume of data that I am dealing with runs to 50-100 million records.

I wanted to know if there is any command that will just change the ownership of the table, without having to move the data across.

Large Table Suggestion

Matt, June 06, 2001 - 8:55 am UTC

It sounds like the tables you are working with are fairly large. I have used create table ... unrecoverable fairly successfully with tables in the 6-8 GB range and have found it loads fast, typically taking only about 20 minutes on a 6 GB table (note: on a 4-processor HP K-Class server with an EMC array). I would suggest the following possible enhancements to the create table ... unrecoverable approach (a combined sketch follows the list):
1) Use parallel query, i.e. create table x parallel (degree 5) as select * from y. Note: remember that each parallel query server will require the initial extent in the tablespace, so be careful to size the initial extent reasonably. You should probably use a locally managed tablespace.
2) If at all possible, try to distribute the I/O across multiple controllers.
3) Create the indexes unrecoverable, with parallel query.
4) Remember that if your database is in archivelog mode, you cannot recover the new table until your next hot/cold backup. You might want to export the table after creating it.
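
Putting those suggestions together, a minimal sketch (degree, schema, column and index names are all illustrative):

create table new_schema.t1
  parallel (degree 5)
  nologging
  as select * from old_schema.t1;

create index new_schema.t1_idx
  on new_schema.t1 (some_column)
  parallel (degree 5)
  nologging;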


Sriram, June 06, 2001 - 4:43 pm UTC

If your basic objective is to selectively export tables, you might do two exports:

exp username/password rows=n
/* to export the user with just the structures and objects */

followed by:

exp username/password tables=(tbl1, tbl2 ..)
/* spool table_name from user_tables,
delete your large-table entries,
cut and paste to get your list */

Import the files into schema 2, using the ignore=y option in the second import. The structures for those large tables may get into the second schema, but not the data.
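
A sketch of generating that table list from the dictionary (the spool file name is made up):

set heading off feedback off pagesize 0
spool tab_list.txt
select table_name from user_tables order by table_name;
spool off

Then delete the large tables from tab_list.txt and fold the rest into the tables=(...) clause of the second export.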

What if the "Source" table and "Target" table exist in same schema

Subra, December 19, 2001 - 4:15 pm UTC

You have an absolutely fantastic way of explaining hard concepts. Thanks always for your tips.
Here is a slightly different requirement, with a lot of constraints.
I have a "Source" table with about 20 million rows of data that I need to copy (100%) into a "Target" table. The "Target" table is empty at the beginning of the process.
If I were a DBA or had the "CREATE" privilege, I could have done:

CREATE TABLE TARGET UNRECOVERABLE
AS SELECT * FROM SOURCE;

Or if I had export/import privileges, I would use that technique.

Since I have no "CREATE" privilege, I wrote a program that opens a cursor (on source.rowid) on "Source" and inserts the data into "Target" a row at a time. It works, but takes a good 2 hours to complete the process.

Please review and suggest any better way in view of my constraints.

Thanks
Subra

Tom Kyte
December 19, 2001 - 4:50 pm UTC

insert /*+ append */ into target select * from source;


that'll do a direct path insert into target (no rollback/redo if you don't have any indexes on target). It'll be as fast as you can get. You can even do it in parallel; see the Server Concepts guide for details.
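
A sketch of the parallel variant (degree and table names are illustrative; parallel DML must be enabled in the session first, and a direct-path insert must be committed before the table can be read again):

alter session enable parallel dml;

insert /*+ append parallel(target, 4) */ into target
select /*+ parallel(source, 4) */ * from source;

commit;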


I have used the transportable tablespace feature for this kind of move

David Shi, December 19, 2001 - 5:57 pm UTC

If your large tables reside in separate tablespaces (I imagine they would, given the size of those tables), you can use the transportable tablespace feature to export the tablespace definition, drop the tables, and then import them back into a separate schema. The time needed is very minimal with this approach. Make sure to test it in your test database first, just in case.
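
A rough outline of those steps with the old exp/imp (tablespace, owner and file names are placeholders, credentials are omitted; verify the tablespace is self-contained first and check the parameters against your release):

-- in the source database, as a suitably privileged user:
exec dbms_tts.transport_set_check( 'BIG_TS', TRUE );
select * from transport_set_violations;

alter tablespace big_ts read only;

exp transport_tablespace=y tablespaces=big_ts file=big_ts.dmp

drop tablespace big_ts including contents;   -- leaves the datafiles on disk

imp transport_tablespace=y datafiles=/u01/big_ts01.dbf file=big_ts.dmp
    fromuser=old_owner touser=new_owner

alter tablespace big_ts read write;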

You may use a database link

Ash, December 19, 2001 - 8:02 pm UTC

You can simply use a database link to transfer tables from one schema to another.
Once you have created the database link, you can use: create table xyz as select * from xyz@database_link_name;
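
For example (link name, credentials and TNS alias are placeholders):

create database link other_db
  connect to scott identified by tiger
  using 'other_db_tns';

create table xyz as select * from xyz@other_db;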

another possibility

Andrei Sokolenko, December 20, 2001 - 9:10 am UTC

To quickly move big tables between schemas, I use the EXCHANGE PARTITION feature of Oracle 8i.

for example:

SQL> connect as user "A"

SQL> create table large_table
     (
       a number,
       b char,
       c date
     );

-- just for this example only. :)
SQL> grant ALL on large_table to "B";

SQL> connect as user "B"

SQL> create table large_table
     (
       a number,
       b char,
       c date
     )
     partition by range (a)
     (
       partition dummy values less than (maxvalue)
     );

Then you can use the following command to quickly move
"A.large_table" to "B.large_table":

SQL> connect as user "B";

SQL> alter table large_table exchange partition dummy
     with table A.large_table;

And to return it back to schema A:

SQL> alter table large_table exchange partition dummy
     with table A.large_table;
-- of course, it is the same SQL command

 

Tom Kyte
December 20, 2001 - 10:00 am UTC

very nice, cool idea.

Difference in the Execution Plan, Oracle 8.1.7.2

Ron, July 31, 2002 - 11:41 am UTC

Tom,
We exported schema1 to schema2 (a complete export/import) on the same instance. I have a SQL statement running in schema1 that hits a different instance (it joins 3 tables from schema1 and 3 tables from instance2, the remote one); it returns its results within 2 seconds and the plan looks good. Now I am trying the same SQL from schema2 (which is an exact replica of schema1) and it is taking 2 hours. I checked the plan and it is completely different. We analyzed all the tables in the SQL, all the indexes are there, and everything looks the same as schema1. Could you tell me what could be the reason for the difference in execution plan?
Thanks a lot
Ron

Tom Kyte
August 05, 2002 - 9:38 am UTC

how is the dblink created -- using a static proxy account or "connect as current user"? Are you querying the same three tables at the remote site?

What are the plans and what are the queries? And make SURE all of the same structures are there (indexes and all).

Difference in the Execution Plan, Oracle 8.1.7.2

Ron, August 05, 2002 - 3:03 pm UTC

Tom,
I checked everything: we have all the indexes and we also analyzed the tables. We are using public DB links.
The script we used to create this link is:
"CREATE PUBLIC DATABASE LINK REMOTE_INSTANCE_NAME
USING 'REMOTE_INSTANCE_NAME'". We are using the same tables and it is the same SQL as well.

Tom Kyte
August 06, 2002 - 8:12 am UTC

Well, the users are going to be different -- are you *sure* they are the same tables, and that the users have the *same* privs and all?

The users connecting to the remote database are different, since you do not have "connect to ... identified by ..." in the create database link.

What are the plans -- do a TKPROF and show us that (as the "good" user and as the "bad" user). My crystal ball is in the shop this week.
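
For reference, a minimal way to produce those TKPROF reports (trace and report file names are made up):

alter session set sql_trace = true;
-- run the query, then locate the trace file in user_dump_dest
-- and format it from the OS:

tkprof ora_12345.trc report.txt sys=no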

Milind bhavsar, August 07, 2002 - 4:43 am UTC

Hi Tom,
I have been reading this column for quite some time
and it is simply marvelous.
For this problem of moving between schemas: suppose you do the following to change the ownership of the EMP table from SCOTT to ADAMS.
What do you think about this? Is it safe enough to be done on production?

SYS@RSI-PUN-MILINDB> select username, user_id from dba_users ;

USERNAME                          USER_ID
------------------------------ ----------
SYS                                     0
SYSTEM                                  5
OUTLN                                  11
SCOTT                                  19
DBSNMP                                 16
ADAMS                                  20
JONES                                  21
CLARK                                  22
BLAKE                                  23
EM_REPO                                29
TEST                                   25
REPO                                   26
GEKO                                   28
TEST1                                  30
TEST2                                  31

15 rows selected.

SYS@RSI-PUN-MILINDB> select table_name from all_tables where owner = 'SCOTT' ;

TABLE_NAME
------------------------------
A
BONUS
DEPT
EMP
EMP2
PLAN_TABLE
SALGRADE
TMP1
TMP2
USERPWD

10 rows selected.

SYS@RSI-PUN-MILINDB> select table_name from all_tables where owner = 'ADAMS' ;

no rows selected

SYS@RSI-PUN-MILINDB> select OBJ#, OWNER# from obj$ where name like 'EMP' ;

      OBJ#     OWNER#
---------- ----------
      3174         19

SYS@RSI-PUN-MILINDB> UPDATE OBJ$ SET OWNER# = 20 WHERE OBJ# = 3174 ;

1 row updated.

SYS@RSI-PUN-MILINDB> select table_name from all_tables where owner = 'ADAMS' ;

TABLE_NAME
------------------------------
EMP

SYS@RSI-PUN-MILINDB> select table_name from all_tables where owner = 'SCOTT' ;

TABLE_NAME
------------------------------
A
BONUS
DEPT
EMP2
PLAN_TABLE
SALGRADE
TMP1
TMP2
USERPWD

9 rows selected.


Tom Kyte
August 07, 2002 - 2:27 pm UTC

that is the worst thing you can do.

You lose, game over player one.

That is the best way to totally corrupt your database.

Never, not under any circumstances -- without support telling you "do this update" -- will you EVER update the data dictionary.

It is the worst thing you can do (besides not using bind variables).


Don't do that, EVER. Period. Anytime you are tempted to update a SYS. table -- stop, don't.




Milind

Johan, August 07, 2002 - 11:26 am UTC

Milind, you have just broken your database and invalidated any support from Oracle by directly updating the data dictionary. Just think: who knows how many other data dictionary tables relate to this table and would also need to be updated to maintain consistency? Only Oracle knows the answer to that; therefore, you cannot update the data dictionary directly. Use only Oracle-supplied commands to update the data dictionary indirectly, and in this case use the good advice supplied by Tom and the other contributors.

Was this a simpler question?

Shrek, August 07, 2002 - 5:26 pm UTC

Tom may have overlooked the beginning part of your question.

You said:
"Since there is no way to exclude tables in the export command"
and then you went on and asked a more difficult question.

I guess the way to do it is a table-level export? Am I missing something here?


Tom Kyte
August 07, 2002 - 6:44 pm UTC

If you use a table level export, you miss all of the other schema objects that might be also interesting like views, sequences, procedures, packages, etc.

p001, January 30, 2003 - 7:07 pm UTC

What is the difference between unrecoverable and nologging?

create table t1 unrecoverable as select * from t2;

create table t1 nologging as select * from t2;





Tom Kyte
January 30, 2003 - 7:25 pm UTC

they both inhibit redo generation for certain operations -- unrecoverable is no longer documented and is considered "obsolete" -- there is a subtle difference, however:


ops$tkyte@ORA920> @mystat "redo size"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo size')||'%'

NAME                                VALUE
------------------------------ ----------
redo size                          155196

ops$tkyte@ORA920> create table t1 UNRECOVERABLE as select * from all_objects;

Table created.

ops$tkyte@ORA920> @mystat "redo size"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo size')||'%'

NAME                                VALUE
------------------------------ ----------
redo size                          227724

ops$tkyte@ORA920> create table t2 NOLOGGING as select * from all_objects;

Table created.

ops$tkyte@ORA920> @mystat "redo size"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo size')||'%'

NAME                                VALUE
------------------------------ ----------
redo size                          296140

ops$tkyte@ORA920> select table_name, logging from user_tables where table_name like 'T_';

TABLE_NAME                     LOG
------------------------------ ---
T1                             YES
T2                             NO


they generated about the same amount of redo (this is in archive log mode; normally, this would have generated 3.4 meg of redo), but the LOGGING mode is set differently
 

After importing to another schema, what about the constraints?

Junior, October 15, 2003 - 9:09 am UTC

Tom:

After importing tables into other schemas, what should I do to get the constraints working?
1.
Export schema1.t1 and schema1.t2.
Export schema1.t3, where t3 is a child of t1.

2.
Import t1 and t2 into schema2.
Import t3, still under schema1.

3.
The foreign key between t1 and t3 is gone.

What should I do to get it back, instead of running scripts to recreate it?

Thank you!

Tom Kyte
October 15, 2003 - 6:25 pm UTC



you should export t1, t2, t3 as a set!

if you want to move things about like this -- from schema to schema -- you are going to be doing manual changes. You've totally just moved t1 -- we cannot "guess" what t1 might be or where it might be. There could be 10,000 t1's to choose from, and even if there were just one t1, how could we assume it was the "right" one?

Frequent drop user create user

Thaha Hussain, April 17, 2004 - 4:36 am UTC

Dear Tom,

In our company we frequently do the following:

1. Bring dmp files from customers (say, xyzComp.dmp)
2. Drop the user, if it exists (xyzComp)
3. Create the schema (xyzComp)
4. Grant privileges (always the same)
5. Import the dmp file into the user (xyzComp)

I have to automate the above task. The dmp file name, user name, default tablespace and temporary tablespace should be parameterized. The procedure should check whether the user exists already, etc.

What is the best way of doing this? Please explain.

Thanks,
Thaha Hussain, Bahrain

Tom Kyte
April 17, 2004 - 9:55 am UTC

it'll be an OS script. You'll be writing a program external to the database, in the language you are most comfortable with.
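
One possible shape for it, as a parameterized SQL*Plus script driven from the OS (a sketch only: the user name, tablespaces, dump file, password handling and grant list are all placeholders):

-- recreate_and_import.sql -- run as a DBA, e.g.:
--   sqlplus system @recreate_and_import xyzcomp users temp xyzComp.dmp
set verify off
define uname   = &1
define def_ts  = &2
define tmp_ts  = &3
define dmpfile = &4

-- drop the user only if it already exists
begin
    execute immediate 'drop user &uname cascade';
exception
    when others then
        if sqlcode != -1918 then raise; end if;  -- ORA-01918: user does not exist
end;
/

create user &uname identified by &uname
  default tablespace &def_ts
  temporary tablespace &tmp_ts;

grant connect, resource to &uname;

host imp system fromuser=&uname touser=&uname file=&dmpfile log=&uname..log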


Moving table from schema S1 to schema S2.

A reader, August 03, 2004 - 10:29 pm UTC

Tom,
I have a table T1 in schema S1. Table T1 is referenced by a few other tables in schema S1 and also by tables in other schemas (say schemas S2 and S3).

Schema name: S1
Table name: T1
PK Column name: T1_C1

Schema name S1
Table name: T2
Column T1_C1 is a FK column coming from table T1

Schema name S2
Table name: T3
Column T1_C1 is a FK column coming from table S1.T1

Schema name S3
Table name: T4
Column T1_C1 is a FK column coming from table S1.T1

Public synonyms are present for all tables.

The requirement I have is to move table S1.T1 to a new schema, say S4, with the same table name, i.e. S4.T1.

The option I was thinking of is:
1) Create the grant reference script for table S1.T1
2) Create the FK scripts for table S1.T1
3) Export table T1 from schema S1 to schema S4, or create the table with an appropriate SQL script
4) Execute scripts #1, #2 and #3
5) Drop the public synonym for table T1 in schema S1
6) Create a public synonym for table T1 in schema S4

I would like to get your opinion on the best way of achieving this with the least impact.

Thank you


Tom Kyte
August 04, 2004 - 9:14 am UTC

it is going to be impactful -- this will be a totally "down" operation (offline, no activity on the affected tables). I'd think 14 or 15 times about whether moving the table is really necessary.

But anyway, yes: you would need to get the fkeys so you can recreate them, drop the public synonym, move the table (create table as select), reindex the table, re-grant on the table and so on -- and then create your public synonym and add the fkeys back.
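
A dictionary query that finds the foreign keys to recreate (using the S1.T1 names from the question); dbms_metadata.get_ddl('REF_CONSTRAINT', constraint_name, owner) could then extract the DDL for each one:

select owner, table_name, constraint_name
  from dba_constraints
 where constraint_type = 'R'
   and r_owner = 'S1'
   and r_constraint_name in ( select constraint_name
                                from dba_constraints
                               where owner = 'S1'
                                 and table_name = 'T1'
                                 and constraint_type in ('P','U') );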

Exchange partition

A reader, August 04, 2004 - 9:22 am UTC

The exchange partition idea from one of the readers was nice, but I didn't quite get this:

"Then you can use the following command to quickly move 
"A.large_table" to "B.large_table"

SQL> connect as user "B";

SQL> alter table large_table exchange partition dummy
     with table A.large_table;

And return it back to schema A:

SQL> alter table large_table exchange partition dummy
     with table A.large_table;
-- of course, it is the same SQL command"


The first one does what is needed, i.e. puts a.large_table into the dummy partition.

Why is the second alter table needed?

Tom Kyte
August 04, 2004 - 10:49 am UTC

the second "puts it back"

he just showed how to move a table from A to B and back again.

Excluding large tables during Export

Sunil K Shetty, August 05, 2004 - 3:54 am UTC

How about using either of these two methods to exclude large tables during export?

Method 1 (dangerous, but it can still be used with due care):
Modifying (after taking a backup) the view SYS.EXU9TABU (Oracle 9i) or SYS.EXU81TABU (Oracle 8i).

create table t_small (a number);

create table t_large (a number);

insert into t_small select rownum from all_objects where rownum < 11;

10 rows created.

insert into t_large select rownum from all_objects;

36347 rows created.

SQL> commit;

Commit complete.

SQL> host exp test/test file=test.dmp log=test.log owner=test
Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

About to export specified users ...
<output truncated>
 about to export TEST's tables via Conventional Path ...
. . exporting table                        T_LARGE      36347 rows exported
. . exporting table                        T_SMALL         10 rows exported
. . .
<output truncated>

Export terminated successfully without warnings.
---Now connect as SYS
SQL> conn / as sysdba
Connected.
SQL> create or replace VIEW SYS.EXU9TABU ( OBJID, DOBJID, NAME, OWNER, OWNERID, TABLESPACE, TSNO, FILENO, BLOCKNO, AUDIT$, COMMENT$, CLUSTERFLAG, MTIME, MODIFIED, TABNO, PCTFREE$, PCTUSED$, INITRANS, MAXTRANS, DEGREE, INSTANCES, CACHE, TEMPFLAGS, PROPERTY, DEFLOG, TSDEFLOG, ROID, RECPBLK, SECONDARYOBJ, ROWCNT, BLKCNT, AVGRLEN, TFLAGS, TRIGFLAG, OBJSTATUS ) AS
SELECT  "OBJID","DOBJID","NAME","OWNER","OWNERID","TABLESPACE","TSNO","FILENO","BLOCKNO","AUDIT$","COMMENT$","CLUSTERFLAG","MTIME","MODIFIED","TABNO","PCTFREE$","PCTUSED$","INITRANS","MAXTRANS","DEGREE","INSTANCES","CACHE","TEMPFLAGS","PROPERTY","DEFLOG","TSDEFLOG","ROID","RECPBLK","SECONDARYOBJ","ROWCNT","BLKCNT","AVGRLEN","TFLAGS","TRIGFLAG","OBJSTATUS"
FROM    sys.exu9tab
WHERE   ownerid = UID
and name not in('T_LARGE');  <<== To exclude Large table

View created.

SQL> host exp test/test file=test_one.dmp log=test_one.log owner=test

About to export specified users ...
. about to export TEST's tables via Conventional Path ...
<output truncated> 

. . exporting table                        T_SMALL         10 rows exported

<output truncated>
Export terminated successfully without warnings.

After the export recreate the view without 
" and name not in('T_LARGE') " condition

Method 2: Using FGAC

CREATE OR REPLACE FUNCTION hide_table( p_schema varchar2, p_table varchar2 ) RETURN varchar2 IS
  user_context varchar2(1000);
BEGIN
  IF SYS_CONTEXT('USERENV','SESSION_USER') = 'TEST' THEN
    user_context := '1=2';
  ELSE
    user_context := '';
  END IF;
  RETURN user_context;
END hide_table;
/

--Add a policy to hide the rows in table T_LARGE
execute dbms_rls.add_policy('TEST','T_LARGE','NO_EXPORT','TEST','HIDE_TABLE');

Now if we export the schema TEST, zero rows are exported for the T_LARGE table. After the export, drop the policy:

execute dbms_rls.drop_policy('TEST','T_LARGE','NO_EXPORT');

Do remember to change the view back or drop the policy immediately after the export.

 

Tom Kyte
August 05, 2004 - 12:50 pm UTC

FGAC is a cool way I've proposed before.


I would not touch the export views.

Create table based on a table in a different database

baba, August 05, 2004 - 4:19 pm UTC

Thank you very much for all input ...

To create a table based on a table in a diff. database:
- export that table to access then import it back to the database schema that needs it

Is there a better way to create a table based on a table in a different database?



Tom Kyte
August 05, 2004 - 8:37 pm UTC

"to access"????


how about

create table t as select * from t@other_database;



ORA-904

A reader, November 09, 2004 - 11:11 pm UTC

Oracle 9.2.0.4

I have a full database export (exp full=y). I want to extract 10 tables belonging to 2 schemas into a 3rd, new schema. I create the new schema and do:

file=full.dmp
fromuser=(a,b)
touser=(c,c)
tables=(t1,t2,t3,...,t10)

Everything works fine and the logfile shows no errors, but when I try to access 4 of the 10 tables, I get ORA-904.

Even when I do

select 1 from c.t6, I get

ORA-00904: invalid identifier

desc c.t6 works fine.

When I do it one pair at a time like

fromuser=a
touser=c
tables=(t1,t5,t6,t10)


fromuser=b
touser=c
tables=(t2,t3,t4,t7,t8,t9)

everything works fine

Is this a bug? Or am I missing something?

Thanks

Tom Kyte
November 10, 2004 - 7:21 am UTC

I did this:

@connect /

drop user a cascade;
drop user b cascade;
drop user c cascade;

grant dba to c identified by c;
grant dba to b identified by b;
grant dba to a identified by a;

@connect a/a
create table t1 ( x int );
create table t5 ( x int );
create table t6 ( x int );
create table t10 ( x int );

@connect b/b
create table t2 ( x int );
create table t3 ( x int );
create table t4 ( x int );
create table t7 ( x int );
create table t8 ( x int );
create table t9 ( x int );

@connect /
!exp userid=/ full=y


and then:

!imp userid=/ 'fromuser=(a,b)' 'touser=(c,c)' 'tables=(t1,t2,t3,t4,t5,t6,t7,t8,t9,t10)'


and logged in as C:

c@ORA9IR2> select object_type, object_name,
  2         decode( status, 'INVALID', '*', '' ) status,
  3         decode( object_type,
  4           'TABLE', (select tablespace_name from user_tables where table_name = object_name),
  5           'INDEX', (select tablespace_name from user_indexes where index_name = object_name),
  6           'LOB', (select tablespace_name from user_segments where segment_name = object_name),
  7           null ) tablespace_name
  8    from user_objects a
  9   order by object_type, object_name
 10  /

OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
TABLE        T1                               SYSTEM
             T10                              SYSTEM
             T2                               SYSTEM
             T3                               SYSTEM
             T4                               SYSTEM
             T5                               SYSTEM
             T6                               SYSTEM
             T7                               SYSTEM
             T8                               SYSTEM
             T9                               SYSTEM

10 rows selected.

c@ORA9IR2> column status format a10
c@ORA9IR2> @rset
c@ORA9IR2> set termout off
c@ORA9IR2> select 1 from c.t6;

no rows selected


so, no, I cannot reproduce the issue on my end




"How to move tables from one schema to another ?",

Chares Crow, October 06, 2005 - 10:41 am UTC

Unbelievable!
This is the best information.
I got your book too.
Keep it coming!


Exchange Partition

A Reader, December 07, 2005 - 12:01 pm UTC

Another possible approach to move a very large table across schemas ('A' to 'B'), mentioned above, is very efficient, but it leaves you with a partitioned table with a 'dummy' partition in schema 'B', as compared to the non-partitioned table in schema 'A'. Now, what is the most efficient way to convert it back to a regular table, just like the source?

Please advise,
Thanks

Tom Kyte
December 08, 2005 - 1:29 am UTC

just swap one more time...

ops$tkyte@ORA9IR2> connect a/a
Connected.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table large_table ( a number, b varchar2(1), c date );

Table created.

ops$tkyte@ORA9IR2> insert into large_table values ( 1, 1, sysdate);

1 row created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> connect b/b
Connected.
ops$tkyte@ORA9IR2> create table temp ( a number,  b varchar2(1),    c date)
  2  partition by range (a)
  3  (
  4    partition dummy values less than (maxvalue)
  5  );

Table created.

ops$tkyte@ORA9IR2> create table large_table ( a number, b varchar2(1), c date );

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table temp exchange partition dummy with table A.large_table;

Table altered.

ops$tkyte@ORA9IR2> alter table temp exchange partition dummy with table large_table;

Table altered.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from large_table;

         A B C
---------- - ---------
         1 1 08-DEC-05

 

Alberto Dell'Era, December 08, 2005 - 11:11 am UTC

> just swap one more time...
> ...
> alter table temp exchange partition dummy with table A.large_table;
> alter table temp exchange partition dummy with table large_table;

another possible application for "exchange segment":
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#49216618020078

What happens when I update the data dictionary?

MERY, January 25, 2006 - 2:14 am UTC

Hi Tom

I have a table with a primary key constraint in the SYS schema, with 500 GB of storage. I want to move it to the USER1 schema in limited time (I have a time constraint for this action). What happens if I change the owner of my table and its index in the OBJ$ table, and my table's constraint in the CON$ table? What type of corruption may occur?

Tom Kyte
January 25, 2006 - 1:32 pm UTC

all kinds of bad things could happen.

there is quite simply no "change the ownership of this object to some other user".

You will have to rebuild this object.

DO NOT update your data dictionary unless and until support says "do this"

Dave, February 21, 2006 - 4:04 pm UTC

for a laugh -- read this from the oracle-l list :-)

http://www.freelists.org/archives/oracle-l/02-2006/msg00788.html

Tom Kyte
February 22, 2006 - 8:23 am UTC

ouch, that hurts.

A reader, April 06, 2006 - 3:15 pm UTC

Tom, I was trying to compile some selected packages into my personal schema for doing some development/unit testing. (Actually I need only 1 package out of them, but due to the dependencies I need to get about 25 of them.) It is such a painful process to compile package after package body for 25 packages. I cannot export/import because I don't have the import_full_database role or the dba role. I am sure many developers have been in this situation. It would be really convenient if there were a command like 'create package x.pkg1 as copy of y.pkg1' (like CTAS for tables).

Is there any easier way to copy program units from one schema to another (other than exp/imp, or getting the code and recompiling)? If not, how can I file an enhancement request for the above command? Your help is much appreciated.

Tom Kyte
April 08, 2006 - 8:22 am UTC

here is a first stab -- it could use better "debug"; for example, if the select ... bulk collect doesn't return anything, it should print out "couldn't see that code" and return, rather than fail (which it will).

A layer on top of this routine could be built to copy a package + package body in a single call, and so on -- but it gets you started:




create or replace
procedure copy_code( p_owner         in varchar2,
                     p_name          in varchar2,
                     p_type          in varchar2,
                     p_preserve_case in boolean default FALSE )
AUTHID CURRENT_USER
as
    l_theCursor integer;
    l_owner     varchar2(30) := p_owner;
    l_name      varchar2(30) := p_name;
    l_code      dbms_sql.varchar2a;  -- varchar2s in older releases
begin
    execute immediate 'set role all';
    l_theCursor := dbms_sql.open_cursor;
    if ( NOT p_preserve_case )
    then
        l_owner := upper(l_owner);
        l_name  := upper(l_name);
    end if;

    -- pull the source of the unit we are copying
    select text
      bulk collect into l_code
      from all_source
     where name = l_name
       and owner = l_owner
       and type = upper(p_type)
     order by line;

    -- all_source starts at "procedure ..."/"package ...", so prepend this
    l_code(0) := 'create or replace ';

    dbms_sql.parse( l_theCursor,
                    l_code,
                    0, l_code.count-1,
                    FALSE,
                    dbms_sql.native );

    dbms_sql.close_cursor( l_theCursor );
exception
    when others
    then
        if ( dbms_sql.is_open(l_theCursor) )
        then
            dbms_sql.close_cursor(l_theCursor);
        end if;
        RAISE;
end;
/
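
A hypothetical invocation, copying a package spec and then its body (because of AUTHID CURRENT_USER, the copies compile into the calling schema):

exec copy_code( 'SCOTT', 'MY_PKG', 'PACKAGE' )
exec copy_code( 'SCOTT', 'MY_PKG', 'PACKAGE BODY' )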


from the same "Reader" as above

A reader, April 06, 2006 - 3:17 pm UTC

Just wanted to ask: is there any way to use dbms_metadata to programmatically compile the program units from one schema into another?

Tom Kyte
April 08, 2006 - 8:24 am UTC

see above, gave code snippet.

So move partitioned tables similarly...

Duke Ganote, November 28, 2009 - 4:16 pm UTC

So partitioned tables could be moved across schemas similarly? Here I show the idea in just one schema, but it looks like it's applicable across schemas:

sql> create table source_partitioned
    ( a number,  b varchar2(1),    c date)
     partition by range (c)
    ( partition p1 values less than (date '2009-01-01')
    , partition pmax values less than (maxvalue)
    );
sql> create table target_partitioned
    ( a number,  b varchar2(1),    c date)
     partition by range (c)
    ( partition p1 values less than (date '2009-01-01')
    , partition pmax values less than (maxvalue)
    );
sql> create table partition_holder ( a number, b varchar2(1) , c date );
sql> insert into source_partitioned values ( 1, 1, date '1998-01-01');
sql> insert into source_partitioned values ( 2, 2, sysdate);
sql> alter table source_partitioned exchange partition p1 with table partition_holder;
sql> alter table target_partitioned exchange partition p1 with table partition_holder;
sql> select * from source_partitioned;

         A B C
---------- - -------------------
         2 2 2009-11-28 16:54:12
sql> select * from target_partitioned;

         A B C
---------- - -------------------
         1 1 1998-01-01 00:00:00
sql> alter table source_partitioned exchange partition pmax with table partition_holder;
sql> alter table target_partitioned exchange partition pmax with table partition_holder;
sql> select * from source_partitioned;
sql> select * from target_partitioned;

         A B C
---------- - -------------------
         1 1 1998-01-01 00:00:00
         2 2 2009-11-28 16:54:12

pretty fast

Duke Ganote, November 29, 2009 - 5:52 am UTC

Operating on unindexed partitions, it took about 8 minutes to swap about a gross (about 144) of partitions between schemas. It had taken about 6 hours to load the data initially.

BEGIN
FOR REC IN (
  SELECT partition_name
    FROM dba_tab_partitions
   WHERE table_owner = 'SRC'
     AND Table_name = 'XCOVERAGE'
     AND partition_name between 'PAR_1998_02'
                            and 'PAR_2009_09'
   ORDER BY 1 ) LOOP
  -- move partition from source to temp
  EXECUTE IMMEDIATE
    ' alter table src.xcoverage exchange partition '
    ||rec.partition_name||'  with table dpg_cvg_tmp';
  -- move partition from temp to target
  EXECUTE IMMEDIATE
    ' alter table dpg_cvg_par exchange partition '
    ||rec.partition_name||' with table dpg_cvg_tmp';
END LOOP;
END;
/
Elapsed: 00:08:13.59
select count(*) from src.xcoverage;

       COUNT(*)
---------------
              0
Elapsed: 00:00:01.60
d1:cidw\cidwwrite> select count(*) from dpg_cvg_par;

       COUNT(*)
---------------
    167,074,754
Elapsed: 00:00:51.93

majid, February 23, 2010 - 12:50 am UTC

thanks tom,

for the help

partition movement

A reader, March 10, 2011 - 12:46 am UTC

how would you exchange partitions between two tables with different structures?
Tom Kyte
March 10, 2011 - 10:30 am UTC

trick question.

You cannot, of course; it would not even begin to make a tiny bit of sense to do so.

shiva, September 16, 2011 - 6:39 am UTC

Hi Tom,
Hope you are doing well. I want to move some of my custom tables and sequences from one schema to another schema. I want to know the process that drops the objects in the current schema and creates them in the other schema, exactly as the originals.

Thanks in advance.
Tom Kyte
September 16, 2011 - 2:14 pm UTC

data pump export and data pump import (or even the old exp/imp) can do that.

eg:

$ exp owner=a
$ imp fromuser=a touser=b

reply for above

shiva, September 21, 2011 - 8:10 am UTC

Hi Tom, thanks for the reply. I want to move some selected objects from schema to schema. Could you give me a sample script?
Tom Kyte
September 21, 2011 - 9:44 am UTC

use Data Pump export, or the original export: export the tables in question, import them into the other schema, and drop the old ones.

it is that simple.
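
A hedged example with Data Pump (directory, dump file and schema names are placeholders):

expdp scott directory=dp_dir dumpfile=t1t2.dmp tables=t1,t2
impdp system directory=dp_dir dumpfile=t1t2.dmp remap_schema=scott:blake

-- once the copies are verified:
drop table scott.t1 purge;
drop table scott.t2 purge;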


Move tables to new schema

Mike, May 11, 2023 - 5:11 am UTC

Greetings!

We need to move a list of tables from one schema to a new schema. We have applications referring to the old schema name in their query and DML code. Temporarily, to let the applications continue to work, we will use a synonym or view to reference each migrated table in the new schema. This way, I think the applications can still work without any changes.

Which one is more efficient and best practice, a synonym or a view, in step 4 below?

So the approach we are thinking of is:

1. Data Pump export/import those tables to the new schema

2. Drop those tables in the old schema

3. Restore the grants to application schemas/user accounts/roles on the new schema

4. Create a private synonym in the old schema pointing to each corresponding moved table in the new schema,

or

create a view in the old schema pointing to each corresponding moved table in the new schema.


Thank you
Connor McDonald
May 16, 2023 - 2:57 am UTC

Either is fine; there are pros and cons to each.

Sometimes people prefer views because:

a) the tools they use interrogate "user_tab_columns" etc to find definitions, and
b) if someone drops/renames the source object, the view gets a status of "invalid", which makes the problem more readily visible.

Sometimes people prefer synonyms because views do not automatically reflect table changes, eg:

SQL> create table t ( x int, y int );

Table created.

SQL> create or replace
  2  view my_view as select * from t;

View created.

SQL> desc my_view
 Name                          Null?    Type
 ----------------------------- -------- -----------------
 X                                      NUMBER(38)
 Y                                      NUMBER(38)

SQL> alter table t add z int;

Table altered.

SQL> desc my_view
 Name                          Null?    Type
 ----------------------------- -------- -----------------
 X                                      NUMBER(38)
 Y                                      NUMBER(38)
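
By contrast, a synonym resolves to the underlying table's current definition each time it is used, so the new column shows up immediately (an illustrative continuation of the same example; the synonym name is made up):

SQL> create or replace synonym my_syn for t;

Synonym created.

SQL> desc my_syn
 Name                          Null?    Type
 ----------------------------- -------- -----------------
 X                                      NUMBER(38)
 Y                                      NUMBER(38)
 Z                                      NUMBER(38)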


Mike, May 23, 2023 - 2:12 am UTC

Thanks Connor
Connor McDonald
May 23, 2023 - 4:56 am UTC

glad we could be of service
