Question and Answer

Tom Kyte

Thanks for the question, Robert.

Asked: January 21, 2009 - 1:11 pm UTC

Last updated: November 10, 2011 - 5:34 pm UTC

Version: 10.2.0

Viewed 10K+ times

You Asked

Hi Tom,

What is best way to 'refresh' a schema from one database to another?

Here is what I am doing and I feel there ought to be a better/easier way...

1. Export (expdp) USER1 schema from prod
2. Determine USER1 privileges, etc. on DEV database.
3. Determine grants to USER1 objects on DEV database.
4. On DEV...Drop USER1 cascade.
5. Create USER1 with same defaults and privs
6. Import USER1 schema from prod.
7. Grant any privs which previously existed in DEV but not in PROD

Is there an easier, more robust, and/or straightforward way to do this?

Thank you,

Robert.

and Tom said...

I would use rman to transport the tablespaces that contain the schema's data

http://docs.oracle.com/cd/B19306_01/backup.102/b14191/ontbltrn.htm#CACFBHIE

it'll create them from your backups, which is yet further testing of your backups

user1 - one would presume - would "be there" or would be "installable" (your developers have an install script - don't they?)

if they do not, they should - have them create that and maintain it. How else do the changes get into production?
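For reference, a minimal sketch of the backup-based transport, assuming the schema's data lives in a single tablespace called USER1_DATA (the tablespace name and directories here are made up):

TRANSPORT TABLESPACE user1_data
  TABLESPACE DESTINATION '/u01/transport'
  AUXILIARY DESTINATION '/u01/aux';

RMAN restores the tablespace from your backups into a temporary auxiliary instance and leaves a Data Pump dump file plus the datafiles in the transport destination, ready to plug into the other database with impdp and TRANSPORT_DATAFILES.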


Comments

Need all objects not just segments

Robert, January 21, 2009 - 3:35 pm UTC

Hi Tom,

But I need the procedures, sequences, etc. too.

Thanks,

Robert.
Tom Kyte
January 21, 2009 - 5:51 pm UTC

again, I would presume you have the INSTALL SCRIPTS, wouldn't you?

if you don't - you need to get them. (configuration management)

it seems extremely fishy to extract from production to populate development, code-wise.

All objects followup

pws, January 21, 2009 - 4:12 pm UTC

To repeat Tom's point,

"user1 - one would presume - would "be there" or would be "installable" (your developers have an install script - don't they?)

if they do not, they should - have them create that and maintain it. How else do the changes get into production? "


This is yet another validation of development and implementation best practices. Not only have you tested backups by restoring a tablespace, but you have also validated the user-create and source-creation scripts.

They are in a source code repository right....right?


Yes... but...

Robert, January 21, 2009 - 5:15 pm UTC

Yes, they should (and likely do) have their procedures, etc. in a source code repository from which they could be recreated in the database.

However, what about the sequences? They will not match the current/next values in the tables - you could get duplicate ids (i.e. unique constraint violations when inserting) if we recreate the sequences from scripts.

Also, exp/imp from prod is an easier and quicker way to refresh database code in my scenario: just refresh everything all at once with a single command.

Tom Kyte
January 21, 2009 - 6:01 pm UTC

easy enough to do the sequences

ops$tkyte%ORA10GR2> exec DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select dbms_metadata.get_ddl('SEQUENCE',sequence_name,user) ddl_col FROM user_sequences
  2  /

DDL_COL
-------------------------------------------------------------------------------

   CREATE SEQUENCE  "OPS$TKYTE"."S"  MINVALUE 1 MAXVALUE 9999999999999999999999
99999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER  NOCYCLE ;



ops$tkyte%ORA10GR2> drop table t;

Table dropped.

ops$tkyte%ORA10GR2> create table t as select s.nextval id from all_objects;

Table created.

ops$tkyte%ORA10GR2> select dbms_metadata.get_ddl('SEQUENCE',sequence_name,user) ddl_col FROM user_sequences
  2  /

DDL_COL
-------------------------------------------------------------------------------

   CREATE SEQUENCE  "OPS$TKYTE"."S"  MINVALUE 1 MAXVALUE 9999999999999999999999
99999 INCREMENT BY 1 START WITH 49841 CACHE 20 NOORDER  NOCYCLE ;



ops$tkyte%ORA10GR2> select max(id) from t;

   MAX(ID)
----------
     49838
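To turn that output into a script you can run on the other database, just spool it - a sketch (the spool file name is made up; the SQLTERMINATOR transform set at the top is what puts the ";" on each statement):

set long 100000 pagesize 0 linesize 200 trimspool on feedback off
spool recreate_sequences.sql
select dbms_metadata.get_ddl('SEQUENCE',sequence_name,user) from user_sequences;
spool off

Then run recreate_sequences.sql on DEV after dropping the old sequences there.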




You asked for my opinion, I gave it. I'd rather not have to touch production really if possible. Why bother it?

Don't touch production?

Robert, January 21, 2009 - 7:35 pm UTC

Aren't you running your dbms_metadata against production?
Tom Kyte
January 22, 2009 - 8:34 am UTC

tell me - how much of a touch do you anticipate that being

versus running an export to full scan all of the data?

You are, obviously, free to do what you want - but you asked me. And I'd use backups.

I wouldn't even bother with this single schema thing, just restore the backup (system and all) if you want it "easy"

Thanks!

Robert, January 22, 2009 - 12:59 pm UTC

Tom,

You are my main philosophical mentor/guru when it comes to IT/Oracle stuff.
I most very highly value your opinion.
Thank you for your generous help!

Thanks,

Robert.

To Robert

Alexander, January 22, 2009 - 1:09 pm UTC

Robert, I am looking to do the same thing you are. You might want to look at duplicate database. It gives you the option of skipping tablespaces you don't want to copy. Since duplicate database does not allow you to skip system, your code would come over as well.

I am not having much luck getting it working though; I have a case open with Tom on another thread ;)

more robust schema refresh using export/import

Tom Evert, January 23, 2009 - 8:42 am UTC

This is a slightly better way.

1. Export USER1 schema from prod
2. Drop USER1 objects on the DEV database but not the user. I drop everything except database links, since these should be different in my development environment (at least the passwords should be different):
SELECT 'DROP ' || OBJECT_TYPE || ' ' || OWNER || '.' || OBJECT_NAME || ';'
FROM DBA_OBJECTS
WHERE OWNER = 'SCHEMAOWNER'
and OBJECT_TYPE <> 'DATABASE LINK';
You may have to run this script multiple times until all the objects have been deleted. Sometimes I have to append a "cascade" to get an object to drop.
3. Import USER1 schema on DEV database. You will get errors because the database links exist but that's expected.
4. Recompile invalid objects (a sketch follows below).

You are done and you did not change any permissions in the target DEV database.
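For step 4, the recompile can be a single call instead of a hand-written loop - a sketch using UTL_RECOMP (shipped with 10g); the schema name is the same placeholder as above:

exec utl_recomp.recomp_serial('SCHEMAOWNER');

(dbms_utility.compile_schema('SCHEMAOWNER') would do the job as well.)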



use datapump

keshta, January 23, 2009 - 9:44 am UTC

Hi,

you can use impdp over a dblink to replace your objects:

impdp x/p NETWORK_LINK=dblink INCLUDE=TRIGGER,SEQUENCE,FUNCTION,..... TABLE_EXISTS_ACTION=REPLACE

Is this possible?

Russ Cannon, January 26, 2009 - 5:44 pm UTC

Is it possible to import directly into a target database during an export from the source without creating a dump file in the process?
Tom Kyte
January 28, 2009 - 8:12 am UTC

using data pump (10g and above) yes.

using exp/imp - no.
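A sketch of the file-less Data Pump approach, assuming a database link named PROD_LINK pointing at the source (the link, user and schema names are made up) - no dump file is written on either side; the rows travel straight across the link:

impdp system/password NETWORK_LINK=prod_link SCHEMAS=user1 TABLE_EXISTS_ACTION=REPLACE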

impdp direct into target database

Keshta, January 28, 2009 - 5:39 am UTC

Hi,

you can import directly into the target database (from primary or standby) and impdp is very fast.
I import the data with impdp over a dblink and I am very happy.

Regards,
Keshta

impdp with network_link actually slower

neil kodner, February 04, 2009 - 1:24 pm UTC

In my testing, I've found import using network_link to be slower than expdp/impdp. Have you come across this? I can't seem to find my script output, but the results were stunning.
Tom Kyte
February 04, 2009 - 1:32 pm UTC

define stunning first and foremost.

I haven't heard of "stunning" differences, no.

exp/imp without dumpfile

Paul MULDER, February 05, 2009 - 3:40 am UTC

> using data pump (10g and above) yes.
> using exp/imp - no.

I *sort of* disagree here (a bit):
Supposing a Unix-ish platform, one could create a named pipe, exp to it, and at the same time imp from it.
No idea of performance impact, however.

Paul
Tom Kyte
February 05, 2009 - 10:41 am UTC

Ok, sure, but I didn't even consider having two databases on the same machine - I guess I don't think of cases I consider "bad ideas"...

Same thing with two machines...

Paul MULDER, February 06, 2009 - 5:11 am UTC

Sure, but there are still some possibilities to go to another machine - really, only use this if you DO NOT have ANY MEANS of storage available and accessible from BOTH.

One could do this:
pm@host_1> mknod /tmp/exp_imp_pipe p

pm@host_1> set ORACLE_SID=SOURCE_DB
pm@host_1> tnsping DEST_DB
...
Attempting to contact (ADDRESS=(PROTOCOL=TCP)(HOST=host_2)(PORT=1521))
OK (60 msec)
pm@host_1> exp xxx/yyy file=/tmp/exp_imp_pipe <other options> &
pm@host_1> imp aaa/bbb@DEST_DB file=/tmp/exp_imp_pipe <other options>
...

Sure, it's horribly slow, and dangerous, and such, but it (sort of) works.

One could also do something like this:
(Quite elaborate...)
pm@host_1> mknod /tmp/exp_imp_pipe p
pm@host_1> cat /tmp/exp_imp_pipe | netcat host_2 5555 &
pm@host_1> set ORACLE_SID=SOURCE_DB
pm@host_1> exp xxx/yyy file=/tmp/exp_imp_pipe <other options>

On the other host:
pm@host_2> mknod /tmp/exp_imp_pipe p
pm@host_2> netcat -l -p 5555 >/tmp/exp_imp_pipe &
pm@host_2> set ORACLE_SID=DEST_DB
pm@host_2> imp aaa/bbb file=/tmp/exp_imp_pipe <other options>

(NOTE: I tested this using a simple echo "blabla" across the network to the same host; I do not have two databases available to really test it)
(ANOTHER NOTE: You need NETCAT for this, a really cool utility)
oracle@taupe:~> mknod /tmp/test p
oracle@taupe:~> cat /tmp/test | netcat taupe 6665 &   (taupe is the host, 6665 the port)
[1] 32030
oracle@taupe:~> echo "Kyte RuleZ" >/tmp/test 
oracle@taupe:~> 

And on the 'other' host:
oracle@taupe:~> mknod /tmp/test2 p
oracle@taupe:~> netcat -l -p 6665 taupe >/tmp/test2 &
[1] 32023
oracle@taupe:~> cat </tmp/test2
Kyte RuleZ
[1]+  Done                    netcat -l -p 6665 taupe >/tmp/test2
oracle@taupe:~> 

This effectively creates a named pipe across the network, through which the 'dumpfile' flows, arriving at the other host and being 'eaten' by imp.

Paul
Tom Kyte
February 06, 2009 - 3:35 pm UTC

ok, ok, you win :)


expdp/impdp vs tablespace backup

danield, February 07, 2009 - 11:14 pm UTC

Hello Tom,

I went through those postings with curiosity, since we also need to synchronize our mirror with prod on a regular basis (once per month at minimum). And we are using expdp/impdp. With expdp/impdp, we are automatically bringing along:

SEQUENCES, OBJECT_GRANTS, CONSTRAINTS, COMMENTS, PACKAGE_SPECS, FUNCTIONS, PROCEDURES, REF_CONSTRAINTS, TRIGGERS

I've thought that expdp/impdp exist for exactly this reason... I agree that with rman we would practice our backup/recovery - is there any other compelling reason?

Robert - we are using FLASHBACK_TIME with export to prevent a 'fuzzy' export, and EXCLUDE=TABLE_STATISTICS EXCLUDE=INDEX_STATISTICS on import to save time by not collecting stats (in the mirror) after import, since we bring prod's stats along by exporting them from prod and importing into the mirror.
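In command-line form that combination looks something like the following - a sketch only; the directory, dump file and schema names are made up, and a parameter file sidesteps the shell quoting that FLASHBACK_TIME otherwise needs:

# expdp.par
SCHEMAS=user1
DIRECTORY=dp_dir
DUMPFILE=user1.dmp
FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"

expdp system/pw parfile=expdp.par

impdp system/pw SCHEMAS=user1 DIRECTORY=dp_dir DUMPFILE=user1.dmp EXCLUDE=TABLE_STATISTICS EXCLUDE=INDEX_STATISTICS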

Thank you both for your time.

DanielD
Tom Kyte
February 09, 2009 - 6:17 pm UTC

you are not talking about a backup, so I'm confused.

I don't know what you think we are in conflict about - exactly "what" reason are you talking about?


if you want an entire database, backups would be the best way
if you want most of a database, backups would be the best way

I'm not sure what problem you are trying to address.

Alexander, March 06, 2009 - 11:41 am UTC

Hi Tom,

If I want to use duplicate database and clone into an existing database, would I basically have to remove everything and use the filesystems that are already there?

(So in essence I wouldn't have a database there anymore because I don't believe you can dup into an existing db).
Tom Kyte
March 06, 2009 - 2:30 pm UTC

if you "duplicate", you are creating NEW.

if you "clone", you are creating NEW.

A database is a set of files (datafiles, redo, control, initialization); it semantically doesn't make sense to "duplicate into" an existing database - you want a brand new database.

So, if the goal is to have this machine have a copy of some other database and NOT have the existing current database it has, you would

a) remove the existing database
b) duplicate the other database onto this machine.

Alexander, March 06, 2009 - 2:37 pm UTC

Ok I'm with you.

My goal is to have test environments I can refresh on demand by leveraging the speed of RMAN.

Based on all my research, I can sync a test instance with dup database, but I have to first create it new using duplicate database.

If you have access to the book "Oracle RMAN Recipes for Oracle Database 11g" I have page 471 specifically in mind ("Resynchronizing a Duplicate Database").

Alexander, March 09, 2009 - 10:55 am UTC

Tom, do you see any security problems with making a production filesystem with backups accessible to a test machine for use with duplicate database? Our system admins have a security policy where they don't allow that.

I don't know any other way to accomplish this. Otherwise I'd have to copy a week's worth of backups, and that poses other problems.
Tom Kyte
March 09, 2009 - 1:09 pm UTC

You would have to ask your system admins what they are protecting against. What their goal is, to see if your goal conflicts with their goal.

If, for example, they say "we don't want someone stealing the backup and taking it home" - then you have to question how you could even have this test system. It would seem the test system would fall under the guise of a production system and would have to be managed the same way the production system is, to avoid someone stealing the database.


No one can answer your question "do you see any security problems". It totally depends on what you are tasked with protecting against.

A reader, March 13, 2009 - 12:04 pm UTC

Tom,

If making the filesystems visible on two hosts isn't doable (looks like it might not be an option for me), can you help suggest a foolproof way to script what backups to grab?

Here's the thing: currently with exports, everything goes into one file, so it's very easy to scp + ls -lrt + tail -1 and grab the latest file.

With RMAN backups, there are all kinds of files and RMAN gives them strange unique names. That's one hurdle; the other is that we run weekly fulls and daily incrementals. So if I want a current refresh I'd have to copy over a whole week's worth of backups? Do you have any better ideas?

Thanks as always.
Tom Kyte
March 13, 2009 - 2:51 pm UTC

... can you help suggest a fool proof way to script
what backups to grab?
....

not really, all you have said is "what is not possible".

You haven't told me "this is what we have to work with, this is how it will be"


if you can grab an export..... you can grab X, X could be - what - a backup? You have a recovery catalog, right? You can run queries, you can run reports............


if you like your import, you can do it, but I would suggest restoring a backup - because it a) tests your ability to do so, b) shows you can do it, c) is the most efficient way to accomplish it.
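To make that concrete - with the recovery catalog (or just the source controlfile) you can have RMAN or SQL list exactly which pieces make up the last week of backups, and feed the handles to a copy script. A sketch:

RMAN> LIST BACKUP SUMMARY COMPLETED AFTER 'SYSDATE-7';

or, in SQL on the source database:

select handle, completion_time
  from v$backup_piece
 where completion_time > sysdate - 7
 order by completion_time;

(rc_backup_piece is the recovery catalog analogue of v$backup_piece.)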


Alexander, March 13, 2009 - 3:19 pm UTC

Will you be talking to people after you present two weeks from today in Boston? I'm tempted to wait until then to try to explain what I'm talking about, because I must not be doing a good job.
Tom Kyte
March 13, 2009 - 3:24 pm UTC

Greetings from Prague (getting to be late at night, just fyi :) )

well, Boston will actually be a hit and run for me, I fly in from Miami the night before, I speak there in the morning and have a 12:15pm flight out of Logan.....

I cannot be assured of having time to have any sort of in depth talk before having to leave - sorry...


Alexander, March 13, 2009 - 3:38 pm UTC

Ok, no big deal. I only ask because the only other time I saw you speak, you actually announced to everyone you'd be hanging around to chat, which really surprised me because I expected a stampede to head in your direction.

At your leisure, if a picture helps:

-rw-r-----   1 oracle   dba      1341972480 Mar  9 03:10 OCP16P_knk9euek_1_1
-rw-r-----   1 oracle   dba      1251573760 Mar  9 03:11 OCP16P_kmk9euek_1_1
-rw-r-----   1 oracle   dba      1381851136 Mar  9 03:12 OCP16P_klk9euek_1_1
-rw-r-----   1 oracle   dba      1494745088 Mar  9 03:12 OCP16P_kok9euek_1_1
-rw-r-----   1 oracle   dba         1787904 Mar  9 03:13 OCP16P_kpk9ev21_1_1
-rw-r-----   1 oracle   dba        12222464 Mar  9 03:13 OCP16P_c-3469898179-20090309-00.bkp
-rw-r-----   1 oracle   dba         3938304 Mar 10 03:00 OCP16P_lok9hils_1_1
-rw-r-----   1 oracle   dba        95515136 Mar 10 03:00 OCP16P_lnk9hils_1_1
-rw-r-----   1 oracle   dba       170508800 Mar 10 03:01 OCP16P_llk9hils_1_1
-rw-r-----   1 oracle   dba       207581696 Mar 10 03:01 OCP16P_lmk9hils_1_1
-rw-r-----   1 oracle   dba       217055232 Mar 10 03:09 OCP16P_lsk9hinn_1_1
-rw-r-----   1 oracle   dba       218185728 Mar 10 03:09 OCP16P_lqk9hinn_1_1
-rw-r-----   1 oracle   dba       139673600 Mar 10 03:09 OCP16P_lpk9hinn_1_1
-rw-r-----   1 oracle   dba      1215766528 Mar 10 03:13 OCP16P_lrk9hinn_1_1
-rw-r-----   1 oracle   dba         2848256 Mar 10 03:13 OCP16P_ltk9hje4_1_1
-rw-r-----   1 oracle   dba        12222464 Mar 10 03:13 OCP16P_c-3469898179-20090310-00.bkp
-rw-r-----   1 oracle   dba        99182080 Mar 11 03:00 OCP16P_mqk9k71o_1_1
-rw-r-----   1 oracle   dba       209961472 Mar 11 03:01 OCP16P_mok9k71o_1_1
-rw-r-----   1 oracle   dba       231483392 Mar 11 03:01 OCP16P_mpk9k71o_1_1
-rw-r-----   1 oracle   dba       394851840 Mar 11 03:01 OCP16P_mnk9k71o_1_1
-rw-r-----   1 oracle   dba        52830208 Mar 11 03:09 OCP16P_msk9k756_1_1
-rw-r-----   1 oracle   dba        44212224 Mar 11 03:09 OCP16P_muk9k756_1_1
-rw-r-----   1 oracle   dba        62300160 Mar 11 03:09 OCP16P_mrk9k756_1_1
-rw-r-----   1 oracle   dba       356073472 Mar 11 03:11 OCP16P_mtk9k756_1_1
-rw-r-----   1 oracle   dba         2773504 Mar 11 03:11 OCP16P_mvk9k7nt_1_1
-rw-r-----   1 oracle   dba        12222464 Mar 11 03:12 OCP16P_c-3469898179-20090311-00.bkp
-rw-r-----   1 oracle   dba        78996480 Mar 12 03:00 OCP16P_nsk9mrdu_1_1
-rw-r-----   1 oracle   dba       306390016 Mar 12 03:01 OCP16P_nrk9mrdu_1_1
-rw-r-----   1 oracle   dba       312379904 Mar 12 03:01 OCP16P_nqk9mrdu_1_1
-rw-r-----   1 oracle   dba       444203008 Mar 12 03:02 OCP16P_npk9mrdu_1_1
-rw-r-----   1 oracle   dba        56664064 Mar 12 03:09 OCP16P_nuk9mrhl_1_1
-rw-r-----   1 oracle   dba        46710784 Mar 12 03:09 OCP16P_o0k9mrhl_1_1
-rw-r-----   1 oracle   dba        64184320 Mar 12 03:10 OCP16P_ntk9mrhl_1_1
-rw-r-----   1 oracle   dba       432807936 Mar 12 03:10 OCP16P_nvk9mrhl_1_1
-rw-r-----   1 oracle   dba         2085888 Mar 12 03:10 OCP16P_o1k9ms20_1_1
-rw-r-----   1 oracle   dba        12222464 Mar 12 03:11 OCP16P_c-3469898179-20090312-00.bkp
-rw-r-----   1 oracle   dba       116985856 Mar 13 03:00 OCP16P_ouk9pfpp_1_1
-rw-r-----   1 oracle   dba       249969664 Mar 13 03:01 OCP16P_otk9pfpp_1_1
-rw-r-----   1 oracle   dba       296806400 Mar 13 03:01 OCP16P_osk9pfpp_1_1
-rw-r-----   1 oracle   dba       372827648 Mar 13 03:01 OCP16P_ork9pfpp_1_1
-rw-r-----   1 oracle   dba        57188352 Mar 13 03:08 OCP16P_p0k9pft1_1_1
-rw-r-----   1 oracle   dba        46383104 Mar 13 03:08 OCP16P_p2k9pft1_1_1
-rw-r-----   1 oracle   dba        66527232 Mar 13 03:09 OCP16P_ovk9pft1_1_1
-rw-r-----   1 oracle   dba       397639680 Mar 13 03:10 OCP16P_p1k9pft1_1_1
-rw-r-----   1 oracle   dba         3045888 Mar 13 03:10 OCP16P_p3k9pgdq_1_1
-rw-r-----   1 oracle   dba        12222464 Mar 13 03:10 OCP16P_c-3469898179-20090313-00.bkp


I have to say to myself: I need to programmatically work out, from a list like that, which backup files I need to copy.

And for all I know the time it would take to copy these would negate the benefit of using RMAN over exp/imp.

I am trying to get here

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1006946

I wish I could just upgrade to 11g and take the backups out of the picture.
Tom Kyte
March 13, 2009 - 4:03 pm UTC

... And for all I know the time it would take to copy these would negate the benefit of using RMAN over exp/imp. ...

really - you said you needed to scp the dmp file (a large percentage of the backups) and IMP is not *the fastest thing* in the world....



impdp and constraints

Laurie Murray, April 21, 2009 - 2:49 pm UTC

Hi, we are on Oracle version 10.2.0.3, and using impdp to refresh a test database with production. The database size is about 400 gigs.

MOST of the impdp is very fast! However, once it gets to the step "Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT" it hangs for literally two days! I can see it slowly creating constraints on the target db. I don't want to use exclude=constraints because I need the constraints in the target db. Perhaps it is checking all the data against the constraint conditions? I don't need that to be checked; it is valid in the source db.

Other than this very large performance problem with the "constraint" step, impdp works very nicely for us for db refresh, as it is fast, simple, and grabs all that we need. How can I speed up the constraint section of impdp? Thanks in advance...
Tom Kyte
April 21, 2009 - 3:53 pm UTC

why not just restore a backup?

I always prefer that

reason:

a) it proves you can (this is important)
b) the resulting database is identical to the one you are testing. The order of the rows on disk are the same, the amount of whitespace is the same

So, why not restore what you need?

impdp and constraints

Laurie Murray, April 21, 2009 - 6:09 pm UTC

I think it's just been a comfort-level thing. For the next go round I'll learn about and use an RMAN restore for the db refresh. Thanks, Tom.
Tom Kyte
April 22, 2009 - 1:49 pm UTC

you can get the constraints to go on "faster" via a manual process of putting the constraints on in "enable rely novalidate" mode rather than putting them on using impdp - but they would be "different" (it would use non-unique indexes for primary keys, for example...)

but the restore will be best.
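If you do try the manual route, the constraint state Tom describes looks like this - a sketch with made-up table and constraint names:

alter table child add constraint child_fk
  foreign key (parent_id) references parent (id)
  rely enable novalidate;

Oracle then trusts the constraint without re-checking the existing rows, which is why it goes on so much faster.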

Alexander, April 22, 2009 - 2:19 pm UTC

Tom,

Follow up comment to this

< http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1432274900346100233#1564598500346310550 >

If I copy a bunch of backup files from one host to another, and run duplicate database, will it know to use only those backups? Or is it going to read all the backup information out of the target's control file and try to perform a complete recovery? I don't want to do that, I want to use only what I copied (so basically no archive logs).
Tom Kyte
April 24, 2009 - 4:28 pm UTC

how can it read the target's controlfile - it doesn't exist yet?


if you copied the backups to a machine, you can always just restore from that (eg: do not use duplicate, but a cancel-based recovery from the backups)
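The restore-to-another-host version is fairly mechanical once the pieces and a controlfile autobackup sit in one directory - a sketch (the paths and until-time are made up; the DBID is the one embedded in the autobackup names listed earlier in this thread):

rman TARGET /

SET DBID 3469898179;
STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM '/backups/OCP16P_c-3469898179-20090313-00.bkp';
ALTER DATABASE MOUNT;
CATALOG START WITH '/backups/';
RUN
{
  SET UNTIL TIME "TO_DATE('2009-03-13 03:10','YYYY-MM-DD HH24:MI')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;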

Alexander, April 24, 2009 - 4:39 pm UTC

I'm going to cut and paste from the documentation as an example:

Example 13-3 Duplicating When Datafiles Use Inconsistent Paths

CONNECT TARGET /;
CONNECT CATALOG rman/cat@catdb;
CONNECT AUXILIARY SYS/oracle@dupdb;

# note that a RUN command is necessary because you can only execute SET NEWNAME
# within a RUN command
RUN
{
  # The DUPLICATE command uses an automatic sbt channel.
  # Because the target datafiles are spread across multiple directories, 
  # run SET NEWNAME rather than DB_FILE_NAME_CONVERT
  SET NEWNAME FOR DATAFILE 1 TO '/oradata1/system01.dbf'; 
  SET NEWNAME FOR DATAFILE 2 TO '/oradata2/undotbs01.dbf'; 
  SET NEWNAME FOR DATAFILE 3 TO '/oradata3/cwmlite01.dbf'; 
  SET NEWNAME FOR DATAFILE 4 TO '/oradata4/drsys01'; 
  SET NEWNAME FOR DATAFILE 5 TO '/oradata5/example01.dbf'; 
  SET NEWNAME FOR DATAFILE 6 TO '/oradata6/indx01.dbf';
  # Do not set a newname for datafile 7, because it is in the tools tablespace,
  # and you are excluding tools from the duplicate database.
  SET NEWNAME FOR DATAFILE 8 TO '/oradata7/users01.dbf'; 
  DUPLICATE TARGET DATABASE TO dupdb
    SKIP TABLESPACE tools
    LOGFILE
      GROUP 1 ('/duplogs/redo01a.log', 
               '/duplogs/redo01b.log') SIZE 200K REUSE, 
      GROUP 2 ('/duplogs/redo02a.log', 
               '/duplogs/redo02b.log') SIZE 200K REUSE;
}


Target exists, aux does not.

You connect to both.

I am asking: if I just copy files using O/S commands (like the docs say you can do), how is RMAN going to determine what to use to restore my aux database? What is going to clue it in to the fact that I only have a small portion of the backup files located on the aux machine, when it has a history of other stuff in its control files?

There's nothing in the duplicate command like "use this time" or "use this tag" etc...

Tom Kyte
April 27, 2009 - 1:23 pm UTC

back up, where did target *come from*

I don't know what you have set up here.


You show a duplicate but ask about a restore? The restore would just have you copy the backups over, and restore from there - like restoring to a new host. There would be no 'target'

Until time...

Dana, April 26, 2009 - 11:29 am UTC

From http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1006982

This duplication example assumes the following:

The target database trgt and duplicate database dupdb are on different hosts but have exactly the same directory structure.

You want to name the duplicate database files the same as the target files.

You are not using a recovery catalog.

You are using automatic channels for disk and sbt, which are already configured.

You want to recover the duplicate database to one week ago in order to view the data in prod1 as it appeared at that time (and you have the required backups and logs to recover the duplicate to that point in time).

CONNECT TARGET SYS/oracle@trgt
CONNECT AUXILIARY SYS/oracle@dupdb

DUPLICATE TARGET DATABASE TO dupdb
NOFILENAMECHECK UNTIL TIME 'SYSDATE-7';

Alexander, April 27, 2009 - 3:05 pm UTC

Isn't that what duplicate is doing, restoring?

Replace "restore" from above with "duplicate into" if that helps....

"I am asking, if I just copy files using O/S commands (like the docs say you can do), how is RMAN going to determine what to use to duplicate into my aux database? What is going to clue it in to the fact that I only have a small portion of the backup files location on the aux machine when it has a history of other stuff in it's control files."

Is this more clear as to what I am asking? All I am talking about is duplicating a database, no restores. TARGET ===> AUX where the target is production on a different host and my auxiliary db is test.
Tom Kyte
April 27, 2009 - 3:14 pm UTC

no, it is not just restoring - it (duplicate) wants to get at the source database too. If duplicate were restore, it would be called restore - not duplicate.


if you copied over the backups and you want a point in time copy of the database, just *restore* from it and be done with it (that was my suggestion, just restore - done).


If what you've copied contains everything you need to do the restore, it won't care about the other missing bits.

Alexander, April 27, 2009 - 3:47 pm UTC

I was talking about the physical process for how duplicate and a restore write blocks to disk. I imagine they're similar if not the same, since the end product in either case is an identical copy of the source, right?

I am choosing duplicate because it looks to me like it will be easier to script and reuse.

If this isn't a case of when to use it, then when is? Why even have the feature?
Tom Kyte
April 27, 2009 - 4:39 pm UTC

basically, it adds this last step:


...
Generates a new, unique DBID for the duplicate database (except when you create a standby database with DUPLICATE ... FOR STANDBY, in which case RMAN does not create a unique DBID)
......


http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1008888

(using duplicate would be easier if the copied database were to be on the same host)


so, duplicate basically creates a duplicate that is "different" from the original (different database id) in a manner that lets that copied database coexist with the original (you can back it up - it won't get confused)

whereas a simple restore would just bit for bit, byte for byte leave you with the database you started with.


My point was, if you have the backups available on the remote machine and are just testing with this (it'll not be backed up and whatever, there are no database id conflicts) - then if you just restore, you need not even touch the original source production machine - but duplicate will.

IMPDP over network and long datatype

Sal, October 26, 2011 - 6:12 pm UTC

Hi Tom,

I used impdp over network for the 1st time, and I got this ORA-31679: Table data object <table_name> has long columns, and longs can not be loaded/unloaded using a network link.

I am moving one large schema, about 300 GB. There are only a few small tables that have LONG columns. Can you suggest any tips/tricks so that I can still do impdp over the network and still get these tables copied over?

Thanks for your usual help :)
Tom Kyte
October 27, 2011 - 11:08 am UTC

impdp - as documented, and as it is telling you - will not work with longs over a network link.

You'll have to skip those few tables - or convert them into clobs - if you want to do this.
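The LONG-to-CLOB conversion is a single statement, though you would want to test it on a copy first - a sketch with a made-up table and column:

alter table t modify (long_col clob);

Once converted, those tables can travel over the network link like everything else.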

refresh test with Production

Reader, November 10, 2011 - 11:02 am UTC

Hi Tom,

This is where I go whenever I can't decide on the approach I should take.

I have read this whole thread, but I want to ask this question once again for an 11.2 and ASM DB environment.

We have a PeopleSoft application on an 11.2 Oracle DB. We have to refresh test with production once in a while. 99.9% of the objects are in one schema and the rest are in another, but I am not worried about those. In short, I am only worried about that one schema. The size of the DB is 35 GB.

Our Production and Test are on two different servers.

I tried Data Pump with export and import as two different tasks; the export took around an hour and the import took about 3 hours.

What is the best method to do this if both the databases are on DIFFERENT servers - without downtime, and fast?

What is the best method to do this if both the databases are on the SAME server - without downtime, and fast?

I am willing to use any new features which are available in 11.2 and are not discussed in this thread.

Thanks...
Tom Kyte
November 10, 2011 - 1:43 pm UTC

Well, restoring something is going to require downtime on the target at some point - you have to replace everything.


My preferred approach would always be

just restore the database from backups.


You can use rman duplicate, you can just restore from backup, whatever. What you don't want to do is unload an entire database and reload/reindex it (datapump or anything like it)

Just take a 100% full backup and restore it. It'll do two things:

a) it'll be fast
b) it'll prove you can restore your backups!
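For what it's worth, on 11.2 the backup-based duplicate can even run without connecting to production at all - you point RMAN at a directory holding the copied backups. A sketch only (the database name and path are made up, and the exact clauses - SPFILE, UNTIL TIME, file name conversion - depend on your layout):

rman AUXILIARY /

DUPLICATE DATABASE TO testdb
  BACKUP LOCATION '/stage/prod_backups'
  NOFILENAMECHECK;

11g also added DUPLICATE ... FROM ACTIVE DATABASE, which needs no backups at all, but it reads production directly over the network - the backup-based form never touches production, which is exactly Tom's point here.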

refresh test with Production

Reader, November 10, 2011 - 2:15 pm UTC

Thanks Tom. When I said no downtime, I meant on the source DB - my mistake.


Tom Kyte
November 10, 2011 - 5:34 pm UTC

if you just use backups - no impact at all on production :)
