Creating test environment from production box


Question and Answer

Tom Kyte

Thanks for the question, Mangesh.

Asked: August 09, 2000 - 8:24 pm UTC

Last updated: November 01, 2013 - 8:34 pm UTC

Version: Oracle 8.05


You Asked

Hi,
I have been working as an analyst programmer for the past 5 years.
I have always been more interested in database-level work than in
front-end work.
I plan to take up a DBA job in the near future.
My question is -
What is the fastest way to create a test database
from a production database which is quite big in size (400 GB)?
The easiest way I know of is export/import, but this may not be
the fastest way.

I just wish to copy all database objects/structures like tables,indexes etc., and not the data.

Your guidance is highly appreciated.


Regards,
Mangesh



and Tom said...

If you just want to create a copy of a schema without the data - then exp with ROWS=N owner=SOME_USER and imp full=Y will be the fastest way we have -- short of running the original DDL scripts that is (the imp will take as long to run as the scripts would have. The only additional time is the time to export the schema definitions)
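For example, a minimal sketch (the connect string, the schema name SCOTT and the file name are just placeholders):

$ exp userid=system/manager owner=scott rows=n file=scott_ddl.dmp

$ imp userid=system/manager file=scott_ddl.dmp full=y

(full=y on the import simply means "import everything in this dump file" -- it does not conflict with the schema-level export.)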


Comments

full=y and rows=n are conflicting options

Bhavesh Mehta, January 22, 2002 - 4:44 pm UTC

full=y and rows=n are conflicting options.
So you have to use rows=n and explicitly specify all
other options like tables, indexes, etc.

The error you get is:
EXP-00026: conflicting modes specified
EXP-00000: Export terminated unsuccessfully

It would be nice if these two options did not conflict.

Tom Kyte
January 22, 2002 - 6:07 pm UTC

Nope, you did something else there. I think you had a "tables=" or "owner=" hiding in that command line as well. cut and paste an example for us like this:

(i did this in 734, 806, 817, 901 -- all to the same successful effect)



$ exp userid=/ full=y rows=n

Export: Release 8.1.7.2.0 - Production on Tue Jan 22 18:06:10 2002

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
With the Partitioning option
JServer Release 8.1.7.2.0 - Production
Export done in US7ASCII character set and WE8ISO8859P1 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
Note: table data (rows) will not be exported

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles

Entire box copy

nirmal, January 23, 2002 - 6:15 am UTC

Well,

Our production database size is 100 GB. Now I want to make a test environment for the production server on another server. How can I do this?... I hope that export/import is not the best solution in this case...

Rgds,
Nirmal

Tom Kyte
January 23, 2002 - 7:33 am UTC

If the server is the same environment (hardware/OS), just backup production and restore to test.

it's also a great way to test whether your DBAs know how to do a restore to another machine, which is what they would have to do if the current machine gets burnt up in a fire or something.

Creating Test Databases from Production Databases

Alexandre, January 23, 2002 - 6:31 am UTC

I think using RMAN is the fastest way to do that...

Clone A Solution to This Problem

A reader, January 23, 2002 - 10:05 am UTC

Hi Tom,

I think a clone of the 100 GB database can be considered a good and viable solution to this problem.

Right???

Regards,
Ganesh R

Tom Kyte
January 23, 2002 - 6:20 pm UTC

sure, but a clone is a copy -- and how to make that copy is what we are talking about here.

Actual Cloning.

A reader, January 24, 2002 - 12:31 am UTC

Actually I was trying to suggest a way to do it... Can't you directly copy the DB files and create a clone, rather than doing it via export and import? Wouldn't that be faster, more efficient, and less work for the DBA?

Regards,
Ganesh R

Reader

Reader, January 24, 2002 - 8:17 am UTC

Cloning REQUIRES the same operating system on both
source and target. The original question does
not mention the OS, hence exp/imp is a
viable generic option in my opinion.


how to get roles etc

T, September 15, 2002 - 10:23 am UTC

Tom,
Our database is really small....3gigs or so(8.1.7.3.0 in Windows 2000 advanced server).

We are moving it to UNIX.
There are about 2000 users in the database...
There are 10 schemas (users with some objects)
The database is running dictionary managed tablespaces (DMT) and the rule based optimizer (RBO).
We want to move it to LMT and the CBO.
I have created the export with compress=n and direct=y and just the owners that have schemas...and I will import those..
Now...the other users, all the grants and all the roles...
Can you help me with a script to do those...those have to be precreated before the import right?
Am I on the right track....with this approach..?

Thanks....

Tom Kyte
September 15, 2002 - 11:51 am UTC

Here is one approach (since unix is involved this'll be easier, better "script" tools for you there ;)

on windows, do:

c:\> exp userid=/ grants=y indexes=n rows=n constraints=n full=y triggers=n

that should go pretty fast and create a smallish file.  BINARY ftp that to Unixland and:

$ strings expdat.dmp > test.sql


Now, you have a DDL script file we can edit and clean up.  Using vi, edit the file and you'll see the first couple of lines are some junk you'll delete.

Then there will be the create tablespace statements -- you can remove them as well since you are doing them differently.


After that, you'll find PROFILES (create profile statements), keep them or remove them as you desire....

Next, you'll find things like:

ALTER USER "SYS" IDENTIFIED BY VALUES 'D0204756F458CBB9' TEMPORARY TABLESPACE "TEMPORARY"
ALTER USER "SYSTEM" IDENTIFIED BY VALUES 'A3661D5636A21570' TEMPORARY TABLESPACE "TEMPORARY"
CREATE USER "OUTLN" IDENTIFIED BY VALUES '4A3BA55E08595C81' TEMPORARY TABLESPACE "TEMPORARY"
CREATE USER "OPS$ORA817" IDENTIFIED EXTERNALLY DEFAULT TABLESPACE "USERS" TEMPORARY TABLESPACE "TEMPORARY"
CREATE USER "OPS$CLBECK" IDENTIFIED EXTERNALLY TEMPORARY TABLESPACE "TEMPORARY"
CREATE USER "TMC" IDENTIFIED BY VALUES 'E7ECA5922344E8B2' DEFAULT TABLESPACE "USERS" TEMPORARY TABLESPACE "TEMPORARY"
CREATE USER "WEB$CLBECK" IDENTIFIED BY VALUES 'A145F35A5B688D72' TEMPORARY TABLESPACE "TEMPORARY"
.......

These will create your users in the new database (and optionally reset your SYS and SYSTEM passwords!).  For now, leave them as is (we'll add the "/" in a moment).  You'll now page down 20,000 lines (over your users...)


After that, you'll find things like:

....
CREATE USER "OPS$TKYTE" IDENTIFIED EXTERNALLY DEFAULT TABLESPACE "USERS" TEMPORARY TABLESPACE "TEMPORARY"
CREATE ROLE "SELECT_CATALOG_ROLE"
REVOKE "SELECT_CATALOG_ROLE" FROM SYSTEM
CREATE ROLE "EXECUTE_CATALOG_ROLE"
REVOKE "EXECUTE_CATALOG_ROLE" FROM SYSTEM

Here come your roles, edit as you wish, keeping the ones you want and discarding the rest (eg: your target database probably already has the select catalog role, no harm in keeping it -- it'll just give you an error when you run it on the target system that you'll have to ignore)


Little further down, you'll find:

....
CREATE ROLE "TEST_ROLE"
REVOKE "TEST_ROLE" FROM SYSTEM
GRANT UNLIMITED TABLESPACE TO "SYSTEM" WITH ADMIN OPTION
GRANT UNLIMITED TABLESPACE TO "OUTLN"
GRANT EXECUTE ANY PROCEDURE TO "OUTLN"
GRANT CREATE SESSION TO "RECOVERY_CATALOG_OWNER"
GRANT ALTER SESSION TO "RECOVERY_CATALOG_OWNER"
GRANT CREATE TABLE TO "RECOVERY_CATALOG_OWNER"

All of your system grants...

and after the grants:

...
ALTER USER "SYSTEM" DEFAULT ROLE ALL
ALTER USER "OUTLN" DEFAULT ROLE ALL
ALTER USER "OPS$ORA817" DEFAULT ROLE ALL
ALTER USER "OPS$CLBECK" DEFAULT ROLE ALL
ALTER USER "APPS" DEFAULT ROLE ALL
.......

keep those as well.

Later, after the CREATE SEQUENCE section (which you can delete, these should all be in the user level exports as well, don't need them) you'll find the CREATE DIRECTORY and CREATE CONTEXT, you'll want to keep them as they will not come over in the user level exports (they are "global" system objects, not owned by anyone)

Now, there will be a bunch of other creates -- skip over these till you come to CREATE PUBLIC SYNONYM.  We'll need to grab many of these.  Skip down in this section until you stop having synonyms for SYS and SYSTEM objects (they'll already be there).  If you had Java installed, this will be thousands of lines to skip.  Once you have skipped them, keep the ones you want and remove the rest.



Now we are into the create tables and stuff by user, we can tell that from the output of export:

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
. exporting cluster definitions
. about to export SYSTEM's tables via Conventional Path ...

We can skip over that and goto the bottom -- there export will have objects like:

. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting statistics
Export terminated successfully with warnings.

again, edit and keep what you need.

Now you have a script that you need to add "/" to

:1,$s'^V^M'/'

(ctl-v ctl-m) should do that.  







 

"script to do those" wasn't very clear

t, September 15, 2002 - 10:25 am UTC

A script to create all the existing users, all the roles and all the grants (grants to schemas).....

Now you have a script that you need to add "/" to

t, September 15, 2002 - 12:40 pm UTC

Tom,
Thanks a bunch...You showed that this can be done without using third-party tools (tools that reverse engineer roles etc.)
I am learning UNIX commands and core DBA work at the same time....
the :1,$s ctrl-V ctrl-M command that you gave --
when I have the file open in the vi editor,
I just enter this command and it will put / after all the statements.....cool, unix is cool....

Question:
Can I import users into tablespaces I like...or
should I just create the tablespaces that were the defaults for users in the old database (so that the create user statements that are coming in won't fail)
and then change them once they get to the UNIX side...to any tablespace I want to?

Thanks...


Tom Kyte
September 15, 2002 - 1:06 pm UTC

You can edit any of the statements in there you like to change their default tablespaces, yes.

Substitute Pattern Match Failed

t, September 16, 2002 - 5:52 pm UTC

"again, edit and keep what you need.

Now you have a script that you need to add "/" to

:1,$s'^V^M'/'

(ctl-v ctl-m) should do that."



I edited the file in notepad...and then opened it in vi..
while it was open....I typed:
again, edit and keep what you need.

Now you have a script that you need to add "/" to

:1,$s'^V^M'/'

I got "Substitute Pattern Match Failed"



Tom Kyte
September 16, 2002 - 8:23 pm UTC

notepad -- you let windoze software CORRUPT your file. sorry for you. you let those dos guys ruin the end of line marker.

Try again, don't TOUCH the file with windows stuff.

Substitute Pattern Match Failed..addendum

t, September 16, 2002 - 5:54 pm UTC

I meant, I typed:

:1,$s'^V^M'/'


not


again, edit and keep what you need.

Now you have a script that you need to add "/" to

:1,$s'^V^M'/'


Tom Kyte
September 17, 2002 - 7:42 am UTC

so, I must be missing something cause:

:1,$s'^V^M'/'

and

:1,$s'^V^M'/'


are looking pretty much identical to me.


useful

Ralph V., September 16, 2002 - 9:35 pm UTC


Would you still recommend this approach for bigger databases? Let's say I have a production DB on NT with several partitioned tables, several tablespaces and users. Now I want to transfer it to Unix. Editing a long script is somewhat daunting. I have tried this once when I followed your advice regarding transferring a multi-tablespace db to a standard (users-indx-sys-temp-rbs) database. I must have spent two hours editing that indexfile. :)

I haven't done full database imports yet, just schemas.

Thanks


Tom Kyte
September 17, 2002 - 7:50 am UTC

Well, actually -- if you want my opinion....

You have a system.
You want to move it.

You are saying "i don't have my scripts, I don't have any way to recreate my schema"

I would say "you need some configuration management in your life". I cannot imagine not being able to run scripts in order to recreate my environment. Oh well....

Anyway, if you have your own install scripts -- you are done.

If you do not, we'll give you a monster script which you can edit. You can use other tools (eg: Designer pops into mind -- and there are others) to reverse engineer the data dictionary and create your schemas as well (and then you'll even have some configuration management capabilities!)

shortcomings of 'strings'

Connor, September 18, 2002 - 4:30 am UTC

Most ports of 'strings' have a limit around the 1024 or 2048 character mark at which point they will silently truncate the rest of the ddl. Bummer especially if you've got partitioned tables - long long ddl there.

To avoid this, you're then up against using show=y, but nope, can't use that because the line breaks all over the place. So now we're into indexfile hacks, but suddenly adding a ';' or '/' at the end of each ddl ain't so easy because it's no longer a single line.

All I can say is what a relief it is to have dbms_metadata. :-)
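For instance, a minimal sketch of the dbms_metadata route (run in SQL*Plus as the schema owner, 9i or later; the spool file name is just an example):

SQL> set long 1000000
SQL> set pagesize 0
SQL> spool schema_tables.sql
SQL> select dbms_metadata.get_ddl( 'TABLE', table_name ) from user_tables;
SQL> spool off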

Cheers
Connor


Backup/Restore to clone database

Edgar, December 18, 2002 - 8:31 am UTC

Hello Tom,

about your statement "If the server is the same environment (hardware/os) just backup production and
restore to test." Can I backup (using rman) from production on Linux and restore it on devel on Windows? Or is this not possible, and the only way to recreate the production environment is export/import from Windows? Thank you!

Tom Kyte
December 18, 2002 - 12:41 pm UTC

that is not the same OS, so -- in a word -- no.

A reader, December 18, 2002 - 2:22 pm UTC

Tom,

Can an RMAN backup done on Solaris Intel be restored on Linux Red Hat 8.0, and vice versa?

Thanks

Tom Kyte
December 18, 2002 - 2:33 pm UTC

different operating systems -- totally not supported even if it accidentally works.

short answer = no.

MG, June 25, 2003 - 6:26 am UTC

Hi Tom,

I have installed Oracle.

Now I want to create the DB from Perl. Once I execute "create database...", I want to detect whether this command was executed successfully.

What is the standard way to do this? Currently I check the log files. Is there any other professional way?

thanks



Tom Kyte
June 25, 2003 - 12:06 pm UTC

why you would want to do this from Perl is beyond me; however, "check the return code" comes to mind?

System User

John French, July 01, 2003 - 5:32 pm UTC

Tom,

How do you make a full import from a full export not recreate all the system objects, like Advanced Queuing, etc.?

Thanks!!

Tom Kyte
July 01, 2003 - 7:32 pm UTC

well, you wouldn't, since system is exported as normal.

you usually full import into a minimally configured database. I never full import (tad slow on today's sized databases) myself and use user level exp/imports in parallel personally.
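A rough sketch of what the parallel user-level approach can look like from a shell (the schema and file names are made up):

$ exp userid=system/manager owner=schema1 file=schema1.dmp log=schema1.log &
$ exp userid=system/manager owner=schema2 file=schema2.dmp log=schema2.log &
$ exp userid=system/manager owner=schema3 file=schema3.dmp log=schema3.log &
$ wait

and the same idea on the target: several imp sessions running at the same time, one per dump file.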

imp/ exp

Junior, July 25, 2003 - 5:45 pm UTC

Tom:

1.
So I cannot just simply exp xx.dmp from Windows 2000 and imp xx.dmp on UNIX-AIX?
2.
The import command for UNIX and Windows is the same?

Thank for the answer.

Tom Kyte
July 25, 2003 - 8:38 pm UTC

sure you can.. dmp files are cross system portable.

full export/Import

atul, July 28, 2003 - 12:24 am UTC

Hi,

I have to build a test database from production.
The test database should contain the same users as production and the
same table definitions, without data, under the same users as in production.

For this, should I use:

1) In production:
exp / full=y rows=n file=proddump.dmp

2) Create all the users in test.

3) Drop the existing tables from test.

4) In test:
imp / file=proddump.dmp ignore=y

(Do I have to give ignore=y here?)

Are all the steps given OK?

Thanks,
Atul


Tom Kyte
July 28, 2003 - 7:03 am UTC

how about this easier process


a) restore your backup of production


done.

Is step 2) above (create all the users in test) necessary with a full export?

pasko, July 28, 2003 - 9:08 am UTC

Hi Tom,

Do we have to pre-create users on the test DB when we do a full export/import?

Tom Kyte
July 28, 2003 - 9:11 am UTC

no, the import will do that.

Restore a backup of production if they have different block size?

Peter Tran, July 29, 2003 - 4:19 pm UTC

Tom,

Can you restore to another instance if it was configured with a different block size?

If my 1-terabyte production database was originally created with an 8K block size and I took a backup of it, and now I want to restore this database to a test instance that was configured with a 16K block size -- is this possible?

Or do I have to go the route of exp/imp? Your book recommends not to do exp/imp for anything over 15G.

Thanks,
-Peter

Tom Kyte
July 29, 2003 - 7:59 pm UTC

umm, no -- the block size is an attribute of the tablespace/files themselves.

in 9i, you can create a tablespace of 16k blocks -- and then "alter move" "alter rebuild" the tables/indexes into them.
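A sketch of that 9i approach, with made-up names (a db_16k_cache_size must be set before a 16k tablespace can be created in an 8k-block database):

SQL> alter system set db_16k_cache_size = 32m;
SQL> create tablespace data_16k
  2  datafile '/u01/oradata/test/data_16k01.dbf' size 512m
  3  blocksize 16k;
SQL> alter table t move tablespace data_16k;
SQL> alter index t_idx rebuild tablespace data_16k;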

EXP/IMP

atul, October 23, 2003 - 5:02 am UTC

Hi,

I have 2 bases on the same node. I want to exp all the objects
from base 1 to base 2, without data.

But base 2 contains some tables etc. without data --
do I need to drop them all first??


What is the way in exp/imp

Thanks.
Atul

Tom Kyte
October 23, 2003 - 12:44 pm UTC

what is a "base".

i suppose you meant "database".

I cannot answer your question, in fact NO ONE can answer your question -- except for you.

do you want to drop the tables? go ahead
do you need to? no, you can import with ignore=y, it'll ignore errors from "object already exists" type of failure.

How to create test schemas for 40 users

Olaf, November 03, 2003 - 4:39 am UTC

Tom,

I have to create 40 test schemas in Oracle. My customer has 40 prod schemas and now wants, for each prod schema, one test schema (1-to-1 matching the prod: 40 prods and 40 tests).
How would you do this? Export/import of 40 schemas does not seem to be the simplest and fastest way to do this.

Thanks in advance

Olaf

Tom Kyte
November 03, 2003 - 7:27 am UTC



why doesn't it?

Olaf, November 03, 2003 - 8:53 am UTC

with exp/imp I need to do this 40 times (once for each user): create a dump file, create the test user, and imp it. I thought I could use something like "create as ...", as my data is already in the database -- why should I export it?
I need to duplicate the schemas in the same database under different
names: for user smith create smithtest, and do this for all users.

Is there a Size Limit with IMP/EXP?

Richard, April 06, 2004 - 9:42 am UTC

Hi,

I need to get an exact copy of our production 8.1.6 DB from an NT machine onto our W2K server (boo!), which has 9i on it (hooray!).

As the Operating Systems differ, IMP/EXP appears to be the only way to go.

The data I wish to move around is 57GB in size. Is this too much for IMP/EXP?


Tom Kyte
April 06, 2004 - 9:52 am UTC

NT and W2K are file compatible -- they are the same. OK to backup and restore.


but you can use file= (with a list of files) and filesize= in order to generate N files of M bytes each (eg: split them up)
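For example (a sketch -- the file names and size are placeholders):

C:\> exp userid=system/manager full=y file=exp01.dmp,exp02.dmp,exp03.dmp filesize=2048m

Export closes one file and rolls over to the next as each one reaches the filesize limit.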

Thank You!!!!

Richard, April 06, 2004 - 9:58 am UTC

Hi Tom,

Many thanks for your help.

I thought (wrongly) that Oracle was ULTRA picky about OSs, and am delighted that an NT to W2K restore will be OK.

Cheers!

Moving data from one Server to New Server(or Test Server)

denni50, April 06, 2004 - 10:57 am UTC

We did our replication of our 20g Production DB to a new
faster IBM Server differently.

We went from an NT4 to Windows 2000 Server OS.
Everything was backed up and we did perform a full export
just in case.

1) We created the same directory structure on the new
server as the source server.
2) shutdown db and all Oracle Services on source server
3) Physically copied all the datafiles/logfiles/controlfiles
to their same locations on the new server.
4) copied the init<sid>.ora to the new server.

5) installed Oracle and configured the new server.

We were able to accomplish this in much less time and without any glitches.


Copy prod to test schema

olaf, April 27, 2004 - 5:04 am UTC

Tom,

I very often have to copy prod schemas to test schemas.
I have 80 schemas, and the way I do it is: drop the test user,
create a new one with the same name, export prod and import the data into test.
How can I make it simpler? Copy tablespaces?

Olaf

Tom Kyte
April 28, 2004 - 11:58 am UTC

you can use transportable tablespaces but the steps would be more or less the same.

now you:

a) drop user in test
b) export user from prod
c) create user in test
d) import user into test


using transportable tablespaces you would:

a) drop user in test
b) make tablespaces read only in prod that contain data for this user and "exp" them using transportable tablespace option. set tablespaces back to read write.
c) create user in test
d) import user into test, reattaching the datafiles


it might be faster but the steps would be the same.


the fastest way I find is restoring prod to test from a backup. In that fashion, you not only get speed, you KNOW you can actually restore!
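A rough sketch of the transportable tablespace variant, assuming a single tablespace TEST_DATA holds the user's segments (the names and file paths are placeholders, and the account used needs the EXP_FULL_DATABASE / IMP_FULL_DATABASE privileges):

SQL> alter tablespace test_data read only;

$ exp userid=system/manager transport_tablespace=y tablespaces=test_data file=tts.dmp

(copy the TEST_DATA datafiles to the test server)

SQL> alter tablespace test_data read write;

then on test (after the old tablespace is dropped and the user exists):

$ imp userid=system/manager transport_tablespace=y file=tts.dmp datafiles=/u01/oradata/test/test_data01.dbf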

FULL IMP

branka, May 21, 2004 - 1:50 pm UTC

I would like to make a full import of the prod database into an existing database. Do I have to imp user by user, or will a full imp overwrite the existing database and create a new one?

Tom Kyte
May 21, 2004 - 2:02 pm UTC

a full import will just "add", import will never drop existing things.

however, it would be "unusual" to say the least to full import into an existing database that isn't "empty".

Test with subset of data

Joe Simunac, May 24, 2004 - 8:21 am UTC

We need to create a copy of our production data for testing, but with only a subset of our data. What would be the best way to do this?

We've suggested to our DBA that he copy the database and then delete all IDs below a certain number (thus getting rid of older data), but he says this would be difficult and time consuming. I suspect we don't have complete referential integrity, so no cascading deletes. It's a third party product, so we can't make major changes because they would be wiped out in the next update (which happens often).

We're on 9.2.0.4. The database is about 90GB, with over 2000 tables. Thanks for any suggestions you can give.

Tom Kyte
May 24, 2004 - 8:43 am UTC

90 gig is pretty tiny (my desktop computer could almost hold that today) -- in my experience the only test that counts is the one against the same volume of data that will be in production (else, it is not a test; might as well just have empty tables for what that's worth).


Especially with a 3rd party application - you don't know their schema, you don't understand their rules. You could totally mess up the data -- further invalidating your test.

I would concur with your DBA; you would be setting yourself up for a painful experience trying to subset this.

Given that you must have enough space to stage this (you suggest copy and then delete) -- I would just go with the full up dataset. You'll be much more successful in all ways doing that.

Don't want LOBs in my export

Amit Kumar, May 27, 2004 - 3:11 pm UTC

Hi Tom,

I have a schema in which I want to create a table with a LOB column. For my testing, I always import the production data (on Sun Solaris) to my Windows test machine. So far the dump file was small, and export/import was fast and easy. With the LOBs, I would think it will be slow and the dump file will be huge. For my testing, I really don't need the LOB table every time I export.

So, my question is: can I export everything except this one table? I can probably do it if I write out each and every table's name except the LOB table. But is there any other way?

Will I be much better off keeping the LOB table in another schema? What do you suggest I do if I don't want the LOB table in every export I do?

Thanks,

Amit

Perhaps my earlier posting should be somewhere else ...

Amit, May 29, 2004 - 6:11 pm UTC

Hi Tom,

My earlier posting is about creating a test environment from production. It's not an export question per se, but if you want I can put it in with the export/import questions.

Thanks,

Amit

Tom Kyte
May 30, 2004 - 10:35 am UTC

you won't be using export to get everything but a single column.

in 10g, you can create table as select and create an external table in a datapump file format (so in 10g, this is pretty easy).

In 9i, you'd need to find another way. export won't do it.
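A sketch of the 10g technique (the directory, table and column names are made up):

SQL> create directory data_dir as '/u01/app/unload';
SQL> create table emp_unload
  2  organization external
  3  ( type oracle_datapump
  4    default directory data_dir
  5    location ('emp_unload.dmp') )
  6  as select empno, ename, hiredate from emp;

The resulting emp_unload.dmp can then be attached as a read-only external table in the test database.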

Export/Import

MM, June 02, 2004 - 6:57 am UTC

Tom,
I have the following situation:
There are 2 databases: database A (source) and database B (target). I need to append data from database A to the data in database B (all the tables exist in database B). There will be about 50 tables with an average of 50 to 100 rows each. The data is mostly lookup data. There are a couple of tables that have a sequence number (unfortunately no sequence generator), which means I have to increment the sequence number correctly when I append to the data in database B.
I can think of 2 options
1. Create a database link. Move the data via sql inserts. During this process I can also write code to increment the sequence numbers correctly.
2. Export from A, import into B. Import the tables that require special processing into a different schema. Write scripts to modify the data to match the target tables and insert them.
Do you see any other options available to me? If you don't, then of the two, which would be the best solution?

Thank you.
MM


Tom Kyte
June 02, 2004 - 8:47 am UTC

#1 seems the most reasonable if you have to 'transform' the data (fix it)

Export/Import

MM, June 02, 2004 - 9:34 am UTC

I have one table with more than a million rows. Do you think this approach of a database link will work to move so many rows, or will export/import be better for that? I know I said earlier that all tables have about 50-100 rows. Most of them do, except for this one table.

Tom Kyte
June 02, 2004 - 9:55 am UTC

1,000,000 rows is pretty small. either will do.

Export/Import

MM, June 02, 2004 - 10:38 am UTC

If I use the one database link and write SQL, then if I need to move the data to a different database I have to recreate the script, right? Or I have to build the inserts dynamically so the remote db link name can be changed. Is there a way around this? Can I use a generic name 'MY_LINK' for the database link, so that whenever a database link is created (to link the target database) the link will be called MY_LINK? (Given that there could be only one MY_LINK at a time.)
Thanks
MM

Tom Kyte
June 02, 2004 - 11:22 am UTC

views, synonyms - they are tools for location transparency.
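For example (a sketch, with made-up link and object names):

SQL> create database link src_link connect to app_owner identified by app_pw using 'DB_A';
SQL> create or replace synonym src_lookup for lookup_table@src_link;
SQL> insert into lookup_table select * from src_lookup;

To point the same script at a different source database, you only recreate SRC_LINK (and/or the synonym); the insert statements themselves never change.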

moving to a new box

Eugene, July 24, 2004 - 1:55 pm UTC

Hi Tom,
We are moving from 8.1.6 on NT4 to 9.2 on Win2003.
As I read before, there are at least a couple of ways to move the data over to a new server. I am, however, still in doubt about what is the best (error free and fast) approach. Our DBA is proposing export/import, but I think she does it because it is something that she simply knows best.
I am not an expert, but I proposed moving 8.1.6 to a new server and upgrading in place (to 8.1.7, then 9.2).
I was told that it will most likely take longer and it will not validate corruption, as import will. Plus, she said why do two steps when we can do it in one. Our current database is about 100G.
Please advise,



Tom Kyte
July 24, 2004 - 8:47 pm UTC

how many hours do you have for the cutover?

how does she plan on doing the exp/imp? with a bit of parallelism or just a full?

moving to another server

Eugene, July 24, 2004 - 10:14 pm UTC

Tom,
Thanks for the reply. Time is not an issue. I am talking about 54 hours (almost a full weekend). How would you turn parallelism on?



Tom Kyte
July 25, 2004 - 10:45 am UTC

then I would let the DBA do it however the DBA wanted to do it -- given that they have X units of time. It would be up to them.

the parallelism isn't "turned on" so much as "do it yourself implemented". export big tables by themselves "in parallel" (at the same time), export entire schemas "in parallel" -- import them "in parallel" (at the same time).

It would be a technique the dba would have to set up and practice a couple of times.

Oh -- the one thing they must do is practice this three or four times before actually doing it. Otherwise there is almost no chance it'll actually happen the first time around.

Your exp/imp strategy

Goh, August 03, 2004 - 7:29 am UTC

Dear Tom,

I have been given an assignment to perform a database migration from v7 to v8.1.7. The schema and file structure for the database are exactly the same. The size of the database is about 200GB and it is running on a very old, low-end HP server. The new server is a Sun server with 4 CPUs. The timeframe given is 6 days: 2 days for planning and 4 days for migration execution. Could you please briefly share your strategy for this database migration process, and what needs to be taken into consideration?
Planning Stage :
Execution Stage :

Thanks in advance

Rgds
Goh

Tom Kyte
August 03, 2004 - 9:03 am UTC

going from the unsupported to the soon to not be supported.... hmmm. anyway.


200gig, couple of hours -- I'd set up parallel export streams (if your schemas are nicely divided up - user level exports) and import streams.

(if not nicely divided up, table level exports perhaps).


Where is the time for the test plan?

A reader, August 03, 2004 - 9:33 am UTC

Create a trace, using:
alter database backup controlfile to trace
Edit the trace file.
Copy the database files to the new machine, into any directory.
Modify the create controlfile statement (rename the database, fix file locations, etc.).
Run it against the target database.

Alter database open resetlogs

This way you can clone the database.






Tom Kyte
August 03, 2004 - 9:36 am UTC


you need to check the question again:

I just wish to copy all database objects/structures like tables,indexes etc.,
and not the data.


hmm.



refresh testing db

Sean, August 03, 2004 - 5:02 pm UTC

Hi Tom,

We have a 100G prod db (920). We have to copy it to a testing db once a week, so the testing db and prod db will be exactly the same (both objects and data). We used to clone the db once a week. But since the change is always small, are there any other ways to get a snapshot of the prod db without db cloning or a full export and import?

Thanks so much for your help.



Tom Kyte
August 03, 2004 - 6:02 pm UTC

you can use streams

https://docs.oracle.com#index-STR



Thanks.

goh, August 05, 2004 - 1:54 am UTC

Dear Tom,

Yes, the testing stage is in the plan. I encountered another problem while importing a schema/owner into a newly created schema on a different server. Let's say I import from A to B (newly created) with ignore=y, no rows and no grants. The default tablespace for A is XYZ. One of the tables doesn't get created, and I don't know why. Thanks.

Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
. importing ADW_CP_OWNER's objects into DEV2_ADW_CP_OWNER
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "LOT_PEGGING" ("CTRL_ID" NUMBER(*,0) NOT NULL ENABLE, "PLAN_ID"
"" NUMBER(*,0) NOT NULL ENABLE, "PEGGING_ID" NUMBER(*,0) NOT NULL ENABLE, "W"
"ORK_ORD_NUM" VARCHAR2(40), "LOT_NUM" VARCHAR2(40), "PRODUCED" CHAR(1), "OP_"
"PL_START_DTTM" DATE, "OP_PL_END_DTTM" DATE, "DEMAND_PL_DTTM" DATE, "QTY" NU"
"MBER(20, 4), "QTY_FED" NUMBER(20, 4), "DEMAND_QTY_FED" NUMBER(*,0),
..
..
"TINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE ""
"ADW_CP_DATA" NOLOGGING NOCOMPRESS, PARTITION "LOT_PEGGING_16" VALUES LESS T"
"HAN (17) PCTFREE 5 PCTUSED 95 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 1048"
"57600 NEXT 41943040 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 F"
"REELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADW_CP_DATA" NOLOGGING NO"
"COMPRESS, PARTITION "LOT_PEGGING_32" VALUES LESS THAN (33) PCTFREE 5 PCTUS"
"ED 95 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 104857600 NEXT 41943040 MINEX"
"TENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_P"
"OOL DEFAULT) TABLESPACE "ADW_CP_DATA" NOLOGGING NOCOMPRESS, PARTITION "LOT_"
"PEGGING_64" VALUES LESS THAN (65) PCTFREE 5 PCTUSED 95 INITRANS 1 MAXTRANS"
" 255 STORAGE(INITIAL 104857600 NEXT 41943040 MINEXTENTS 1 MAXEXTENTS 505 PC"
"TINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE ""
"ADW_CP_DATA" NOLOGGING NOCOMPRESS, PARTITION "LOT_PEGGING_128" VALUES LESS "
"THAN (129) PCTFREE 5 PCTUSED 95 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 10"
"4857600 NEXT 41943040 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1"
" FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADW_CP_DATA" NOLOGGING "
"NOCOMPRESS, PARTITION "LOT_PEGGING_MAX" VALUES LESS THAN (MAXVALUE) PCTFRE"
"E 5 PCTUSED 95 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 104857600 NEXT 41943"
"040 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1"
" BUFFER_POOL DEFAULT) TABLESPACE "ADW_CP_DATA" NOLOGGING NOCOMPRESS )"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'ADW_CP_DATA' does not exist
About to enable constraints...
Import terminated successfully with warnings.


Rgds
Goh

Tom Kyte
August 05, 2004 - 12:45 pm UTC

a multi-segment object that spans multiple tablespaces does not, cannot use the simple "replace the tablespace name with the user's default tablespace"

what you'll want to do is:

imp userid=u/p file= tables=that_table indexfile=foo.sql


that'll create foo.sql with the DDL. fix the DDL for this multiple tablespace object and then run it.

imp cannot deal with partitioned tables, IOTs with overflows, tables with lobs -- objects that span multiple tablespaces -- if the multiple tablespaces don't exist. there is no way to map dozens of different tablespaces for this one object!

Thanks a lots.

shgoh, August 06, 2004 - 6:23 am UTC

Dear Tom,

Thanks for the info.

Rgds
Goh

q on testing

A reader, September 07, 2004 - 12:51 pm UTC

I am planning to rewrite some of the views as part of
code cleanup. These views are currently well tested.
But the test is not automated unfortunately. I want
a way to ensure that my rewriting of the views does not
change any answers. What I am thinking of is this.
1. Set up tables with known data on which these views
are dependent.
2. do a spool and run selects from each of these views.
save this file.
3. For testing, I will run the same set of selects
against my new view definitions and do a diff between
the results.

What do you think? Is there a better approach?

Thanx!

Tom Kyte
September 07, 2004 - 1:07 pm UTC

I'd prefer to have someone just read the views and say "yes, they are semantically equivalent" (eg: they do the same thing)

else, using your method, I could "prove" that a coin when flipped will always up end heads.

thanx!

A reader, September 07, 2004 - 2:17 pm UTC

"I'd prefer to have someone just read the views and say "yes, they are
semantically equivalent" (eg: they do the same thing)"

Sure - that is already done. I want to doubly make
sure that they are correct. Also a way of testing
them - if automated - can be rerun again when a
future change is made to any of the views. Of course
you need to maintain the "gold" file when you fix a bug.
You dont think this is a good idea?

For the same reason, I remember seeing an option to
remove extra white spaces while spooling on this site.
Can you please repost it (if it exists)?


Tom Kyte
September 07, 2004 - 2:44 pm UTC

SQL> set trimspool on

but, to compare two queries result sets, just:


select <whatever>, count(src1), count(src2)
  from ( select a.*, 1 src1, to_number(null) src2 from (VIEW1) A
         union all
         select b.*, to_number(null) src1, 2 src2 from (VIEW2) B
       )
group by  <all columns>
having count(src1) <> count(src2)


eg:
scott@ORA9IR2> create or replace view v1
  2  as
  3  select empno, ename, dname
  4    from emp, dept
  5   where emp.deptno = dept.deptno;
 
View created.
 
scott@ORA9IR2>
scott@ORA9IR2> create or replace view v2
  2  as
  3  select empno, ename, dname
  4    from emp, dept
  5   where emp.deptno(+) = dept.deptno;
 
View created.
 
scott@ORA9IR2>
scott@ORA9IR2>
scott@ORA9IR2> select empno, dname, count(src1), count(src2)
  2    from ( select a.*, 1 src1, to_number(null) src2 from V1 a
  3           union all
  4           select b.*, null, 2 from V2 b
  5             )
  6   group by empno, ename, dname
  7  having count(src1) <> count(src2)
  8  /
 
     EMPNO DNAME          COUNT(SRC1) COUNT(SRC2)
---------- -------------- ----------- -----------
           OPERATIONS               0           1
 


That shows us that SRC2 (v2 -- view2) has a row that view 1 does not.....



 

thanx!!

A reader, September 07, 2004 - 2:52 pm UTC

that should help...however, in my case though,
the view names would be the same. So I will have
view v1 - the older definition.
and the same view v1 - a newer definition.

I would be comparing the two. I suppose I can
always change their names in the newer version to
say v1_test; test it out and then change thier names
back..

Thanx!

Tom Kyte
September 07, 2004 - 2:55 pm UTC

c/from v1 a/from (text that will become v1) a/

just use an inline view.



excellent!

A reader, September 07, 2004 - 3:46 pm UTC

thanx!!

create test simulation

krish, October 12, 2004 - 1:14 am UTC

Hi, Tom
What a wonderful assistance you provide..
We have 9i (9.2.0.1.0) production db (5 gb, 100 users) on
main server and test db on another dummy server.
We face problems while optimizing querries on both servers.
Query executes quite well and fast over dummy
but fails or delays the execution at main server.
(This might be due to 100 online users).

So we have to execute query over both the servers for
confirmed results.

Is there any way whereby we can simulate production
db over dummy db ??



Tom Kyte
October 12, 2004 - 8:00 am UTC

when testing -- you need to test against a full copy of production, with similar loads (users).

there are many automated testing tools (eg: mercury interactives loadrunner), there are even free load test tools out there (google around).

Even simply running scripts you write yourself can achieve that.

Giri, November 18, 2004 - 2:03 am UTC

Ctas? if different OS also..?

Tom Kyte
November 18, 2004 - 10:34 am UTC

no idea what you mean.

tablespaces don't match between source and target DB

Jianhui, November 30, 2004 - 3:48 pm UTC

Tom,
The DB and imp/exp versions are 9204. Here is what I did:

exp from schema x@DB_1
imp into schema y@DB_2

DB_1 & DB_2 don't have exactly the same tablespace layout; actually, all data-related tablespaces in DB_1 don't exist in DB_2.

An interesting thing that I observed: some tables failed to import because of the following error

IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'xxxx' does not exist

But some tables succeeded to import even though their original tablespaces (in DB_1) don't exist in DB_2.
. . importing table              "MYTABLE"     354815 rows imported

in DB_1, 
SQL>select tablespace_name from user_segments
  2  where segment_name='MYTABLE';

TABLESPACE_NAME
------------------------------
TBS_01


in DB_2
SQL>select tablespace_name from dba_tablespaces
  2  where tablespace_name='TBS_01';

no rows selected


How could you explain this? I guess the later version of the import tool creates tables in the default tablespace if the same tablespace as in the original database is not found, but how come some failed and some did not?


 

more information about imp&exp with different tablespace layout

Jianhui, November 30, 2004 - 4:00 pm UTC

Tom,
Just found out that the tables that failed to import are partitioned -- guess that tells me something. But it would be great if you could explain how imp works without the same tablespaces; a URL would be great too.

Data Transfer

denni50, January 19, 2005 - 12:56 pm UTC

Tom

need your take on this:

we want to transfer data from the prod db to the test db.
this involves two tablespaces (data, index) that are tied
to the third-party software, owned by a super user.

we have been going back and forth with the vendor on how
best to get this done..they are recommending an entire
db copy (undo, redo, temp, data, index..etc)...they claim
it's easier and faster. I see a huge potential for disaster.

I'm suggesting transportable tablespaces..they say this can't
be done because of the impact on PL/SQL (software packages, procedures, triggers).

They also claim export/import is too complex because of the app design.

I am not in favor of the db clone proposal (we already have a db; all we want is a schema data transfer involving two tablespaces).

what is your take on this?

your suggestions will be considered.

thanks

Tom Kyte
January 20, 2005 - 9:32 am UTC

why do you see a huge potential for disaster?

if your DBAs cannot take a production database and give you a clone of that database with just the two tablespaces you need (including rbs, temp and system) in their sleep, with one hand tied behind their back, with their eyes closed -- THAT, that would be the disaster.


This would be a standard "recover my data over here please" sort of request.

Tom

denni50, January 20, 2005 - 10:06 am UTC

thanks for answering here...I asked another question
on this issue in another thread regarding 'COPY'.

the potential for disaster that I see is possible corruption of any of the files during the copy process, in the event of a power outage, network disconnection...etc.

The last time we did this it took 17 hours to copy
all the undo, temp, sys, data, index files from prod to test.

My issue is why do we need to re-copy all the undo, redo,
temp, sys files when all we want is the data, index files
transferred. The test db is already an exact replica of
prod...all we want is to bring the test db data current with
the prod db.

I saw an answer that you provided to another poster on
another thread about using RESTORE from backups to accomplish copying data from prod to test...so I recommended that option to the vendor's DBA.
He answered that that is a viable alternative...so
we seem to be making headway on that issue.

thanks Tom...you're the best(as if you didn't already know!)
:~)





Tom Kyte
January 20, 2005 - 10:50 am UTC

if that disaster were to occur -- and be a valid concern, the entire concept of backing up would be "a disaster"


backup and restore - use your backups and restore to another system. awesome PRACTICE for when true disaster strikes :)


you should only need:

system
rbs
data

temp is recreated using create temporary tablespace.
if the backup was hot, as it should be -- you need the archives to recover the database.

then alter database open resetlogs to create new online redo.
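The tail end of such a restore might look roughly like this (file names and sizes are made up; the archived logs are applied during the recover step):

SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;
SQL> create temporary tablespace temp
  2  tempfile '/u01/oradata/test/temp01.dbf' size 500m;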

Export database

j, February 10, 2005 - 2:32 pm UTC

Hi Tom,

A simple question,

to do a full export of the db, do I need to log off all users?

thanks in advance for your help

Tom Kyte
February 11, 2005 - 7:36 pm UTC

no, you might need consistent=y, but you don't need to log anyone off.

what about tablespace

j, February 13, 2005 - 12:34 am UTC

hi tom,

what about tablespaces when we export? If I am exporting a database to another server, what happens if I don't have the same tablespaces and datafiles on the new server?

thanks for your great help

Tom Kyte
February 13, 2005 - 9:16 am UTC

if the tables are "simple" (eg: not partitioned, no lobs, no overflow segments -- the table has a SINGLE segment -- partitioned tables, lobs, overflows are all indications of a multi-segment object), then imp will rewrite the create statement to use your DEFAULT tablespace

eg: if the dmp file has "create table t ( x int ) tablespace DOES_NOT_EXIST", when you go to import it and it fails due to "tablespace not there", imp will rewrite that as "create table t ( x int )" and retry it.


if the object is a multi-segment object, and some/all of the tablespaces (pre 10g, assuming IMP and not IMPDP -- the new data dump) do not exist, you would have to:

a) imp .... indexfile=temp.sql
b) edit temp.sql, get the create statements
c) edit them, fix the tablespace references
d) run them - to precreate the object
e) import with ignore=y
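A sketch of those steps (the dump file, script name and connect strings are placeholders):

$ imp userid=system/manager file=prod.dmp full=y indexfile=precreate.sql

(edit precreate.sql: uncomment the REM'd CREATE TABLE statements and fix the tablespace clauses)

$ sqlplus system/manager @precreate.sql
$ imp userid=system/manager file=prod.dmp full=y ignore=y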



Multiple imports

Sanjay Jha, March 30, 2005 - 5:39 pm UTC

Tom,
Here is the requirement of my client.
We have a production database running on 8.1.7. There are major releases implemented from time to time to make substantial data structure changes, like adding new tables, modifying columns etc. They want to preserve a "baseline" after each release (which I do by taking a full export dump), let us call it dump1.

The requirement now is that as and when needed, they should be able to re-create a new database instance from this baseline dump1 (but with no data from that time).

However, they would like to get the data from the current production database as of that date, so I will have to make a fresh export, let us call it dump2.

So the requirement is to get the data structure from dump1 and the data from dump2, so I do the import first from dump1 with the rows=n and full=y options and they are fine. It will bring all the users, tables, views and CONSTRAINTS too.

But when I tried importing data from dump2 into the newly created database, I had problems with the data load, as constraints complained about parent keys not found. So I disabled all the constraints first and thought that the next import from dump2 (with full=y, constraints=n options) would go through without problems.

It did go without problem this time, but it somehow enabled some of the constraints even though I had put constraints=n. All the other constraints remained disabled. Also, it might have imported new objects and constraints from dump2, which we did not want. Was it because I had exported dump2 with the full=y option?

So, how can we just get the data and data only from dump2 into the new database and rest of the structure including CONSTRAINTS from dump1?

In short we want database structure as of baseline but the data of current production, any additional objects or constraints of current production should not "sneak" into our newly created database while getting the data. Is it feasible?

Also, why some of the constraints got enabled in my second import?

Tom Kyte
March 30, 2005 - 6:21 pm UTC

you don't tell me "what" constraints so I'll have to ignore that part (information is vital)

exp userid=x/y tables=(t1,t2,t3....) constraints=n indexes=n <whatever else=n>


that'll extract just the data, you'll still have to deal with any new/dropped columns yourself.

Sanjay Jha, March 30, 2005 - 9:59 pm UTC

Tom,
The constraints are the foreign keys and primary keys that I referred to in my question earlier.

We want all the constraints to come from the baseline source (dump1).

But we need a full database refresh, not a few tables; the list of tables would be too long, and there are multiple schemas. We need to bring in all the schemas.

Please give me a solution where one can rebuild an environment from scratch, taking the database structure (everything except data) from a frozen baseline dump (dump1) and just the corresponding data (and nothing else) from the current production.

Incidentally, can you refer me to any procedure by which one can compare two databases across schemas (TOAD has schema comparison, but I have not found any other tool which can do a comparison at the database/instance level, and that too without creating a database link)?

Please help. Thanks.

Tom Kyte
March 31, 2005 - 7:27 am UTC

if you imported with constraints=n, they will not come back in unless it was a bug, but not a bug that I've seen

for example:

drop table emp;
drop table dept;
create table emp as select * from scott.emp;
create table dept as select * from scott.dept;
alter table emp add constraint emp_pk primary key(empno);
alter table dept add constraint dept_pk primary key(deptno);
alter table emp add constraint emp_fk_dept foreign key(deptno) references dept(deptno);
alter table emp add constraint emp_fk_emp foreign key(mgr) references emp(empno);
                                                                                                         
select object_name, object_type from user_objects
union all
select constraint_name, 'CONSTRAINT' from user_constraints;
                                                                                                         
!exp userid=/ 'owner=(ops$tkyte)'
pause
                                                                                                         
drop table emp;
drop table dept;
select object_name, object_type from user_objects
union all
select constraint_name, 'CONSTRAINT' from user_constraints;
!imp userid=/ full=y constraints=n indexes=n
select object_name, object_type from user_objects
union all
select constraint_name, 'CONSTRAINT' from user_constraints;


running that will not have any primary/foreign keys in place.


the list of tables may be placed into a parameter file (and hence won't be "too long", you'll just need to maintain a file of tables you want -- something you can generate straight out of the data dictionary)


without a database link, you would have to compare things "in the client without using SQL".  I don't know of any tool that wants to reinvent the power of sql in their client application just to compare two database structures. 

A reader, May 25, 2005 - 11:47 am UTC

Hi Tom,

Consider the following situation:

I have a server setup which hosts 2 databases, db1 and db2. Since this leads to a lot of maintenance, we want to essentially have just one database, db1, and have the main schema in db2 as part of db1. However, we would still like to keep the tablespaces pertaining to db1 separate from those of db2. But right now there are some tablespaces which have the same name in db1 and db2, like 'INDX'. What do you think will be the best and easiest way to carry out this operation?

Thanks.

Tom Kyte
May 25, 2005 - 3:08 pm UTC

is 10g in your future?

A reader, May 25, 2005 - 3:43 pm UTC

Hi Tom,

Yes the database we are using is 10g.

Thanks.

Tom Kyte
May 25, 2005 - 7:43 pm UTC

great, 10g lets you rename tablespaces.

rename them, transport them, attach them.

applications don't care what tablespace things are in, you just need to update admin scripts (maybe)
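In 10g the rename itself is a one-liner -- for example, with a made-up name:

SQL> alter tablespace indx rename to indx_db2;

after which the tablespace can be made read only, exported with the transportable tablespace option, and plugged into db1.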

A reader, May 25, 2005 - 8:57 pm UTC

Hi Tom,

Can you give me examples of how to rename tablespaces, transport them and attach them?

Consider I have database db1 with tablespaces tb1 and tb2, which have tables tbl1 and tbl2 created in them respectively. I also have db2 with the same tablespace names and same table names. Now I rename tb1 and tb2 in db2 to db2tb1 and db2tb2. If I put them in db1, are you saying that the tables which were originally created in the old tablespaces in db2 will automatically be in the new tablespaces? When I transport the tablespaces and attach them, what else will come along with the tablespaces? Thanks a lot for your clarifications.

Thanks.

Tom Kyte
May 25, 2005 - 9:08 pm UTC

have you peeked at the docs at all?

alter - see sql reference for altering a tablespace

transporting, see the utilities guide for exp/expdp and imp/impdp

you would rename the tablespaces
transport them off (export the metadata and copy the datafiles)
attach them (import the metadata)

you get the "data and relevant objects (triggers)"; you would need to move things like procedures/packages yourself.

To clone subset of data using exp/imp

Edgar, May 26, 2005 - 5:05 am UTC

Hi,

IMHO, this is also worth considering if you are concerned with creating a test env containing a subset of data:

1.) generate a list (export parameter file) of the schema.tables:partitions this subset must contain
2.) do a table mode export with consistent=y rows=y, with no indexes, no grants, etc.
3.) do a schema(s) mode export with rows=n
4.) export grants for schemas with a SQL*Plus script (generate the script with a script)
---
5.) create the schema(s) in the target database
6.) assign grants from step 4
7.) import data from step 2
8.) import the rest from step 3

9.) take care about statistics: sometimes it is a good idea to copy them from production into test, not gather them.
Then SQL execution plans will be more realistic during tests.

The order of the rows/norows exp/imp makes sense because of sequences.

Regards,

A reader, May 26, 2005 - 5:06 am UTC

How do I refresh a TEST database with a hot backup taken from PRODUCTION, without stored program units like packages, functions, procedures, etc.?

Tom Kyte
May 26, 2005 - 9:38 am UTC

you won't, you can restore production to another instance, TRANSPORT just the data using export (allows you to copy datafiles) and import them (attach these files) to the real test instance.

that would just move the data.

Or you could do tablespace point in time recoveries on your data tablespaces as well -- but that basically just automates the above process.

Re: A reader

Edgar, May 26, 2005 - 5:33 am UTC

Hi,

A little strange situation...
Typically, you test the changes (releases) you intend to apply to the production environment.
So, typically, it is a good idea to clone production with all object definitions and apply the changes (release) in the test env. In that case you have the possibility to test not only the new code, but also the process of applying the changes.

Anyway, you can try:
1. export schemas with rows=n from the test env
2. clone the test env from a backup of production
3. import with ignore=y

Ask Tom, maybe, also about transportable tablespaces?

Export and Import of PL/SQL code

Ravi, May 26, 2005 - 8:42 am UTC

Tom,
In your book you have mentioned that you prefer not to use export and import to move PL/SQL stored program units, but when I do an export it does seem to be doing it:
. exporting views
. exporting stored procedures

We are using Oracle9i Enterprise Edition Release 9.2.0.5.0. Can you detail any anomalies, with simple examples, where things get 'out of sync' when moving PL/SQL?

If you could give a convincing reply, consider this: I am using TOAD's Export Source Code for all my PL/SQL code (incl. views), and then once done, I am recompiling each 'invalid' item one by one. This seems to complete the process; am I doing anything wrong?

Cheers

Ravi

Tom Kyte
May 26, 2005 - 9:56 am UTC

use export to move procedures -- and nothing but procedures.

how would you do it?

I don't want tables, I don't want sequences, I don't want views -- in fact, I want to move just the procedures from schema X to schema Y.

How would you use exp to do that?

that is why I prefer not to use it, it isn't very flexible that way.

Ravi, May 26, 2005 - 10:10 am UTC

So if you want the whole lot, ie Database Objects and PL/SQL stored units, you would still use Export?

Tom Kyte
May 26, 2005 - 10:35 am UTC

I rarely use exp/imp to move things. but it would depend on the circumstances.

(I'd rather run the install script for the application -- or restore a database)

Followup to export tablespaces, transport and reattach them

A reader, May 26, 2005 - 10:53 am UTC

Hi Tom,

Thanks for your reply on how to use expdp to transport tablespaces. You said that since this is 10g I can rename the tablespaces, export and transport the tablespaces, and reattach them. My question is: does this apply to the SYSTEM, REDO and UNDO tablespaces also, or just the user tablespaces? Please let me know.

Tom Kyte
May 26, 2005 - 10:57 am UTC

you are just moving the data, your data.

system -- no, you would not transport that.
undo (temp) -- no, you would not transport that.

redo is not a tablespace

This is simply a way to move your data from database 1 to database 2 without having to unload it and reload it

A reader, May 26, 2005 - 1:45 pm UTC

Hi Tom,

I tried to use expdp for transporting tablespaces using the following command:

C:\>expdp userid/pw@servicename dumpfile=dumpdb.dmp transport_tablespaces='<list of tablespaces>' transport_full_check=y logfile=dumpdb.log directory=data_dir

The prompt returned in 2 mins and when I checked the log this is the contents:

;;;
Export: Release 10.1.0.2.0 - Production on Thursday, 26 May, 2005 11:42

Copyright (c) 2003, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39012: Client detached before the job started.

Please help.

Thanks.



Tom Kyte
May 26, 2005 - 2:39 pm UTC

what data pump setup had you performed on the server?

have you read through:
</code> http://docs.oracle.com/docs/cd/B14117_01/server.101/b10739/tspaces.htm#sthref1231 <code>

A reader, May 26, 2005 - 3:23 pm UTC

I have done the following:

begin
sys.dbms_tts.transport_set_check('<list of tablespaces>');
end;
/

After this, I did
select * from sys.transport_set_violations;

and I got "no rows selected".

Then I did alter tablespace 'tablespacename' READ ONLY;

for all the tablespaces in the list.

I created a directory
create or replace directory data_dir as 'c:\expdp';

After this, I issued the expdp command from the command prompt.

Thank you for the link to the article; it describes the same steps before the export. Please let me know if I am missing something.

I also have another question. Immediately after issuing the expdp command should I start copying all the datafiles from source to destination?

Thanks a lot for your help.




Tom Kyte
May 26, 2005 - 3:31 pm UTC

you can start copying the datafiles as soon as you set it read only.

A reader, May 26, 2005 - 3:41 pm UTC

That's great -- let me try copying the datafiles and I'll let you know. Did you find any problems with the steps I took before doing expdp?

Thanks.

Tom Kyte
May 26, 2005 - 7:23 pm UTC

assuming c:\expdp was on the SERVER, looks ok so far.
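
Pulling those steps together, a minimal sketch of the source-side flow, assuming placeholder tablespace names TS_DATA and TS_IDX and the data_dir directory created above:

-- 1) verify the set is self contained
exec sys.dbms_tts.transport_set_check( 'TS_DATA,TS_IDX', true );
select * from sys.transport_set_violations;   -- should return no rows

-- 2) make the tablespaces read only
alter tablespace ts_data read only;
alter tablespace ts_idx  read only;

-- 3) export just the metadata for the transportable set (run on the server)
C:\>expdp system directory=data_dir dumpfile=ts_meta.dmp logfile=ts_meta.log transport_tablespaces=TS_DATA,TS_IDX transport_full_check=y

-- 4) copy the datafiles (and ts_meta.dmp) to the target, then, back on the source:
alter tablespace ts_data read write;
alter tablespace ts_idx  read write;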

A reader, May 31, 2005 - 4:32 pm UTC

Hi Tom,

I referred to this article for using transportable tablespaces.
</code> https://asktom.oracle.com/Misc/oramag/on-fetching-storing-and-indexing.html <code>
I have completed till step 5 in that article. Now I am trying to do the import into the target database. (I have not exported the metadata from the source database).
When I tried the import as

impdp userid/pw@servicename transport_datafiles="all datafiles" network_link='srcdb' transport_tablespaces=\(list of tablespaces\) nologfile=y

I am getting the error

ORA-39001: invalid argument value
ORA-31600: invalid input value srcdb for parameter REMOTE_LINK in function DBMS_
DATAPUMP.OPEN
ORA-02019: connection description for remote database not found

Please let me know what mistake I am doing.

Thanks.

Tom Kyte
June 01, 2005 - 8:07 am UTC

your database link isn't setup correctly.

Unless and until you can:


select * from dual@srcdb;

that'll not work.
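
For example, a minimal sketch of creating and testing such a link (the user, password and TNS alias are placeholders):

create database link srcdb
   connect to scott identified by tiger
   using 'SRCDB_TNS_ALIAS';

select * from dual@srcdb;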

Problem while exporting data

Gaurav Chandra Pandey, June 03, 2005 - 3:52 am UTC

Hi,

While taking an export of the database, the export process hangs at
. exporting cluster definitions ....
The export is scheduled through the Windows scheduler.
Please help -- I have not been able to export the data for 2 days.


Tom Kyte
June 03, 2005 - 7:24 am UTC

not much I can do here except tell you to look into the database and see what it is "doing". so, what is it doing.

No versions
No OS's
no nothing...

(support would be a great place to go as well)

A reader, June 07, 2005 - 11:49 am UTC

In reference to the previous question about transportable tablespaces, you said that the link srcdb might not be set up correctly. I corrected that issue and retried the import:

impdp userid/pw@servicename transport_datafiles="all datafiles"
network_link=SRCDB transport_tablespaces=\(list of tablespaces\) nologfile=y

I am getting the error

ORA-39001: invalid argument value
ORA-31600: invalid input value SRCDB for parameter REMOTE_LINK in function DBMS
_DATAPUMP.OPEN
ORA-12528: TNS:listener: all appropriate instances are blocking new connections.

Even if I try connecting to the target database through SQL*Plus I get the same error, ORA-12528. Is this because all the tablespaces in the target database are READ ONLY?

Please help.

Thanks.


Tom Kyte
June 07, 2005 - 1:03 pm UTC

sounds like your tns connect string is wrong or the srcdb database is not started.



A reader, June 07, 2005 - 1:53 pm UTC

I just checked, the tns connect string is correct and the srcdb database is started.

Thanks.

Tom Kyte
June 07, 2005 - 2:00 pm UTC



can you select * from it as stated above.

A reader, June 07, 2005 - 2:35 pm UTC

Hi Tom,

The problem was that somehow the listener.ora did not have the SID entry for the srcdb database. I added that and restarted the listener. Now when I try to connect to the database I get

ora-01033 - Oracle initialization or shutdown in progress.

Should I do a shutdown and startup from the command prompt? If I do that, should I recopy all the datafiles from the target database to the source database? Please help.

Thanks.

Tom Kyte
June 07, 2005 - 2:41 pm UTC

do you have a dba ? I don't want to tell you what to do at this point.

A reader, June 07, 2005 - 2:59 pm UTC

I am the dba for this project and this is the first time I am trying something like this. Please help.

Thanks.

Tom Kyte
June 07, 2005 - 3:11 pm UTC

I'd rather not -- not sure of what your environment is, what is installed, how it is installed. Your comment about "copying datafiles" makes me a bit leery of suggesting anything.


Getting a database started and running (regardless of expdp or anything else) would be something I would expect a DBA to have well in hand. I'm afraid to tell you what to do -- not knowing your environment.

script to create users

Susan, October 19, 2005 - 12:21 pm UTC

Hi Tom,

I'm using your suggested method from above to create a script for creating users, roles and synonyms. I'm going from 8i to 10g -- is there anything I should consider regarding changes in grants to roles in 10g? I assume I don't need to worry about existing roles, users and synonyms; there's no harm in leaving them in the script because they'll just error out when Oracle tries to recreate them. Thanks.

Tom Kyte
October 19, 2005 - 12:53 pm UTC

remember that connect is "fixed" in 10gr2 - it only has create session... so the users with connect will have different privs.

Alex, November 22, 2005 - 10:40 am UTC

Hi Tom,

I was hoping you could elaborate on the backup section you wrote in Expert One on One on page 340. We currently use dumps from production to "refresh" our QA databases using IMP/EXP. As you stated in the book, this can take several hours. My questions are:

1.) How can I take advantage of RMAN to do this? Is it simply specifying a RECOVER command and using an existing backup?

2.) Can we use export files with RMAN, or does the file have to be created by an RMAN backup?

3.) Why MIGHT this take 5 minutes, as opposed to the hours involved in an import?

I apologize if the answer is on this thread or in the documentation you mentioned in the book. I have only skimmed it, but I do plan on reading all the recovery documentation Oracle has to offer. There is tons of great information there.

Tom Kyte
November 22, 2005 - 11:52 am UTC

1) you would follow the directions to recover to a new host, or use the duplicate database command. However, it would be cool for your dba's to prove they can restore by doing this via a restore of a backup.

</code> http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1008564 <code>


2) it would be an rman backup

3) because you are just laying down datafiles and going - you are not reloading and reindexing and recompiling code.
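
For reference, a minimal sketch of the duplicate command from point 1, assuming a prepared auxiliary instance named TESTDB (the connection details are placeholders; the documentation above covers the required setup):

$ rman target sys@prod auxiliary sys@testdb

RMAN> duplicate target database to testdb;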




Alex, November 22, 2005 - 12:20 pm UTC

Thanks, I was looking into the duplicate database feature, but that looks like it copies the data from one database to another, not from any kind of file?

Also, I'm not sure what you meant by:
"However, it would be cool for your dba's to prove
they can restore by doing this using a restore of a backup."





Tom Kyte
November 22, 2005 - 4:20 pm UTC

copies datafiles.



the thing you weren't sure on - just ask your dba's "hey guys, you can restore, why not just restore, it'll be really fast".

they have backups, they should be able to restore, and it would be great practice.

Alex, November 22, 2005 - 4:49 pm UTC

Ooh, I think I see what you are saying. No, I can't ask anyone; we don't use RMAN at all. I'm trying to build a case to start using it. Speeding up the time it takes to refresh our databases would be huge. The only thing we do right now is IMP/EXP.

To give you a flavor of where our level of expertise is, our client shuts production down every night to do a cold backup, while running in archive log mode... Almost like putting a hard-parse hint in all our code.

I'm still not sure how I would go about using duplicate database for our needs though. The command looks like
this: DUPLICATE TARGET DATABASE TO newdb;

The target being the one I'm logged in as, the one to be copied, and newdb being of course the auxiliary database to be populated. So if I have a backup handy, I don't see how that fits in here? Again I apologize I'm new to this.

Tom Kyte
November 23, 2005 - 9:09 am UTC

has your client ever RESTORED this backup to another machine just to prove they can (else why bother with backups at all???)

Ask them to do that - that'll

a) give you your test instance
b) prove to management they can restore


forget rman for a minute, just do a traditional restore.

Alex, November 23, 2005 - 10:27 am UTC

We restore backups all the time, it just takes a couple hours. We've never had the need for any kind of disaster or media recovery since I've been here, but we do import new data from production all the time.

So we don't get our terminology mixed up,

"backup" for us = a dmp file from production
"restore" for us = IMP the dmp

That's all we do, end of story. So when you say "traditional restore", I think that may mean something different to you and I.

I did get one objection from someone about using RMAN: for our client to give us backups in that format, they would apparently be much too large -- around 30 gigs. I don't know if you can compress those or not. Also, the fact that our data is sensitive may come into play; it might have to be transported on some DOD-approved grenade-proof device that only holds a gig or what have you. Thanks.

Tom Kyte
November 23, 2005 - 11:00 am UTC

your terminology is WRONG!!!



You wrote:

...
To give you a flavor for where our level of expertise is, our client shuts down
every night (production) to do a cold backup, while running in achieve log
mode.....
.....


I want to restore those backups - those are not exports!

Alex, November 23, 2005 - 11:38 am UTC

I know it's wrong, that's the point of me asking this question, I want to know what's right. I was just translating how those terms are used here.

The only reason I told you how our client shuts down every night is to give you a good idea of how clueless they are. It sounded like you might have been assuming a lot of knowledge on our part; I just wanted to clear that up.
However, I think they are taking exports (unless that's not possible while the instance is shut down, and that's why you said they aren't exports). As far as I know they're EXPs.

I will say though, after reading some of the IMP/EXP thread you recently responded to, that for what we are doing (moving data) IMP and EXP seemed like the way to go. I thought RMAN backups could be used in place of a dmp and would just work faster. Say you had a copy of a production database and you wanted to populate it with production data to test against: is an import correct, or is restoring a backup right? Keep in mind that this would be done often and regularly. That's basically what I want to know.


Tom Kyte
November 23, 2005 - 7:35 pm UTC

you cannot take exports when the database is down, the database must be up.

So, they must be taking proper backups (else, archive log mode isn't very useful either!)


the fastest way to dup a database will be to take a physical backup of the database and restore it. Dumping it out (via export) and loading it back in (via import) is about the slowest way to do it.

Backup/Restore to clone database & Transp.Tablespace from 8i to 10g

stl, November 24, 2005 - 11:21 am UTC

Dear Tom:
I have two questions in reference to this thread...

1. About your statement "If the server is the same environment (hardware/os) just backup production and restore to test": can I back up production on Solaris SPARC and restore it to test on Solaris INTEL, or is this not possible? Is there a workaround? I know that in 10g it is possible to transport tablespaces between different platforms.

2. About transportable tablespaces: can I transport tablespaces from Oracle 8i with US7ASCII to Oracle 10g with WE8ISO8859P1?

Thank you!



Tom Kyte
November 24, 2005 - 5:09 pm UTC

1) not a backup, no. backups are OS specific.

In 10gr1 there is the ability to do cross platform transports of TABLESPACES

In 10gr2 there is the ability to do FULL database transports if the systems have the same byte order.


2) character sets must be the same - data changes in different character sets - the same "string" is represented with different byte(s).

Backup/Restore to clone database & Transp.Tablespace from 8i to 10g

stl, November 25, 2005 - 2:18 pm UTC

Tom,

What does "FULL database transports" mean? Can i transport all tablespaces?

I need to transport in only one way, in this specific scenario is it posible some character set convertion ?

Thank you!


Tom Kyte
November 25, 2005 - 3:49 pm UTC

yes, that is what a full database transport is - sort of like backing up from OS1 and restoring to OS2 - the entire database.


character sets for tablespace transport *must be the same*.

Possible Causes - Export Hangs . .Exporting Cluster Definitions

VIKAS SANGAR, December 08, 2005 - 7:10 am UTC

Dear Mr. Kyte

What can be the possible causes of an export hanging while Oracle is exporting cluster definitions?

I am facing this problem after successfully importing (on my Windows 2003 server, P4, running an Oracle 9iR2 database) a user's table export (a dmp file taken via the command line; objects like views/procedures/sequences/triggers were exported via PL/SQL Developer in *.sql format) which was produced, for testing purposes, by one of our clients running Oracle9i R2 on AIX. (A few tables have CLOB columns in them.)

Now, after making the necessary changes, when I try to re-export the same user from my Windows 2003 server machine, the export hangs when it reaches "exporting cluster definitions". I am, however, able to export all of the user's tables with data successfully -- just not the complete user.

What must I do to find out where the problem actually lies? I am unable to locate the exact cause of the problem.

Also, can we monitor/trace/analyse an export in Oracle 9iR2 to find out what actually goes on in the database while the export is running?

Your advice in this regard will be of great help to me and to many other people facing a similar problem.

Take care, regards...
Vikas.


Tom Kyte
December 08, 2005 - 7:16 am UTC

You can use a logon trigger to enable sql_tracing of your account, run the export and see what happens.


-- grant alter session to the current schema, so the logon trigger below can set the event
begin
   execute immediate 'grant alter session to ' || user;
end;
/

-- enable extended SQL trace (event 10046, level 12) every time the schema logs on
create or replace trigger trace_trigger
after logon on schema
begin
   execute immediate
      'alter session set events ''10046 trace name context forever, level 12'' ';
end;
/


also, you can always query the v$ tables to see what you are waiting on/sql executing currently from some other session.

Only for Discussion --> Export Hangs . .Exporting Cluster Definitions

VIKAS SANGAR, December 13, 2005 - 7:14 am UTC

Dear Mr. Kyte

Thanks a lot for your feedback.

Regarding the above post: since I could not find any discrepancies in the database from which I needed to export the user, and the user export kept hanging when it reached "exporting cluster definitions", a few things I did resolved the problem, although the process was long.

First of all, I logged in as SYS and was able to take an export of the user that needed to be exported. The export process ran smoothly when I executed it as SYS, but previously it used to hang at "exporting cluster definitions" when logged in as any other user (even one with IMP/EXP_FULL_DATABASE and DBA privileges).

Could the earlier hangs have been due to some authorization/rights/privilege mismatch when the export was run by another user?

Secondly, after successfully exporting the user, when I tried to import the dmp file into another fresh user (logging in as SYS) in the same database, the import failed, saying the objects already exist. This may be because other users in that database had the same objects.

Thirdly, when I imported the same dmp file as SYS into a newly created database, the import ran successfully. Now if I export that user as itself or as SYS, the export runs fine and no longer hangs at "exporting cluster definitions".

Can you please shed some light on why this happened and what the possible cause of the previous failures might be? What does the database actually attempt to do while the export runs, and what made it hang when it reached cluster definitions?

Take care, regards.
Vikas.

Tom Kyte
December 13, 2005 - 9:38 am UTC

... Could the earlier hangs have been due to some authorization/rights/privilege mismatch when the export was run by another user? ...

doubt it, but sql tracing would have told you - you'd see the differences there.


Please utilize support if you are not willing to run the trace for me...

DBMS_DATAPUMP

Laxman Kondal, January 12, 2006 - 11:10 am UTC

Hi Tom

I am testing NETWORK_LINK=source_database_link for import/export in Oracle 10gR2, using the user SYSTEM. SYSTEM has been granted the EXP_FULL_DATABASE/IMP_FULL_DATABASE roles and it still appears to need some privilege. What grant is SYSTEM missing to do this network export/import?

system@WNGDEV> begin
2 :h1 := dbms_datapump.OPEN(operation => 'IMPORT',
3 job_mode => 'SCHEMA',
4 remote_link => 'DB_WDMRM014.REGRESS.RDBMS.DEV.US.ORACLE.COM',
5 job_name => 'TEST',
6 VERSION => 'COMPATIBLE');
7 end;
8 /
begin
*
ERROR at line 1:
ORA-31631: privileges are required
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2926
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4367
ORA-06512: at line 2



H1
----------


system@WNGDEV>

Thanks and regards.

Tom Kyte
January 12, 2006 - 11:21 am UTC

could it be that a Data Pump job initiated by a user with the EXP_FULL_DATABASE/IMP_FULL_DATABASE roles specified a network link that does not correspond to a user with equivalent roles on the remote
database?
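
A minimal sketch of checking and fixing that, assuming the link connects as a placeholder user called LINK_USER on the remote database:

-- see which user the link actually connects as
select user from dual@DB_WDMRM014.REGRESS.RDBMS.DEV.US.ORACLE.COM;

-- on the remote (source) database, give that user the matching role
grant exp_full_database to link_user;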

Copy a subset of data to TEST weekly

Charlie Zhu, January 12, 2006 - 1:34 pm UTC

What if I just want a subset of the PROD database's data?
What would you suggest for doing that?

We don't have the same storage on the TEST server.

Thanks, Charlie

Options for deleting the load test data

A reader, January 16, 2006 - 11:41 am UTC

Tom,

We are running tests on a J2EE application/Oracle DB (running STREAMS) using the testing tool Web Load. Once we do a round of tests, we would like to delete the data added to the database since the start of the test and get it back to the baseline state. We would like to continue these tests in continuous cycles until the load testing of the application and database is complete.

One option we were considering for getting the database back to the way it was before starting the tests is to restore the database.

We wanted to know if there is any other option (flashback?) available for doing this. Any comments you may have on doing similar tests in a better manner would be really appreciated.

Thank you



Tom Kyte
January 16, 2006 - 12:50 pm UTC

there is flashback database that you can use (10g and above)

otherwise, the old "restore from a backup" works nicely (if slowly, for a large database)
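
A minimal sketch of the flashback approach in 10gR2 and up (the restore point name is a placeholder; the database needs a flash recovery area configured):

-- before the load test
create restore point before_load_test guarantee flashback database;

-- run the tests, then put the database back to the baseline:
shutdown immediate
startup mount
flashback database to restore point before_load_test;
alter database open resetlogs;
drop restore point before_load_test;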

CreatingTest data from production

Menon, February 02, 2006 - 1:22 am UTC

Hi Tom
Consider two options for creating test data:
1. Copy the production data on a regular basis so that the test system is a fairly good representation of production. There may be security issues here (for example, sensitive data), which can be addressed by scrambling selected columns of selected tables. The main advantage is that if your test hardware and software stack are the same as production (they had better be), you can run all sorts of tests in the test database (especially performance tests) and be fairly sure you will catch any issue. One disadvantage seems to be that your data gets wiped out every time you "refresh", so you have to make sure you retain whatever you want to use as a baseline (e.g. baseline stats). Another problem is that this approach may not scale as the production volume grows (the refresh time may creep up to uncomfortable levels, and so on).

2. Try to generate a representative slice of your production system in your test system. You can then run your tests on this "model". The problem here is that I think you can never really be sure your tests would reflect production, especially from a performance point of view.

What are your thoughts on this? I personally feel approach 1 is not bad despite some shortcomings -- it is the only faithful way to test (assuming you can do it).

Tom Kyte
February 02, 2006 - 11:48 am UTC

1) I don't see what baseline "stats" have to do with it? You would just take the statistics from the production machine restore.

The time to refresh is the time it takes to restore your production system in the event of catastrophic failure - you'd want to know how long that is and work to minimize it, so think of it as a "plus", not a minus :)

2) is really hard (to get the slice) and doesn't really "prove" anything other than a functional test (test the functionality) does. You don't know if it'll perform nearly the same in real life.


I would be hard pressed to come up with a time when #1 would not be my choice.

Thanx Tom!

Menon, February 03, 2006 - 12:45 pm UTC

"1) I don't see what baseline "stats" have to do with it? You would just take the
statistics from the production machine restore."

That is true - I did not think it through...

"The time to refresh is the time it takes to restore your production system in the event of catastrophic failure - you'd want to know how long that is and work to minimize it, so think of it as a "plus", not a minus :)"

Good point:)

"2) is really hard (to get the slice) and doesn't really "prove" anything other
than a functional test (test the functionality) does. You don't know if it'll
perform nearly the same in real life. "

Agreed that is what I thought.

"I would be hard pressed to come up with a time when #1 would not be my choice. "

#1 is not possible in all cases, of course. For an ecommerce company which stores its info in a backend -- yes. For a company that sells a product used by various customers -- you have to rely on some kind of model when testing. But it seems that if you can, you should use the exact same data in test as in production.

Thanx!

Neeraj, February 09, 2006 - 11:08 pm UTC

Source --> 9.2.0.6
Target --> 9.2.0.6

I am doing export-import (parallel, schema-wise).
Getting the ORA-01732 error for some MV's
Below is the detail:-
====================
1)
E1. Export the full db with rows=n from source (metadata only).
E2. Export all schemas separately from source with rows=y.

I1. Import E1 on target to create all metadata with constraints=Y.
I2. Import all schemas in parallel with constraints=Y.

. importing table "ISC_DBI_CFM_002_MV" 386 rows imported
. . importing table "ISC_DBI_CFM_003_MV"
Note: table contains ROWID column, values may be obsolete
IMP-00058: ORACLE error 1732 encountered
ORA-01732: data manipulation operation not legal on this view
. . importing table "ISC_DBI_CFM_004_MV" 1053 rows imported
. . importing table "ISC_DBI_CFM_005_MV" 229 rows imported
. . importing table "ISC_DBI_CFM_006_MV"
Note: table contains ROWID column, values may be obsolete
IMP-00058: ORACLE error 1732 encountered
ORA-01732: data manipulation operation not legal on this view
. . importing table "ISC_DBI_CFM_007_MV"

2)
E1. Export the full db with rows=n from source (metadata only).
E2. Export all schemas separately from source with rows=y.

I1. Import E1 on target to create all metadata with constraints=Y.
I2. Disable all foreign keys.
I3. Import all schemas in parallel with constraints=Y.

. . importing table "ISC_DBI_CFM_002_MV" 386 rows imported
. . importing table "ISC_DBI_CFM_003_MV"
Note: table contains ROWID column, values may be obsolete
IMP-00058: ORACLE error 1732 encountered
ORA-01732: data manipulation operation not legal on this view
. . importing table "ISC_DBI_CFM_004_MV" 1053 rows imported
. . importing table "ISC_DBI_CFM_005_MV" 229 rows imported
. . importing table "ISC_DBI_CFM_006_MV"
Note: table contains ROWID column, values may be obsolete
IMP-00058: ORACLE error 1732 encountered
ORA-01732: data manipulation operation not legal on this view
. . importing table "ISC_DBI_CFM_007_MV"
IMP-00058: ORACLE error 1732 encountered
ORA-01732: data manipulation operation not legal on this view
. . importing table "ISC_DBI_CFM_008_MV"


Note:
=======
In our case this error occurs even when that MV is not present in the target (before the import).

How can we get around the above issues while performing the parallel exp-imp?
Is there a particular order in which to import the database objects when doing a schema-level import of all the users?


Tom Kyte
February 10, 2006 - 12:37 pm UTC

if you get me a teeny tiny example to play around with, I'd be glad to look at it, else please utilize support.

Neeraj, February 16, 2006 - 10:04 am UTC

Hi Tom
Sorry for the delay.
I have found the solution for the above-mentioned error, but I have a new error while doing the export/import at the schema level.

We are using Oracle Applications 11.5.10.2
Database Version 9.2.0.6

To get the data from the customer I am doing a schema-level exp/imp.
There are many errors in the APPS schema; I have analyzed one of them and here is the issue:

We followed the below steps for exp-imp
    ===============================
    1)
    exported the Metadata (Full=Y,Rows=N,Constraints=Y)
    2)
    exported all the schema's(Rows=Y and constraints=Y)
    3)
    Imported the Metadata(Full=Y,Rows=N,Constraints=Y)
    4)
    Dropped all the Materialized Views
    5)
    Disabled all the Foreign key Constraints
    6)
    Import all of the schema's (Rows=Y,Constraints=Y)
    
    
    Import errored out for some of the View's
    Below are my findings for 1 view:-
    ==========================
    
    "CREATE FORCE VIEW "APPS"."PER_COMMON_LOOKUP_TYPES_V"                       "
     "     ("ROW_ID","APPLICATION_ID","CUSTOMIZATION_LEVEL","D_CUSTOMIZATION_LEVE"
     "L","LAST_UPDATE_DATE","LAST_UPDATED_BY","LOOKUP_TYPE","LOOKUP_TYPE_MEANING""
     ","LAST_UPDATE_LOGIN","CREATED_BY","CREATION_DATE") AS "
     "SELECT FCLT.ROWID , FCLT.APPLICATION_ID , FCLT.CUSTOMIZATION_LEVEL , FL.MEA"
     "NING , FCLT.LAST_UPDATE_DATE , FCLT.LAST_UPDATED_BY , FCLT.LOOKUP_TYPE , FC"
     "LT.LOOKUP_TYPE_MEANING , FCLT.LAST_UPDATE_LOGIN , FCLT.CREATED_BY , FCLT.CR"
     "EATION_DATE FROM FND_COMMON_LOOKUP_TYPES FCLT, FND_LOOKUPS FL WHERE FCLT.CU"
     "STOMIZATION_LEVEL = FL.LOOKUP_CODE AND FL.LOOKUP_TYPE = 'CUSTOMIZATION_LEVEL'"
    IMP-00003: ORACLE error 1445 encountered
    ORA-01445: cannot select ROWID from a join view without a key-preserved table
    IMP-00017: following statement failed with ORACLE error 1445:
    
    ############################
    
    SQL> select count(*) from apps.PER_COMMON_LOOKUP_TYPES_V;
    select count(*) from apps.PER_COMMON_LOOKUP_TYPES_V
                              *
    ERROR at line 1:
    ORA-04045: errors during recompilation/revalidation of
    APPS.PER_COMMON_LOOKUP_TYPES_V
    ORA-01445: cannot select ROWID from a join view without a key-preserved table
    
    
    When we did FULL Import (Full=Y) we got the below message:-
    "
    IMP-00041: Warning: object created with compilation warnings
    "
    Which is fine
    BUT
    While doing the schema-level import using the steps mentioned above (metadata --> drop MVs --> import schemas)
    we got the below mentioned error:-
    ===========================
    
    "CREATE FORCE VIEW "APPS"."PER_COMMON_LOOKUP_TYPES_V"                       "
     "     ("ROW_ID","APPLICATION_ID","CUSTOMIZATION_LEVEL","D_CUSTOMIZATION_LEVE"
     "L","LAST_UPDATE_DATE","LAST_UPDATED_BY","LOOKUP_TYPE","LOOKUP_TYPE_MEANING""
     ","LAST_UPDATE_LOGIN","CREATED_BY","CREATION_DATE") AS "
     "SELECT FCLT.ROWID , FCLT.APPLICATION_ID , FCLT.CUSTOMIZATION_LEVEL , FL.MEA"
     "NING , FCLT.LAST_UPDATE_DATE , FCLT.LAST_UPDATED_BY , FCLT.LOOKUP_TYPE , FC"
     "LT.LOOKUP_TYPE_MEANING , FCLT.LAST_UPDATE_LOGIN , FCLT.CREATED_BY , FCLT.CR"
     "EATION_DATE FROM FND_COMMON_LOOKUP_TYPES FCLT, FND_LOOKUPS FL WHERE FCLT.CU"
     "STOMIZATION_LEVEL = FL.LOOKUP_CODE AND FL.LOOKUP_TYPE = 'CUSTOMIZATION_LEVE"
     "L'"
    IMP-00003: ORACLE error 1445 encountered
    ORA-01445: cannot select ROWID from a join view without a key-preserved table
    IMP-00017: following statement failed with ORACLE error 1445:
    
    
    SQL> select count(*) from apps.PER_COMMON_LOOKUP_TYPES_V;
    select count(*) from apps.PER_COMMON_LOOKUP_TYPES_V
                              *
    ERROR at line 1:
    ORA-04045: errors during recompilation/revalidation of
    APPS.PER_COMMON_LOOKUP_TYPES_V
    ORA-01445: cannot select ROWID from a join view without a key-preserved table
    
    It seems that the view "PER_COMMON_LOOKUP_TYPES_V" got created, but selecting from it fails.
    I found a note (18849.1) which says to remove "ROWID" from the view definition and execute it again;
    that worked for me, but it is not the proper solution -- see below:
    
    Source Instance
    =============
    SQL> select TEXT from dba_views where VIEW_NAME='PER_COMMON_LOOKUP_TYPES_V';
    TEXT
    --------------------------------------------------------------------------------
    SELECT FCLT.ROWID , FCLT.APPLICATION_ID , FCLT.CUSTOMIZATION_LEVEL , FL.MEANING
    
    
    Target instance (view created after removing the "rowid" from the view definition)
    ===============================================================
    SQL> select TEXT from dba_views where VIEW_NAME='PER_COMMON_LOOKUP_TYPES_V';
    TEXT
    --------------------------------------------------------------------------------
    SELECT FCLT.APPLICATION_ID , FCLT.CUSTOMIZATION_LEVEL , FL.MEANING , FCLT.LAST_U
    
    
    There are many views which hit the same error during the schema-level export-import, but they do not error during the full export-import.

Regards
 

A reader, April 19, 2006 - 4:47 pm UTC


Can I shrink the tablespace?

chase, January 10, 2007 - 12:14 am UTC

Here is my target!

1) Copy the structure of a particular schema (more than 1000 objects) from the production system (huge datafiles and big initial storage clauses) to a test environment (very small hard disks).

2) Shrink the storage (ideally without the storage clauses when using exp/imp).

3) Avoid lots of manual editing for those 1000+ objects.

How can I achieve this goal?

Thx!

Duplicate schema

mooks, March 12, 2007 - 5:48 am UTC

Hi Tom

Currently all my schemas share the same tablespace in Oracle 9.2 and I would like to split it so that each schema has its own individual tablespace. Is there any way to duplicate a schema but store the objects and data in another, new tablespace?

Thanks for your advices in advance.

Regards,
mooks


Tom Kyte
March 12, 2007 - 8:53 pm UTC

you don't want to duplicate the schema, you want to move it.


alter table T move tablespace NEW;
alter index I rebuild tablespace NEW;
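
If there are many objects, a minimal sketch of generating those statements for the whole schema (NEW is a placeholder tablespace name; partitioned tables and LOB segments need their own move clauses):

select 'alter table ' || table_name || ' move tablespace NEW;'
  from user_tables;

select 'alter index ' || index_name || ' rebuild tablespace NEW;'
  from user_indexes;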

Data Pump Network Import with replace

Phil, May 10, 2007 - 12:13 pm UTC

Hi Tom,

If I need to refresh a schema on our 10gR2 test database from 10gR2 production, I'm thinking the Data Pump Utility would be easier/faster than using the old exp/imp utilities.

What would the syntax be to accomplish that over the network if I wanted to make sure that the data was replaced in the tables and all objects such as Functions/Procedures/Packages/Foreign Keys would be replaced by the production version, if different? Would permissions need to be reissued, or would they be replaced with the production grants too?

Tom Kyte
May 11, 2007 - 10:55 am UTC

you'd drop the schema or specific objects and then do the data pump.

Just like with export/import.
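
A minimal sketch of that refresh over a database link (the link name PROD_LINK, the schema name APP_OWNER and the directory are placeholders):

-- on test: remove the old copy
drop user app_owner cascade;

-- then pull the schema straight from production; no dump file is needed with network_link
$ impdp system network_link=PROD_LINK schemas=APP_OWNER directory=data_pump_dir logfile=refresh_app_owner.log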



still getting the Substitute pattern match failed error

DanielD, May 30, 2007 - 6:29 pm UTC

Tom,

I am getting the same error:

$ gzip -d arlis.dmp.gz
$ strings arlis.dmp > arlis.sql
$ vi arlis.sql
"arlis.sql" 18100 lines, 1591557 characters
EXPORT:V08.01.07
DSYSTEM
RUSERS
1024
Wed May 30 15:12:24 2007/export/home/oracle/arlis.dmp
#G##
#G##
+00:00
8.1.6

: 1,$s'^V^M'/'

Substitute pattern match failed

Furthermore, is there a way to remove the STORAGE clause from either arlis.dmp or arlis.sql above?

Thank you,

Daniel
Tom Kyte
May 30, 2007 - 6:34 pm UTC

given that you are the only one with gzip on this page - not sure what you mean at all.

the other guy was using notepad - not vi...

ctl-v ctl-m works if you have a file with ctl-m's at the end of line.

not so otherwise

so, do you see ctl-m's...

Fixed by notepad++

DanielD, May 31, 2007 - 11:29 am UTC

Hello Tom,

Thank you for your time and answer. I've fixed the formatting by using notepad++ on windows. The storage clause should be taken care of by this:

:%s/STORAGE.*)//g

I got that 'one-liner' from my coworker, vi guru :)

Thank you,

DanielD

Replication cloning

Geo, October 10, 2007 - 4:00 am UTC

Hello Tom,

I have this situation: can you advise on a way to "clone" a replication environment to a test configuration, but without all the regional databases from the production servers? The version is 8.1.7.
There is one central database and four regional databases, and replication runs between the global database and the regionals. There is no replication between regionals.
The test environment will need only 2 regional and 1 global database.
After that we need to upgrade the databases to 10.2.0.3.

Thanks

32-bit OS vs. 64-bit OS

Nishith Pandey, March 07, 2008 - 5:46 am UTC

Hi Tom

We have our PROD server on Oracle Database Server 10gR1 (64-bit) on Windows Server 2003 Enterprise Edition 64-bit on an HP Itanium 2 processor.

Our TEST Server will be on Oracle Database Server 10gR2 (32-bit) on Windows Server 2003 Standard 64-bit Edition (SP1) on Intel(R) Xeon Processor.

We want to clone the PROD database to the TEST database, including the data.

1) Can we simply restore our PROD database to TEST?

Please also clarify us the following as we have other similar relevant issues:

2) Can a 64-bit Oracle database be restored to a 32-bit Oracle database, and vice versa?

3) Can Oracle 10gR1 be restored to Oracle 10gR2?

4) Is Itanium-based 64-bit Oracle different from non-Itanium-based 64-bit Oracle?

Thanks Tom !!
Tom Kyte
March 10, 2008 - 10:01 am UTC

1) no, there are many notes on metalink about how to do this (convert from 64-32bit and so on). They are entirely different operating systems.

That said, it would not be my choice (I would be yelling and screaming) to use two entirely different operating systems for test and production - but that is your call.

2) you have to convert - 32 bit and 64 bit OS's are different OS's...

3) you would have to upgrade, there is no such thing as restoring in this context, they have to be upgraded....

4) sure, different chips... different OS's




64-bit Itanium vs 64-bit normal

Nishith Pandey, March 10, 2008 - 3:43 am UTC

Hi Tom

Please also let us know whether 64-bit Itanium-based Oracle (10gR1) can easily be restored to "normal" 64-bit Oracle (10gR2), since Itanium-based 64-bit Oracle comes as a separate bundle.

Thanks
Tom Kyte
March 10, 2008 - 11:34 am UTC

what the heck is "normal" Oracle.

different OS's cannot be restored to - no.

Sorry for misleading words

Nishith Pandey, March 11, 2008 - 7:55 am UTC

Thanks for the response, Tom !

By "Normal" Oracle, I mean Oracle 64-bit on Xeon Processor. As per Metalink, the OS files are compatible among these OS's (Windows 64-bit on Itanium vs Windows 64-bit on Xeon). So, I was hoping that the data files can directly be copied from PROD to TEST instance and by re-creating the control file, we can get a clone!!

Please let me know, what I am missing here.

Thanks in Advance !

Tom Kyte
March 11, 2008 - 9:06 pm UTC

this would absolutely be *not supported*, they are different OS's

there is a cross system transport - a full cross system transport - but not backup on one and restore to another. (10gr2 has full transports across same byte order systems)

Clone DB from win2000 to win2003

Simone Marsili, March 11, 2008 - 9:50 am UTC

Hi Tom,

We have production on win2000 32-bit and we are testing our databases (Oracle 8.1.7.4 and 9.2.0.8) and applications on win2003 32-bit, as we plan to migrate to this somewhat 'newer' release of the Microsoft OS.

We update our migration test environments using exp/imp; I was wondering if we could just restore our backups, as we do in the other test environments that have the same OS version as production (win2000 32-bit).

So, can we restore a backup made on win2000 on a server running win2003 with same filesystem (NTFS)?

You said yes for NT to win2000; is it the same from win2000 to win2003?

If it is not possible, what considerations besides the filesystem am I missing?

Sorry for my poor english
and
Thanks a lot for your time
Simone
Tom Kyte
March 11, 2008 - 9:11 pm UTC

that should work - windows to windows - but please contact support to make "sure"

would not want to do it in production, but for testing it would likely be ok.

Thanks!

Simone Marsili, March 12, 2008 - 7:08 am UTC


A reader, June 09, 2008 - 11:29 am UTC

We have a database that is going to production next month. This is a 300 GB database. Within our test environments we are currently moving this database using expdp (Data Pump) and it takes almost 8 hrs end to end. We need to come up with a better solution. After we go to production there will be a lot of requests to bring the data from production to the test environments. Is Data Pump the only way I can do it right now? I should also mention that in the test environments, within the same database, we have multiple tracks which can accommodate these databases. Please let us know if there is anything new in Oracle 10g other than Data Pump that we can use to do the export and import.
Tom Kyte
June 09, 2008 - 2:11 pm UTC

why not just restore a backup?

A reader, June 09, 2008 - 4:59 pm UTC

The only problem with that is that backup options (like RMAN) are done at the instance level. As I mentioned, in our test environments we have the following setup: database dev is divided into schemas dev1, dev2, dev3, each of which can hold one of these databases. So we need a mapping from the instance level to the schema level.
Tom Kyte
June 09, 2008 - 5:09 pm UTC

RMAN backups are done at the database level in fact, you do not backup an instance (an instance is simply processes+memory, a database consists of the files).

You can restore individual tablespaces if you like. You will have the issue of code however, but that shouldn't be an issue since you should be using a baseline set anyway - checking it in and out from source code control to 'upgrade' the test instance.

Nothing is faster than "copy the datafiles": transport the data from production (RMAN can do this online in 10g -- basically hot backup a portion of database one, restore it on database two and recover it on database two).
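
For reference, a minimal sketch of the 10g RMAN command that builds a transportable set from existing backups, without making the source tablespaces read only (tablespace and directory names are placeholders):

RMAN> transport tablespace ts_data, ts_idx
        tablespace destination '/u01/transport'
        auxiliary destination '/u01/aux';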

A reader, June 10, 2008 - 2:18 pm UTC

This online copy sounds great. Can you forward me a link to this or some examples if you have ? I would really love to try it.

A reader, June 10, 2008 - 10:53 pm UTC

For the transportable tablespaces using RMAN, does the destination also have to be configured in RMAN ?
Tom Kyte
June 11, 2008 - 7:53 am UTC

nothing is "configured in RMAN", RMAN is just a tool, you can use it against any database.

You do not have to be using RMAN to backup the databases - but I know of no reason NOT to be using RMAN to backup.

exp

A reader, June 10, 2008 - 11:50 pm UTC


A reader, June 11, 2008 - 2:01 pm UTC

I read the documentation at the link, but do you also have some examples? I don't know how to use transportable tablespaces with RMAN.

Substitution pattern match failed

Bob, June 16, 2008 - 5:11 am UTC

Hi Tom,

I am having some problems in vi using:

:1,$s'^V^M'/'

If I type <ctrl-v> it is unrecognized although <ctrl-m> is recognized. Is it possible to enter the asc(II) values as an alternative?

Cheers!
Tom Kyte
June 16, 2008 - 12:00 pm UTC

In normal vi, hitting <ctl-v> (one keystroke) says take the next keystroke verbatim.

You should not actually see ^V^M on screen.

I don't know what "asc(II)" means - on some keyboards if you want to enter an ascii character directly, you can hold the ALT key and key it in on the numeric keyboard and then release the alt key.

A reader, June 16, 2008 - 10:16 pm UTC

Hi -- we have an environment where we need to copy the production data to a test environment and then from one test environment to another. Copying from production to the first test environment can be done using transportable tablespaces with RMAN, but we cannot do that from one test environment to another because our test environments are not in archive log mode. In that case, what would be the preferred method of data transfer? We also always have performance issues when using Data Pump export and import.

Thanks for your help.
Tom Kyte
June 17, 2008 - 9:46 am UTC

you can transport without archivelog mode. You put the tablespace read only and transport it, then make it read write again.

and why copy from prod -> test1 -> test2, why not prod -> test1 AND test2
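
For the target side of such a transport, a minimal sketch of attaching the copied files (the dump file, directory and datafile names are placeholders):

$ impdp system directory=data_pump_dir dumpfile=ts_meta.dmp transport_datafiles='/u02/oradata/test2/ts_data01.dbf','/u02/oradata/test2/ts_idx01.dbf'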

copy entire schemas using dbms_datapump

A reader, April 23, 2009 - 1:46 am UTC

Hi Tom,

I frequently need to replicate entire schemas to build test environments. Sometimes I have to do it across databases, sometimes within the same database. I had been using the exp+ftp+imp method for a long time.

Recently I read in the documentation that from 10g onwards Data Pump can be used to import over the network, eliminating the need for dump files on the filesystem. This is very exciting. I tried the script below to copy the contents of schema TEST to the empty schema TEST2 (within the same db), but it doesn't seem to copy anything. Can you please guide me on how to do this?

1 declare
2 h number;
3 begin
4 h := dbms_datapump.open(operation=>'IMPORT',
5 job_mode=>'SCHEMA');
6 dbms_datapump.metadata_filter(handle=>h,
7 name=>'SCHEMA_LIST',
8 value=>'''TEST''');
9 dbms_datapump.metadata_remap(h,'REMAP_SCHEMA','TEST','TEST2');
10* end;
dev2> /

PL/SQL procedure successfully completed.

Is it that schemas can only be copied across a dblink? I assumed that if we do not specify a dblink, the schema would be copied within the local db -- is that not the case?

Thanks for your help.

Tom Kyte
April 27, 2009 - 10:32 am UTC

you need to read a bit further on, you'll be wanting to start the job and other things.

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_api.htm#i1006610

shows an example of a schema export, and then a schema import with remapping.
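
For instance, a minimal sketch of the schema copy with the job actually started and waited on. Note that an import job with no dump file needs a remote_link, so copying TEST to TEST2 inside one database is done over a loopback database link (LOOPBACK is a placeholder link name pointing back at the same database):

declare
   h     number;
   state varchar2(30);
begin
   -- open a network-mode import job
   h := dbms_datapump.open( operation   => 'IMPORT',
                            job_mode    => 'SCHEMA',
                            remote_link => 'LOOPBACK' );

   -- only the TEST schema, remapped into TEST2
   dbms_datapump.metadata_filter( handle => h,
                                  name   => 'SCHEMA_LIST',
                                  value  => '''TEST''' );
   dbms_datapump.metadata_remap( h, 'REMAP_SCHEMA', 'TEST', 'TEST2' );

   -- actually run the job and wait for it to finish
   dbms_datapump.start_job( h );
   dbms_datapump.wait_for_job( h, state );
   dbms_output.put_line( 'job finished with status: ' || state );
end;
/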

datapump

A reader, April 27, 2009 - 11:16 am UTC


export

A reader, August 13, 2010 - 9:14 pm UTC


A reader, January 17, 2011 - 9:00 am UTC

I have this situation: we have a 200GB production database on Oracle 10g and a performance environment that is required to be as close to production as possible. Right now we operate on an on-demand basis: we clone the production database to a staging database, then use transportable tablespaces to move the data to the performance environment. We want to get away from this and use some other mechanism (like Data Guard) to maintain the performance environment, but we don't want to hook the performance environment into our production Data Guard configuration. Is there any other way to achieve what we are looking for?
Tom Kyte
January 23, 2011 - 3:17 pm UTC

you can cascade in data guard. Using active data guard (11g) you can do this easily, you would create a data guard copy of the data guard copy. Then you can open it any time for testing - and then sync it up.

In 10g, you would have to do this "yourself" manually using flashback database (you would take the 2nd standby out of managed recovery mode, remember the SCN for it, open it read write, test, close it, flash it back, put it into managed recovery).
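
A minimal sketch of that 10gR2 sequence on the second standby, using a guaranteed restore point as one way to "remember" the point to flash back to (this is only an outline; check the Data Guard documentation for the full procedure):

-- on the second standby
alter database recover managed standby database cancel;
create restore point before_testing guarantee flashback database;
alter database activate standby database;
alter database open;
-- ... refresh the test environment / run the tests ...

shutdown immediate
startup mount
flashback database to restore point before_testing;
alter database convert to physical standby;
shutdown immediate
startup mount
drop restore point before_testing;
alter database recover managed standby database disconnect;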

200gb is pretty small - you could do interesting things with splitting mirrors that would take not very much time at all as well. Just restoring a recent backup and recovering it would be pretty fast as well.

A reader, January 25, 2011 - 2:22 pm UTC

Can you elaborate a bit on splitting mirrors ? Is this something on the hardware end ?
Tom Kyte
February 01, 2011 - 12:11 pm UTC

it is something most raid arrays using mirroring can do. The raid arrays are front ended by lots of software - so it is a combination of hardware and software on the storage array.

Alexander, February 29, 2012 - 9:37 am UTC

Tom,

We have an issue trying to refresh test and development databases from production that are in different data centers. It's a complicated situation because:

1) Non prod and prod are about a 1000+ miles away from each other

2) The schema/tablespaces are rather large (for this type of thing) 115GB

3) We are using Golden Gate to replicate to reporting databases because the reports are intense and the operational database has no schema design for efficient queries

The operational databases we need to refresh are not shared, so that leaves options like duplicate database open. The reporting databases are shared with other applications.

Right now our options are to use imp/exp or impdp/expdp against the OLTP database. The source is 10g, so we have no real compression with Data Pump and we're copying 100GB. imp/exp is not any better because this schema has tons of LOBs and it's really slow.

How would you handle this?
Tom Kyte
February 29, 2012 - 10:37 am UTC

why can't you just restore from backups? why do you have to read everything out of a database?

push comes to shove, sneaker-net isn't out of the question. If you cannot transfer 100gb fast enough (you should be able to??) just ship it.

Alexander, February 29, 2012 - 11:04 am UTC

First I've heard of "sneakernet". No clue how that works, but we can research it.

I get about 5 mbps between the data centers -- is that awful? Between servers within the same data center I get 10x that. I don't know what the industry standard is for a network link like this.
Tom Kyte
February 29, 2012 - 11:54 am UTC

sneaker net - put data on removable media and walk it (in your sneakers) to another place and plop it into that other machine.


5mbps is less than I get at home, a lot less. My phone does better.



that means you can move about 0.5 MB per second.

that means your 100GB of data (102400 MB) will take

100GB times 1024 to get MB, times 2 seconds (per MB), /60/60 to get hours...

ops$tkyte%ORA11GR2> select 100 * 1024 * 2/60/60  from dual;

100*1024*2/60/60
----------------
      56.8888889


over two days.

FedEx it, it'll be faster.

Alexander, February 29, 2012 - 12:30 pm UTC

Sorry, I used the wrong notation; I meant MB/s, so about 3-5 MB per second.

Do you think an NFS mount between the two servers would help?
Tom Kyte
February 29, 2012 - 12:52 pm UTC

still, that is between 5 and 10 hours.

NFS wouldn't help, you are gated by your transfer rate.

compressing the data would help, if you gzip it down and get somewhere between 5:1 and 10:1 - you could reduce the transfer time down to somewhere between 0.5 to 2 hours.


Alexander, March 01, 2012 - 3:12 pm UTC

gzip took 20 minutes on a 16GB file, so it doesn't look like that will be much help.

I'm looking into impdp using a network link, or possibly keeping a Data Guard copy in that data center and copying from there.

I might be able to restore like you suggested, but that will blow away our GG setup (but it exists in production so I'm not sure what would happen if we duplicated).
Tom Kyte
March 01, 2012 - 3:36 pm UTC

It might help.

You can transfer at a rate of about 3-5MB/sec.
gzip is going at a rate of about 13MB/sec.

as long as gzip is going faster than your network....

get an nfs mount and gzip the file to the remote machine. gzip will read the uncompressed file, compress it and write it over the network to the remote machine.

Advise

T, May 10, 2012 - 3:48 am UTC

Hi Tom,

Thanks for your support... as usual.


I have a 10.2.0.1.0 64-bit database installed on Windows 2003 64-bit; the database is 2TB in size.
I need to create a copy of this db on Windows 2008 64-bit, where Oracle 11.2.0.1.0 64-bit is already installed.

What are the best steps?

Tom Kyte
May 10, 2012 - 8:50 am UTC

one time copy? just restore it to the new machine and upgrade it with dbua.


advise

T, May 13, 2012 - 1:36 am UTC

Yes Tom, it's a one-time copy.

Thank you for your answer, but you didn't give me the exact steps -- if you can, please tell; otherwise, thanks again.
Tom Kyte
May 13, 2012 - 3:31 am UTC

I did?

just restore the database to that machine - and upgrade it with dbua.

advise

T, May 13, 2012 - 4:56 am UTC

Restore: that is my question!

What do you mean by restore? Copy and paste? Or RMAN?

My question is: what are the full steps to take a 2TB 10g database from Windows Server 2003 to 11g on Windows 2008?

Thanks for the 2nd answer, hope to see the 3rd.

Tom Kyte
May 13, 2012 - 10:22 am UTC

are you a DBA? If not, wouldn't your DBA be doing this?

If you are a DBA and you do not actually know how to restore a database to a new server - well - then we have a problem here.


If you are a DBA - do you know how to restore the database to a new server in the event of a loss of the original server? If so - do that process - whatever you would use to do that.

If you are using RMAN - then yes, I'm suggesting "rman"

If you are using user managed backups, then I'm talking about using whatever file copy process you have already set up and tested to do that.


Get the database on the new machine
Upgrade it.


Or, if you want, Upgrade the existing database on the existing hardware - back it up - and restore it.

Part Copy of Production data to Development Environment

Sahir Hadkar, October 08, 2012 - 6:10 am UTC

Hi Tom,

I have a production database whose current size is dominated by two principal schemas:

SQL> SELECT SUM(bytes)/1024/1024/1025 "Size in GB" FROM dba_segments where owner in ('SCHEMA1','SCHEMA2');

Size in GB
----------
201.927804

Out of these the mentioned tables and their indexes are taking up following space (175.326 GB):

SQL> SELECT SUM(bytes)/1024/1024/1024 "Size in GB" FROM dba_segments where owner IN ('SCHEMA1','SCHEMA2')
2 and segment_name in ('TAB1','TAB2','TAB3','TAB4','TAB5','TAB6','TAB7','TAB8','TAB9','TAB10','TAB11','TAB12');

Size in GB
----------
72.0917968

SQL> SELECT SUM(bytes)/1024/1024/1024 "Size in GB" FROM dba_segments where owner IN ('SCHEMA1','SCHEMA2')
2 and segment_name in (
3 select index_name
4 from dba_indexes
5 where owner IN ('SCHEMA1','SCHEMA2')
6 and table_name in ('TAB1','TAB2','TAB3','TAB4','TAB5','TAB6','TAB7','TAB8','TAB9','TAB10','TAB11','TAB12'));

Size in GB
----------
103.234375

SQL> select (72.0917968+103.234375) from dual;

(72.0917968+103.234375)
-----------------------
175.3261718

Thus these 12 tables, along with their indexes, take about 88% of the total space in the 2 schemas.

All 12 tables mentioned are subpartitioned based on a partition and subpartition key.

We have 7 development databases which needs to be refreshed on demand with the production database.

Currently, the refresh is done using RMAN, but going forward as the production database size increases we are anticipating space problems on development databases.

The Solution which I tried:

The 12 tables are partitioned in such a way that the data in one partition is functionally independent of other partition.

I created new tablespaces and moved partitions into them based on their business functionality.
Thus a single table had some partitions in one tablespace and the rest of its partitions in the other tablespace.

The idea was to transport the necessary tablespces (and not the entire database) across production to development database.

Later on my DBA told me that for a tablespace to be transportable across databases they should be self-contained.

And one of the violations for self contained tablespace according to oracle documentation is:

"A partitioned table is partially contained in the set of tablespaces.
The tablespace set you want to copy must contain either all partitions of a partitioned table, or none of the partitions of a partitioned table. If you want to transport a subset of a partition table, you must exchange the partitions into tables."

Thus my design failed.

Kindly note that the 12 big tables I mentioned above contain data which is actually not needed for my unit testing on the development databases.
Is it possible to perform the RMAN copy without including the data from these 12 tables?

Or can you think of any other solution so that I can do a "partial" refresh of the production database to development?

Let me know if you need any more information?

Thanks a lot in advance!!

Tom Kyte
October 09, 2012 - 12:39 pm UTC

why not just datapump export what you want if you just want a small subset.

Part Copy of Production data to Development Environment

Sahir Hadkar, October 10, 2012 - 2:08 am UTC

Hi Tom,

The problem with the Data Pump export is that, as you pointed out, the data subset is not "small".

I was thinking over my failed design. Let me know if it is possible to create a transportable tablespace set such that it is 'self contained' and then back up the data.

e.g.:

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TBS1,TBS2,TBS3,TBS4,TBS5,TBS6,TBS7', TRUE);

PL/SQL procedure successfully completed

SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
----------------------------------------------------

SQL> 


After that, can we apply the "skip tablespace" clause to skip certain tablespaces during restore and recovery, so that only the tablespaces not listed under skip tablespace are recovered?

Kindly advise on the same.

Best Regards!!

Tom Kyte
October 10, 2012 - 7:39 am UTC

... Let me know if it is possible to create
transportable tablespace set such that it is 'self contained' and then backup
the data. ...

obviously it is else the feature would never work?

I don't know what you mean???



how "not small" is this data subset - I didn't point out it wasn't small, I have no idea what size it is. All I know is that the numbers (a couple hundred GB's) isn't very large in the first place - so a subset of it seems like it would be pretty small.

A reader, February 27, 2013 - 8:46 am UTC

Hi Tom - We have a production 11.2.0.3 database with an active standby. When we were in the 10g world, we used to refresh a staging environment using rman duplicate. We would then refresh test environments from the staging environment. This is a time-consuming process. We want to think of some better options in 11g. We have been considering the following. Please let us know if there is anything better or if there are any issues with the following options.

1. Create a Goldengate environment that will be replicated from prod to staging environment. We can break goldengate as needed, refresh test environments and re-establish goldengate replication
2. Maintain a 2nd active standby, convert that to snapshot standby as required and re-establish data guard after refreshing test environment.

Thanks.
Tom Kyte
February 27, 2013 - 9:54 am UTC

Well, I like the idea of using my backups on a regular basis - it proves I can restore and I know how long it would take and what to do...

but - if you wanted something 'faster', #2 would be my preference. It can be created off of the existing standby (no impact on production, no change to production) and as long as you didn't have massive numbers of blocks changing during the time it was a snapshot standby (before flashing it back) - it would work nicely. Remember the time to flashback will be more or less a function of the amount of blocks changed (how many blocks to restore) and the amount of changes to those blocks (redo to be applied) in the database during the time it was open.
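
A minimal sketch of the 11g snapshot standby round trip for option #2 (this assumes a flash recovery area is configured on the standby; the full procedure is in the Data Guard documentation):

-- on the second standby: make it updatable
alter database recover managed standby database cancel;
alter database convert to snapshot standby;
alter database open;
-- ... refresh the test environments / run the tests ...

-- when done: discard the test changes and resume redo apply
shutdown immediate
startup mount
alter database convert to physical standby;
shutdown immediate
startup mount
alter database recover managed standby database using current logfile disconnect;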

A reader, February 27, 2013 - 11:17 am UTC

Thanks Tom. Just out of curiosity, why do you not prefer using #1, the goldengate method ?

Also, can you clarify this - "It can be created off of the existing physical standby", how do we do that ?
Tom Kyte
February 27, 2013 - 11:59 am UTC

extra software to install, configure, manage, monitor.

potential impact on production


data guard is simple.


http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_transport.htm#SBYDB5122

A reader, February 27, 2013 - 2:22 pm UTC

Thank you Tom. I will read more on the cascaded standby databases.

Apart from these two, do you have any other suggestions for 11g ?
Tom Kyte
February 28, 2013 - 7:42 am UTC

as I said, i'm quite fond of the restore from backup (duplicate) - that would still be my choice if at all possible...

doing the cascaded standby will just remove one duplicate from the loop - you'll still have the others to do - not sure it would be worth it.

restore

A reader, September 25, 2013 - 4:02 am UTC

Tom:

I have an Oracle 11g pre-production database. It has archive log mode on and a 3 GB SGA.

I also have a similar staging database on the same machine. The staging database does not have archive log mode on and has a smaller memory allocation, 2 GB. Everything else is similar.

I want to refresh the staging database from the production backup.

I have an RMAN physical backup of production (data files + control file + spfile). If I did a RESTORE of the prod data files into the staging database, would the memory and archive log mode change too, to match production?

Can I just copy ONE data file for the tablespace, or would you restore everything in the backup and then change the SGA size and archive log mode?

Move database from 10.2.0.2 to 11.2.0.3

Christy, October 21, 2013 - 9:45 pm UTC

Hi Tom,

We have a brand new server (Windows 2008 R2, 64 bit) with a brand new installation of Oracle 11.2.0.3. Now I want to move our existing databases from the server (Windows 2003, 32 bit) with Oracle 10.2.0.2 to this brand new server. Is it true that I must use expdp and impdp? Or what's the best way to move the databases over to the new server? Our databases are about 60GB. Thanks a lot for the help.
Tom Kyte
November 01, 2013 - 8:34 pm UTC

a short list of options could be:


you could upgrade it to 11.2.0.3 on existing system, move it, convert it to 64 bit.

you could expdp/impdp it

you could replicate it using streams or golden gate.