Question and Answer

Tom Kyte

Thanks for the question, pauline.

Asked: February 09, 2001 - 5:07 pm UTC

Last updated: October 23, 2012 - 11:15 am UTC

Version: 8.1.5


You Asked

Hi Tom,
I am trying to set up a transportable tablespace. First I do:
alter tablespace users read only;
Then do :
exp sys/change_on_install file=02072001.dmp transport_tablespace=y tablespaces=users triggers=n
constraints=n
But I got the error as:
Export: Release 8.1.5.0.0 - Production on Fri Feb 9 13:19:08 2001

(c) Copyright 1999 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
Note: table data (rows) will not be exported
Note: constraints on tables will not be exported
About to export transportable tablespace metadata...
EXP-00008: ORACLE error 29341 encountered
ORA-29341: The transportable set is not self-contained
ORA-06512: at "SYS.DBMS_PLUGTS", line 1067
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully

I checked the packages under the SYS account and see that the package named 'DBMS_PLUGTS' is there.
Could you point out how to solve the problem?
Thanks.

Pauline

and Tom said...

Your tablespace is NOT self contained (eg: it has an index but not the table, or the table and not the index).

Run:

exec sys.dbms_tts.transport_set_check( 'users', TRUE );
select * from sys.transport_set_violations;

in sqlplus to see what the errors are.
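
If the violations point at objects whose related segments live in another tablespace, one option is to name all of the related tablespaces in a single check (the procedure takes a comma-separated list, as used later in this thread) and transport them as a set. A minimal sketch, where the INDX tablespace name is an assumption:

-- check the tablespaces together, so cross-tablespace dependencies
-- fall inside the set being transported
exec sys.dbms_tts.transport_set_check( 'USERS,INDX', TRUE );
select * from sys.transport_set_violations;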


Comments

IMP-00003: ORACLE error 19722 encountered

Brigitte, September 23, 2002 - 8:23 am UTC

Hello Tom,

I exported 3 tablespaces (read only) with Oracle 1.7.2 and copied the files.

Then I tried to import with Oracle 1.7.0:

/oracle/k04/HK2>imp parfile=pari.dat

Import: Release 8.1.7.0.0 - Production on Mo Sep 23 13:41:01 2002

(c) Copyright 2000 Oracle Corporation. All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
JServer Release 8.1.7.0.0 - Production

Export file created by EXPORT:V08.01.07 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 19722:
"BEGIN sys.dbms_plugts.checkDatafile(NULL,432251738,13,1280,14,13,4194302,"
"256,4609788,4789221,1,54525954,NULL,NULL,NULL); END;"
IMP-00003: ORACLE error 19722 encountered
ORA-19722: datafile $ORACLE_HOME/HK2/database/forever_tg2.dbf is an incorrect version
ORA-06512: at "SYS.DBMS_PLUGTS", line 1577
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully

with pari.dat:
transport_tablespace=y
datafiles=('$ORACLE_HOME/HK2/database/forever_tg2.dbf',
'$ORACLE_HOME/HK2/database/forever_tk.dbf',
'$ORACLE_HOME/HK2/database/forever_I.dbf')
tablespaces=(forever_tg2,forever_tk.forever_i)
file=exphk.dmp

Is it impossible to transport tablespaces from Oracle 8.1.7.2 to Oracle 8.1.7.0?

Thanks

Brigitte


Tom Kyte
September 23, 2002 - 8:30 am UTC

1.7.2 isn't a version by the way. 8.1.7.2 is and I guess that is what you meant.


anyway, we are upwards compatible, not backwards. file formats change. You would be able to take a 8170 file and attach it to 8172 but not the other way in general.


You can go up, but not down.

A reader, December 13, 2002 - 10:05 am UTC

Hi Tom

I am getting errors.

C:\>exp userid='gaurang/gaurang@ora92 as sysdba' transport_tablespace=y tablespaces=users file=c:\users.dmp

Export: Release 8.1.6.0.0 - Production on Fri Dec 13 10:07:23 2002

(c) Copyright 1999 Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8ISO8859P1 character set and UTF8 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 10:
PLS-00306: wrong number or types of arguments in call to 'CHECKPLUGGABLE'
ORA-06550: line 1, column 10:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully


Thanks

Tom Kyte
December 13, 2002 - 12:35 pm UTC

you cannot unplug a 9i datafile to be plugged into an 816 database.

this is 100% to be expected. You cannot unplug <HIGHER VERSION> datafiles and plug them into <LOWER VERSION> databases.....

that'll never work! Good thing it failed at this step, else you would have wasted that time copying the datafiles for nothing.

TTS Question

reader, February 14, 2003 - 3:12 pm UTC

As per my understanding, the target database should not have a tablespace with the same name as the one in the TTS set. Hence my question:

I have a schema whose default TS should be the one in the TTS set. How do I get around this?

Thanks.

Tom Kyte
February 14, 2003 - 8:41 pm UTC

the target database MAY NOT (not "should not" -- it CANNOT) have a tablespace with that name.

the default TS is used for newly created objects without a tablespace clause. beyond that it is meaningless. By definition -- the TTS will be named the same it was and the objects will be in there (in that tablespace).
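
A hedged sketch of one workaround, with all names hypothetical: temporarily point the schema's default tablespace elsewhere, drop the conflicting tablespace, plug in the transported set, then point the default back:

alter user app_user default tablespace some_other_ts;  -- temporary parking spot
drop tablespace users including contents;              -- remove the name clash
-- ... plug in the transported USERS tablespace via imp ...
alter user app_user default tablespace users;          -- restore the default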

Production data copy using transport tablespace

Amitabha Bhaumik, February 15, 2003 - 9:27 pm UTC

We need to have a copy of our production data in another database to test out the regular backend datafixes that we currently have to do directly in production (it's an Interactive Voice Response system, and many times users press the wrong button on the phone!).
What are the advantages/disadvantages of creating a transportable tablespace from the existing production database to replicate it in a test environment?
Our Oracle database is 8.1.7 on Unix.

thanks,
Amitabha

Tom Kyte
February 16, 2003 - 10:34 am UTC

well, in order to transport, the tablespaces will have to be READ ONLY in production for a bit of time. This time can be very short (eg: if you are using mirrors and can split the mirrors briefly to make the copy). This time can be longer (eg: the time it takes to actually copy the files and do the metadata export)

Another avenue to consider would be this -- use your HOT backups and restore them to test. It'll kill two birds:

o you get a fast copy
o you test your backups, your ability to restore



Using transportable tablespaces

Devanshi Parikh, May 07, 2003 - 3:49 pm UTC

If I have an 8.1.7.0 database (source) with character set US7ASCII and a 9.2.0.2 database (target) with character set UTF8, can I unplug tablespaces from the source database and plug them into the target database? Other than the version difference, the character sets are also different.

Tom Kyte
May 08, 2003 - 9:26 am UTC

character sets must match.




Very slow!

A reader, October 14, 2004 - 10:57 pm UTC

dbms_tts.transport_set_check shouldn't take too long to execute, right? I am running it with just one tablespace name and it is still running after 10 minutes!

Any idea why? Thanks

Tom Kyte
October 15, 2004 - 11:05 am UTC

trace it, see what it does.

it'll depend on the number of objects in that tablespace.
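
A minimal sketch of tracing it with SQL_TRACE (tablespace names hypothetical; the resulting trace file in user_dump_dest can then be formatted with tkprof):

alter session set sql_trace = true;
exec sys.dbms_tts.transport_set_check( 'DATA_TS,INDX_TS', TRUE );
alter session set sql_trace = false;
-- then, from the OS, something like: tkprof <tracefile> report.txt sys=yes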

Very slow!

A reader, October 15, 2004 - 12:10 pm UTC

I did trace it, and here is what it shows -- one particularly ugly SQL:

SELECT obj1_owner,obj1_name,obj1_subname,obj1_type,ts1_name,obj2_owner,
obj2_name,obj2_subname,obj2_type,ts2_name,constraint_name,mesg_id
FROM
pluggable_set_check WHERE ((ts1_name NOT IN (select tbs from
sys.transts_tmp$)) AND (ts2_name IN (select tbs from sys.transts_tmp$))) OR
((ts2_name = '-1') AND (ts1_name IN (select tbs from sys.transts_tmp$))) OR
((ts1_name IN (select tbs from sys.transts_tmp$)) AND (ts2_name NOT IN
(select tbs from sys.transts_tmp$)) AND (mesg_id = 4))


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 3.22 3.14 0 22 0 0
Execute 1 0.01 0.01 0 0 0 0
Fetch 1 3062.20 2998.21 1249 255703 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 3065.43 3001.36 1249 255725 0 0

I am specifying 2 tablespaces in the command, one data tablespace and its corresponding index tablespace (yes, I know your opinion on separating data and index tablespaces, but that's another topic!)

The data tablespace has 47 segments and the index tablespace has 44 segments. Surely this shouldn't take 50 minutes to run!

Tom Kyte
October 15, 2004 - 3:05 pm UTC

I don't see any performance issues related to that particular view logged in the past.


do you have stats on SYS dictionary tables?

(i would suggest opening a tar, you probably will have to work it through that in the end)

Very slow!

A reader, October 15, 2004 - 4:06 pm UTC

I did open a TAR and they are reviewing it.

On a related note, when I do 'alter tablespace ts read only', it is taking a very long time to complete. I know it is because it has to flush all the blocks from the buffer cache to disk, but how can I monitor this to see how much longer I have to wait? It doesn't seem to be logging anything in v$session_longops.

Would something like

select count(*) from v$bh where ts#=(select ts# from v$tablespace where name='TS');

work? This number does seem to be going down very, very slowly.

How would this work if there are many active read only users of this tablespace? Oracle would try to flush the blocks to disk and user activity would put them back in the cache?

Thanks

Tom Kyte
October 15, 2004 - 6:04 pm UTC

it also (in 10gr1 and before) has to wait for no transactions to be in flight as well......


so, it could take infinitely long.


only the dirty blocks need to go there -- you could checkpoint first and then alter read only, the read only would benefit from the dirty blocks being flushed by the checkpoint.
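
A minimal sketch of that checkpoint-first sequence (tablespace name hypothetical). Note it only pre-flushes the dirty blocks; it does not remove the wait for open transactions discussed further down this thread:

alter system checkpoint;
alter tablespace big_ts read only;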

Very slow!

A reader, October 15, 2004 - 6:06 pm UTC

Sigh, I am getting concerned. Support said that since the query that is showing excessive LIOs is a recursive query into the dictionary, they recommended that I gather stats on SYS!

Gathering stats on SYS is a huge change, not to be taken lightly. Even the Metalink note says:
"In the majority of cases, gathering Data Dictionary statistics should not be necessary (and would not normally be recommended) since the dictionary has been optimized to cater for most common database setups. If, however, the performance of queries against the data dictionary becomes an issue (for example because the database is made up of an abnormally large number of application objects) then gathering dictionary statistics can be considered"

I would have to do complete regression testing of all the apps and processes running against my database if I gather stats on SYS.

What does any of this have to do with dbms_tts.transport_set_check()?

The tablespaces I am checking have less than 100 objects in them. If dbms_tts.transport_set_check is taking 50 minutes to do its thing on such a small dataset, there is something seriously wrong. Gathering sys stats seems like a huge overkill for this issue!

Does anyone out there have this problem? Does sys.dbms_tts.transport_set_check() come back in a reasonable amount of time?

Thanks

Tom Kyte
October 15, 2004 - 6:14 pm UTC

in 8i, do not gather stats on SYS without a pretty full test.

do you have the row source operation records in there? can we see them? (from the tkprof)

Very slow!

A reader, October 15, 2004 - 6:38 pm UTC

Sure, the tkprof is pretty hairy.

You've been warned. :)

Hm, I tried to post this and got an error

mod_plsql: /pls/ask/wwv_flow.accept HTTP-400 Value param too long. Length is 40606. Upper limit is 32512

Can I send it to you by email?

Thanks

Very slow!

A reader, October 15, 2004 - 11:26 pm UTC

While you are reviewing the tkprof...

Why does transport_set_check() do such elaborate stuff? I am sure we can come up with a simple query to determine if a given set of tablespaces is self-contained. Using the techniques in

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:110612348061

we can get the comma separated list of tablespaces, query dba_segments, dba_indexes, and voila!

Why does it do all the complicated stuff I see in the trace?

Tom Kyte
October 16, 2004 - 10:45 am UTC

it is more complex than you think -- and getting more so. think of all of the multi-segment objects we are getting

o IOT's with overflows
o a table with 5 lobs might be in 6 tablespaces
o partitioning
o foreign keys pointing to other tables
o .....

it is fairly complex stuff.

alter tablespace read only

A reader, December 20, 2004 - 10:20 am UTC

I did

alter system checkpoint;

alter tablespace my_ts read only;

It's been 10 minutes and it has still not completed.

It is a tablespace containing read-only tables, so there are no transactions against it.

Why is the read only taking so long even after the checkpoint? How can I monitor it to see when it would finish instead of just waiting endlessly for it?

Thanks

Tom Kyte
December 20, 2004 - 10:27 am UTC

there cannot be any active transactions in the database when making a tablespace readonly.
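
A quick way to see what could be holding the ALTER up is to look at the open transactions; a minimal sketch:

-- each row here is an open transaction the ALTER TABLESPACE must wait out
select count(*) from v$transaction;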

A reader, December 20, 2004 - 10:33 am UTC

Gasp! You are kidding me, right?

There can't be any active transactions anywhere in the entire database, even in tablespaces not related to the one I am trying to make read only? I thought only the tablespace that I am making read only shouldn't have any active txns?

Then this command will never complete in any active Production database?

Please tell me that's not what you meant?

If so, how can the transportable tablespace feature be used in real life? There is always some little txn going on somewhere in the database!

Tom Kyte
December 20, 2004 - 1:39 pm UTC

that is what I mean. It must wait for all outstanding transactions to complete.

this will "happen" sometime -- unless you have transactions that stay open for days -- but then you have bigger problems.


Do this on a test instance:

session1:
ops$tkyte@ORA9IR2> drop table t;
Table dropped.
 
ops$tkyte@ORA9IR2> create table t ( x int ) tablespace users;
Table created.
 
ops$tkyte@ORA9IR2> insert into t values ( 1 );
1 row created.
 
ops$tkyte@ORA9IR2>



session 2:

ops$tkyte@ORA9IR2> alter tablespace tools read only;
 
this'll block......


session 3:

ops$tkyte@ORA9IR2> insert into t values ( 2 );
1 row created.

now, you might expect that either

a) that insert would block on the alter (but it doesn't) or
b) that alter would be blocked also now on session 3's insert (but it isn't)

continue on and:

session1:
ops$tkyte@ORA9IR2> commit;
Commit complete.


now session 2 will immediately unblock and continue on.


So,  you must have a very long running outstanding transaction that you are waiting on.
 

Thanks

A reader, December 20, 2004 - 1:51 pm UTC

Metalink Note 33402.1 has a very good explanation of this

SELECT s.SID, s.serial#, s.last_call_et,q.sql_text, s.logon_time,s.username, s.status, s.sql_address
FROM v$session s, v$sqlarea q
WHERE s.saddr IN (SELECT ses_addr
FROM v$transaction
WHERE start_scnb < (SELECT start_scnb
FROM v$sqlarea q, v$session s, v$transaction t
WHERE q.address = s.sql_address AND sql_text LIKE 'alter tablespace%' AND s.taddr = t.addr))
AND s.sql_address = q.address

These are my culprits! Yes, this is a stupid legacy client-server VB app that lets users initiate a transaction and waits forever for users to commit or rollback.

But I still don't understand why the ALTER TABLESPACE needs to wait for unrelated transactions elsewhere in the database before making my tablespace read only?

Thanks

Tom Kyte
December 20, 2004 - 2:08 pm UTC

how do you know they are unrelated....




Unrelated

A reader, December 20, 2004 - 2:32 pm UTC

Well, then what is the logic behind this design? Yes, no one knows what those transactions (started before the ALTER TABLESPACE) are likely to do.

So, why not just let the alter tablespace succeed, and if those txns try to modify data in that tablespace, they will get the "ORA-00372: file N cannot be modified at this time" error?

Txns that attempt to modify data in the tablespace after it is successfully made read only will get that error anyway, so why the arbitrary decision to require that txns started before the ALTER TABLESPACE be completed?

Also, that Metalink note says

The ALTER TABLESPACE ... READ ONLY statement places the tablespace in a transitional read-only mode and waits for existing transactions to complete (commit or roll back). This transitional state does not allow any further write operations to the tablespace except for the rollback of existing transactions that previously modified blocks in the tablespace. Hence, in transition the tablespace behaves like a read-only tablespace for all user statements except ROLLBACK.

After all of the existing transactions have either committed or rolled back, the ALTER TABLESPACE ... READ ONLY statement completes and the tablespace is placed in read-only mode.

If this were true, your insert in Session 3 above should have failed, right? Why didn't it?

Thanks

Tom Kyte
December 20, 2004 - 2:43 pm UTC

what if the pre-existing transactions had already started writing to that tablespace?


My inserts were into a tablespace separate from the one I was making read only.

Oracle serializes it?

A reader, January 05, 2005 - 10:18 am UTC

If I issue an 'alter tablespace ts1 read only' in one window and an 'alter tablespace ts2 read only' in another window, it appears the second one waits for the first one to complete?

The first one already waits for all active transactions started before it to complete, so it could (potentially) wait a long time. The second one waits for the first one. So if more txns get in by the time the first one is done, the second one will have to wait for them to finish too!

This is a bummer. For a large multi-terabyte database, it would have been nice to be able to snap off multiple tablespaces at the same time!

Thanks

Import can't read datafile?

CG, January 07, 2005 - 1:20 pm UTC

Tom,
I have exported a self contained set successfully with export.

When I try to import it I get this:
Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
JServer Release 8.1.7.4.1 - Production

Export file created by EXPORT:V08.01.07 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
IMP-00017: following statement failed with ORACLE error 19721:
"BEGIN sys.dbms_plugts.checkDatafile(NULL,2538524650,15,5120,15,15,4194302"
",1280,6403681,61239771,60523983,NULL,NULL,NULL,NULL); END;"
IMP-00003: ORACLE error 19721 encountered
ORA-19721: Cannot find datafile with absolute file number 15 in tablespace SAMS_INTERFACE
ORA-06512: at "SYS.DBMS_PLUGTS", line 1626
ORA-06512: at line 1

The tablespaces that I want to import have been dropped prior to performing import.

But I consistently get this error. Why is it passing a NULL to the procedure?

Tom Kyte
January 08, 2005 - 3:58 pm UTC

you don't show us the entire example, so that makes it really hard to comment on "why your attempt failed"

[tkyte@localhost tkyte]$ oerr ora 19721
19721, 00000, "Cannot find datafile with absolute file number %s in tablespace %s"
// *Cause: Can not find one of the datafile that should be in the Pluggable
// Set.
// *Action: Make sure all datafiles are specified via import command line
// option or parameter files.




here is the command I executed...

cg, January 10, 2005 - 9:28 am UTC

This is the export
c:\oracle\ora81\bin\exp.exe USERID='sys/<snip>@samsdb AS SYSDBA'
TRANSPORT_TABLESPACE=Y
TABLESPACES=(SAMS_INTERFACE,
SAMS_LOB,
SAMS_AUDIT,
SAMS_INDX,
SAMS_TABS,
USERS) TRIGGERS=Y CONSTRAINTS=Y GRANTS=Y FILE= C:\\BackupSamsDB\export\samsdb9349639\samsdb.dmp log=C:\BackupSamsDB\export\log\export_samsdb9349639.log

This is the import
c:\oracle\ora81\bin\imp.exe USERID='sys/<snip>@samsdb AS SYSDBA'
TRANSPORT_TABLESPACE=y DATAFILES=('C:\\BackupSamsDB\export\samsdb9349639\sams_interface.dbf',
'C:\\BackupSamsDB\export\samsdb9349639\sams_audit.dbf',
'C:\\BackupSamsDB\export\samsdb9349639\sams_lob.dbf',
'C:\\BackupSamsDB\export\samsdb9349639\sams_indx.dbf',
'C:\\BackupSamsDB\export\samsdb9349639\sams_tabs.dbf',
'C:\\BackupSamsDB\export\samsdb9349639\users.dbf')
TABLESPACES=(sams_tabs, sams_interface, sams_audit, sams_lob, sams_indx, users)
FROMUSER=(sams) TOUSER=(sams)
FILE= C:\\BackupSamsDB\export\samsdb9349639\samsdb.dmp
log=C:\BackupSamsDB\IMPORT\LOG\import_samsdb.log

Tom Kyte
January 10, 2005 - 10:17 am UTC

and are you sure that all of the needed files are in fact "there"

tablespaces consist of one or MORE files.

consider:

tkyte@ORA8IW> create tablespace testing
2 datafile 'c:\temp\testing.dbf' size 1m;
Tablespace created.

tkyte@ORA8IW> alter tablespace testing add datafile 'c:\temp\testing2.dbf' size 1m;
Tablespace altered.

tkyte@ORA8IW> create table t ( x int ) tablespace testing;
Table created.

tkyte@ORA8IW> alter tablespace testing read only;
Tablespace altered.

tkyte@ORA8IW> host exp userid="""sys/manager as sysdba""" transport_tablespace=y tablespaces=(testing)

Export: Release 8.1.7.4.1 - Production on Mon Jan 10 10:11:52 2005

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
With the Partitioning option
JServer Release 8.1.7.4.1 - Production
Export done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TESTING ...
. exporting cluster definitions
. exporting table definitions
. . exporting table T
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

tkyte@ORA8IW> host mkdir \xxx

tkyte@ORA8IW> host xcopy c:\temp\*.dbf \xxx
C:\temp\TESTING.DBF
C:\temp\TESTING2.DBF
2 File(s) copied

tkyte@ORA8IW> alter tablespace testing read write;
Tablespace altered.

tkyte@ORA8IW> drop tablespace testing including contents ;
Tablespace dropped.

tkyte@ORA8IW> host imp file=expdat.dmp userid="""sys/manager as sysdba""" transport_tablespace=y "datafiles=(c:\xxx\testing.dbf)"

Import: Release 8.1.7.4.1 - Production on Mon Jan 10 10:13:34 2005

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
With the Partitioning option
JServer Release 8.1.7.4.1 - Production

Export file created by EXPORT:V08.01.07 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 19721:
"BEGIN sys.dbms_plugts.checkDatafile(NULL,2272405325,10,128,8,10,0,0,64982"
"4,649853,282236,NULL,NULL,NULL,NULL); END;"
IMP-00003: ORACLE error 19721 encountered
ORA-19721: Cannot find datafile with absolute file number 10 in tablespace TESTING
ORA-06512: at "SYS.DBMS_PLUGTS", line 1626
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully

There is your precise error, we are missing a datafile on the datafiles=() line. If i do it with a different list:


tkyte@ORA8IW> host imp file=expdat.dmp userid="""sys/manager as sysdba""" transport_tablespace=y "datafiles=(c:\xxx\testing2.dbf)"

Import: Release 8.1.7.4.1 - Production on Mon Jan 10 10:13:44 2005

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
With the Partitioning option
JServer Release 8.1.7.4.1 - Production

Export file created by EXPORT:V08.01.07 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 19721:
"BEGIN sys.dbms_plugts.checkDatafile(NULL,2272405325,9,128,8,9,0,0,649817,"
"649853,282236,NULL,NULL,NULL,NULL); END;"
IMP-00003: ORACLE error 19721 encountered
ORA-19721: Cannot find datafile with absolute file number 9 in tablespace TESTING
ORA-06512: at "SYS.DBMS_PLUGTS", line 1626
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully

similar error, different file number. Only when I give it ALL of the information:

tkyte@ORA8IW> host imp file=expdat.dmp userid="""sys/manager as sysdba""" transport_tablespace=y "datafiles=(c:\xxx\testing2.dbf,c:\xxx\testing.dbf)"

Import: Release 8.1.7.4.1 - Production on Mon Jan 10 10:13:57 2005

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
With the Partitioning option
JServer Release 8.1.7.4.1 - Production

Export file created by EXPORT:V08.01.07 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing SYS's objects into SYS
. importing TKYTE's objects into TKYTE
. . importing table "T"
Import terminated successfully without warnings.


does it succeed. So, I've a feeling you are "missing" some files that are necessary for this. go back to the SOURCE database and verify you've got them all
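
One way (on the SOURCE database, before transporting) to build the complete datafiles=() list is to ask the dictionary; a sketch using the tablespace names from the question:

select tablespace_name, file_name
  from dba_data_files
 where tablespace_name in ('SAMS_INTERFACE','SAMS_LOB','SAMS_AUDIT',
                           'SAMS_INDX','SAMS_TABS','USERS');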


thank you

cg, January 10, 2005 - 11:20 am UTC

That was my exact error.

I tried again after finding out that......

1. A tablespace was not being dropped.
2. I needed to drop a temp table and its constraints in order to drop that tablespace.
3. Changed my script to stop putting an extra '\' after my drive letter ('C:\\').

Once I did that, it was successful.

Didn't rerun the export, just tried to import the same set.

Thanks for your time.


A reader, May 10, 2005 - 1:18 pm UTC

Hi Tom,

I have done the following to check if my tablespaces are self contained:

SQL> exec sys.dbms_tts.transport_set_check( 'PORTIMAGE', TRUE);

PL/SQL procedure successfully completed.

SQL> select * from sys.transport_set_violations;

no rows selected

There are a number of tables in this tablespace and I wanted to check the indexes for one table in that tablespace. I did the following:

SQL> select index_name, table_name, tablespace_name from user_indexes
13:13:15   2  where table_name = 'BIOSET';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
IDX_BIOSET_BIOSETID            BIOSET                         INDX
IDX_BIOSET_BIOTXNID            BIOSET                         INDX
IDX_BIOSET_COMBIDS             BIOSET                         INDX

Why did it not give me any errors? Shouldn't it error out, since even though the table data is in tablespace PORTIMAGE, the indexes are created in tablespace INDX and I have not included INDX in the tablespace check in the first query?

Thanks. 

Tom Kyte
May 10, 2005 - 2:24 pm UTC

I should have been more terse and just stopped at:

Your tablespace is NOT self contained (eg: it has an index but not the table)




A reader, May 10, 2005 - 2:40 pm UTC

My understanding was that if the tablespace is not self contained then when I do

select * from sys.transport_set_violations;

it should list some violations. But I got "no rows selected". Can you please clarify this for me?

Thanks.

Tom Kyte
May 10, 2005 - 2:54 pm UTC

because it was -- I should have stopped at "you have a tablespace with an index but not the table" and not rambled on and said "or the converse"

Violations, IOTs and GTTs

AB, May 12, 2005 - 9:29 am UTC

Tom,

We have two tablespaces for data and indexes, but the existence of IOTs and GTTs is causing transport_set_check to throw up violations due to the "tables" being in the SYSTEM tablespace.

HJR noted on a C.D.O.S thread that the primary key indexes for the GTTs would need to be dropped at source and re-created at target, but do you have any idea what I can do for the IOTs? I'm getting a violation of the form:

Constraint <PK_HEAP> between table <IOT> in tablespace SYSTEM and table <HEAP> in tablespace <TS_TO_BE_TRANSPORTED>.

The IOT and its overflow segment exist entirely in the tablespace set and I've verified this. For some reason Oracle believes the "table" belongs to SYSTEM. Clearly this is a similar problem to the GTT in that there is no actual table segment as such. Do you have any idea what I might be able to do to work around this?

Thanks...




Tom Kyte
May 12, 2005 - 12:47 pm UTC

help me reproduce...


ops$tkyte@ORA9IR2> create table t ( x int primary key, y int ) organization index overflow tablespace tkyte;
 
Table created.
 
ops$tkyte@ORA9IR2> @dbls
Wrote file /tmp/xtmpx.sql
 
OBJECT_TYPE  OBJECT_NAME                    S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
INDEX        SYS_IOT_TOP_38736                TKYTE
 
TABLE        SYS_IOT_OVER_38736               TKYTE
             T
 
 
ops$tkyte@ORA9IR2> exec sys.dbms_tts.transport_set_check('TKYTE',true);
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from sys.transport_set_violations;
 
no rows selected
 
 

rename tablespace

Dave, May 12, 2005 - 9:35 am UTC

In 10g, the rename tablespace feature helps a lot in transport tablespace operations. However, I had to rename a tablespace either in the source or the target before I could plug and attach. Say I have a tablespace named tabspace_A in database ora_db1. If there is a tablespace with the same name tabspace_A in database ora_db2, I have to rename the tablespace in ora_db2 before I can import the metadata for tabspace_A into ora_db2. My question is: is there a way in 10gR1 to rename tabspace_A as I import the metadata into ora_db2, so that I don't have to rename the tablespace in ora_db2? Hope I am not confusing the issue!

Thanks.

Tom Kyte
May 12, 2005 - 1:01 pm UTC

you either rename before you transport it, or you rename the pre-existing tablespace before you import.

I do not believe the tablespace remap applies here with impdp -- but I will have to test that soon (no facility to test right now)
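
For reference, the 10g rename itself is a one-liner; a sketch of renaming the pre-existing tablespace on the target before importing (names taken from the question above):

alter tablespace tabspace_a rename to tabspace_a_old;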

Sorry, forgot the version and demo...

AB, May 13, 2005 - 3:24 am UTC

Tom,

Apologies for the slackness above. Here's an example of what I'm experiencing (note the version is old old old :o( ):

SQL*Plus: Release 8.1.7.0.0 - Production on Fri May 13 08:11:52 2005

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
With the Partitioning option
JServer Release 8.1.7.4.1 - Production

SCOTT@AB> create table t ( x int primary key, y int ) organization index tablespace users overflow tablespace users;

Table created.

SCOTT@AB> select segment_name, segment_type, tablespace_name
2 from user_segments
3 where segment_name like 'SYS_IOT%';

SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
--------------------------------------------------------------------------------- ------------------ ------------------------------
SYS_IOT_OVER_6827 TABLE USERS
SYS_IOT_TOP_6827 INDEX USERS

SCOTT@AB> exec sys.dbms_tts.transport_set_check( 'USERS', TRUE );

PL/SQL procedure successfully completed.

SCOTT@AB> select * from sys.transport_set_violations;

no rows selected

That's all well and dandy, but now I add a foreign key to replicate what is going on in the actual database I wanted to transport...

SCOTT@AB> create table t2 ( x int references t(x) ) tablespace users;

Table created.

SCOTT@AB> exec sys.dbms_tts.transport_set_check( 'USERS', TRUE );

PL/SQL procedure successfully completed.

SCOTT@AB> select * from sys.transport_set_violations;

VIOLATIONS
----------------------------------------------------------------------------------------------------------------------------------------------------
-----
Constraint SYS_C005425 between table SCOTT.T in tablespace SYSTEM and table SCOTT.T2 in tablespace USERS

And now there is a magical T in SYSTEM tablespace. Of course, T is just a logical object, so doesn't really exist anywhere other than as metadata, so this is bugging me.

As an aside, I noticed something very naughty going on from Oracle when I traced the DBMS_TTS call. Here's an example of what I mean from v$sql :-

SCOTT@AB> select sql_text from v$sql
2 where lower(sql_text) like '%sys_c005425%'
3 and lower(sql_text) not like '%ignore_me%';

SQL_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------
-----
insert into sys.transts_error$ values ('Constraint SYS_C005425 between table SCOTT.T in tablespace SYSTEM and table SCOTT.T2 in tablespace USERS')

You might want to have a word with the developers who built TTS ;o)

Regards

Tom Kyte
May 13, 2005 - 10:13 am UTC

contact support, reference bug 2926756.

I reproduced in 8i/9i (fixed in 10g)

I couldn't find a bug against IOT's but theorized it would happen with partitioned tables as well -- and it does. That bug is against partitioned tables, but I think the fix would apply to iot's as well. there may be a patch for 8i, but you'll need to work that via support.


Transport TBS and sequence-generated keys

MS, June 29, 2005 - 11:52 am UTC

Oracle Version: 9.2.0.6
OS: HP-UX

Tom,
I have a question about sequence-generated keys and transportable tablespaces.
Suppose there is a tablespace TS_EMP (all employee data) that we want to transport from production to a development database. This tablespace has all the employee-related data, but one of its tables has a foreign key to, say, a department table (DEPT) that is in another tablespace (TS_DEPT).
DEPT.DEPTID is a key generated by a sequence... so it will be different in different databases.
So, if we want to transport employee data, then we have to transport department data also.
Is there another way to do this transport?
Also, is there a way to keep the sequences the same in a production and a development database, especially when the sequence is something like a company id or department id which in turn has other dependent tables?

Thanks

Tom Kyte
June 29, 2005 - 12:10 pm UTC

...so it will be different in
different databases.


then it isn't the key, is it? Think about it...

You want data integrity -- you need data with integrity. The key is the key, the key comes from PROD, not DEV.

Perhaps what you really want to do is restore your PROD instance to TEST for some real testing (that you can restore AND that your app upgrades work?)

There have been some "interesting" discussions just recently on synthetic keys vs natural keys -- you might be interested in them.


http://www.phpbbserver.com/phpbb/viewforum.php?f=2&mforum=dizwellforum ("Poll ID COLUMNS")

http://oraclesponge.blogspot.com/2005/06/natural-and-synthetic-keys.html


David Aldridge, June 29, 2005 - 1:15 pm UTC

... and on whether the date datatype is natural or synthetic ...
http://oraclesponge.blogspot.com/2005/06/dates-as-key-values-natural-or.html

:D

Tom Kyte
June 29, 2005 - 1:30 pm UTC

just make sure you have about 5 days set aside to read the threads...

it's all about passion :)

good thing the long weekend is coming up..thanks

ms, June 29, 2005 - 2:15 pm UTC


transportable tablespace and ASM

reader, July 29, 2005 - 6:13 pm UTC

If my datafiles are ASM files, how does it work with transportable tablespaces? I can't see the ASM files at the OS level. My understanding is that only RMAN can see them. How do I ftp the files to another host? In the imp step, how do I attach the files? Thanks.

above question...

Samuel, July 30, 2005 - 2:56 pm UTC

Tom, If time permits, could you please look at the question on ASM and transportable tablespace. We are evaluating ASM at work. I have been asked this question and I did not know how to do it. We do transport tablespaces a lot. Please help. Thanks.

transportable tablespace and ASM

Samuel, July 30, 2005 - 6:04 pm UTC

we are testing with 10gR1 so I referred to the first URL that you provided. All that says is:

"If Automatic Storage Management (ASM) is used with either the source or destination database, you must use RMAN to transport/convert the tablespace"

How do I use RMAN to ftp the files? It is confusing. Is there a document that outlines how to use RMAN to transport the ASM files to another machine? We are using Solaris on both target and source, so there is no need to convert the files. I would appreciate it if you could help me on this.

Thanks for your time.

Tom Kyte
July 31, 2005 - 9:05 am UTC

you don't "ftp" with rman (the ftp ability is new with 10gr2 and XDB). You may be able to use dbms_file_transfer inside the database over sqlnet as well. I have not personally tried it (but it is there to move datafiles from instance to instance)

A reader, August 12, 2005 - 10:02 am UTC

Hi Tom,
Firstly, thanks for always helping us with your valuable advice. I am not sure if I am raising a new topic, but it is related to TTS.
Could you please explain how we can make two databases share the same tablespace(s)?

Many thanks and regards

Santosh.

Tom Kyte
August 13, 2005 - 8:50 am UTC

In read only mode, you can. If the tablespace is read only, you can.

Otherwise, you cannot. Cannot do it.

If you are interested in the read only, the databases would be on the same machine, you put the tablespace into read only mode. You transport it. You import that metadata into the other database and they are "sharing" the files.

But at that point, you must ask yourself, "why do I have two databases when I obviously meant to have one"

A reader, August 16, 2005 - 8:40 am UTC

Hi Tom,
Thanks for your reply. Yes, we have a few very large tablespaces which are not changing but are required, so we need to share them between databases.

Many thanks

Santosh

Tom Kyte
August 17, 2005 - 10:36 am UTC

sounds like you NEED A SINGLE DATABASE on that server if you ask me.

TTS - Correction Info.

Uday, August 24, 2005 - 6:11 pm UTC

<Tom>
1.7.2 isn't a version by the way. 8.1.7.2 is and I guess that is what you meant.

anyway, we are upwards compatible, not backwards. file formats change. You would be able to take a 8170 file and attach it to 8172 but not the other way in general.

You can go up, but not down.
</Tom>

Tom, did you mean only the 8i release or the 9i release too? We could do transportable tablespaces from 9.2.0.5 to 9.2.0.4.

Tom Kyte
August 25, 2005 - 3:26 am UTC

depends on your compatible settings as well -- across patch sets it may be possible, but not always (I would not count on it in any case)
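
The setting in question is easy to check on both databases; a one-line sketch:

select name, value from v$parameter where name = 'compatible';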

ALTER TABLESPACE READ ONLY TAKING VERY LONG TIME

Dilipkumar Patel, August 30, 2005 - 2:36 pm UTC

The following note is taken from the Oracle 9i documentation ("Server Admin Guide", Chapter 11: Managing Tablespaces):


If you find it is taking a long time for the tablespace to quiesce, it is possible to identify the transactions which are preventing the read-only state from taking effect. The owners of these transactions can be notified and a decision can be made to terminate the transactions, if necessary. The following example illustrates how you might identify the blocking transactions:

Identify the transaction entry for the ALTER TABLESPACE ... READ ONLY statement and note its session address (saddr). 
SELECT SQL_TEXT, SADDR 
    FROM V$SQLAREA,V$SESSION
    WHERE V$SQLAREA.ADDRESS = V$SESSION.SQL_ADDRESS    
        AND SQL_TEXT LIKE 'alter tablespace%'; 

SQL_TEXT                                 SADDR   
---------------------------------------- --------
alter tablespace tbs1 read only          80034AF0


The start SCN of each active transaction is stored in the V$TRANSACTION view. Displaying this view sorted by ascending start SCN lists the transactions in execution order. Since you know the session address of the transaction entry for the read-only statement, it can be located in the V$TRANSACTION view. All transactions with a lesser start SCN can potentially hold up the quiesce and subsequent read-only state of the tablespace.
SELECT SES_ADDR, START_SCNB 
    FROM V$TRANSACTION
    ORDER BY START_SCNB;

SES_ADDR START_SCNB
-------- ----------
800352A0       3621   --> waiting on this txn
80035A50       3623   --> waiting on this txn
80034AF0       3628   --> this is the ALTER TABLESPACE statement
80037910       3629   --> don't care about this txn


I am facing the same situation. My ALTER TABLESPACE tablespace_name READ ONLY is still running, waiting for other, unrelated transactions in the database with lesser start SCNs.

So is it waiting only on transactions against this tablespace, or do all running transactions have to complete before the tablespace can be made read only? My tablespace does not have any (SELECT or DML) transaction against it. Is it related to some lock on the data dictionary?


The session waits for this particular session show waits on enqueue and row cache lock:

SQL> select sid, event, total_waits,average_wait, time_waited/100 Time_Waited, max_wait
  2  from   v$session_event
  3  where  sid = &sid;
old   3: where  sid = &sid
new   3: where  sid = 350

                                                                        Time_Waited
       SID EVENT                               TOTAL_WAITS AVERAGE_WAIT      Seconds   MAX_WAIT
---------- ----------------------------------- ----------- ------------ ------------ ----------
       350 latch free                                  421            6        26.03         54
       350 wait list latch free                          2            2          .04          2
       350 rdbms ipc reply                             606           92       560.17        196
       350 enqueue                                    3806          175      6657.41        296
       350 control file sequential read                  5            0          .00          0
       350 control file parallel write                   3            2          .07          5
       350 kcbzps                                        4           98         3.92         98
       350 buffer latch                                  2           98         1.97         98
       350 log file sync                                 2           15          .30         15
       350 db file sequential read                     644            2        10.04         55
       350 row cache lock                             2543          294      7466.31        295
       350 resmgr:waiting in end wait2                  42            0          .01          0
       350 SQL*Net message to client                    21            0          .00          0
       350 SQL*Net message from client                  21          250        52.43       5239

14 rows selected.


SQL> select sid, event, p1, p2 , p3, state
  2  from   v$session_wait
  3  where  sid = &sid;
old   3: where  sid = &sid
new   3: where  sid = 350

       SID EVENT                                       P1         P2         P3 STATE
---------- ----------------------------------- ---------- ---------- ---------- -------------------
       350 row cache lock                              11          0          3 WAITING

 

Tom Kyte
August 30, 2005 - 2:48 pm UTC

not sure if you have a question here or not?

ALTER TABLESPACE READ ONLY TAKING VERY LONG TIME

Dilipkumar Patel, August 30, 2005 - 2:54 pm UTC

Sorry, my question was not clear.

My question is: what is the reason it waits for all transactions in the database, and what is the solution for this?



Tom Kyte
August 30, 2005 - 3:21 pm UTC

it is an implementation restriction, it is the way it works, you have to wait for all outstanding transactions to complete. Long running transactions will prevent you from doing it.



dbms_tts does not show partitioned indexes

A reader, November 16, 2005 - 2:27 pm UTC

Tom,
I am having two problems with TTS in 9.2.0.6.
a) I want to transport a partitioned table with all its partitions. Each partition has its own tablespace. I ran
exec dbms_tts.transport_set_check('ts1,ts2,...,tsn',TRUE);

select * from transport_set_violations;
returned 0 rows. This partitioned table has partitioned indexes as well. Each index partition is in its own tablespace, separate from the table partition tablespaces. Why did dbms_tts.transport_set_check not report them? If I transported the tablespaces just believing dbms_tts, the index tablespaces would be missed, creating a mess.

b) When I ran the
exec dbms_tts.transport_set_check('p1,p2,...,pn',TRUE);

on the second table, the

select * from transport_set_violations;

returned the error that

Default Partition (Table) Tablespace TS1 for T1 not contained in transportable set

The problem here is that tablespace TS1 does not hold any partition of T1. I checked in dba_tables and dba_tab_partitions. Then I used OEM to generate the object DDL for T1, and tablespace TS1 was the default tablespace for T1. The question is: where is this information stored in the data dictionary?

Thanks



Tom Kyte
November 16, 2005 - 6:16 pm UTC

can you give me a two segment example to test with (meaning at MOST two partitions and at least two partitions)

use tablespaces t1, t2, i1, i2
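
In case it helps, a sketch of such a test case under the naming Tom suggests (assuming tablespaces t1, t2, i1, i2 already exist; two range partitions in t1/t2 and a local index with its partitions in i1/i2):

create table pt ( x int, y int )
partition by range (x)
( partition p1 values less than (100)      tablespace t1,
  partition p2 values less than (maxvalue) tablespace t2 );

create index pt_idx on pt(x) local
( partition p1 tablespace i1,
  partition p2 tablespace i2 );

exec sys.dbms_tts.transport_set_check( 'T1,T2', TRUE );
select * from sys.transport_set_violations;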



Rowid can be duplicated?

Naresh, November 20, 2005 - 2:34 am UTC

Tom,

rowid is a combination of datafile id, block within datafile, and row within block. (Correct?)

When a tablespace is transported, the copied datafile may share its file id with files that already exist on the target DB. Can this not lead to duplicated rowids?

If file ids can be duplicated, how does Oracle resolve which file is to be accessed when a row is to be accessed by rowid?

Thanks,
Naresh.


Tom Kyte
November 20, 2005 - 8:27 am UTC

rowids contain that (and more), yes. they also contain object information.

there are relative and absolute file ids. and a rowid of a row is not stored with the row - rather it is implied by where the row "exists" (eg: you go to a row and then you know its rowid - the rowid isn't stored with the row, the rowid is found by finding the row and saying "ah - it is here")

transport Index tablespace

jp, November 21, 2005 - 2:16 am UTC

Dear Tom,

this is the scenario..

Two servers (A and B), running the same application, same database structure, same tablespaces, same database version (9.2) and same OS, just different data.
Server A has an index tablespace which has been offline for 3 months; to bring it online we need to apply recovery, but we don't have all the archives.
This is what I am planning:
1) drop index tablespace in server A
2) recreate a new index tablespace in server A
3) transport the index tablespace from Server B
4) rebuild indexes in server A

Is this the correct approach?

thanks



Tom Kyte
November 21, 2005 - 8:29 am UTC

no, you would just:


a) drop index and tablespace
b) create tablespace and index


it would not make sense (and it is not actually possible) to transport just "indexes".
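
A minimal sketch of that approach (the file name, size and index DDL are hypothetical):

drop tablespace indx including contents;
create tablespace indx datafile '/u01/oradata/A/indx01.dbf' size 500m;
-- then re-run the original CREATE INDEX scripts, e.g.:
create index emp_i1 on emp(empno) tablespace indx;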




How do the SCNs work?

naresh, December 07, 2005 - 12:16 am UTC

Tom,

I have some questions related to SCN (I hope I am not getting anything wrong below).

Consider the scenario:

We are transporting tablespace from DB1 to DB2.

On DB1m data is updated on the tablespace T, and the checkpoint is done as part of set readonly.
The last SCN on the datafiles in T is S1.

The last SCN on DB2 is S2.

What happens when S1 > S2? E.g. if the instance crashes, then during startup, how does Oracle know that S1 on the transported datafiles being ahead of the database-wide SCN S2 is OK?

If there is subsequently an update to tablespace T data, again the updating SCN may be lower than the SCN in the datafiles' checkpoint information. How is this handled?

Thanks,
Naresh.

Tom Kyte
December 07, 2005 - 1:54 am UTC

not an issue in transported data. The SCN isn't really stored on the blocks, transaction information is. When you transported the data, the tablespace was read only upon attachment - it is "known" to be OK.

SCN

naresh, December 07, 2005 - 9:04 am UTC

Tom,

I did not understand the response.

Here's my question again, with some more details:

transporting tablespace T from DB1 to DB2.

On DB1, data is updated in tablespace T, and the checkpoint is done as part of "set readonly". The last SCN in the datafiles' **HEADERs** in T is SCN1=10000.

The last SCN on DB2 is SCN2=200.

Here SCN1 > SCN2.

After transporting T, updates are made to tablespace T in DB2. Say the SCN on DB2 advances to SCN2=500.
Ultimately a checkpoint is performed at some point - at this time the existing checkpoint SCN in the HEADER block of a datafile in T is SCN1=10000, which is greater than the SCN2 (=500) prevalent on DB2. Would this give any problems?

OR,
Is it that, as part of making the tablespace read-write on DB2, the datafile HEADERs are updated so that the above condition does not occur (i.e. the checkpoint SCN, which is 10000, is set to SCN2, which is 200)?

OR,
is the SCN on the target database advanced whenever a transported tablespace is plugged in?

Thanks,
Naresh


Tom Kyte
December 08, 2005 - 1:01 am UTC

No, it does not give any problems, or else this entire feature would not work.

The process of attaching and subsequently making read-write of these datafiles do everything necessary to make this work transparently.

differing from documentation

A reader, March 28, 2006 - 11:01 am UTC

Tom, you said in the first reply of this thread:

"
Your tablespace is NOT self contained (eg: it has an index but not the table, or
the table and not the index)."

But in 9i release 2 Database Administrators guide Chapter 11
"
...Some examples of self contained tablespace violations are:

An index inside the set of tablespaces is for a table outside of the set of tablespaces.

Note:
It is not a violation if a corresponding index for a table is outside of the set of tablespaces.
"

but you said
.... or the table and not the index)

can you please clarify

forgot?

A reader, March 30, 2006 - 4:18 am UTC

A reminder, in case you forgot to reply to the above.

Thanks

Tom Kyte
March 31, 2006 - 11:09 am UTC

the documentation is correct.

Transportable tablespace in 10g

Reader, April 05, 2006 - 3:36 pm UTC

I want to use expdp in 10.2.0.1 to transport the TOOLS_1 tablespace.

I did the export using following parameters (after putting the source database's tablespace in READ ONLY mode).

DIRECTORY=DATA_PUMP_DIR1
DUMPFILE=export.dmp
JOB_NAME=odsd10g_exp
TRANSPORT_TABLESPACES=(TOOLS_1)
TRANSPORT_FULL_CHECK=Y
LOGFILE=export.log

I tried to import this 3 times and it failed for 3 different reasons. Can you tell me why in each case? Also, what can I do to fix this?

DIRECTORY=DATA_PUMP_DIR1
DUMPFILE=export.dmp
JOB_NAME=odsd10g_imp
TRANSPORT_TABLESPACES=(TOOLS_1)
TRANSPORT_DATAFILES='/ux883/oradata/odsd10g/tools_101.dbf'
TRANSPORT_FULL_CHECK=Y
LOGFILE=import.log
--------------------------------------------------------------------------
$impdp "'/ as sysdba'" parfile=import.par

Import: Release 10.2.0.1.0 - 64bit Production on Wednesday, 05 April, 2006 13:50:53

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORA-39005: inconsistent arguments
ORA-39208: Parameter TTS_FULL_CHECK is invalid for IMPORT jobs.

If I cannot use this parameter then why is the parameter available for import?
=================================================================

DIRECTORY=DATA_PUMP_DIR1
DUMPFILE=export.dmp
JOB_NAME=odsd10g_imp
TRANSPORT_TABLESPACES=(TOOLS_1)
TRANSPORT_DATAFILES='/ux883/oradata/odsd10g/tools_101.dbf'
LOGFILE=import.log
----------------------------------------------------------------------
$impdp "'/ as sysdba'" parfile=import.par

Import: Release 10.2.0.1.0 - 64bit Production on Wednesday, 05 April, 2006 13:51:09

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
UDI-00011: parameter dumpfile is incompatible with parameter transport_tablespaces

What is this error message saying? What am I doing wrong here?
===================================================================

DIRECTORY=DATA_PUMP_DIR1
DUMPFILE=export.dmp
JOB_NAME=odsd10g_imp
TRANSPORT_DATAFILES='/ux883/oradata/odsd10g/tools_101.dbf'
LOGFILE=import.log
----------------------------------------------------------------------
$impdp "'/ as sysdba'" parfile=import.par

Import: Release 10.2.0.1.0 - 64bit Production on Wednesday, 05 April, 2006 13:53:04

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "SYS"."ODSD10G_IMP" successfully loaded/unloaded
Starting "SYS"."ODSD10G_IMP": '/******** AS SYSDBA' parfile=import.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29342: user CAI does not exist in the database

Job "SYS"."ODSD10G_IMP" stopped due to fatal error at 13:53:08

Shouldn't the import create the user CAI in the last case above? Then I created the user CAI, but then it didn't import the privileges for the user and the triggers are imported with privilege errors. I would appreciate your help!

Thank you.


Tom Kyte
April 06, 2006 - 9:44 am UTC

why are you using sysdba for any of this? documentation is fairly clear "don't do that" - sysdba is not to be used.


http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref57
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref241



Modifying a transported tablespace

VA, April 05, 2006 - 10:27 pm UTC

Oracle 9.2.0.6 on Solaris 8

T1: Took a transportable tablespace export from database A
T2: Dropped database A and recreated it as a copy of another database
T3: Plugged in the tablespaces exported at T1
T4: Modified the plugged in tablespace to READ WRITE and updated some data
T5: Dropped database A and recreated it as a copy of another database
T6: Tried to plug in the same TTS set (same dmp file and same datafiles as in T3), but this time it did not import. I keep getting a ORA-19721 when I do the import.

Any ideas? Thanks a lot for any help.

Tom Kyte
April 06, 2006 - 9:59 am UTC

[tkyte@vmware demo]$ oerr ora 19721
19721, 00000, "Cannot find datafile with absolute file number %s in tablespace %s"
// *Cause: Can not find one of the datafile that should be in the Pluggable
// Set.
// *Action: Make sure all datafiles are specified via import command line
// option or parameter files.


steps t3, t4, t5 seem to be "not relevant" if you used the same files (without MODIFICATION AT ALL) in t6.

did you use the files you plugged into the database in t3? and then made read write? If so, those files are not useful anymore, you need the exact same files you created in T1.

Answer

Reader, April 07, 2006 - 4:18 pm UTC

Tom,

I did know that. The reason I am using 'as sysdba' is because Oracle 9i and up won't let you login as sys without logging in 'as sysdba'. Isn't that correct? We always use 'sys' to do DBA work. We have not created our own users and given sysdba privilege.
But do you think the errors I am getting are because of logging in as 'sysdba'?

Thank you,


Tom Kyte
April 08, 2006 - 9:39 am UTC

You should NOT BE USING SYS (or system) either!!!

Use your own dba account (not even bothering to look at this whilst it is using sysdba/sys/system - they are not for you to use day to day - use your OWN accounts to do DBA work)
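
A hedged sketch of setting up such a personal account (name and password hypothetical; the DBA role includes the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles that transportable export/import jobs require):

create user jdoe identified by some_password;
grant dba to jdoe;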


Import slow with lots of partitions

VA, April 10, 2006 - 11:29 am UTC

I have a 50GB TTS with about 50 tables, the TTS import is done before I can blink.

But I have another 25GB TTS with only 25 (partitioned) tables; each table has about 100 partitions.

This import takes a loooong time. The last time I checked it took nearly 2 hours!

I realize that importing all the partitioned tables means it has to do more work to create all the entries in the dictionary, but is there any way to speed this up? The whole point of using pluggable tablespaces is kinda lost if the plug-in and plug-out operations take hours!

Thanks

I am still getting the error

Reader, April 10, 2006 - 3:16 pm UTC

I am using the ptambe user, who has the DBA role granted. And I am still getting this error.

I am using a network link now:

Here is my import.par:

TRANSPORT_DATAFILES='/ux883/oradata/odsd10g/tools_101.dbf'
NETWORK_LINK='STEW301_MTV'
TRANSPORT_TABLESPACES=(TOOLS_1)
NOLOGFILE=Y

=============================================
$impdp ptambe@test10g_db parfile=import.par

Import: Release 10.2.0.1.0 - 64bit Production on Monday, 10 April, 2006 14:47:28

Copyright (c) 2003, 2005, Oracle. All rights reserved.
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "PTAMBE"."SYS_IMPORT_TRANSPORTABLE_01": ptambe/********@test10g_db parfile=import.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29342: user CAI does not exist in the database

Job "PTAMBE"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at 14:47:54

Help?

Tom Kyte
April 11, 2006 - 11:43 am UTC

did you read the error message "ORA-29342: user CAI does not exist in the database"

[tkyte@xtkyte-pc tkyte]$ oerr ora 29342
29342, 00000, "user %s does not exist in the database"
// *Cause: The referred user is one of the owners of data in the
// pluggable set. This user does not exist in the database.
// *Action: Consider either creating the user in the database or map
// the user to a different user via FROM_USER and TO_USER
// import options.
[tkyte@xtkyte-pc tkyte]$
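
(To illustrate the two actions the message suggests - a sketch; CAI is the user from the thread, the password and target schema are placeholders. Note that FROM_USER/TO_USER belong to original imp; the Data Pump equivalent is REMAP_SCHEMA:)

-- option 1: create the owning user before the import
create user cai identified by some_password;

-- option 2: remap the schema to an existing user in import.par
-- REMAP_SCHEMA=CAI:SOME_EXISTING_USER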


DBMS_TTS.DOWNGRADE

A reader, April 17, 2006 - 2:32 am UTC

What does DBMS_TTS.DOWNGRADE do? There is not much detail in the manual about it.

Tom Kyte
April 17, 2006 - 7:58 am UTC

it would be for downgrading to a release of Oracle that didn't support transportable tablespaces (eg: you'll never actually use this procedure anymore)

ALTER TABLESPACE ts1 READ ONLY, (confusing documentation)

Asim Naveed, April 17, 2006 - 11:56 am UTC

Here is a copy/paste from the Oracle SQL Reference 9i Release 2 documentation, related to the ALTER TABLESPACE ts1 READ ONLY statement.


READ ONLY | READ WRITE
Specify READ ONLY to place the tablespace in transition read-only mode. In this
state, existing transactions can complete (commit or roll back), but no further write
operations (DML) are allowed to the tablespace except for rollback of existing
transactions that previously modified blocks in the tablespace.


After reading the above I am very confused, because while transporting tablespaces we make them read only and then copy the datafiles to the target. But if (according to the documentation) a read-only tablespace can allow DML operations to complete, and even rollbacks of existing transactions, then how can the datafiles of such a tablespace remain consistent as of the point in time when the tablespace was brought into READ ONLY state?

I think it should be written like this:
"When the ALTER TABLESPACE ts1 READ ONLY statement is issued (NOT completed), existing transactions can complete (commit or roll back), but no further write operations (DML) are allowed to the tablespace except for rollback of existing transactions that previously modified blocks in the tablespace. After the completion of all such transactions, the tablespace is brought into READ ONLY state, and now no transaction (no matter what type it is) can modify the tablespace."

The documentation says IN THIS STATE ...
but I think it should be
"IN THE STATE
between
ALTER TABLESPACE ts1 READ ONLY IS ISSUED
and
THE TABLESPACE IS ACTUALLY BROUGHT INTO READ ONLY MODE."

Tom Kyte
April 17, 2006 - 8:03 pm UTC

the problem is, the alter won't complete until all outstanding transactions are finished.

So, I agree, the documentation is "fuzzy" on that - do you have the ability to file a tar?
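
(A quick way to see that behavior - a sketch, assuming a table t in tablespace users:)

-- session 1: leave a transaction open against the tablespace
update t set x = x;    -- no commit

-- session 2: new DML against USERS is blocked immediately, but this
-- statement does not return until session 1 commits or rolls back
alter tablespace users read only;

-- session 1:
commit;    -- session 2's ALTER now completes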

documentation TAR?

Asim Naveed, April 18, 2006 - 7:12 am UTC

I have access to metalink, but that's for filing TARs about software bugs. How can I file a TAR about a documentation bug on metalink?



Tom Kyte
April 18, 2006 - 9:57 am UTC

a bug is a bug - by any other name.

documentation bugs are bugs.

Migration using transportable tablespace

Oracle in "The Matrix", June 17, 2006 - 6:08 pm UTC

I am doing a ~1TB database migration. We are planning to do this by eliminating the system tablespace, because my team believes it is better to have a fresh installation than the "corrupted system tablespace" we have now!!!!

We are using raw devices to store the datafiles. Here is my plan:
1. First create a database with system, temp and undo.

2. Create all the users as "just users", like "create user xxx identified by yyy" <<-- with no default or temporary tablespace.

3. Export all the tablespaces from the source database except system and, of course, temp and undo.

<<<<< I don't want to move any datafiles, because I think I can keep the tablespaces sharable by both versions of the database in read only mode >>>>>

4. Import the tablespaces into the target database.

5. Shut down the source database.

6. Grant all the privileges, create sequences, compile triggers, create materialized views, alter users with the correct tablespaces (default and temp).

----
Here are my doubts ::: There are so many "revokes" for the default roles in my database, and also so many materialized views.

a) Can I do a full import after importing all the tablespaces into the target database, and remove the "create tablespace" and "create user" commands from the full-mode import? That way all the revoke statements would also execute during the full import.

For this I did the import with show=y, and the output is "unformatted". How can I format the log file produced with show=y, and execute that file?

Is that a good idea?




Tom Kyte
June 18, 2006 - 12:02 pm UTC

... There are so many "revokes" for the default roles in my
database and also so many materialized views. ...

don't know what you mean by that.

a) that'll be somewhat slow - but you cannot remove bits from a dmp file - if you import full, the create tablespaces and create users will be there.


imp indexfile=foo.sql

instead of show=y
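
(For illustration - a sketch with a placeholder dump file name. INDEXFILE writes the CREATE INDEX statements as runnable SQL and the table DDL as REM comments:)

imp userid=scott/tiger file=expdat.dmp full=y indexfile=foo.sql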

transportable tablespace

matte, June 18, 2006 - 12:37 pm UTC

Instead of step 6, I will do a full import with rows=n and "ignore=n"; then I will get all the missing objects, right?

Here the "create user" and "create tablespace" statements may execute again and give errors, of course. If we ignore those errors, we should be fine, right!!!

Regarding the revokes :: We revoked some of the privileges from the default roles, like REVOKE UNLIMITED TABLESPACE FROM RESOURCE. I think the full import will execute the revoke statements automatically.

So the outline of my idea:
x) Import all the tablespaces except system etc.
y) Import with the full=y option to import all the objects once again, so that all the objects missed in step x will be created. Of course you may get some errors while executing "CREATE USER", "CREATE TABLESPACE" --- just ignore them!!!

Is this a good idea?

Tom Kyte
June 18, 2006 - 1:29 pm UTC

well, let me back up.

why do you believe your system tablespace is "corrupt"?


I'd much much much rather not have to deal with exp/imp here at all.

You'll end up with duplicate constraints and other headaches - this is just not a good idea.

migration using transportable tablespace

A reader, June 18, 2006 - 2:19 pm UTC

"You'll end up with duplicate constraints and other headaches"

Is import does not over write the existed constraint? or just error out with something like "object already existed"

"other headaches" -- could you please tell me what might be those headaches.

Because i would like to do this process as simple/fast as possible.I think the import of tablespace, full import takes like... .less than 10 minutes

But i really want to know what are the other headaches i might have to face?
Thanks,

Tom Kyte
June 18, 2006 - 4:03 pm UTC

if you have un-named constraints, you get them over and over...


ops$tkyte@ORA10GR2> drop table t purge;

Table dropped.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table t
  2  ( x int check (x>0))
  3  /

Table created.

ops$tkyte@ORA10GR2> select constraint_name, constraint_type, search_condition from user_constraints;

CONSTRAINT_NAME                C
------------------------------ -
SEARCH_CONDITION
-------------------------------------------------------------------------------
SYS_C0011790                   C
x>0


ops$tkyte@ORA10GR2> !exp userid=/ tables=t

Export: Release 10.2.0.1.0 - Production on Sun Jun 18 16:00:34 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                              T          0 rows exported
Export terminated successfully without warnings.

ops$tkyte@ORA10GR2> !imp userid=/ full=y ignore=y

Import: Release 10.2.0.1.0 - Production on Sun Jun 18 16:00:35 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"          0 rows imported
Import terminated successfully without warnings.

ops$tkyte@ORA10GR2> select constraint_name, constraint_type, search_condition from user_constraints;

CONSTRAINT_NAME                C
------------------------------ -
SEARCH_CONDITION
-------------------------------------------------------------------------------
SYS_C0011791                   C
x>0

SYS_C0011790                   C
x>0




what other headaches?  missing bits, duplicated bits, just really messy.


But - why do you believe this would be "better"?

missing bits

A reader, June 18, 2006 - 7:42 pm UTC

Thanks Tom,

Why I am thinking this is better :: it is just a two-step process - step one is to import the tablespaces, step two is a full import with rows=n ignore=n. With this I can do the migration in less than 30 minutes.

Here I am fine with the constraints, because I can rename all the constraints to user-defined names in a couple of minutes and put them back to their original names after the whole process is completed.


And I am scared of these new words in the Oracle world, "missing bits" and "duplicate bits". I really don't know what they are.


--------

And here is why I am choosing this method. As I said before, we have some revoke statements on default roles at the source DB. I am not sure how to capture those statements.

The other thing I had a problem with while testing was an invalid trigger, because of the sequences. Of course I created the sequences manually and recompiled the triggers.
---------

But anyway, I will stop doing the full import and do some of the manual work after importing the tablespaces.
I guess you are probably talking about some of the DML statements run during the FULL IMPORT.

Thanks for your time during the weekend also.

Tom Kyte
June 19, 2006 - 5:47 pm UTC

you can do a terabyte in under 30 minutes? with imp???

... I am doing a ~1TB database migration . ...

Ok, I'll bite - "HOW"



as for the unnamed constraints - why bother putting them back? NAME THEM something meaningful and KEEP IT UP.


duplicate bits - showed you already.

Missing bits - things that failed but you failed to notice because of the large volume of spurious errors generated by import with ignore=y.



Those are hardly new words.

I don't get the RELEVANCE of "revokes" here. You are doing an upgrade.



Database migration

A reader, June 19, 2006 - 11:24 am UTC

Hi tom,

This is the list of objects that are not exported when exporting in transportable mode.

Analyze cluster, Application contexts, Auditing information, Database links, Default roles, Directory aliases, Dimensions, Foreign function libraries, External tables (without data), Index types, Java resources and classes, Job queues, Object types, Operators, Password history, Postinstance actions and objects, Postschema procedural actions and objects, Preschema procedural objects and actions, Private synonyms, Procedural objects, Profiles, Public synonyms, Referential integrity constraints, Refresh groups, Resource costs, Role grants, Roles, Rollback segment definitions, Sequence numbers, Snapshot logs, Snapshots and materialized views, System privilege grants, Table data, Tablespace definitions, Tablespace quotas, Triggers owned by other users, User definitions, User-stored procedures, packages, and functions, User views, User proxies.


From the above list,
I can workaround: Auditing information, Database links, Default roles, Private synonyms, Profiles, Public synonyms, Referential integrity constraints, Role grants, Roles, Sequence numbers, Snapshot logs, Snapshots and materialized views, System privilege grants, Tablespace quotas, User definitions, User-stored procedures, packages, and functions, User views.

I can ignore : Analyze cluster, Directory aliases, External tables (without data) … Because I am not using them at source database.

I need your help on : Application contexts, Dimensions, Foreign function libraries, Index types, Java resources and classes, Job queues, Object types, Operators, Password history, Postinstance actions and objects, Postschema procedural actions and objects, Preschema procedural objects and actions, , Refresh groups, Resource costs, User proxies.

Thank you very much for your help.

Tom Kyte
June 19, 2006 - 6:37 pm UTC

i'm telling you - don't export import then.

Just UPGRADE


You are now seeing the meaning of missing bits, duplicate bits, "not straightforward", non-trivial.



Zip datafiles ?

Yoav, July 10, 2006 - 3:11 pm UTC

Hi Tom,
I have a transportable tablespace process which involves moving around 80 GB every day from our staging database to the data warehouse database.

Is it safe to zip and unzip the datafiles?
I am using a ksh script, and I currently use the cp command to move the files. Can you recommend a faster way to move them, or even a different method?

Thank you.

Tom Kyte
July 11, 2006 - 7:22 pm UTC

yes, zip and unzip is "lossless", you end up with the same file on the other end.

it might be

a) faster
b) slower
c) not any different

if you compress. If you are using cp, I would guess that b) will be the correct choice.

since cp copies from location 1 to location 2 on the same machine - you'll just have added "compress and decompress" into the step. If you were copying to a remote machine (eg: compress files, ftp files, decompress files) and the network was the big bottleneck - maybe a) would be true
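
(For the remote case, a minimal ksh sketch - the host name and paths are placeholders; the compressed bytes are streamed over ssh so the full-size datafile never crosses the network:)

# compress on the fly, decompress on the target host, in one pass
gzip -c /staging/oradata/users01.dbf | ssh dwhost 'gunzip -c > /dw/oradata/users01.dbf'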

I am back again!!!!

Maximus in Gladiator :), August 07, 2006 - 9:02 pm UTC

Hi tom,
I was discussing something about the migration of 9206 db to 9207 a couple of months back. I did the testing and migrated my databases to 9207 using transportable tablespace.

Here is an outline of what I did:
I) PRE-MIGRATION:

1. Created the 9207 database with system, undo and temp tablespaces.
2. Created all the users and granted roles and system privileges, assigned profiles, etc.
*Note: See Table 1-1, Objects Exported and Imported in Each Mode
http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm#1004717

Here my concept is to create or generate scripts for the objects that cannot be exported/imported through user-level and transportable tablespace exports (like roles, public synonyms, profiles, etc.).

II)MIGRATION:

1. Export the 9206 database in transportable tablespace mode (followed all the steps, like checking for transport set violations).
2. A second export in user-level mode for all the users except sys, system, outln, dbsnmp etc.!!! Here I used options like rows=n constraints=n triggers=n indexes=n.

3. Here the beauty comes. My databases run on RAW devices, and my whole point here is not to use the dd or cp commands.
I just imported the dump file into the 9207 database. At this moment all the application tablespaces are in read only mode in both the 9206 and 9207 databases; both instances can see the tablespaces in read-only mode.
4. Import the dump file exported at step 2 using ignore=y.

III) POST MIGRATION:
1. Now check the log file of the second import; ignore the "object already exists" errors.
2. Check the object counts in the 9206 and 9207 databases. You might want to recompile some objects, especially triggers.
3. Shut down the 9206 instance.
4. Make all the tablespaces read write on the 9207 database.

*** Happy migration :)
Everything looks fine to me; what are your comments on this way of doing the migration?



Tom Kyte
August 07, 2006 - 10:06 pm UTC

I would have just installed the patch set and upgraded.

migration using Transportable tablespace

Maximus in Gladiator :), August 08, 2006 - 1:09 am UTC

Well, you are right in the normal case. But you are missing my point that I don't want to use the old system tablespace (because something went wrong before I came here, and some SYSTEM tablespace objects were created in other schemas, causing ORA-600/7445 errors while dropping those schemas).

Anyway, I just put this here because this is also one other way for:
> faster migration to change your db block size if you are on 9i or above
> faster migration between different platforms if you are on 10g.

Thanks Tom,
your Ekalavya, if you are a Karna :)
See this if you have free time!!!
http://en.wikipedia.org/wiki/Drona#Ekalavya_and_Karna



TTS from 10gr2 to 9ir2

Mike, July 25, 2007 - 2:29 pm UTC

Tom, I have a request for TTS from 10gR2 to 9iR2. Can that work? I failed to find a doc on this.

thanks!
Tom Kyte
July 26, 2007 - 5:42 pm UTC

from 9i to 10g - yes

from 10g to 9i - no

the data files could be different in format, there are things in 10g that didn't exist in 9i

things are compatible from smaller release to bigger release in general, not the other way around.

Local indexes and partition table

A reader, August 30, 2007 - 8:45 am UTC

Hi Tom,
Oracle version 8.1.7.4:

I'm using transportable tablespaces to move data from a production database to a "history" database.
A few tables are partitioned, and they have local and global indexes.
Before the transportable tablespace shell script runs, I use a PL/SQL procedure to exchange the partitions into regular tables, because of the self-contained error.

My question is regarding the LOCAL and GLOBAL indexes:
1. Is there an option to move the LOCAL index to the
exchanged table as part of the "alter table exchange" command?
2. If not, how can I move that LOCAL index to
the exchanged table without using the export utility?
(Please note it's 8.1.7.4, so I can't use
dbms_metadata.get_ddl.)
3. Could you please explain what the "best practice" is
for moving global and local indexes on a partitioned table
to a regular table.
Thank You.


Tom Kyte
September 04, 2007 - 5:11 pm UTC

1) including indexes - a clause on the exchange command.

2) see #1

3) there is no hope by definition for the global indexes, they will go invalid (or in current releases you can "update global indexes" during the exchange). They contain a mish-mash of data.



I don't like the term best practice.

One person's best practice is another person's recipe for utter disaster.


You can swap the partition+local indexes with a table that has the same structure+indexes.

The global indexes will not be swappable - they contain data from possibly EVERY partition, you will have to rebuild them on the partitioned table and create them on the newly exchanged table after you exchanged it.
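
(For point #1, a sketch of the clause - the table, partition and staging-table names are placeholders:)

alter table sales exchange partition p_2007_q1
  with table sales_q1_stage
  including indexes
  without validation;
-- in current releases you can also append: update global indexes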


Alexander, January 11, 2008 - 12:02 pm UTC

Hi Tom,

We have a large 800 gig HR system we are considering moving from Sun to a Linux RAC cluster.

If we go cross-platform, and the tablespaces are not self-contained, is the only option exp/imp? That might take a week...

"show=y" vs "indexfile=foo.sql"

Serg, February 17, 2009 - 11:56 am UTC

Tom,

Many times it is recommended to use "imp indexfile=foo.sql" instead of "show=y". The benefits are clear - a formatted SQL script to recreate the schema objects.

The problem, though, is that there are no triggers, procedures or functions - the PL/SQL that belongs to the owner of the schema to be reconstructed.

I hit this issue testing TTS from 9iR2 to 10gR2 for a Siebel OLTP schema. Any advice is very much appreciated.

Thanks.
Tom Kyte
February 17, 2009 - 2:28 pm UTC

use your install scripts?

datapump (10g) can do that...

But - why wouldn't you just upgrade a backup of your 9i database? I wouldn't even consider moving anything, just restore a backup, upgrade it, done.

Moving a 3rd party application like that is fraught with issues - sometimes, it just isn't even doable.

And restoring+upgrading will show you what will happen in real life when you go to upgrade production.

"show=y" vs "indexfile=foo.sql"

Serg, February 17, 2009 - 2:47 pm UTC

It is true :)

DBUA upgrade is easy. Not as fast as a TTS upgrade, though, if you are after a shorter outage window for your production database ...

After all, what is left after the metadata import is:
- re-create sequences
- re-create procedures/packages/functions
- re-compile all PL/SQL with utlrp.

Looks doable.
Tom Kyte
February 17, 2009 - 2:58 pm UTC

you forgot step 4: a call to support to verify this would be a supported application database after doing this.

"show=y" vs "indexfile=foo.sql"

Serg, February 17, 2009 - 3:51 pm UTC

Tom,

Understand. Siebel Support blessing, UAT testing - full SDLC. Not sure Siebel Professional Services would have time to test a TTS upgrade for Siebel CRM themselves, though.

Just to mention:
- the 10g Data Pump option does not work, due to missing packages on 9i
- the 9iR2 TTS upgrade to 10gR2 works - tested, at least the RDBMS part. Full-scale Siebel Apps testing is not done ... yet.
- the next test would be a 9iR2 TTS upgrade to 11gR1 - 10gR2 has just one year of "supported" life left.

Thank you for your time.

Alex, May 06, 2009 - 9:52 pm UTC

Tom, are there restrictions against using multiple device types for restoring when creating a transportable tablespace set (from a backup)? It fails trying to find a log on tape; my default is disk. I found these restrictions, but I don't understand what they are saying:

If you are performing TRANSPORT TABLESPACE without a recovery catalog, the following additional restrictions apply:

* If creating a transportable set with tablespace contents as of a point in time in the past, then the set of tablespaces with undo segments at the time TRANSPORT TABLESPACE is executed must be the same as the set of tablespaces with undo segments at the time selected for transport.

  Tablespaces including undo segments as of the target SCN for TRANSPORT TABLESPACE must be part of the auxiliary set. Unlike the recovery catalog, the RMAN repository in the control file only contains a record of tablespaces that include undo segments at the current time. If the set of tablespaces with undo segments was different at the target time, then TRANSPORT TABLESPACE fails.


$ rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Tue May 5 09:49:37 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: OCP19T (DBID=3911219513)

RMAN> backup as compressed backupset archivelog all not backed up tag 'ARC_LOG_DISK%d%t';

Starting backup at 05-MAY-09
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=867 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=822 devtype=DISK
skipping archive log file /ocp19t/arclogs/1_17020_617984496.dbf; already backed up 1 time(s)
skipping archive log file /ocp19t/arclogs/1_17021_617984496.dbf; already backed up 1 time(s)
skipping archive log file /ocp19t/arclogs/1_17022_617984496.dbf; already backed up 1 time(s)
skipping archive log file /ocp19t/arclogs/1_17023_617984496.dbf; already backed up 1 time(s)
skipping archive log file /ocp19t/arclogs/1_17024_617984496.dbf; already backed up 1 time(s)
skipping archive log file /ocp19t/arclogs/1_17025_617984496.dbf; already backed up 1 time(s)
channel ORA_DISK_1: starting compressed archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17026 recid=17018 stamp=686051384
channel ORA_DISK_1: starting piece 1 at 05-MAY-09
channel ORA_DISK_1: finished piece 1 at 05-MAY-09
piece handle=/ocp19t/backups/RMAN_backups/OCP19T_nbke8k1q_1_1.bkp tag=ARC_LOG_DISK%D%T comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 05-MAY-09

Starting Control File and SPFILE Autobackup at 05-MAY-09
piece handle=/ocp19t/backups/RMAN_backups/OCP19T_c-3911219513-20090505-0b.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 05-MAY-09

RMAN> transport tablespace OFACPP_DATA
tablespace destination '/ocp19t/backups/tspitr'
auxiliary destination '/ocp19t/backups/aux';2> 3>

RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time

List of tablespaces expected to have UNDO segments
tablespace SYSTEM
tablespace UNDOTBS1

Creating automatic instance, with SID='uxaB'........

Starting recover at 05-MAY-09
using channel ORA_AUX_DISK_1

starting media recovery

archive log thread 1 sequence 17020 is already on disk as file /ocp19t/arclogs/1_17020_617984496.dbf
archive log thread 1 sequence 17021 is already on disk as file /ocp19t/arclogs/1_17021_617984496.dbf
archive log thread 1 sequence 17022 is already on disk as file /ocp19t/arclogs/1_17022_617984496.dbf
archive log thread 1 sequence 17023 is already on disk as file /ocp19t/arclogs/1_17023_617984496.dbf
archive log thread 1 sequence 17024 is already on disk as file /ocp19t/arclogs/1_17024_617984496.dbf
archive log thread 1 sequence 17025 is already on disk as file /ocp19t/arclogs/1_17025_617984496.dbf
archive log thread 1 sequence 17026 is already on disk as file /ocp19t/arclogs/1_17026_617984496.dbf

Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of tranport tablespace command at 05/05/2009 10:20:53
RMAN-03015: error occurred in stored script Memory Script
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 17019 lowscn 7767175054470
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 17018 lowscn 7767175034966
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 17017 lowscn 7767175014484
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 17016 lowscn 7767174994270


So I tried:

RMAN> run
2> {
3> allocate channel ch1 device type disk;
4> allocate channel ch2 device type disk;
5> allocate channel ch3 device type sbt_tape;
6> allocate channel ch4 device type sbt_tape;
7>
8> transport tablespace OFACPP_DATA
9> tablespace destination '/ocp19t/backups/tspitr'
10> auxiliary destination '/ocp19t/backups/aux';
11> }

allocated channel: ch1
channel ch1: sid=899 devtype=DISK

allocated channel: ch2
channel ch2: sid=1048 devtype=DISK

allocated channel: ch3
channel ch3: sid=1034 devtype=SBT_TAPE
channel ch3: VERITAS NetBackup for Oracle - Release 5.1 (2004122521)

allocated channel: ch4
channel ch4: sid=1015 devtype=SBT_TAPE
channel ch4: VERITAS NetBackup for Oracle - Release 5.1 (2004122521)

RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time

List of tablespaces expected to have UNDO segments
tablespace SYSTEM
tablespace UNDOTBS1

Creating automatic instance, with SID='zrdw'

initialization parameters used for automatic instance:
db_name=OCP19T
compatible=10.2.0.2.0
db_block_size=8192
db_files=200
db_unique_name=tspitr_OCP19T_zrdw
large_pool_size=1M
shared_pool_size=110M
#No auxiliary parameter file used
db_create_file_dest=/ocp19t/backups/aux
control_files=/ocp19t/backups/aux/cntrl_tspitr_OCP19T_zrdw.f


starting up automatic instance OCP19T

Oracle instance started

Total System Global Area 205520896 bytes

Fixed Size 2070072 bytes
Variable Size 146803144 bytes
Database Buffers 50331648 bytes
Redo Buffers 6316032 bytes
Automatic instance created

contents of Memory Script:
{
# set the until clause
set until scn 7767175164625;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log for tspitr to a resent until time
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script

executing command: SET until clause

Starting restore at 05-MAY-09


Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
released channel: ch1
released channel: ch2
released channel: ch3
released channel: ch4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of tranport tablespace command at 05/05/2009 16:00:18
RMAN-03015: error occurred in stored script Memory Script
RMAN-06034: at least 1 channel must be allocated to execute this command

Tom Kyte
May 11, 2009 - 11:30 am UTC

... It fails trying to find a log on tape ...

confused - why do you say that?

Alexander, May 11, 2009 - 11:42 am UTC

Those logs it's complaining about are on tape, not disk. I queried RMAN.

The error seems obvious - "no channel" - but it allocates channels dynamically, so...

Alexander, May 13, 2009 - 5:02 pm UTC

I've tried everything under the sun to get this to work. This is the third "advanced" type of feature that doesn't appear to work now. Do you have any suggestions based on what I pasted right above? I would appreciate it.
Tom Kyte
May 13, 2009 - 5:14 pm UTC

please utilize support for this one, too big for here.

Alexander, May 13, 2009 - 5:49 pm UTC

I'm working with them too. Haven't had much luck yet.

Let me ask you this: are there any TT limitations with going to multiple device types? Or should it just go off and find whatever backups it needs to create the TT set?
Tom Kyte
May 13, 2009 - 7:43 pm UTC

It should be able to find the backups where ever they have been cataloged. If you could restore them from there, they should be accessible.

Alexander, May 15, 2009 - 10:55 am UTC

Tom,

I'm having the same problem with transport tablespace as I did with duplicate database: RMAN is trying to pull back archive logs from tape when I don't want it to (it would be OK with me if it worked, but it fails when it shouldn't, so I'm trying to avoid it; going to tape is slow anyway).

Is there any command for dup db, TT, ANYTHING where I can say "use my disk backups, THAT'S IT"?

I can't get either of these features to work AT ALL, even with the help of support. I can definitely access the remote media storage; we do for regular restores.
Tom Kyte
May 15, 2009 - 2:02 pm UTC

do you know the escalation process with support? Have you ever gotten beyond the "first level"?


I'll refer you to the rman discussion forum on otn.oracle.com
http://forums.oracle.com/forums/forum.jspa?forumID=74

Alexander, May 15, 2009 - 2:15 pm UTC

No, I don't. When I want the best I come to you.
Tom Kyte
May 15, 2009 - 2:28 pm UTC

http://www.oracle.com/support/seminars.html


... Understand support terminology, tools and the escalation process. ...

A reader, May 16, 2009 - 5:10 am UTC

Mr Tom,
Assume I want to transport a tablespace to a different platform; user "x" has tables and other objects in this tablespace.
When I transport the tablespace to some other platform, if the user "x" doesn't exist in the destination, is the user created automatically, or what happens?
Tom Kyte
May 23, 2009 - 10:50 am UTC

you would create the schema yourself.

Alexander, June 11, 2009 - 11:07 am UTC

Tom,

Oracle support told me it is a product limitation that transport tablespace in RMAN can't go to different media types for backups. (Although it still fails when I try using only tape, where everything resides....)

Is this true?
Tom Kyte
June 11, 2009 - 3:13 pm UTC

they must have given you a reference - an official note, documentation, something - do you have that?

A reader, June 11, 2009 - 3:15 pm UTC

How do you do a transportable tablespace in an ASM environment?
Is it only feasible via RMAN?
Tom Kyte
June 11, 2009 - 3:26 pm UTC

you can ftp files in and out via XDB - but - I would suggest using RMAN anyway since

if you are using ASM you have 10g.
in 10g you can do an online tablespace transport from backups.
Meaning - you don't have to make the tablespace readonly in the source system.

http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/ontbltrn.htm#CACFIGDB
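
(Besides ftp via XDB, DBMS_FILE_TRANSFER can copy datafiles into and out of ASM - a sketch; the directory paths and file names are placeholders:)

create directory src_dir as '+DATA01/mydb/datafile';
create directory dst_dir as '/u01/transport';

begin
  dbms_file_transfer.copy_file(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'users01.dbf' );
end;
/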

Alexander, June 11, 2009 - 3:49 pm UTC

No Tom, nothing official. I would not be asking you if it were true had I seen an official document ;)
Tom Kyte
June 11, 2009 - 4:54 pm UTC

please ask them for the documentation or official written statement, closing an SR that way is not acceptable.

Alexander, June 11, 2009 - 5:16 pm UTC

I'm still working with them. He suggested I try restoring the logs it complains about from tape and running it. That appears to be working, but it's kind of a hack.

Alexander, June 29, 2009 - 4:59 pm UTC

Tom,

Can you explain to me why RMAN wants to use all these archivelogs for TT? I ran

RMAN> backup tablespace  SYSTEM, UNDOTBS1, MY_TS;


So I have a fresh backup of the datafiles. Then

transport tablespace MY_TS
tablespace destination '/ocp19t/backups/tt'
auxiliary destination '/ocp19t/backups/aux';


It dies looking for

RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19369 lowscn 7767203467125
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19368 lowscn 7767203447316
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19367 lowscn 7767203428515
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19366 lowscn 7767203408089
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19365 lowscn 7767203388575
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19364 lowscn 7767203369022
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19363 lowscn 7767203349180
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19362 lowscn 7767203329015
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19361 lowscn 7767203309039
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19360 lowscn 7767203289469
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19359 lowscn 7767203269001
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19358 lowscn 7767203249901
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19357 lowscn 7767203229723
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19356 lowscn 7767203210673
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19355 lowscn 7767203191315
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19354 lowscn 7767203171275
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19353 lowscn 7767203151982
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19352 lowscn 7767203131788
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19351 lowscn 7767203112766
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19298 lowscn 7767202455575
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 19297 lowscn 7767202455543


I've already given up trying to fix this. But I'd like to know why it's trying to access these. (By the way, it already found a ton of them on disk - too many to count, an entire page.) So why the heck does it need, like, 100 archivelogs to create a simple TT set?? I could understand if I had specified an until time of a week ago or something.

transport and partitioning

A reader, September 08, 2009 - 2:21 am UTC

greetings thomas,

and thanks, as always.

Regarding transportable tablespaces and partitioned tables: can I transport a tablespace that contains a partitioned table?
EX.
exp "'/ as sysdba'" statistics=none grants=no file=trans.dmp transport_tablespace=y tablespaces=RTX_Q1_2009_LTD

Export: Release 9.2.0.6.0 - Production on Mon Sep 7 13:41:45 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
Export done in AR8ISO8859P6 character set and AL16UTF16 NCHAR character set
server uses AR8MSWIN1256 character set (possible charset conversion)
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace RTX_Q1_2009_LTD ...
. exporting cluster definitions
. exporting table definitions
. . exporting table RTX_Q1_2009_LT
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

and after moving and copying the data files, when I import the dump I get this error:

imp "'/ as sysdba'" file=trans.dmp log=trans.log grants=no transport_tablespace=y tablespaces=RTX_Q1_2009_LTD \
datafiles='/arch/work2/RTX_Q1_2009_LTD01.dbf', \
'/arch/work2/RTX_Q1_2009_LTD04.dbf','/arch/work2/RTX_Q1_2009_LTD07.dbf','/arch/work2/RTX_Q1_2009_LTD02.dbf', \
'/arch/work2/RTX_Q1_2009_LTD05.dbf','/arch/work2/RTX_Q1_2009_LTD08.dbf','/arch/work2/RTX_Q1_2009_LTD03.dbf', \
'/arch/work2/RTX_Q1_2009_LTD06.dbf'

Import: Release 10.2.0.1.0 - Production on Mon Sep 7 19:27:26 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V09.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in AR8ISO8859P6 character set and AL16UTF16 NCHAR character set
import server uses AR8MSWIN1256 character set (possible charset conversion)
. importing SYSADM's objects into SCOTT
IMP-00017: following statement failed with ORACLE error 1647:
"CREATE TABLE "RTX_Q1_2009_LT"
IMP-00003: ORACLE error 1647 encountered
ORA-01647: tablespace 'RTX_Q1_2009_LTD' is read only, cannot allocate space in it
Import terminated successfully with warnings.

Please advise.
Tom Kyte
September 08, 2009 - 6:11 am UTC

you give no example to work with - but yes, you can transport partitioned objects in general.

seeing 9206 there, suggest you contact support and reference bug 4016802, there is a solution for that if that is what you are encountering.

Transport partition table

A reader, September 08, 2009 - 1:22 pm UTC

Greetings thomas,

and thanks, as always.

Regarding the above post: the table I am trying to exp/imp using transportable tablespaces is partitioned by range and subpartitioned by hash. I have searched metalink for the bug you mentioned, but found no workaround solution. Do I need to contact support for this, or can the solution be found through metalink?
Please advise.
Tom Kyte
September 09, 2009 - 5:54 am UTC

.... do i need to contact support for this ...

well....


suggest you contact support and reference bug 4016802, there is a solution for that if that is what you are encountering.

ROWID vs FILE_ID duplication after transportable tablespaces

zed, September 23, 2009 - 9:03 pm UTC

In one of your comments ROWID duplication is well explained, but I would like to know what could happen as a result of a possible duplicate FILE_ID. I just do not like the idea that it is possible to have the same file_id for different datafiles. I searched the internet, but there is really nothing about this issue. My concern is not really around in-house written scripts using file_id for differentiation, but more around what can go wrong in Oracle internals - for example the data dictionary, RMAN, the various dbms_ packages, reporting based on file_id, etc. Is it possible that impdp, if used for transportable tablespaces, somehow ensures that file_ids are unique in the database in newer versions of Oracle? Am I panicking without a reason?
Tom Kyte
September 28, 2009 - 2:30 pm UTC

we use absolute and relative file_ids (have for many releases). We are OK with it, no worries.

similar problem impdp

raghu, September 28, 2009 - 12:56 pm UTC

I had a similar situation: same platforms, different file systems, and different source and destination schemas, and I was using transportable tablespaces. So I used REMAP_DATAFILE and REMAP_SCHEMA in impdp. It took forever, so I stopped it and tried without REMAP_DATAFILE, and it worked in a minute - and I see in dba_data_files that the file names carry the new file systems.
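
(For reference - a sketch of the parfile shape that worked; the path and schema names are placeholders. With transportable imports, TRANSPORT_DATAFILES already names the files in their new locations, so REMAP_DATAFILE is typically unnecessary there:)

TRANSPORT_DATAFILES='/newfs/oradata/users01.dbf'
REMAP_SCHEMA=OLDUSER:NEWUSER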


I had a similar problem

raghu, September 28, 2009 - 12:59 pm UTC

If it's in a lab, or somewhere without many active transactions, these may also be useful.

Last night I had the same problem and this was useful.
The following example identifies the transaction entry for the ALTER TABLESPACE...READ ONLY statement; note its session address (saddr):

SELECT SQL_TEXT, SADDR
FROM V$SQLAREA,V$SESSION
WHERE V$SQLAREA.ADDRESS = V$SESSION.SQL_ADDRESS
AND SQL_TEXT LIKE 'alter tablespace%';

SQL_TEXT SADDR
---------------------------------------- --------
alter tablespace tbs1 read only 80034AF0

The start SCN of each active transaction is stored in the V$TRANSACTION view. Displaying this view sorted by ascending start SCN lists the transactions in execution order. From the preceding example, you already know the session address of the transaction entry for the read-only statement, and you can now locate it in the V$TRANSACTION view. All transactions with smaller start SCN, which indicates an earlier execution, can potentially hold up the quiesce and subsequent read-only state of the tablespace.

SELECT SES_ADDR, START_SCNB
FROM V$TRANSACTION
ORDER BY START_SCNB;

SES_ADDR START_SCNB
-------- ----------
800352A0 3621 --> waiting on this txn
80035A50 3623 --> waiting on this txn
80034AF0 3628 --> this is the ALTER TABLESPACE statement
80037910 3629 --> don't care about this txn

transport alternative

A reader, December 14, 2009 - 8:28 am UTC

Greetings thomas,

and thanks, as always.

I have some tablespaces that I need to move, but the problem is they are 24x7 and I cannot make them read only. So I am thinking of alternatives for transporting them:
1) Use RMAN to transport them - but the tablespaces are not read only and I cannot make them so.

2) Use bulk loading to create the required objects.

3) Your suggestion for an online "transport tablespace". (I hope you know what I mean.)
Tom Kyte
December 14, 2009 - 10:30 am UTC

In 10g you can transport from backups
http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/ontbltrn.htm#CACFBHIE

since you didn't mention a version, I'll assume you are 10g or above. That is the online transport.
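
(The RMAN form of that online transport, sketched - the destinations are placeholders and the UNTIL clause is optional:)

RMAN> transport tablespace users
        tablespace destination '/u01/transportdest'
        auxiliary destination '/u01/auxdest'
        until time 'sysdate - 1/24';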

Adjust the above

A reader, December 14, 2009 - 8:56 am UTC

Greetings thomas,

Please, I want to modify the point below as follows:

--------------1) Use rman for transporting them, but the tablespaces are not read only and i cannot make them.

1) Use RMAN to transport them - but the tablespaces that need to be moved are on 9i and the target database is 10g, and RMAN transport tablespace needs Data Pump, which is not available in 9i.
Tom Kyte
December 14, 2009 - 10:31 am UTC

restore your system, undo and the tablespaces you want to transport on your test machine (you have one; you use it to test your recovery processes - I *know* you have one, because you must do this anyway...)

recover them to the point in time you want them to be.

transport them the old-fashioned way from that temporary database.

remove the temporary database.

Thank you, as always

A reader, December 15, 2009 - 1:11 am UTC


Alexander, January 07, 2010 - 9:25 am UTC

As promised, here I am back again - your lucky day.

As predicted, I knew this wouldn't work. What could be causing it? There's very little room for error; it's such a simple command:

(x228kdc:oracle)> rman target /

Recovery Manager: Release 10.2.0.3.0 - Production on Thu Jan 7 10:13:38 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

connected to target database: OCP25X (DBID=3771837640)

RMAN> transport tablespace EADB0001S
tablespace destination '/ora_exports/aja'
auxiliary destination '/ora_exports/aja';2> 3>

using target database control file instead of recovery catalog
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time

List of tablespaces expected to have UNDO segments
tablespace SYSTEM
tablespace UNDOTBS1
tablespace UNDOTBS2

Creating automatic instance, with SID='czwf'

initialization parameters used for automatic instance:
db_name=OCP25X
compatible=10.2.0.3.0
db_block_size=8192
db_files=200
db_unique_name=tspitr_OCP25X_czwf
large_pool_size=1M
shared_pool_size=110M
#No auxiliary parameter file used
db_create_file_dest=/ora_exports/aja
control_files=/ora_exports/aja/cntrl_tspitr_OCP25X_czwf.f


starting up automatic instance OCP25X

Oracle instance started

Total System Global Area     205520896 bytes

Fixed Size                     2071544 bytes
Variable Size                146801672 bytes
Database Buffers              50331648 bytes
Redo Buffers                   6316032 bytes
Automatic instance created

contents of Memory Script:
{
# set the until clause
set until  scn 535123550;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log for tspitr to a resent until time
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script

executing command: SET until clause

Starting restore at 07-JAN-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=34 devtype=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: sid=33 devtype=DISK
allocated channel: ORA_AUX_SBT_TAPE_1
channel ORA_AUX_SBT_TAPE_1: sid=32 devtype=SBT_TAPE
channel ORA_AUX_SBT_TAPE_1: Data Protection for Oracle: version 5.3.3.0
allocated channel: ORA_AUX_SBT_TAPE_2
channel ORA_AUX_SBT_TAPE_2: sid=31 devtype=SBT_TAPE
channel ORA_AUX_SBT_TAPE_2: Data Protection for Oracle: version 5.3.3.0

channel ORA_AUX_SBT_TAPE_1: starting datafile backupset restore
channel ORA_AUX_SBT_TAPE_1: restoring control file
channel ORA_AUX_SBT_TAPE_1: reading from backup piece c-3771837640-20100107-0b
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
piece handle=c-3771837640-20100107-0b tag=TAG20100107T090114
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
output filename=/ora_exports/aja/cntrl_tspitr_OCP25X_czwf.f
Finished restore at 07-JAN-10

sql statement: alter database mount clone database

sql statement: alter system archive log current

sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
released channel: ORA_AUX_DISK_1
released channel: ORA_AUX_DISK_2
released channel: ORA_AUX_SBT_TAPE_1
released channel: ORA_AUX_SBT_TAPE_2

contents of Memory Script:
{
# generated tablespace point-in-time recovery script
# set the until clause
set until  scn 535123550;
# set an omf destination filename for restore
set newname for clone datafile  1 to new;
# set an omf destination filename for restore
set newname for clone datafile  2 to new;
# set an omf destination filename for restore
set newname for clone datafile  5 to new;
# set an omf destination filename for restore
set newname for clone datafile  3 to new;
# set an omf destination tempfile
set newname for clone tempfile  1 to new;
# set a destination filename for restore
set newname for datafile  11 to
 "/ora_exports/aja/eadb0001s_01.dbf";
# rename all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set plus the auxilliary tablespaces
restore clone datafile  1, 2, 5, 3, 11;
switch clone datafile all;
#online the datafiles restored or flipped
sql clone "alter database datafile  1 online";
#online the datafiles restored or flipped
sql clone "alter database datafile  2 online";
#online the datafiles restored or flipped
sql clone "alter database datafile  5 online";
#online the datafiles restored or flipped
sql clone "alter database datafile  3 online";
#online the datafiles restored or flipped
sql clone "alter database datafile  11 online";
# make the controlfile point at the restored datafiles, then recover them
recover clone database tablespace  "EADB0001S", "SYSTEM", "UNDOTBS1", "UNDOTBS2", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
# PLUG HERE the creation of a temporary tablespace if export fails due to lack
# of temporary space.
# For example in Unix these two lines would do that:
#sql clone "create tablespace aux_tspitr_tmp
#           datafile ''/tmp/aux_tspitr_tmp.dbf'' size 500K";
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed temporary file 1 to /ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 07-JAN-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=31 devtype=DISK

channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_system_%u_.dbf
restoring datafile 00002 to /ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_undotbs1_%u_.dbf
restoring datafile 00011 to /ora_exports/aja/eadb0001s_01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /ora_exports/backup_files/OCP25X_ril2b7io_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/ora_exports/backup_files/OCP25X_ril2b7io_1_1 tag=TAG20100101T030040
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:46
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00005 to /ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_undotbs2_%u_.dbf
restoring datafile 00003 to /ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /ora_exports/backup_files/OCP25X_rhl2b7io_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/ora_exports/backup_files/OCP25X_rhl2b7io_1_1 tag=TAG20100101T030040
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:03:06
Finished restore at 07-JAN-10

datafile 1 switched to datafile copy
input datafile copy recid=34 stamp=707653293 filename=/ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_system_5ncynyks_.dbf
datafile 2 switched to datafile copy
input datafile copy recid=35 stamp=707653293 filename=/ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_undotbs1_5ncynyj0_.dbf
datafile 5 switched to datafile copy
input datafile copy recid=36 stamp=707653293 filename=/ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_undotbs2_5ncyt42x_.dbf
datafile 3 switched to datafile copy
input datafile copy recid=37 stamp=707653293 filename=/ora_exports/aja/TSPITR_OCP25X_CZWF/datafile/o1_mf_sysaux_5ncyt44k_.dbf
datafile 11 switched to datafile copy
input datafile copy recid=38 stamp=707653293 filename=/ora_exports/aja/eadb0001s_01.dbf

sql statement: alter database datafile  1 online

sql statement: alter database datafile  2 online

sql statement: alter database datafile  5 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  11 online

Starting recover at 07-JAN-10
using channel ORA_AUX_DISK_1

starting media recovery

archive log thread 1 sequence 21070 is already on disk as file +DATA01/ocp25x/1_21070_653008202.dbf
archive log thread 1 sequence 21071 is already on disk as file +DATA01/ocp25x/1_21071_653008202.dbf
archive log thread 1 sequence 21072 is already on disk as file +DATA01/ocp25x/1_21072_653008202.dbf
archive log thread 1 sequence 21073 is already on disk as file +DATA01/ocp25x/1_21073_653008202.dbf
archive log thread 1 sequence 21074 is already on disk as file +DATA01/ocp25x/1_21074_653008202.dbf
archive log thread 1 sequence 21075 is already on disk as file +DATA01/ocp25x/1_21075_653008202.dbf
archive log thread 1 sequence 21076 is already on disk as file +DATA01/ocp25x/1_21076_653008202.dbf
archive log thread 2 sequence 14973 is already on disk as file +DATA01/ocp25x/2_14973_653008202.dbf
archive log thread 2 sequence 14974 is already on disk as file +DATA01/ocp25x/2_14974_653008202.dbf
archive log thread 2 sequence 14975 is already on disk as file +DATA01/ocp25x/2_14975_653008202.dbf
archive log thread 2 sequence 14976 is already on disk as file +DATA01/ocp25x/2_14976_653008202.dbf
archive log thread 2 sequence 14977 is already on disk as file +DATA01/ocp25x/2_14977_653008202.dbf
archive log thread 2 sequence 14978 is already on disk as file +DATA01/ocp25x/2_14978_653008202.dbf
archive log thread 2 sequence 14979 is already on disk as file +DATA01/ocp25x/2_14979_653008202.dbf

Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of tranport tablespace command at 01/07/2010 10:21:49
RMAN-03015: error occurred in stored script Memory Script
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06102: no channel to restore a backup or copy of log thread 2 seq 14972 lowscn 535045288
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 21069 lowscn 535045286
RMAN-06102: no channel to restore a backup or copy of log thread 2 seq 14971 lowscn 535045245


Tom Kyte
January 11, 2010 - 8:23 pm UTC

where is log thread 2 seq 14972

do you have a channel configured that can read it and restore it?

if you did a full restore, would it fail the same way?

Alexander, January 12, 2010 - 9:12 am UTC

No, a restore does not fail. I switched to another database to eliminate further confusion with RAC.

(x216kdc:oracle)> rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Tue Jan 12 09:52:00 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

connected to target database: OCP25T (DBID=1072633703)

RMAN> restore database validate;

Starting restore at 12-JAN-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=63 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=119 devtype=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: sid=58 devtype=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: sid=44 devtype=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=85 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.3.3.0
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: sid=30 devtype=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 5.3.3.0

channel ORA_SBT_TAPE_1: starting validation of datafile backupset
channel ORA_SBT_TAPE_1: reading from backup piece backup_OCP25T_1ll2tm7v_1_1
channel ORA_SBT_TAPE_1: restored backup piece 1
piece handle=backup_OCP25T_1ll2tm7v_1_1 tag=TAG20100108T030118
channel ORA_SBT_TAPE_1: validation complete, elapsed time: 00:00:25
channel ORA_SBT_TAPE_1: starting validation of datafile backupset
channel ORA_SBT_TAPE_1: reading from backup piece backup_OCP25T_1kl2tm7v_1_1
channel ORA_SBT_TAPE_1: restored backup piece 1
piece handle=backup_OCP25T_1kl2tm7v_1_1 tag=TAG20100108T030118
channel ORA_SBT_TAPE_1: validation complete, elapsed time: 00:13:16
Finished restore at 12-JAN-10

RMAN> transport tablespace RIGHTA_DATA
tablespace destination '/ora_exports/aja'
auxiliary destination '/ora_exports/aja';

using target database control file instead of recovery catalog
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time

List of tablespaces expected to have UNDO segments
tablespace SYSTEM
tablespace UNDOTBS1

Creating automatic instance, with SID='zhfo'

initialization parameters used for automatic instance:
db_name=OCP25T
compatible=10.2.0.2.0
db_block_size=8192
db_files=200
db_unique_name=tspitr_OCP25T_zhfo
large_pool_size=1M
shared_pool_size=110M
#No auxiliary parameter file used
db_create_file_dest=/ora_exports/aja
control_files=/ora_exports/aja/cntrl_tspitr_OCP25T_zhfo.f


starting up automatic instance OCP25T

Oracle instance started

Total System Global Area     205520896 bytes

Fixed Size                     1259936 bytes
Variable Size                146802272 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7127040 bytes
Automatic instance created

contents of Memory Script:
{
# set the until clause
set until  scn 7764969819565;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log for tspitr to a resent until time
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script

executing command: SET until clause

Starting restore at 12-JAN-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=37 devtype=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: sid=36 devtype=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: sid=35 devtype=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: sid=34 devtype=DISK
allocated channel: ORA_AUX_SBT_TAPE_1
channel ORA_AUX_SBT_TAPE_1: sid=33 devtype=SBT_TAPE
channel ORA_AUX_SBT_TAPE_1: Data Protection for Oracle: version 5.3.3.0
allocated channel: ORA_AUX_SBT_TAPE_2
channel ORA_AUX_SBT_TAPE_2: sid=32 devtype=SBT_TAPE
channel ORA_AUX_SBT_TAPE_2: Data Protection for Oracle: version 5.3.3.0

channel ORA_AUX_SBT_TAPE_1: starting datafile backupset restore
channel ORA_AUX_SBT_TAPE_1: restoring control file
channel ORA_AUX_SBT_TAPE_1: reading from backup piece c-1072633703-20100111-00
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
piece handle=c-1072633703-20100111-00 tag=TAG20100111T030741
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:02
output filename=/ora_exports/aja/cntrl_tspitr_OCP25T_zhfo.f
Finished restore at 12-JAN-10

sql statement: alter database mount clone database

sql statement: alter system archive log current

sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
released channel: ORA_AUX_DISK_1
released channel: ORA_AUX_DISK_2
released channel: ORA_AUX_DISK_3
released channel: ORA_AUX_DISK_4
released channel: ORA_AUX_SBT_TAPE_1
released channel: ORA_AUX_SBT_TAPE_2

contents of Memory Script:
{
# generated tablespace point-in-time recovery script
# set the until clause
set until  scn 7764969819565;
# set an omf destination filename for restore
set newname for clone datafile  1 to new;
# set an omf destination filename for restore
set newname for clone datafile  2 to new;
# set an omf destination filename for restore
set newname for clone datafile  3 to new;
# set an omf destination tempfile
set newname for clone tempfile  1 to new;
# set a destination filename for restore
set newname for datafile  9 to
 "/ora_exports/aja/righta_data_01.dbf";
# rename all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set plus the auxilliary tablespaces
restore clone datafile  1, 2, 3, 9;
switch clone datafile all;
#online the datafiles restored or flipped
sql clone "alter database datafile  1 online";
#online the datafiles restored or flipped
sql clone "alter database datafile  2 online";
#online the datafiles restored or flipped
sql clone "alter database datafile  3 online";
#online the datafiles restored or flipped
sql clone "alter database datafile  9 online";
# make the controlfile point at the restored datafiles, then recover them
recover clone database tablespace  "RIGHTA_DATA", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
# PLUG HERE the creation of a temporary tablespace if export fails due to lack
# of temporary space.
# For example in Unix these two lines would do that:
#sql clone "create tablespace aux_tspitr_tmp
#           datafile ''/tmp/aux_tspitr_tmp.dbf'' size 500K";
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed temporary file 1 to /ora_exports/aja/TSPITR_OCP25T_ZHFO/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 12-JAN-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=33 devtype=DISK


Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of tranport tablespace command at 01/12/2010 10:08:36
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06100: no channel to restore a backup or copy of datafile 9
RMAN-06100: no channel to restore a backup or copy of datafile 3
RMAN-06100: no channel to restore a backup or copy of datafile 2
RMAN-06100: no channel to restore a backup or copy of datafile 1

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 18;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/ora_exports/backup_files/%d_%F.bkp';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO '%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/ora_exports/backup_files/%d_%U' MAXPIECESIZE 2000 M;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT   'backup_%d_%U' PARMS  'ENV=(TDPO_OPTFILE=/database/oracle/ctl/OCP25T.opt)';
CONFIGURE AUXILIARY CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT   'backup_%d_%U' PARMS  'ENV=(TDPO_OPTFILE=/database/oracle/ctl/OCP25T.opt)';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM 'AES128';
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/product/102x/dbs/snapcf_OCP25T.f'; # default



Alexander, January 25, 2010 - 9:49 am UTC

Tom,

I ran the same test on 11gR2; it worked fine. I definitely think there's a bug with this in 10g. Can you get it to work in 10g? Wondering if it's me or not.

Database migration from different Unix flavor,

A reader, October 07, 2010 - 10:37 am UTC

Hello,

We have hundreds of databases on HP-UX (almost all of them on filesystems) and now we need to migrate them to Linux on ASM storage. The endian formats of HP-UX and RHEL are different. The database sizes vary from 50 GB to 10 TB.

What are the options we have? Our priority is "very low" downtime.

The first option I thought of is using RMAN and transportable tablespaces. However, I am not sure how to fill the gap in the datafiles that opens up between the time a datafile is copied over from HP-UX and the time it is plugged into Linux (after conversion). I can't use archivelog files to fill the gap.

The other option I can think of is Data Pump, but this can run for a long time if the database is big.

Thanks,


Tom Kyte
October 11, 2010 - 10:36 am UTC

some of your options:

a) data pump it. This will necessarily incur a "stop modifying the data" whilst you are pumping it from one database to the other (else you wouldn't pick up those changes)

b) 'replicate it'. You could install your schema on the new system (just the schema, not the data yet) and then build materialized views that point to the source tables (you'd probably want materialized view logs on the source database as well - that would be a change to the existing production system). You could then refresh the materialized views one by one - the data would come over from production. Then you would do another refresh of all objects (to get them pretty much in sync) and then STOP working on production - do a last refresh - and now you have everything over in the new system. You can drop the materialized views (see the sketch after this list).

c) 'replicate it part 2'. Set up streams based replication - replicate all data from source to target. Cut off access to source at some point - redirect people to new target shortly after (when the streams queues have all been applied)

d) 'replicate it part 3'. Using Golden Gate to do the replication - more 'tools' at this point - less 'raw, bare metal' than (b) and (c).

e) transport it as you suggested. You would be able to do a point in time recovery of the transport set out of your backups. This will (like data pumping) involve cutting off access to production "as of the point in time" you restore (so as to not lose changes).

The most 'available' options will involve 'replicate it' at some point. Option (c) is less intrusive than (b) and involves software you already have as opposed to (d)
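
To make option (b) concrete, a minimal sketch of the setup for one table (the table name T, the database link SRC_LINK and the exact refresh schedule are placeholders, not a full procedure):

-- on the source (production) database: a materialized view log enables fast refresh
create materialized view log on t;

-- on the new system: a materialized view pulling rows over a database link
create materialized view t
  build immediate
  refresh fast on demand
  as select * from t@src_link;

-- refresh as often as you like; at cutover do one final refresh, then keep the table:
exec dbms_mview.refresh( 'T' )
drop materialized view t preserve table;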

follow up regarding migration process,

A reader, November 08, 2010 - 10:46 am UTC

Hello,

I have a question about the different migration methods. If I use the Data Pump method on a 1 TB database, the export/import in our environment may take 24-36 hours to complete. Practically, it is not possible to stop the production database for that long.

However, changes made to a table after expdp is started will not be seen at the destination database. Here, we are thinking of using Oracle Streams.

The process would be:
1. Start exporting schema A
2. Continue exporting all other user defined schemas
3. Import all the schemas on the destination database (as and when dmp file is available)

Now, to capture changes made on objects of Schema A:

I start capturing the changes on Schema A using Streams. This process will be triggered along with expdp.
After the import on the destination completes, apply the changes. Will this concept work? Where will Oracle store all the changes made by the capture process on the source database? How will Oracle know to apply them in "first in, first out" order?

Thanks,
Tom Kyte
November 08, 2010 - 12:10 pm UTC

what are you migrating from and to?

Follow up,

A reader, November 08, 2010 - 12:58 pm UTC

Hello,

I am migrating from HP-UX to Linux (the two have different endian formats).

Thanks,

Tom Kyte
November 08, 2010 - 1:13 pm UTC

If you wanted to use streams, you would just use streams to set up one way (read only) replication from HP to Linux. No need for data pump, just "replicate" what you want in the first place - it moves the stuff.

If you can have some amount of downtime (a short amount), you could use cross platform tablespace transports - move the data via rman from hp to linux and use datapump just to move code/sequences and the like. That'll be less work overall than setting up and managing streams for a short term project.
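
A minimal sketch of the cross-platform piece of that (the tablespace name, platform strings and paths are placeholders; query V$TRANSPORTABLE_PLATFORM for the exact platform names):

-- on the HP-UX source, after making the tablespace read only:
CONVERT TABLESPACE users
  TO PLATFORM 'Linux x86 64-bit'
  FORMAT '/stage/%U';

-- or copy the files as-is and convert on the Linux side instead:
CONVERT DATAFILE '/stage/users01.dbf'
  FROM PLATFORM 'HP-UX IA (64-bit)'
  DB_FILE_NAME_CONVERT '/stage', '/oradata';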

Follow up,

A reader, November 08, 2010 - 1:26 pm UTC

Hello,

Which would be faster? 1. move the objects using replication (one-way), or 2. move the objects (along with the data) using Data Pump and then set up replication to catch up the gaps? We have quite a number of databases in our environment (hundreds) that need migration from HP to Linux. Our solution needs to be pretty generic across all the applications. The database sizes vary from a few GB to 2-3 TB.

In the 2nd method which you have mentioned, there would still be gaps in data between source and destination. How to fill that up?
Thanks,



Tom Kyte
November 08, 2010 - 1:43 pm UTC

.. In the 2nd method which you have mentioned, there would still be gaps in data between source and destination. How to fill that up?
..

there would not be any gaps, I started with:

"If you can have some amount of downtime (a short amount), you could use cross platform tablespace transports "

No gaps, because during the migration, you are down - that is the downtime.


... Which would be faster? ...

I suggest a benchmark, one of them is much easier to do than the other - you have to take that into consideration as well. Duration is not as relevant perhaps when you are talking about an online operation (your end users don't have downtime)

Probably this:
http://docs.oracle.com/docs/cd/E11882_01/server.112/e10705/instant.htm#i1008748

is the "fastest" to get started, transport the data



Follow up,

A reader, November 08, 2010 - 4:34 pm UTC

Hello,

In the 2nd method (using transportable tablespaces), the time to extract all the datafiles from HP, convert them and put them on Linux is going to be a few hours for most of our databases that are greater than 500 GB. I don't think we can shut down the source until the target is brought up.

However, this option is good if we use Streams. The proposed idea is

1. Put all the tablespace in READ ONLY mode on the source
2. copy all the datafiles to temporary location (say NFS)
2a. Export source objects by setting transportable tablespaces=Y
3. Bring back all the tablespaces to READ-WRITE mode. Until the tablespaces are brought back to READ-WRITE mode, the source DB is unavailable for users. This small downtime is inevitable.
4. Capture the SCN right after all the tablespaces are brought back to READ-WRITE mode
5. On target Linux, start converting datafiles and put them in the storage (ASM in our case) using RMAN
6. Import the dmp file
7. On the source, start Streams capture using the SCN value taken in step #4. Instantiate the target using that SCN. Create propagation as well. Capture processes are defined at the table level.
8. On Target, create the apply process and start apply , propagation and capture (on source)
9. Wait till the data is synced up (not sure how to check when all the tables are synced up)
10. Cut off source and target and make target available for production use.
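
A minimal sketch of steps 1 through 4 above, assuming a single tablespace named USERS and a directory object DPUMP_DIR (both placeholders):

alter tablespace users read only;

-- step 2a, from the OS prompt:
-- expdp system directory=dpump_dir dumpfile=tts.dmp transport_tablespaces=users

alter tablespace users read write;

-- step 4: record this SCN - it is the instantiation SCN for step 7
select dbms_flashback.get_system_change_number from dual;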

Now, I need to figure out how to use Streams to catch up the gap if we use "datapump" method.

Say, if we use datapump at schema level and if we have 10 schemas...

1. Start expdp with parallel options for schemas 1 through 10. For example, start @8am
2. There is no downtime on the source. Users will be accessing the database while expdp is running
3. Say @1pm, the expdp completes and we have 10 different dmp files
4. @1:05pm, we start importing all those dmps into the target
5. Say @6pm, impdp completes
6. We need to fill the gap from 8am to 6pm and continue till the data is synced up between source and target
7. Cut off Streams between source and target and make target available for users

How to achieve #6 using streams?

Thanks,

Tom Kyte
November 10, 2010 - 11:06 am UTC

Now, I need to figure out how to use Streams to catch up the gap if we use "datapump" method.


did you read the link I provided?

"Instantiating Objects in a Tablespace Using Transportable Tablespace or RMAN"

http://docs.oracle.com/docs/cd/E11882_01/server.112/e10705/instant.htm#i1008748


no need for downtime, no need for read only. Read that entire chapter, it covers the steps.
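
For reference, the step in that chapter that ties the transported data to the apply process is setting the instantiation SCN at the destination; a minimal sketch (the schema name HR, the source global name SRC.WORLD and the link SRC_LINK are placeholders):

begin
  dbms_apply_adm.set_schema_instantiation_scn(
    source_schema_name   => 'HR',
    source_database_name => 'SRC.WORLD',
    instantiation_scn    => dbms_flashback.get_system_change_number@src_link );
end;
/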

Now Migration using expdp/impdp utility,

A reader, November 12, 2010 - 10:57 am UTC

Hello,

Thanks for providing a very good support. I appreciate your commitment.

Continuing my thinking on migration with almost zero downtime, I thought of using the expdp and impdp utilities. If my source DB needs to be available for read-write throughout, I cannot expect impdp to import a schema to the latest state of its corresponding schema in the source database. So Streams is what I am thinking of to fill those gaps.

However, in my demonstration, I couldn't see the sync up.

Here is what I did:
On the source database, connect as STRMADMIN user
Create directory
Connect as STRMADMIN and create ANYDATA queue to stage the changes
Create private database link from source to destination as STRMADMIN user
Create PROPAGATION RULES on source database (say src_to_trg)
Stop the Propagation
Prepare the entire source database for Instantiation.
Create and Start the CAPTURE process on source
Archive the current online redo log
On the source, get the SCN using GET_SYSTEM_CHANGE_NUMBER function. Make a note of this number
On source logging in as STRMADMIN user, run the EXPDP utility with the FLASHBACK_SCN value set to the number obtained above
On destination database, create directory to store the “dmp” file
Log in as STRMADMIN on destination and run IMPDP utility to import the schema. Performing the import automatically sets the instantiation SCN on the schema that was imported and all of its database objects
On destination database, connect as STRMADMIN and remove any streams configuration. Make sure to remove “only” on destination database
As STRMADMIN user, create a private database link to source database pointing to STRMADMIN user
On the destination database, create a queue using DBMS_STREAMS_ADM
Make all necessary configurations on the destination database
At the destination database, create and start the APPLY process
On the source database, start the PROPAGATION process
After the data between the source and target is in sync, remove the Streams configuration on both databases.
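
As a point of reference for the queue-creation steps above, a minimal sketch of creating the staging queue (the queue table and queue names are placeholders):

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.capture_qt',
    queue_name  => 'strmadmin.capture_q' );
end;
/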


After I took the SCN number on source, I made some DML which obviously the export wouldn't have captured. How to get those DML changes on the target using Streams?

Thanks,

Tom Kyte
November 12, 2010 - 1:08 pm UTC

did you follow the steps in the documentation - step by step? Don't make me lay your steps next to our documented steps to see if you are missing any - they are already written down.

Follow up,

A reader, November 16, 2010 - 10:22 am UTC

Hello,

Using the expdp and impdp utilities, I was able to capture changes from the SCN I recorded on the source database onwards (this happens before expdp kicks off). As per Oracle's documentation, no DDL changes will be captured by Streams.

With regard to capturing changes on source, I have two options:
1. Start the capture on source right after obtaining SCN value (before expdp begins)
2. Start apply on target and later start capture on source from that particular SCN onwards

I need to understand where the capture process stores all the changes in the first option. Is it in the Streams tablespace?

Regarding the 2nd option, I believe the capture process mines from archivelog files. Should I keep my archivelog files online from that particular SCN onwards on the source?

Another question: I noticed that one of my production databases generates around 4 TB of archivelog files per day. If I go with option 1 (and assuming the changes are stored in the Streams tablespace), could my Streams tablespace grow up to 4 TB?


Thanks,


Tom Kyte
November 17, 2010 - 5:37 am UTC

a) it would be in the streams queue, yes.

b) or larger, or smaller, it depends on how much of the redo data must be captured.

Follow up on Streams capture,

A reader, November 17, 2010 - 10:26 am UTC

Hello,

As part of checking how things work, last night I stopped the apply process on the target and the propagation job on the source. Only the capture on the source was running. As I understand it, all the captured messages should be stored as a queue in a table owned by STRMADMIN.

With just the capture process running, I inserted a few million rows into a table. The table grew from a couple of hundred bytes to 260 MB. Later I started dequeuing by starting propagation (on the source) and apply (on the target).

I was also hoping the queue table (STRMADMIN_HR2_CAPTURE_QT) would grow (to hold all those messages). However, it remained at 65 KB. The number of rows in that table is 0, as it is in the table AQ$_STRMADMIN_HR2_CAPTURE_QT_P.

Am I missing something here?

Thanks,



Tom Kyte
November 17, 2010 - 11:31 am UTC

did everything get propagated correctly?

Follow up,

A reader, November 17, 2010 - 12:26 pm UTC

I think so. I checked data dictionary views like DBA_CAPTURE, V$STREAMS_CAPTURE, DBA_PROPAGATION, V$BUFFERED_QUEUES, V$PROPAGATION_SENDER, V$BUFFERED_SUBSCRIBERS and didn't see any errors or issues. Is there any other way to check that?

The "state" column in v$streams_capture, most of the time it says "capturing changes" sometimes it says "connecting to apply database".

Replication is still goin on. Out of 8 million rows that I inserted in source, I can see 4 million in target. The last set of 4 million rows was inserted in one shot (insert into x select * from x);

The last set of 4 million rows on source datbase was inserted almost 18 hours ago. It is still replicating...

Thanks,

Tom Kyte
November 18, 2010 - 3:09 am UTC

yup, it will replicate those one by one unfortunately - it does not capture the original "insert as select" (it cannot), it only sees the row by row modifications in the redo.

so all is working as expected. Nothing to worry about. It is getting information from the redo logs and queuing it up and forwarding it on.
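
If you want to watch the apply side work through those row-by-row changes, a hedged sketch using the standard monitoring views:

select apply_name, state, total_messages_applied
  from v$streams_apply_server;

select apply_name, total_received, total_applied
  from v$streams_apply_coordinator;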

Follow up,

A reader, November 18, 2010 - 9:23 am UTC

Thanks.

However, you said "It is getting information from the redo logs and queuing it up and forwarding it on". I thought the capture would have already captured the information and put it in its queue table as messages, and would not rely on redo log files (or archive log files) being there for a long time. Do you mean that in those hours while replication is going on, the capture process is still reading from redo/archive log files?

The problem in our case is, we cannot keep archive log files for a long time. On some databases, the amount of archivelog is up to 2 TB per day.

Thanks,

Tom Kyte
November 19, 2010 - 7:34 am UTC

it uses online redo - writes to SGA, spills to disk only if it needs to

http://docs.oracle.com/docs/cd/E11882_01/server.112/e17069/strms_capture.htm#STRMS296
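
A quick way to see how much of the captured stream is sitting in memory versus spilled to disk (a sketch; the counts come from v$buffered_queues):

select queue_schema, queue_name, num_msgs, spill_msgs
  from v$buffered_queues;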

Follow up,

A reader, November 19, 2010 - 8:32 am UTC

Thanks. I think I understand now. In my case, the streams_pool_size was 100M and I guess all the LCRs were in memory and didn't spill anything to disk. Perhaps that is why my queue table didn't grow.

In our environment, where we are migrating more than 1000 databases, I am writing a generic script that uses Streams to catch up the data after expdp/impdp does its initial job.

I will start the capture process right after SCN on source is recorded (just before expdp kicks off) and allow capture to run. Here I will make sure to have enough space for my queue table to store spilled messages.

After expdp/impdp completes, I will start propagation (on the source) and apply (on the target) and let replication begin. If it takes days to replicate, let it take days.

Now my focus is on understanding any tips and tricks to speed up the whole replication process.

Thanks,


I am back with a question on Capture process and redo logs,

A reader, December 08, 2010 - 9:10 am UTC

Hello,

In my continued work on data migration using expdp and Streams (Streams is used to catch up the data modified between the times the export and the import run), I have found that the capture process requires the archive log files all the way until the apply process on the destination completes its job.

The way I did it (at a high level): created capture and enabled it - created propagation (on the source) and apply (on the destination) and disabled them - started expdp using flashback_scn, and later impdp was used to import on the destination - did quite an amount of DML on the schema that was exported (about 1.5 GB of redo was generated by those activities) - started apply and propagation - replication applied all the changes.

Here I was expecting all the changes that happened during the export/import window to be captured and stored as messages. Apparently it didn't happen that way. The queue table was empty. On the source, I deliberately moved the archivelog files away from their location. The capture process expected the log files to be visible while replication was going on.

Later I changed checkpoint_retention to 0.1 (which is 2.4 hours, I guess) and repeated the same operation, and the capture process still expected the archivelog files to be in their location.

Could you please let me know why is this behavior?

Thanks,

Tom Kyte
December 08, 2010 - 10:44 am UTC

it is that behavior because that is the way streams works, it wants to use in memory queues as much as possible and if it doesn't have to load it - it won't.

http://docs.oracle.com/docs/cd/E11882_01/server.112/e17069/strms_prop.htm#sthref151


follow up,

A reader, December 08, 2010 - 12:06 pm UTC

In that case, do you think keeping archivelog files on the source database is unavoidable until the apply process on the destination has applied all the changes?

In my demonstration, I disabled propagation along with apply (while the capture process was enabled) and re-enabled them when I wanted to start the replication. Is it OK to enable propagation along with capture (while keeping apply disabled) while my DML is happening? Would that approach remove the need to keep archivelog files around?

It is going to be a big deal for us to arrange space to store archivelog files until the export/import completes. There are hundreds of databases to be migrated, and each database can potentially generate 10-20 GB of redo per hour.

Thanks,

Tom Kyte
December 08, 2010 - 12:25 pm UTC

but your queues will get HUGE too - you won't be saving much, if anything at all - it might even be larger. You'll be taking binary compact data and turning it into verbose logical change records. Have you considered that?

follow up,

A reader, December 08, 2010 - 12:47 pm UTC

The queues will be big anyway due to the amount of DML activity. We don't save anything in either approach; however, it will be easier to store the changes in a queue table (rather than depending on archivelog files). The queue table will be in a tablespace that can be expanded, since there is always buffer for datafiles to grow. The archivelog destination is limited in size; our backup jobs clean it up pretty often, leaving room for new files. It is not easy for us to expand its location.

Would starting propagation along with capture (at the initial stage) remove the requirement to have archivelog files around when the apply process begins?

I have not heard of "binary compact data" before. Could you please point me to any documentation or case study about that?

Thanks,


Tom Kyte
December 08, 2010 - 12:54 pm UTC

but you made this all about SPACE above - if you don't have room for the archives, you don't have room for the queues.

Also, you won't be impacting the production system performance wise if you let it be. Add reading the archives, stuffing them into persistent queues instead of just reading the archives into memory when the other box is ready and you have a really big impact.

... It is not easy
for us to expand its location.
...

in the year 2010, really?

redo logs are binary data streams, they are simple change vectors, they do not have all of the metadata a logical change record will.

follow up,

A reader, December 08, 2010 - 2:03 pm UTC

" Add reading the archives, stuffing them into persistent queues instead of just reading the archives into memory when the other box is ready and you have a really big impact" ... actually that is the problem. The target database wouldn't be ready to receive the changes.

say at 10:00am the capture process begins.
10:01am export of schema/table begins
02:01pm export completes and is successfully imported into the target database
2:02pm apply process begins

between 10:01am and 2:01pm, there could be numerous transactions on the tables. In that window, if there is not enough room to store the archivelog files, then we are in the soup.

I read about persistent queues, which can store queue messages in tables. Apparently they don't support LOB datatypes.

If I use buffered queues, the dependency on archivelog files is always there. Period. Am I right?



Tom Kyte
December 08, 2010 - 2:25 pm UTC

... In that window, if there is not enough room to store the archivelog
files, then we are in the soup ...

you are already in the soup because you're telling me you won't have the storage to save the queues either - since they will likely be larger.


buffered queues need the archives, yes.

follow up,

A reader, December 08, 2010 - 2:51 pm UTC

Well, I can find some space for a tablespace, since there is always room for datafiles to grow.

Another question (little different although):
This is regarding OLTP kind of transactions and bulk loads.

In my experiments, I have found that the same number of records (generating almost the same amount of redo) shows a drastic difference in the time taken to replicate, depending on whether I use OLTP-style transactions or bulk loads.

I simulated OLTP by inserting 60-70 records per second for almost 12 hours (generating 1.5 GB of redo). After the 12th hour, I enabled apply and propagation, and the entire change set was replicated on the target in less than 45 minutes.

In the 2nd experiment, I inserted the same amount of data in 3 different INSERT..SELECT statements. This generated almost 1.5 GB of redo as well. After those 3 inserts, I enabled apply and propagation. This time the replication took almost 7 hours to apply all the changes.

Are bulk loads a weak spot for Streams, or are there settings that make them replicate faster?

Thanks,

Tom Kyte
December 08, 2010 - 3:22 pm UTC

I would be surprised at such a large difference. The big bulk statements will be turned into row by row operations - but that shouldn't matter, since the OLTP stuff was already row by row.

Is it reproducible (you can do it on a smaller scale) - it shouldn't be.

follow up,

A reader, December 09, 2010 - 1:42 pm UTC

I did a test to demonstrate the time taken for Streams to replicate OLTP kind of transactions and bulk load transactions.

First, OLTP

I used the script below and executed it on the source database. Before executing it, the apply process on the target and the propagation process on the source were disabled. Only the capture process was enabled.

create table d as select sysdate ds from dual;

-- x1, x2 and x3 are pre-existing test tables
select count(*) from x1;
select count(*) from x2;
select count(*) from x3;

begin
  for i in 1..36000 loop
    insert into x1 select * from x1 where rownum < 23;
    insert into x2 select * from x2 where rownum < 23;
    insert into x3 select * from x3 where rownum < 33;
    commit;
    dbms_lock.sleep(1);
  end loop;
end;
/
create table f as select sysdate ds from dual;

After the 10th hour, I enabled both Apply and Propagation processes.

I have two CREATE TABLE statements in the above script. These two tables will be replicated as well. The "created" column in dba_objects (on the target) can tell how long the replication took to apply all the changes that happened in that 10-hour window.

OBJECT_NAME                         CREATED
----------------------------------- --------------------
D                                   09-dec-2010 15:15:36
F                                   09-dec-2010 15:29:38

Based on the above timestamps, Streams replicated all the changes in under 15 minutes.

Next, bulk load testing. Before I started the test, I disabled the apply and propagation processes. I used the following script, which is similar to the one above.

create table g as select sysdate dd from dual;
select count(*) from x3;
insert into x3 select * from x3;
commit;
create table g1 as select sysdate dd from dual;
insert into x3 select * from x3;
commit;
create table g2 as select sysdate dd from dual;
alter system switch logfile;
select count(*) from x3;

The above script took just a couple of minutes to complete on the source.

I continued to use the same logic of creating tables to capture the time taken by the replication process.

The number of records in X3 was 1.3 million before I started.

I enabled the apply and propagation processes. For the first INSERT statement, replication took 2 hrs 15 minutes to replicate all those 1.3 million rows. The records from the 2nd insert still haven't been applied yet.

OBJECT_NAME                         CREATED
----------------------------------- --------------------
G                                   09-dec-2010 15:51:05
G1                                  09-dec-2010 18:07:38


The current time on my system is:

SQL> !date
Thu Dec  9 19:39:23 UTC 2010

Thanks,
 

Follow up,

A reader, December 10, 2010 - 10:04 am UTC

Adding to my previous input: the other table, G2, got created almost 7 hours after G1. That means the replication process took 7 hours to replicate the 2.6 million rows from the final INSERT statement.


OBJECT_NAME                         CREATED
----------------------------------- --------------------
G                                   09-dec-2010 15:51:05
G1                                  09-dec-2010 18:07:38
G2                                  10-dec-2010 01:15:36

Thanks,
-Shiva

Follow up,

A reader, January 05, 2011 - 11:00 am UTC

Hello,

Do you have any update for my previous post about the time taken by Streams to replicate the bulk transactions?

I appreciate your help.

Thanks,

A reader, June 20, 2011 - 6:57 am UTC

I have to migrate my database from 32-bit to 64-bit.
Which is better: transportable tablespaces [expdp/impdp] or RMAN?

Since I am OK with a cold backup, should using expdp/impdp be fine?
What other benefits do I get with RMAN?
Tom Kyte
June 20, 2011 - 1:37 pm UTC

you don't need to rebuild to do that. Just use support, there are many many notes on the process. You just take your existing database and make it be 64bit.
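
As a hedged outline only (the exact steps are in the support notes Tom mentions and vary by version), the in-place word-size change amounts to starting the database under the 64-bit binaries and recompiling PL/SQL for the new word size with the standard scripts:

SQL> startup upgrade
SQL> @?/rdbms/admin/utlirp.sql   -- invalidates PL/SQL for the new word size
SQL> shutdown immediate
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql    -- recompiles everything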

A reader, June 20, 2011 - 11:07 pm UTC

With my current scenario, transportable tablespaces and RMAN are the best options. I am, however, confused about which of the two to go for.

With transportable tablespaces, if I keep a backup of the dbf files and the metadata file, will that be enough to protect me in case of any kind of failure?
Kindly guide.
Tom Kyte
June 21, 2011 - 7:41 am UTC

no they are not, not unless you tell me what your current scenario is and why an in place transformation won't work.

transporting will just move the data, you'll need to rebuild everything else - hard.

RMAN - what would that do? You'd either restore from it - in which case, you still have the in place transformation to deal with - or you'd create transport sets from it - which is back into "hard"

So, tell us why doing it the right way isn't something you want to do.

A reader, June 22, 2011 - 4:33 am UTC

Can you please explain "rebuilding" - does it mean validating database objects?

My current scenario is to move the Oracle database/data to another hardware server. The only possibility of failure I see is a dbf file somehow getting corrupted during file transfer.

Will transportable tablespaces satisfy my requirement?
Tom Kyte
June 22, 2011 - 10:42 am UTC

are you referring to:

transporting will just move the data, you'll need to rebuild everything else - hard.


If you transport, you are transporting data. Not stored procedures, not views, etc. Just the data. You'll have to rebuild everything else about your database - users, code, synonyms, etc.



Are you staying with the same OS (windows to windows). New hardware - no big deal, we can go 32 to 64 bit easily - without doing any transports or dump and reloads.



remap_tablespace across different users

A reader, August 13, 2012 - 2:54 pm UTC

Hi Tom - We are working on a database that we are going to move to another database using transportable tablespaces.

This is the scenario

Database 1
-----------

user1 -> Has objects on tbsp1
user2 -> Has objects on tbsp1 and tbsp2

Database 2
-----------

userdb1, default tablespace tbspdb1
userdb2, default tablespace tbspdb2

As you can see, in Database 1 tbsp1 is shared by both user1 and user2, so if I use remap_schema in my impdp, some of userdb2's objects in Database 2 are going to end up in tbspdb1. How can I avoid that and make sure that in Database 2, userdb1 has all of its objects in tbspdb1 and userdb2 has all of its objects in tbspdb2?

I did a test and this is what my impdp parameter file looks like. But this will not give me what I want as described above.

directory=testdir
dumpfile=user2.dmp
remap_schema=user1:userdb1
remap_schema=user2:userdb2
remap_tablespace=tbsp1:tbspdb1
remap_tablespace=tbsp2:tbspdb2
transport_datafiles=

Thanks.

Tom Kyte
August 17, 2012 - 1:56 pm UTC

you cannot merge tablespaces with transporting, you have to unload and reload.

so, just use normal datapump export/import and remap_tablespaces. transporting cannot be used here.
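
One hedged way to get the mapping asked for above with ordinary Data Pump is one import per schema, each with its own tablespace remap (the dump file name and the two-pass approach are assumptions, not a tested recipe):

-- import 1: everything user1 owns lands in tbspdb1
impdp ... directory=testdir dumpfile=full.dmp schemas=user1
      remap_schema=user1:userdb1 remap_tablespace=tbsp1:tbspdb1

-- import 2: everything user2 owns lands in tbspdb2
impdp ... directory=testdir dumpfile=full.dmp schemas=user2
      remap_schema=user2:userdb2
      remap_tablespace=tbsp1:tbspdb2 remap_tablespace=tbsp2:tbspdb2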

Tablespace Size issue during import !!

Ankit, October 19, 2012 - 3:29 am UTC

Hi Tom
Thanks a lot for the help you provide.

While doing an import (impdp), I sometimes face the issue of a tablespace filling up in the target database. Then I have the size of those tablespaces increased before importing again.
Is there any way to know beforehand (like the estimate we get for export) whether any tablespace will fill up in the target database during the import, so that we can have the sizes increased up front?

Thanks.
Tom Kyte
October 23, 2012 - 11:15 am UTC

why not set them to autoextend in the first place? so they can grow as needed to a reasonable size during the import??

you can only get an estimate using a database link with impdp - the assumption is if you have the files, you sort of know how big it is already.
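
A minimal sketch of that (the file name, increment and cap are placeholders):

alter database datafile '/u01/oradata/app_data01.dbf'
  autoextend on next 100m maxsize 10g;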

