A doubt against transport tablespace

Tom Kyte

Thanks for the question, Pauline.

Asked: February 16, 2001 - 9:49 pm UTC

Last updated: March 01, 2011 - 4:15 pm UTC

Version: 8.1.5


You Asked

Hi Tom,
I read some documentation about transportable tablespaces and practiced the whole procedure between two systems successfully.
But I have some doubts about its advantages. Transporting tablespaces is said to be useful in the following cases:
* Transporting fact tables from an OLTP system into a data warehouse.
* Performing tablespace point-in-time recovery.
......
Transporting a tablespace requires altering it to read only first. In a 24x7 OLTP database we cannot turn the tablespaces read only, so how can we transport tablespaces to another server? It seems that altering the tablespace to read only is the disadvantage here.

For performing tablespace point-in-time recovery, we can copy the hot backup files directly from system1 to system2 and recreate the control file to recover. That way, system1 is never bothered with altering a tablespace to read only. Why would we need transportable tablespaces? In my opinion, for an OLTP database, altering a tablespace to read only and then exporting that tablespace is the big disadvantage.

Maybe I don't fully understand transportable tablespaces. Please give me your thoughts.
Thanks.

Pauline

and Tom said...

Well, first -- OLTP does not imply 24x7. Many OLTP systems are "9-5".

Even if it is 24x7 -- is there a small window in which this can be done? Probably. If you store the data in a small enough grouping (eg: put the table you want to transport frequently into its own tablespace, and its indexes either into that same tablespace or their own, with no other data), the "read only" time to transport that single table is purely a function of the time it takes to copy the datafiles (the exp and the copy should be done simultaneously).
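A minimal sketch of that transport window in SQL*Plus -- the tablespace name ORDERS_TS, the file paths, and the exp connect string are all illustrative assumptions, and exact exp syntax varies by version:

```sql
-- Source database: the "read only" window lasts only as long as
-- the metadata export plus the datafile copy.
ALTER TABLESPACE orders_ts READ ONLY;

-- From the OS (shown via SQL*Plus HOST): export only the tablespace
-- metadata -- this runs in seconds, it does not dump the rows.
HOST exp "'sys/change_me as sysdba'" transport_tablespace=y tablespaces=orders_ts file=orders_ts.dmp

-- Copy the datafile(s) while the tablespace is still read only.
HOST cp /u01/oradata/orders_ts01.dbf /stage/

-- Back to read write the moment the copy completes.
ALTER TABLESPACE orders_ts READ WRITE;
```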

Further, if you are "really serious" -- you might use a mirrored database (hardware), perhaps triple mirrored. You would use "alter database suspend IO", break the mirrors (so you have a double mirrored system + 1 extra image), then "alter database resume IO". That takes a second (so for a second, things "pause"). You now have your database back (with mirrored disks) and you have what looks like a "crashed" instance off to the side (the 3rd disk in the mirror). You start up another instance on the "crashed" copy, alter the tablespace read only, exp, copy the files, and then ask your hardware to resilver the mirrors (adding the 3rd disk back in).
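For reference, the statement behind that pause is ALTER SYSTEM SUSPEND/RESUME (the 8i feature Tom mentions below); the mirror split itself happens at the storage layer, shown here only as comments:

```sql
-- Halt all database I/O so the mirror split happens at a consistent point.
ALTER SYSTEM SUSPEND;

-- (Not SQL: break the third mirror here with your storage array's tools --
--  this is the step that takes "a second".)

-- Let normal I/O continue on the live database.
ALTER SYSTEM RESUME;
```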

24x7 OLTP is only one of many configurations people run Oracle in. Consider the developer, and what a bonus this feature is. They need a baseline upon which to test. In the "old days", they would get a database to the point in time they needed it -- take a full cold backup and then test. When done testing, full database restore and do it over. Now they can export just their data (copy only their tables/indexes) -- test, drop and reattach, test, drop and reattach, and so on, over and over, without bringing the database down and without doing a full restore.
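The drop-and-reattach cycle is just a plug-in import repeated; a sketch with illustrative names (the TEST_DATA tablespace, file paths, and connect string are assumptions):

```sql
-- Throw away the current test copy (the OS files can simply be overwritten).
DROP TABLESPACE test_data INCLUDING CONTENTS;

-- Restore the saved baseline datafile, then plug the tablespace back in
-- using the metadata dump taken when the baseline was exported.
HOST cp /stage/test_data01.dbf /u01/oradata/test_data01.dbf
HOST imp "'sys/change_me as sysdba'" transport_tablespace=y file=test_data.dmp datafiles=/u01/oradata/test_data01.dbf
```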

Consider the web database with a catalog. The catalog is OUTSIDE the firewall for reading. The catalog is developed inside the firewall via some application. Every week, you need to update the catalog. Rather than dump and load -- you transport the data.

Consider the OLTP system where orders are collected into a table outside your firewall. Every night they need to come INSIDE your firewall. So, at some point you:

create table new_batch_orders ( .... );
create index new_batch_orders_idx on new_batch_orders ( .... );
-- add constraints as needed.....

create or replace view batch_orders
as
select * from new_batch_orders;

-- transport old_batch_orders (it can be read only, as no one is using it anymore)


Not every technology is 100% applicable to every case, but every technology has its use. It took me a while to understand why, for example, Oracle8i added the "alter database suspend/resume IO" command. How useless, I thought -- I never thought of the triple mirror situation. People with really large databases use it to create a mirror for reporting/backup/etc and then resilver the disks.

You won't use transportable tablespaces for everything, but when you do, you'll appreciate that they are there.



Comments

Please give your opinion on this.

A reader, April 16, 2002 - 11:31 am UTC

Tom, this answer provided valuable ideas for the use of transportable tablespaces in real-life situations. Can I get your opinion on whether we can use it in the following way:

We have a data warehouse loaded weekly from an ODS. The ODS database is updated 2-3 times daily. Every week we take a dump of the new data from the ODS to ASCII files (using PL/SQL), ftp them, and load them into the warehouse (using SQL*Loader). We are now getting close to the SLA load window, so we are thinking maybe we can use transportable tablespaces to move the data over to the warehouse. This would need some re-design, but keeping future growth and demands in mind, that is probably the way to go.

The fact tables are currently partitioned by month. Can we move the weekly data over and attach it as a sub-partition of the fact table? That is, can we move and attach the weekly data without re-designing the partitioning? If there has to be any partitioning re-design, I'd prefer it to be on the ODS rather than the warehouse. Can I have your valuable ideas please.

Tom Kyte
April 16, 2002 - 12:23 pm UTC

You would have to change the partitioning scheme to be weekly, not monthly, if you just want to slide the data in.
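If the re-design goes to weekly range partitions, the transported week can then be slid in with a partition exchange; a sketch with illustrative table names and dates:

```sql
-- Fact table partitioned by week, so one transported table maps to
-- exactly one partition.
CREATE TABLE sales_fact (
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION week_15 VALUES LESS THAN (TO_DATE('2002-04-15','YYYY-MM-DD')),
  PARTITION week_16 VALUES LESS THAN (TO_DATE('2002-04-22','YYYY-MM-DD'))
);

-- After plugging in the tablespace holding the staging table SALES_WEEK,
-- swap it into place -- a data dictionary update, no rows move.
ALTER TABLE sales_fact EXCHANGE PARTITION week_16 WITH TABLE sales_week;
```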

Will transportable tablespaces be useful in cross-platform migration?

Subodh Deshpande, July 27, 2004 - 1:17 am UTC

Hi Tom,

Migration from 8.1.7 to 9iR2 is the issue.

The 8.1.7 database is on HP-UX and is a few hundred GB in size. The 9iR2 database will be on Sun Solaris. A normal import/export is not feasible -- it would take more than 40 hours. How should we proceed with this cross-platform migration?

thanks in advance.. take care.. subodh

Tom Kyte
July 27, 2004 - 7:14 am UTC

10g is the first release to support cross platform transportable tablespaces.

If you were to migrate to 10g we could do that.


you can parallelize your export/imports (do schema level exp/imps for example).


with a bit of planning you can dump and reload a terabyte in a lot less than 40 hours. One would hope you have the schema -- so you could create the tables, dump and reload just the data, enable constraints without having to validate them, create indexes in parallel, etc.
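The constraint and index side of that reload can be sketched in SQL (table, index, and constraint names are illustrative):

```sql
-- Rebuild indexes in parallel after the data is loaded, skipping redo.
CREATE INDEX sales_cust_idx ON sales (cust_id) PARALLEL 8 NOLOGGING;

-- Re-enable an existing constraint without re-checking every loaded row.
ALTER TABLE sales ENABLE NOVALIDATE CONSTRAINT sales_cust_fk;
```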

Tablespace Read-only mode in 9i and 10g.

Zoltán Réti, July 28, 2004 - 11:22 am UTC

I found that I could not put a tablespace into read-only mode while an active transaction was running anywhere in the database, even in another tablespace. I filed a bug and Oracle rejected it. I then filed an enhancement request against this serious problem, and Oracle accepted it.

Oracle promised to fix this issue in 10g.



TS

Rob, September 04, 2004 - 12:17 pm UTC

Can I transport a tablespace from 8.1.7.3 to 9.2.0.4 and vice versa for example? Thanks.

Tom Kyte
September 04, 2004 - 1:06 pm UTC

From X to Y where X <= Y, yes.

The other way around -- well, no. The 9i file would be full of stuff that 8i would say "huh?" to.

transporting tablespace from 8i to 9i gives national character conversion error

sunny, January 06, 2005 - 6:22 pm UTC

Hi,
I encountered an error when moving tablespaces from Oracle 8i to 9i.

Please help -- exp/imp is not feasible because a very large amount of data (about 300GB) is involved.

Connected to: Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production

Export file created by EXPORT:V08.01.07 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and UTF8 NCHAR character set
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 19736:
"BEGIN sys.dbms_plugts.beginImport ('8.1.7.0.0',31,'31',NULL,'NULL',3870,2"
"001,1); END;"
IMP-00003: ORACLE error 19736 encountered
ORA-19736: can not plug a tablespace into a database using a different national character set
ORA-06512: at "SYS.DBMS_PLUGTS", line 1642
ORA-06512: at "SYS.DBMS_PLUGTS", line 1480
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully


Please guide me on how to resolve this issue.
I suppose a number of people will be facing this, because we have only 2 national character set options in 9i.

Thanks in advance.

Regards
sunny


Tom Kyte
January 06, 2005 - 7:29 pm UTC

They have to have the same character set attributes to transport -- that is one of the very basic requirements of TTS. Without that being met, you cannot do it.
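Both character set attributes can be checked up front on each database:

```sql
-- Compare these two values between the source and the target database;
-- both must match for the tablespace to plug in.
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```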

Jamesk

James Krumrei, January 21, 2005 - 3:16 pm UTC

Hi Tom,

thanks for the examples of scenarios where transportable tablespaces are useful.

One of the requirements for transportable tablespaces is that the target instance of oracle does not already have a tablespace of the same name.

Is there a way to get around this requirement in order to transport a tablespace back to the same oracle instance it came from under a different name? i.e. can the name be changed during the transport?

I am trying to copy a tablespace with production data to a tablespace of a different name which is used for development. Am I going in the wrong direction by trying to use transportable tablespaces?

The exact scenario is this:
Oracle 8.1.7.4
A tablespace, productionT, with production tables.
A tablespace, productionI, with indexes for the production tables.
A tablespace, testT, with test tables.
A tablespace, testI, with indexes for the test tables.
The goal is to find the fastest way to copy all of the records in the production tables to the test tables.
The production and test tablespaces contain the same tables and indexes.
Total data size < 40GB for the tables and indexes combined.

Tom Kyte
January 21, 2005 - 7:59 pm UTC

(and in 10g you have the ability to rename tablespaces....)


the name cannot be changed during the transport, no.
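The 10g rename Tom parenthetically mentions is a single statement (the tablespace names here are illustrative):

```sql
-- 10g and later: rename a tablespace in place
-- (the underlying datafile names are unchanged).
ALTER TABLESPACE old_data RENAME TO archived_data;
```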

<quote>
Am I going in the wrong direction
by trying to use transportable tablespaces?
</quote>

no, you are going the wrong direction by even thinking about having test and production in the same database!



Transport tablespace

A reader, December 02, 2008 - 3:49 pm UTC

Hi Tom,

We are trying to move a subset of data from one database to another database.

The source database is configured as Data Guard and this has a physical standby. The data which needs to be moved is about 1.5 billion rows contained in a tablespace.

I was thinking of many options:

1. sqlplus copy command 
2. db link
3. Spool the contents to text file and then using sqlldr
4. Normal exp/imp  
5. transport tablespace feature.

The first 4 options are very time/CPU-consuming on the primary database, and the application layer may face latency issues while the rows are transferred from the source database to the target database.

The team is also not inclined to take any downtime, so I was thinking: maybe once you put the STANDBY database into READ ONLY mode, you could take the transportable tablespace metadata dump and scp the datafiles to the target database.

Will that option work?

Do you have any other alternatives to move around 400GB of data from the source to the target database, with the source and target database names being different?

Tom Kyte
December 09, 2008 - 9:28 am UTC

what version? In 10g, transports can (and should) be made from backups. No 'read only' requirement.

Also, in 9i and before, just do a tablespace point in time recovery (restore system+undo+tablespace you want). Then transport that set of files (eg: make them read only in this 'mini instance' and you are done).

You do not need any downtime regardless for transporting, no matter the version. Use your backups.
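The backup-based transport Tom refers to is RMAN's TRANSPORT TABLESPACE command in 10g, which materializes the transportable set from existing backups in a temporary auxiliary instance, so the live tablespace never goes read only (the tablespace name and directories are illustrative):

```sql
-- Run inside RMAN (10g+), connected to the target database:
TRANSPORT TABLESPACE orders_ts
  TABLESPACE DESTINATION '/stage/tts'
  AUXILIARY DESTINATION '/stage/aux';
```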

Transport tablespace from two databases ...

Ghulam Rasool Butt, March 01, 2011 - 4:23 am UTC

Dear TOM,
The size of the SYSTEM tablespace has reached 150GB because of the application design. It creates and drops temporary tables. Oracle support says this is expected behaviour.

We need to recreate the production database with minimum downtime. We are thinking of using the TTS option. The problem is that we cannot afford a big downtime: because of the SYSTEM tablespace corruption, exporting the metadata takes 6 days (tested on a copy of prod).

The question is: can we take the export from a copy of production (the test db) and the datafiles from the production db (a downtime of 3 to 4 hours is affordable), and use them in a fresh database?
Tom Kyte
March 01, 2011 - 12:19 pm UTC

... It creates and drops temporary tables. ...

that is not a design, that is a design mistake.


... The question is Can we take the export from copy of production (test db) and
datafiles from production db (downtime for 3 to 4 hours) can be affordable?
and use them in fresh database. ...

no, not at all. The export records the segments and their mapping to extents and everything; the export has to take place using the data dictionary that actually manages the data.

you could use golden gate or streams to accomplish this with little to no downtime at all. Just replicate the database to another database - when replicated - stop work on production, do a final synchronization and there you go.

Alexander, March 01, 2011 - 1:28 pm UTC

How would creating and dropping tables blow up SYSTEM like that? Wouldn't the space be reused if they were subsequently dropped?
Tom Kyte
March 01, 2011 - 1:57 pm UTC

I didn't read too much into that.

One way to read that is "they created their temporary tables in system".


Alexander, March 01, 2011 - 2:24 pm UTC

I didn't even think you could do that. I thought Oracle created them in the temp tablespace implicitly but then again I never use temp tables because who needs em ;)
Tom Kyte
March 01, 2011 - 4:15 pm UTC

their temporary tables are temporary in name only; they are not true temporary tables -- they create and drop them. They are permanent tables used temporarily.

create+drop = entirely wrong approach.
