

Question and Answer

Tom Kyte

Thanks for the question, Craig.

Asked: January 08, 2009 - 2:57 pm UTC

Last updated: July 18, 2012 - 3:22 pm UTC

Version: 10.2.0.3

Viewed 10K+ times

You Asked

Hi Tom, I have been tasked with migrating a number of databases (10.2.0.3) from Solaris to Linux. None of the databases is over 10 GB, but they do have a large number of schemas. What is the best (recommended) way to migrate them?

and Tom said...

for these small ones, probably data pump

http://docs.oracle.com/cd/B19306_01/server.102/b14220/mgmt_db.htm#sthref2192

if it was really large, a mixture of data pump to get the code over and cross platform transports
http://docs.oracle.com/cd/B19306_01/server.102/b14220/mgmt_db.htm#sthref2192

to get the large bits of data over.
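
For small databases like these, the flow is essentially a full export, a file copy, and a full import. A minimal sketch, assuming a DIRECTORY object named DPUMP_DIR already exists on both servers; credentials, host and file names are illustrative:

# On the Solaris source: full export with Data Pump
expdp system/password FULL=y DIRECTORY=DPUMP_DIR DUMPFILE=mig_full.dmp LOGFILE=mig_exp.log

# Move the dump file to the Linux target with any binary-safe copy
scp /u01/dpump/mig_full.dmp linuxtarget:/u01/dpump/

# On the Linux target: full import into a pre-created empty database
impdp system/password FULL=y DIRECTORY=DPUMP_DIR DUMPFILE=mig_full.dmp LOGFILE=mig_imp.log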


Comments

Help on a migration approach

Rohit, January 07, 2011 - 2:37 am UTC

Hey Tom,

I need to migrate my 9i database on Solaris SPARC to 11g on the Solaris x86 platform. Also, as part of the migration, the character set needs to be changed from WE8ISO8859P1 to UTF8. Plus, we cannot afford any downtime.

1. I think the best approach to meet the downtime requirement is Data Guard, Streams or Golden Gate, right?
2. Please suggest a good approach to perform such a migration, because we need to resolve endian incompatibility, change the character set and avoid downtime as part of this task.

Any pointers will be really helpful, sir.

Thanks
Rohit
Tom Kyte
January 07, 2011 - 9:14 am UTC

1) data guard would not apply - you cannot do data guard in a heterogeneous environment.

streams would apply - golden gate would apply.

2) you will be rebuilding the database from scratch - which can be accomplished with replication (streams/golden gate). That will resolve the endian issue. Replication will also shrink the downtime to as small a window as you want it to be.


I would say that streams is your likely candidate as you already have it, unless you already have golden gate as well - it is an extra cost option to the database.
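
For reference, the endian gap that replication is bridging here can be confirmed straight from the dictionary; a quick sketch against any 10g or later database (credentials illustrative):

sqlplus -s system/password <<'EOF'
-- Endian format of the source and target platforms (view exists in 10g and up)
SELECT platform_name, endian_format
  FROM v$transportable_platform
 WHERE platform_name LIKE 'Solaris%'
    OR platform_name LIKE 'Linux%'
 ORDER BY platform_name;
EOF

Solaris SPARC reports Big endian and Linux x86 reports Little, which is exactly why the datafiles cannot simply be copied across.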



An extension of the previous question on database migration

Rohit, January 07, 2011 - 10:42 am UTC

Thank you Tom

To use Streams, I need to create a copy of the source database. I can do it either using RMAN's duplicate or exp/imp (can't use Data Pump because the source is 9i).

1. It is cross-platform, so I think exp/imp is a viable option here, right?

2. And if we use exp/imp, I don't need any extra step (like RMAN's convert or csalter) to resolve endian compatibility and to change the character set, right? I mean, will the exp/imp process take care of these two things implicitly?

3. One more question, sir - if my source database contains columns with XML data, is exp/imp along with Streams still a good option? I mean, how do I take care of such columns? Is using SecureFiles in 11g a good option?
Tom Kyte
January 07, 2011 - 1:15 pm UTC

streams/golden gate itself can instantiate the tables.

exp/imp will be slow. very very slow. If your database is of any size, it will be slow.

3) xml type columns are going to be problematic. It wasn't until 11g that replication support was added for them, and even then it was for CLOB storage only.

You'll have to review the limitations of each replication technique and see what objects you have that will cause issues.
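
Part of that review can be automated on the 10g/11g side: the DBA_STREAMS_UNSUPPORTED dictionary view (not available in 9i) lists the tables Streams cannot capture and the reason. A sketch, with illustrative credentials:

sqlplus -s system/password <<'EOF'
-- Tables Streams cannot capture, and why (view exists in 10g and up)
SELECT owner, table_name, reason
  FROM dba_streams_unsupported
 WHERE owner NOT IN ('SYS', 'SYSTEM')
 ORDER BY owner, table_name;
EOF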

Rohit, January 09, 2011 - 3:26 am UTC

You mean if I am using Streams for this scenario, I do not need a pre-created copy of the database, and instantiating the tables will also create them at the destination?
As per the note below, it seems we need to have a copy of the object at the destination side before instantiating it -

http://download.oracle.com/docs/cd/B14117_01/server.101/b10728/instant.htm

I thought we needed to use exp/imp (or RMAN if the platforms are the same) to create the database copy first and then use Streams to sync source and destination.

==> Also, if I create a blank destination database (on the new platform) with the desired character set and then use exp/imp to create a copy of the source database, will I need to do anything (any extra step) to resolve endian compatibility and move to a different character set?
Tom Kyte
January 10, 2011 - 8:00 am UTC

doh, you are correct, I was thinking advanced replication/materialized views. Sorry about that.



... Also, if I create a blank destination database (on the new platform) with the desired character set and then use exp/imp to create a copy of the source database, will I need to do anything (any extra step) to resolve endian compatibility and move to a different character set? ...

http://docs.oracle.com/docs/cd/E11882_01/server.112/e16536/original_export.htm#SUTIL2773

Thanks Tom

Rohit, January 10, 2011 - 11:38 pm UTC

Thank you, sir.
Just curious, when will you start taking new questions? :)
Tom Kyte
January 11, 2011 - 6:00 am UTC

depends on my schedule. totally depends on my schedule.

Doubts on char set conversion as part of this migration

Rohit, January 11, 2011 - 7:12 am UTC

Hey Tom,

On the character set conversion thing, let me be more specific. The existing 9i database is in WE8ISO8859P1 and we plan to move to 11g on the UTF8 character set. I read the note you suggested, but probably my basics/understanding of character sets and national character sets is not up to the mark (so please bear with my questions).

1. Is UTF8 a superset of WE8ISO8859P1?
2. Does this conversion sound logical? I mean in terms of long-term solution, data integrity, etc.?
3. UTF8 is multibyte and WE8ISO8859P1 is a single-byte set, right? So I am still not sure what all I need to do as part of export and import to make sure my data is not corrupted or lost after import to the target. Will it be a plain export/import, with exp/imp taking care of the conversion implicitly? I read somewhere (not sure where) that before export and after import, some manual intervention is required to ensure data integrity and avoid data loss.
4. Does the character set difference have any impact on Streams replication?

Please advise

Thanks
Rohit
Tom Kyte
January 12, 2011 - 10:07 am UTC

1) Logically - utf8 is a superset of we8iso8859p1 - every character in we8iso has a corresponding character in utf8.

However, physically it is not a "binary" superset of we8iso. You need to have character set conversion take place.

2) if you need utf8 support - sure it sounds "logical".

3) yes on the multibyte/single byte. The export/import would be done in the original character set and then upon import - it will undergo character set conversion.

Just set your NLS_LANG to the character set of the source database.

4) sure it can - as (depending on the character sets) CONVERSION will take place and if there is some character that doesn't convert nicely - you will end up with different data.




Suggest you read the globalization guide from cover to cover - freely available under the documentation for the database on otn.oracle.com
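
To make 3) above concrete, a minimal sketch of the exp/imp flow with NLS_LANG pinned to the source character set; the locale, credentials and file names are illustrative:

# On the 9i source: export in the source database character set
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
exp system/password FULL=y FILE=mig.dmp LOG=mig_exp.log

# On the UTF8 target: keep NLS_LANG at the SOURCE character set; imp then
# converts the data to the database character set as it loads. Watch for
# single-byte characters that expand to multiple bytes - columns sized with
# BYTE semantics can overflow (ORA-12899) and may need widening first.
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
imp system/password FULL=y FILE=mig.dmp LOG=mig_imp.log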

On XML data replication

Rohit, January 11, 2011 - 7:24 am UTC

Tom, in addition to the previous questions on character sets, I just wanted to bother you about XML columns also. In an 11g note, I read that XML data stored as CLOB can be replicated, and I believe columns of BFILE type cannot be replicated.

==> I cannot find the 9i Streams limitations (because my source is 9i). Any pointers?
==> So is the only option left Golden Gate? Or is there any other option to replicate from my 9i database, which contains columns with XML data stored as the BFILE datatype?

Thanks
Rohit
Tom Kyte
January 12, 2011 - 10:08 am UTC

9i did not support XMLType in replication. That is a new feature of the current 11g releases.

BFILES are never replicated - not by anything provided at the database layer - they live in the file system, we don't do file system replication.
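
In practice that means the files behind the BFILE locators move with OS tools, and the DIRECTORY objects are recreated to point at the new location; a sketch with an illustrative directory name and paths:

# Copy the external files the BFILEs point at
rsync -av /u01/app/bfiles/ linuxtarget:/u01/app/bfiles/

# Re-point the DIRECTORY object on the target so the BFILE locators resolve
sqlplus -s system/password <<'EOF'
CREATE OR REPLACE DIRECTORY bfile_dir AS '/u01/app/bfiles';
EOF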

Migrating a TB database from Solaris to Linux

Marcel, March 19, 2012 - 6:42 am UTC


Hi,

Can I use Golden Gate to migrate a 20 TB database from Solaris to Linux?

How will the first copy of the data be made?

Tom Kyte
March 19, 2012 - 10:29 am UTC

yes you can.

it will take a while? I'm not sure what you want me to comment on.

A reader, July 18, 2012 - 10:15 am UTC

We have a database that has about 5 TB of data and is currently on Oracle 10.2.0.3 on a Solaris platform. The database layout is dbschema1, dbschema2, dbschema3. We have to migrate this database to Oracle 11.2.0.3 on a Linux platform. How can we achieve this with minimal downtime, in your experience?
Tom Kyte
July 18, 2012 - 11:34 am UTC

if you want no downtime, you can use streams or golden gate to replicate the database to the other location (it'll take a while, but so what, you are still running) and then cut over.

that would be the least downtime of all.


you don't mention the hardware here - it could be as simple as just moving the disks over if you are x86 to x86.


A reader, July 18, 2012 - 12:34 pm UTC

What would the steps be to configure Golden Gate? Would it be something like this?

1. Take a backup of db1.
2. Restore to db2.
3. Configure Golden Gate replication from db1 to db2.
4. Determine the cutover time and switch to db2.

Is there a document available for how to do this?
Tom Kyte
July 18, 2012 - 3:22 pm UTC
