Question and Answer

Tom Kyte

Thanks for the question, Mohammed.

Asked: September 12, 2000 - 7:26 am UTC

Last updated: April 22, 2013 - 1:49 pm UTC

Version: 7.3.4


You Asked

Tom,

What method does Oracle suggest for migrating 7.3.4 to 8.0.5+?

1. Full Export/Import
If we use this option, in what chronological order will import
create the objects on 8.0.5+?

2. User-by-User Export/Import
If we use this option, in what order should the users be imported on 8.0.5+?

In either case, what will the ALL_OBJECTS view contain?
Will it contain the same number of records as on 7.3.4?
What will the differences in the ALL_OBJECTS view be?

What are the steps we need to follow for a successful migration?

I will be thankful for early advice.

regards,
Ahmed,





and Tom said...

This document:

http://docs.oracle.com/cd/A58617_01/server.804/a58243/toc.htm

outlines your options. I would opt for a migration -- not an export/import -- if I could. A migration is much faster. I would only use an export/import if I really wanted to rebuild the entire database.

All_objects, by user -- excluding SYS, SYSTEM and other internal-type accounts -- should have the same count after any migration. SYS, SYSTEM and the others will have different counts (more objects in the data dictionary with each release).

The above referenced document has all of the steps needed.

If you choose the export/import -- here are some of the considerations:

o a user by user export/import will not export things like the "create user" statement and system grants (eg: grant DBA, grant CREATE TABLE, etc) for each user. You will have to migrate the "users" yourself (not their data -- their identity).

o a user by user export allows for parallel operations. I can export many at the same time, import many at the same time. It can be much, much faster -- AND in the event of an error during either imp or exp -- the amount of "lost work" is much smaller. I would rather have trouble importing a single user than have trouble importing the entire database.

I myself go for the user by user export even though you have to be careful to recreate the users on the new database yourself. It is easy enough with some queries on DBA_USERS to generate the create user statements, and on DBA_ROLE_PRIVS and such to get the grants for each user.
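For illustration, a rough sketch of the kind of dictionary queries meant here -- run in SQL*Plus as a DBA and spool the output to a script. The exclusion list, and which clauses you carry over (quotas, profiles, default roles), are up to you; the views and columns below are just one reasonable choice:

-- hedged sketch: generate "create user" and grant statements from the dictionary
set pagesize 0 linesize 200 feedback off
spool create_users.sql

select 'create user ' || username ||
       ' identified by values ''' || password || '''' ||
       ' default tablespace ' || default_tablespace ||
       ' temporary tablespace ' || temporary_tablespace || ';'
  from dba_users
 where username not in ('SYS','SYSTEM');

select 'grant ' || granted_role || ' to ' || grantee ||
       decode( admin_option, 'YES', ' with admin option' ) || ';'
  from dba_role_privs
 where grantee not in ('SYS','SYSTEM');

select 'grant ' || privilege || ' to ' || grantee ||
       decode( admin_option, 'YES', ' with admin option' ) || ';'
  from dba_sys_privs
 where grantee not in ('SYS','SYSTEM');

spool off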


Comments

Excellent approach

Nathan, October 02, 2002 - 4:34 am UTC

Hi Mr. Tom,
We have an Oracle 8 DB in NOARCHIVELOG mode.
I want to transfer data (with a one-day transaction difference) using export/import from the prod to the test machine on a daily basis. I want to maintain the same tablespace definitions (for all users) on the test machine; I have 50 more users. (I don't want to use DB cloning, because it takes too long to copy over to the test machine.)
Guide me with the steps...
Thanks in advance

Tom Kyte
October 02, 2002 - 10:30 am UTC

well, it doesn't really matter here. You don't care about the data in the database since you are running in noarchivelog mode.


but anyway, just export and import. although

o enabling archivelog mode
o doing a hot backup
o restoring it on the test machine

will be about a zillion times faster and will show that you actually *care* about your data (it is important to you).
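(For reference, switching into archivelog mode is only a few commands; a sketch, assuming log_archive_dest -- and, on these older releases, log_archive_start -- are already set in the init.ora:)

-- hedged sketch: enable archivelog mode
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list      -- verify ARCHIVELOG mode and the destination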

In noarchivelog mode on a transactional production system, you might as well be using grep, sed and awk for a database.

I didn't get you

Sampath, October 02, 2002 - 11:01 am UTC

Hi Tom!
I know you recommend archivelog mode, and your arguments prove hands down that it is the way to run an Oracle database.

However, I didn't get the following part of your update.
Could you please explain it?

"In noarchivelog mode on a transactional production system, you might as well be using grep, sed and awk for a database. "

Thanks,
Sampath

Tom Kyte
October 02, 2002 - 11:12 am UTC

my point was you have the same degree of protection with simple flat files as you do with a noarchivelog mode database. You have blown off one of the major big reasons to use a database (recovery). So, why bother... Just use flat files, you'll be just as protected. (I was being sarcastic)

Oracle8i to Oracle9i migration qn

RB, November 18, 2002 - 11:40 am UTC

I have an Oracle8i DB (8.1.7) and would like to migrate to Oracle9i Release 2. Please advise the steps, or point me to a doc where I can find them.

Tom Kyte
November 18, 2002 - 1:31 pm UTC

The oracle9i migration guide
http://docs.oracle.com/docs/cd/B10501_01/nav/docindex.htm#index-MIG
covers all of that


From 7.3 to 9i

Irfan, November 19, 2002 - 2:44 am UTC

Hello Tom,
What approach would you suggest for migrating from 7.3 to 9i directly? Thanks.

Tom Kyte
November 19, 2002 - 7:24 am UTC

to read this document:

http://docs.oracle.com/docs/cd/B10501_01/nav/docindex.htm#index-MIG

from cover to cover, understand your options and pick the one that is best for you in your environment.


Upgrade the database

Tony, November 19, 2002 - 3:21 am UTC

I want to upgrade the database from 8.0 to 8.1.7.
I don't want to install 8.1.7 and then do exp/imp of the data.
Is there any script to do the same? If yes, where can I find that script? Running that script alone should do the upgrade.



Tom Kyte
November 19, 2002 - 7:26 am UTC

geez -- EVERY release of Oracle has this document called the "migration guide"

for 817 it is:

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a86632/toc.htm

(this page is becoming the index to all of the migration guides, isn't it -- you can just go to otn -> documentation -> your database version -> migration guide!)

read that, you'll understand your options which include "upgrade my database in place please"


opinion

A reader, September 30, 2003 - 8:42 am UTC

Hi Tom,
I was reading about the new migration features in 10g, and I thought it could be interesting to share my experience with migrations.
After trying the automatic migration feature included in Oracle and having problems with the migrations (7.3 to 7.3.4, 7.3.4 to 8i, 7.3.4 to 9i, 8i to 9i) -- performance loss after migrating, errors in the migration process, etc. -- we found that a manual migration was better. I will dare to suggest, maybe it will interest someone, how Oracle migration could be improved (for small databases at least):
1. Open the old database
2. Get the structure of the tablespaces and datafiles
3. Export the full database
3a. Optionally export table, count(*) to verify there are the same number of tables, rows and objects as before
4. Detect information about the features installed
5. Drop the old software, install the new software
6. Rename the datafiles from xx to xx_old
7. Copy a basic database (already created)
8. Run additional features
9. Recreate the structure
10. Import the previously exported database into the new database
10a. Optionally verify there are the same number of tables, rows and objects as before

I have seen several times, after several problems, that export and import is the best way to migrate, even though I understand that for a big database it could be a titanic amount of work.

:)

Tom Kyte
September 30, 2003 - 8:55 am UTC

hate to be a doubting Thomas -- but I will be.

I've upgraded many many (lots) of databases. I use exp/imp ONLY when forced to. I do inplace upgrades whenever possible.

I have not had the same experience as you.

A reader, September 30, 2003 - 2:30 pm UTC

Thanks Tom,
Maybe there was another element in the middle (until I got my OCP and read AskTom, I didn't understand the need for statistics as I do now).
So the problem was the migration plus (something else), but the full export and import solved that. And I got additional control over what the migration was doing.

Maybe I'm damned, because every time I try to apply a patch, from 7.3 to 7.3.4, some strange thing happens. One example is the problem with table functions and materialized views I asked about recently, after applying a small patch (to avoid a problem that locked tables and procedures, so you had to disconnect everything before updating, otherwise it hung). I'm planning to reinstall my database. :)


migration from solaris (9.2.0.3) to Linux (9.2.0.4)

Baqir Hussain, November 12, 2003 - 3:42 pm UTC

Tom,
Oracle 9.2.0.3, a production OLTP database, is running on an E4500 with NetApp storage. We plan to move to Oracle9i RAC on Linux with different storage attached to it. The current size of the database is ~50G, of which 3/4 of the data is text messages in two tables.
exp/imp takes more than 2 days to complete the migration (which we can't afford).
I would like to know the different methods by which the migration can be achieved with no downtime and/or minimum downtime.
Please advise.
Thanks

Tom Kyte
November 12, 2003 - 5:03 pm UTC

well, exp/imp should not take 2 days, not for 50gig, not unless you wrote it by hand that is (byte by byte).


you can CREATE TABLE AS SELECT over a dblink. that is one option.

you can write a custom program to dump and load.

you can export in parallel running many exp processes -- each with a query to get a different slice of the table, imp in parallel.
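(To sketch that last idea -- the table name, key column and ranges below are made up, and parameter files are assumed so the QUERY clause does not need shell escaping:)

# hedged sketch: three concurrent exports, each taking one slice of a hypothetical BIG_TABLE
# part1.par contains:  tables=big_table  query="where id <  1000000"                   file=big1.dmp
# part2.par contains:  tables=big_table  query="where id >= 1000000 and id < 2000000"  file=big2.dmp
# part3.par contains:  tables=big_table  query="where id >= 2000000"                   file=big3.dmp
exp userid=scott/tiger parfile=part1.par &
exp userid=scott/tiger parfile=part2.par &
exp userid=scott/tiger parfile=part3.par &
wait
# the three dump files can then be loaded by three concurrent imp sessions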

you can install oracle on linux and create empty tables, then create materialized views on prebuilt tables (use read only replication). then when you want to switch over -- just a) kill user connections to the solaris box, b) refresh mv's, c) drop the mv's (since they are on prebuilt tables -- the tables will stay, with all of their data)



Migration from EE to SE (Oracle 9.2)

Olaf, November 13, 2003 - 7:55 am UTC

Tom,

I want to migrate our prod db from EE to Standard Edition.
What is the best approach to do this?
Will a full export/import overwrite the SYS and SYSTEM objects in the target database? I saw a database two years ago which became corrupt after a full export/import.
I always do it schema by schema (fromuser/touser for each schema), but this database has too many schemas to do so.

Can you give me a good advice?



Tom Kyte
November 13, 2003 - 8:47 am UTC

it'll be a full export import if you want to rebuild -- just create a "basic, small database" and do the full import.

you can contact support and see what is involved in downgrading to SE from EE.

I've not seen a database "become corrupt" due to a full import (and even if it did, well, it is a "seed" database, it didn't have anything in it, so just throw it away)

Migration across platform on OLTP system - Oracle9i R2

Baqir Hussain, January 19, 2004 - 2:11 pm UTC

I would like to use materialized views to migrate data from Solaris to Linux, as you suggested.

Before killing the user connections on the Solaris box, I would like most of the data to be replicated ahead of time, and then migrate the last few hours of data across.

I would appreciate it if you could show me how to achieve this.
Thanks

Tom Kyte
January 19, 2004 - 2:57 pm UTC

i gave you the abc's above?

you can install oracle on linux and create empty tables, then create
materialized views on prebuilt tables (use read only replication). then when
you want to switch over -- just a) kill user connections to the solaris box, b)
refresh mv's, c) drop the mv's (since they are on prebuilt tables -- the tables
will stay, with all of their data)

migration across platform

Baqir Hussain, January 19, 2004 - 3:52 pm UTC

Thanks for the prompt response. I would appreciate it if you could either point me to an existing example or show a very simple one. Thanks

Tom Kyte
January 20, 2004 - 6:52 am UTC

abc, 123, ... here you go...

concept

a) create empty tables in new database....
b) put constraints etc on them (deferrable please, important for mview groups!)
c) refresh them (fill them up)
d) put mv logs on remote tables
e) keep fast refreshing after that (could switch d/c above to avoid possible double complete refresh)
f) when ready to cut over -- kill access to old db, do a last refresh -- drop materialized views and there you go.



ops$tkyte@ORA9IR2> create table emp as select * from emp@ora9ir1 where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2> create table dept as select * from dept@ora9ir1 where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table emp add constraint emp_pk
  2  primary key ( empno ) deferrable;
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table dept add constraint dept_pk
  2  primary key ( deptno ) deferrable;
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table emp add constraint emp_fk_dept
  2  foreign key (deptno ) references dept(deptno) deferrable;
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view emp
  2  on prebuilt table
  3  refresh complete
  4  as
  5  select * from emp@ora9ir1;
 
Materialized view created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view dept
  2  on prebuilt table
  3  refresh complete
  4  as
  5  select * from dept@ora9ir1;
 
Materialized view created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> begin
  2     dbms_refresh.make(
  3        name             =>'group1',
  4        list             =>'emp,dept',
  5        next_date        =>sysdate,
  6        interval         =>'sysdate+1/24',
  7        implicit_destroy =>true);
  8  end;
  9  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_refresh.refresh( 'GROUP1' );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select mview_name, last_refresh_type from user_mviews;
 
MVIEW_NAME                     LAST_REF
------------------------------ --------
DEPT                           COMPLETE
EMP                            COMPLETE
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect scott/tiger@ora9ir1.localdomain
ops$tkyte@ORA9IR2> set termout off
scott@ORA9IR1> REM GET afiedt.buf NOLIST
scott@ORA9IR1> set termout on
scott@ORA9IR1> create materialized view log on emp;
 
Materialized view log created.
 
scott@ORA9IR1> create materialized view log on dept;
 
Materialized view log created.
 
scott@ORA9IR1>
scott@ORA9IR1> @connect /
scott@ORA9IR1> set termout off
ops$tkyte@ORA9IR2> REM GET afiedt.buf NOLIST
ops$tkyte@ORA9IR2> set termout on
ops$tkyte@ORA9IR2> alter materialized view emp refresh fast;
 
Materialized view altered.
 
ops$tkyte@ORA9IR2> alter materialized view dept refresh fast;
 
Materialized view altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect scott/tiger@ora9ir1.localdomain
ops$tkyte@ORA9IR2> set termout off
scott@ORA9IR1> REM GET afiedt.buf NOLIST
scott@ORA9IR1> set termout on
scott@ORA9IR1> update emp set ename = initcap(ename) where rownum = 1;
 
1 row updated.
 
scott@ORA9IR1> update dept set dname = initcap(dname) where rownum = 1;
 
1 row updated.
 
scott@ORA9IR1> commit;
 
Commit complete.
 
scott@ORA9IR1>
scott@ORA9IR1> @connect /
scott@ORA9IR1> set termout off
ops$tkyte@ORA9IR2> REM GET afiedt.buf NOLIST
ops$tkyte@ORA9IR2> set termout on
ops$tkyte@ORA9IR2> exec dbms_refresh.refresh( 'GROUP1' );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select mview_name, last_refresh_type from user_mviews;
 
MVIEW_NAME                     LAST_REF
------------------------------ --------
DEPT                           FAST
EMP                            FAST

ops$tkyte@ORA9IR2> drop materialized view emp;
 
Materialized view dropped.
 
ops$tkyte@ORA9IR2> drop materialized view dept;
 
Materialized view dropped.
 
ops$tkyte@ORA9IR2> select count(*) from emp;
 
  COUNT(*)
----------
        14
 
ops$tkyte@ORA9IR2> select count(*) from dept;
 
  COUNT(*)
----------
         4
 

migration across platform

Baqir Hussain, January 19, 2004 - 8:07 pm UTC

I figured out, based on your suggestion, how to set up the MVs in the target and source databases. Thanks

migration across platforms

Baqir Hussain, January 27, 2004 - 1:24 pm UTC

Tom,
Thanks for the example. Without this example it was impossible to figure out how to migrate across platforms. Thanks again.

I have a few questions regarding the data move:

1. What is the deferrable constraint for in MV replication?
2. How should the refresh groups be created? Should I create multiple refresh groups with (parent + child) tables ONLY, or in some other way?
3. Should I create all indexes after the cutover and then run dbms_stats, or is there another way?

I would very much appreciate your help in this regard.
Thanks again


Tom Kyte
January 28, 2004 - 7:52 am UTC

1) the way snapshots are refreshed is "table at a time", not transaction by transaction. hence RI and other constraints may be invalid during the refresh but will be consistent by the commit. so, we defer constraints until commit.

2) parent+child

3) you tell me -- that is your call.

data migration across platform

Baqir Hussain, January 28, 2004 - 11:47 am UTC

Tom,
Thanks again. I would prefer creating the indexes after the cutover and then running dbms_stats on each table.

1. Should I change the deferrable constraints back to the default (not deferrable) after the cutover for regular OLTP operation?
Please advise.
Thanks

Tom Kyte
January 28, 2004 - 2:34 pm UTC

1) you cannot. you would have to drop and recreate them. the major difference will be that a unique/primary key that is deferrable will use a NON-UNIQUE index, instead of a unique index, for enforcement.
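A quick way to see that difference for yourself -- a sketch against a throwaway table (the names are made up):

-- hedged sketch: a deferrable primary key is policed by a NON-unique index
SQL> create table t ( x number );
SQL> alter table t add constraint t_pk primary key ( x ) deferrable;
SQL> select index_name, uniqueness from user_indexes where table_name = 'T';
-- expect UNIQUENESS = NONUNIQUE for the index backing T_PK; a non-deferrable
-- primary key would normally be enforced with a UNIQUE index instead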

data migration across platform

Baqir Hussain, January 29, 2004 - 12:47 pm UTC

Tom,
Thanks for the prompt reply. Unique index enforcement is a requirement for our application. During the data migration this enforcement is going to break.
And during the migration we might get duplicate records due to "deferrable".
After the cutover, should
1. I look for the duplicate records in each table and then delete them?
2. Drop the old constraint and then enforce a new constraint (unique index)?

3. Or is there any other way we can keep the uniqueness during the migration?

Thanks

Tom Kyte
January 29, 2004 - 1:35 pm UTC

no, you won't get duplicates.

deferrable means "wait until commit to verify"

it does not mean "just let junk in and ignore it"


when the snapshot refreshes -- everything is OK -- everything is unique, everything is consistent. no duplicates.





migration across platform

Baqir Hussain, January 31, 2004 - 12:53 am UTC

Tom,
Thanks for providing valuable suggestions.
I understand that deferrable constraints, in read-only MV replication, keep the database away from conflicts.

1. In multi-master replication (master-slave), what is the best way to avoid conflicts?
2. The LONG data type cannot be replicated. I have a couple of tables with LONGs. A week before the cutover I will export and import these tables into the new database. After the cutover, the remaining week of data needs to be imported back into the new database. I can take an export as of (sysdate - 7) using the query option. Is it possible to import this data into the existing table in the new database, or please advise some other method to achieve it.
Thanks

Tom Kyte
January 31, 2004 - 9:59 am UTC

1) multi-master is peer to peer (symmetric). master-slave is materialized views. Hmm?

In multi-master, replication is implemented by replaying transactions exactly as they happened -- so, you do not need deferrable constraints.

In master/slave, replication is done to the snapshot tables one table at a time, hence you need deferrable constraints so the tables can be inconsistent for a brief moment of time during the refresh (no one sees these inconsistencies of course since no one sees updates until they are committed and the constraints are verified upon commit)


2) longs will have to be done manually via whatever process you dream up. If they were clobs -- it'd be done with the materialized views for you.

migration across platform

Baqir Hussain, February 04, 2004 - 5:06 pm UTC

If I run the following at the command line:
exec dbms_refresh.refresh ('BDR.GROUP1');
I do not get the prompt back until it completes a complete refresh of the tables defined in the group.
After the completion of this procedure, I change the MVs to "refresh fast" at the command line.
I tried using a cron job as follows:

exec dbms_refresh.refresh ('BDR.GROUP1');
alter materialized view client refresh fast;
alter materialized view home_search refresh fast;

but Oracle gets stuck at

update sys.snap$ set auto_fast = :1
where vname = :3 and sowner = :2 and instsite = :4

On the other hand, it works fine at the command line.

I would like to have this done automatically (with some sort of script).

Please advise.
Thanks

Tom Kyte
February 04, 2004 - 6:11 pm UTC

Oracle should not be getting "stuck" there.

create a job using dbms_job

declare
    l_job number;
begin
    dbms_job.submit( l_job,
      'begin
          dbms_refresh.refresh( ''BDR.GROUP1'' );
          execute immediate ''alter ....'';
          execute immediate ''alter ....'';
       end;'
    );
    commit;
end;
/

that'll "background" it -- assuming job_queue_processes > 0 (init.ora parameter)

Can we migrate directly from 8i to 10g?

A Reader, February 23, 2004 - 3:57 pm UTC

Hi Tom,
Since Oracle will desupport 8i, we need to migrate to a newer version. Since 10g is already on the market, can we migrate directly from 8i to 10g (I could not find a related 10g document)? This is very important for our decision. Can you please tell us if it's possible, or a good idea, to skip 9i? Technically, is it much more difficult to do? Please tell us the pros and cons of doing that.
Thank you for your help.

Tom Kyte
February 23, 2004 - 5:08 pm UTC

8iR3 (817), 9iR1 (90), and 9iR2 (9.2) can be upgraded directly to 10gR1 (10.1) from their terminal releases (eg: 8.1.7.4 is typically the terminal release of 8iR3)


If you don't have the "first version willies" -- going from 8i -> 10g involves less work than 8i -> 9i -> 10g will at the end of the day.

pros of going to 10g, one less upgrade, test, fix cycle.

cons, you'll have lots of first version people out there ready to jump all over you every time something appears to be wrong (even though you'll hit issues going to 9i as well in all probability -- things you find in test, then fix or work around before production)



System tablespace size after migration

APL, August 06, 2004 - 5:08 am UTC

How much space should we give the SYSTEM tablespace when migrating from 8.1.7 to 9.2? Is there any formula to calculate the additional space requirement for the SYSTEM tablespace? Some non-authoritative sources give it as "total database space used * 10%".

Tom Kyte
August 06, 2004 - 8:17 am UTC

my 817 system was about 250meg, my 9ir2 was about 425m

however -- totally depends on the amount of plsql you have, what features you enable/install. totally.


you'll find out for sure when you install on your test machine -- down to the byte.

Listener After migration

APL, August 18, 2004 - 7:48 am UTC

During the migration from Oracle 8.1.7 to 9.2.0.5, I am not getting the screen for NETWORK CONFIGURATION, so my listener file is not changing. It is still showing the old oracle_home. I am using ODUA. I got only 1 listener, with the default port and name. What changes should I make?


Tom Kyte
August 18, 2004 - 9:37 am UTC

you can just basically copy over the listener.ora from the 8i oracle home, edit it and make any changes (like oracle home).

beyond that -- suggest you contact support if the screen that is supposed to come up is not coming up.


how to.. and why

A reader, November 16, 2004 - 8:07 am UTC

Hi,
I have an export file exp.dmp taken from Oracle8i.
I have imported it into Oracle9i.
Now my queries are very slow in Oracle9i.
Some of the queries are very, very slow.
But these queries were fine on Oracle8i.
Kindly tell me what could be the reason and how to solve this.



Tom Kyte
November 16, 2004 - 12:53 pm UTC

did you use tkprof to compare the plans from before and after and try to diagnose any of it yourself first?

migrate oracle 7 to 10g

Alex, December 04, 2004 - 9:37 am UTC

Tom, can you please advise me what the best way is to migrate from an Oracle 7 database to 10g? Is it to move from 7 to 8, then to 9 and then to 10g, or is there a direct method?

Tom Kyte
December 04, 2004 - 11:02 am UTC

the only way to migrate is to read the migration guide supplied with each release first, it'll lay out the options for you.

in your case, i believe you'll find EXP to be part of your future.

Reader

A reader, December 19, 2004 - 10:32 am UTC

In some migrations, like 8i to 9i, it is part of the
procedure to turn public synonyms off and then
turn them back on.

Is this because of some name resolution conflict?
If so, is it because the migration program, for
some reason, is not using the full path when
accessing objects?

Tom Kyte
December 19, 2004 - 11:41 am UTC

er? no idea what you are asking.....

8i to 10g

Kapil, December 20, 2004 - 5:48 am UTC

Tom,

Just to continue the discussion on the 8i to 10g migration, I believe you also recommend that over the 8i to 9i migration. Could you let us know of any other things one should be careful of while doing the 8i to 10g migration, as it's a version jump?

Could I also say that the migration from RULE based optimization on 8i to the cost based 10g optimizer would be smoother, as it would avoid the concerns of manual statistics gathering?

Regards
Kapil


Tom Kyte
December 20, 2004 - 8:24 am UTC

<quote>
Could you let us know of any
other things one should be careful of while doing the 8i to 10g migration,
as it's a version jump?
</quote>

whether you go from 9ir2 to 10gr1

or version 2 to 10gr1

the same primary, principal, single rule applies:

Test

test functionality.
test load.
test scale.




meng, December 20, 2004 - 8:23 pm UTC

Hi!Tom,
Can you give me some suggestions on how to migrate Oracle 8.0.4 to 9.2.0.1? The Oracle migration docs just give a path to upgrade to 8.0.6 and then to 9.2.0. I don't have version 8.0.6 -- what should I do?
thanks

Tom Kyte
December 20, 2004 - 8:34 pm UTC

they mention other methods (export/import) if you cannot or do not want to get the 8.0.6 patch from metalink (i just peeked, 8.0.6 was there for solaris for example)


and please -- you are going to 9206/9205 at least right?

Migration from 8i to 10g

Kapil, December 21, 2004 - 2:10 am UTC

Tom,

Could I also say that the migration from RULE based optimization on 8i to the cost based 10g optimizer would be smoother, as it would avoid the concerns of manual statistics gathering?

Regards
Kapil

Tom Kyte
December 21, 2004 - 8:58 am UTC

manual statistics collection is not that hard, but when you add in all of the other stuff that 10g has "auto" for -- that is the real reason for going 10g instead of 9i

meng, December 28, 2004 - 2:51 am UTC

Thanks, Tom, but how can I upgrade from 8.0.4 to 8.0.6? I don't have the 8.0.6 database server software, and I also don't have 9.2.0.5 or 9.2.0.6. I saw the readme docs -- the 8.0.6 patches must be installed on release 8.0.6 as well.
thanks

Tom Kyte
December 28, 2004 - 10:21 am UTC

please contact support to discuss your options, they would be the ones to get you software and everything like that.

A reader, January 30, 2005 - 8:01 pm UTC

Tom,

Can you tell us your opinion regarding an 8.1.7 to 9i upgrade on HP-UX? What is the best approach -- export/import or the upgrade assistant? Which takes more time, and which makes us look after too many things?

Tom Kyte
January 30, 2005 - 8:24 pm UTC

in 2005, it should only be "upgrade from 8i to 10g", why go to something that'll just make you do it again (upgrade) that much sooner?

read the migration guide -- it'll lay out your options.

and since you'll test the upgrade quite a few times, you can benchmark which "takes longer" (direct upgrade is in general the fastest in all cases)

if you can direct upgrade, do so (in my opinion, it would be my first choice)

Is it right behavior and explanation?

Sanjay Jha, February 03, 2005 - 11:36 am UTC

Tom,
Recently we migrated a database from 7.3.2 to 9.2. However, for some strange reason, the application still needs to be run using the Oracle 7 client. There is a piece of functionality which requires data to be exported and imported back into the same database (into different schemas). In a way, we wanted to use the Oracle 7 exp/imp to work against the Oracle 9.2 database.

We did a test and results are below:

Export:
-------
Export: Release 7.3.2.2.0 - Production on Wed Feb 02 16:48:47 2005

Copyright (c) Oracle Corporation 1979, 1994. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8ISO8859P1 character set

About to export specified tables via Conventional Path ...
Current user changed to C1_OHSO
. . exporting table ACCT_SETTLEMENT 2133 rows exported
Export terminated successfully without warnings.


Import:
-------
Import: Release 7.3.2.2.0 - Production on Wed Feb 02 16:40:52 2005

Copyright (c) Oracle Corporation 1979, 1994. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Export file created by EXPORT:V07.03.02 via conventional path
. . importing table "ACCT_SETTLEMENT" 2133 rows imported
Import terminated successfully without warnings.


The actual database(9.2) indeed has same number of rows:

system@SW3> select count(*)
2 from C1_OHSO.ACCT_SETTLEMENT;

COUNT(*)
----------
2133


I was wondering why it worked successfully. On investigation, I found that the catexp7.sql utility was run while creating the 9.2 database. The catexp7.sql utility perhaps gives that ability, as explained in the comments of that utility:
***************************************************************************************************
Rem
Rem $Header: catexp7.sql 07-jan-2002.05:50:05 prakumar Exp $
Rem
Rem catexp7.sql
Rem
Rem Copyright (c) 1996, 2002, Oracle Corporation. All rights reserved.
Rem
Rem NAME
Rem catexp7.sql - CATalog EXPort views for v7 SQL script
Rem
Rem DESCRIPTION
Rem Create v7 style export/import views against the v8 RDBMS
Rem so that EXP/IMP v7 can be used to read out data in a v8 RDBMS
Rem
Rem NOTES
Rem
Rem This file is organized into 3 sections:
Rem Section 1: Views needed by BOTH export and import
Rem Section 2: Views required by import ONLY
Rem Section 3: Views required by export ONLY
Rem
Rem No views depend on catalog.sql. This script can be run standalone.
Rem
**************************************************************************************************


My question is: is this normal behavior, as expected from running catexp7 and as proved by our test?
My team is worried about data scrambling; although on physical verification the data looks alike, I am not sure whether it is truly identical. What if we have some datatypes that did not exist in version 7, like LOBs etc.?



Tom Kyte
February 03, 2005 - 2:37 pm UTC

7.3 clients are explicitly "not supported" against 9ir2 and 10gr1.

all bets off. not going to happen in any sort of supportable fashion *at all*. this will not work.

Tom, what are the risks we have here?

Sanjay Jha, February 03, 2005 - 4:05 pm UTC

Tom,
While I totally agree with you, and had referred to metalink document id 132904.1 (Compatibility Matrix for Export & Import Between Different Oracle Versions) before the developers went ahead and proved "their" point, you will appreciate my situation now: I have nothing to offer them which will make them believe otherwise. They want to go ahead and continue using the Oracle 7 client.
Could you tell me then why Oracle 9i has gone this far to keep the compatibility by providing the catexp7.sql script? And how do you comment on the results from this test, which although "not supportable" seem to be working?


Tom Kyte
February 04, 2005 - 1:29 am UTC

you are in a totally unsupportable position, it is clear from metalink that 7.x to 9ir2 has serious, serious issues.


period. that is all i have to say on the subject.

so what if this works by accident. you will find things that don't and you are totally out of luck at that point.

you either should have

a) left everything alone, be frozen at a point in time -- and keep your fingers crossed

b) upgraded everything to 10g -- so you would not be in this unsupportable state for at least 5 years again...

ORA-06553 after database moved over

June, April 05, 2005 - 3:02 pm UTC

Hi Tom, 

I got an error after I moved my database to another server with an upgraded 9i version, when I tried to log in as a regular user or create a user as sysdba:

ORA-06553: PLS-801: internal error [56319]
Error accessing package DBMS_APPLICATION_INFO

The original database (A) is 9.2.0.4 32-bit. I installed 9iR2, with the patch set applied to bring it to 9.2.0.6 64-bit, on another unix machine, and used the controlfile to create the database, as follows:
alter database backup controlfile to trace resetlogs;
modified the script with new location and new DB name in noarchivelog mode;
shutdown A immediate and ftp all .DBF and .LOG file over;
comment out remote_login_passwordfile = exclusive;
run create controlfile script to create database B;
alter database open resetlogs;

up to this point, I am able to shutdown and startup b when I connect as sysdba;

however, I got an error when I tried to query this view:
SQL> select * from dba_segments;
select * from dba_segments
              *
ERROR at line 1:
ORA-06544: PL/SQL: internal error, arguments: [56319], [], [], [], [], [], [],
[]
ORA-06553: PLS-801: internal error [56319]

SQL> create user test identified by test default tablespace users temporary tabl
espace temp quota 1m on users;
create user test identified by test default tablespace users temporary tablespac
e temp quota 1m on users
                               *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-06553: PLS-801: internal error [56319]

and if I tried to log on as regular user:
SQL> connect j1234/j1234 (existing one on A database)
ERROR:
ORA-06553: PLS-801: internal error [56319]

Error accessing package DBMS_APPLICATION_INFO


up to this point, I am still able to query the other dictionary tables.

What happened here? How to fix? or I just couldn't do this with this software upgrade?

Thanks for your assistance, as always. 

Tom Kyte
April 05, 2005 - 6:56 pm UTC

contact support and get the right directions for migrating your database from 32 to 64 bit which involves more than what you've done (like recompiling plsql)



Upgrading 7.3.4 to 9i: Post-Upgrade Redo Question

Richard, June 29, 2005 - 9:09 am UTC

In http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96530/migaftrm.htm#1008952 it says to back up the online redo logs. Isn't this a no-no?

Tom Kyte
June 29, 2005 - 9:25 am UTC

it would be in my book, yes.

it is never necessary, it can only lead to problems.

Punishment Suggestion.

Richard, June 29, 2005 - 11:07 am UTC

In that case, maybe the author of the "backup your online redo logs" advice should be tested to destruction in the now famous Otis elevator tower :)

Migration - did i do correctly?

Pravesh Karthik from India, July 06, 2005 - 8:14 am UTC

Tom,

I did a migration, and wanted to confirm whether I did it correctly or not.

Current Version - 9.0.1.0

Migration (this is what I did)
----

1. Installed 9.2.0.1 - software only, on a new mount point
2. Applied the 9.2.0.6 patchset to the 9.2.0.1 Oracle home
3. Used the upgrade script u0900010.sql to upgrade
4. Ran catpatch.sql
5. Brought the DB (9.0.1.0) up with 9.2.0.6 as the compatible parameter

I took care of all the steps during the migration and I don't see any errors now


My Doubt
-------

1. Install the 9.2.0.1 software
2. I should have upgraded using u0900010.sql
3. I should have brought the DB up with 9.2.0.1 as the compatible parameter
4. Install the 9.2.0.6 patchset on the 9.2.0.1 Oracle home
5. Run catpatch.sql
6. Bring the DB up with 9.2.0.6 as the compatible parameter


Which is correct? Did I do it correctly?

If both are wrong, can you suggest the right approach to upgrade from 9.0.1 to 9.2.0.6?

Thanks for your consideration,

Pravesh Karthik

Tom Kyte
July 06, 2005 - 8:29 am UTC

support = best place for installation, configuration questions. it is what they do best.

did you read the migration guide?
did you read the readmes that come with?
did you follow their directions?

if so, you did OK, if not you did not do OK.



Migration from Linux to Solaris

Suren, July 13, 2005 - 7:09 am UTC

Hi Tom,

Thanks for all the useful tips.
I want to migrate a 10g database of size 12 GB from Linux to Solaris. Which is the fastest way, and what are the steps involved?

Thanks

Suren


Tom Kyte
July 13, 2005 - 12:45 pm UTC

cross platform transportable tablespaces

or a data pump copy

12 gig is pretty small, most anything would do it in a reasonable amount of time.
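(If the data pump route is taken, a network-mode import pulls the data straight over a database link with no intermediate dump file. A sketch only -- the link name, schema and directory below are placeholders:)

-- hedged sketch: 10g data pump copy over a database link, run on the target (Solaris) side
-- source_link is a hypothetical database link pointing back at the Linux database
impdp system/manager network_link=source_link schemas=app_owner directory=dp_dir logfile=copy.log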

Oracle 10g mIgration from Linux to Solaris

Suren, July 14, 2005 - 8:54 am UTC

Thanks Tom,

Do you recommend creating all the required tablespaces on target DB before actually importing the data?

Tom Kyte
July 14, 2005 - 11:03 am UTC

if they are going to be in different places (the files) yes, definitely. else it is up to you.

Humaira Liaqat, August 09, 2005 - 10:24 am UTC

Hi Tom,

Thanks for all the useful tips.
I am facing a problem after a migration from 8i to 9i.

There are force-refresh materialized views from the 8i database in the 9i database. I am upgrading this 8i database to 9i.
There are m.view logs in the 8i database.
All the tables were imported into a new 9i database from 8i, with the materialized view logs also created on the tables.

Now the m.views work fine, but I am not able to see anything in the log tables after an insert or delete, though the logs exist.
The m.views are created on prebuilt tables.

I can't delete these logs either, using DROP SNAPSHOT LOG ON table_name.
The statement returns "snapshot log does not exist".
I can't say what log it is using.

There are triggers on the prebuilt tables to transfer data to another set of tables in the 9i database.

Tom Kyte
August 09, 2005 - 10:43 am UTC

sorry -- insufficient details to debug this here -- please utilize support, they'll collect all of the needed sequence of events.

migration from 8i to 9i materialized view issues --> ora-30353

A reader, August 10, 2005 - 11:55 am UTC

Hi

I just migrated an 8.1.7.4 database to 9.2.0.6, and I am facing some dodgy problems, as with most migrations. The migration was carried out by EXPORT/IMPORT because the platform was migrated as well, from Sun Solaris to HP-UX Itanium.

I have some materialized views with query rewrite enabled in 8i; however, when they were migrated to 9i, query rewrite is disabled. When I run

ALTER MATERIALIZED VIEW MV_ACTIVITY
ENABLE QUERY REWRITE;

I get ora-30353

"expression not supported for query rewrite"
// *Cause: The select clause referenced UID, USER, ROWNUM, SYSDATE,
// CURRENT_TIMESTAMP, MAXVALUE, a sequence number, a bind variable,
// correlation variable, a set result,a trigger return variable, a
// parallel table queue column, collection iterator, etc.
//
// *Action: Remove the offending expression or disable the REWRITE option on
// the materialized view.

The query which define this MV is

SELECT ID_BIN, CARD, MAX(FE_OPERATION) LAST_TRANSACC
FROM FP_V_TRANSACC_HEAD
GROUP BY ID_BIN, CARD


I don't see any offending clause here. Why can this MV have query rewrite in 8i but not in 9i?

Cheers


Tom Kyte
August 10, 2005 - 12:59 pm UTC

ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;

Table created.

ops$tkyte@ORA9IR2> alter table emp add constraint emp_pk primary key(empno);

Table altered.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view mv
  2  as
  3  select deptno, job, max(sal)
  4    from emp
  5   group by deptno, job;

Materialized view created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter materialized view mv enable query rewrite;

Materialized view altered.



There is nothing preventing that from happening. If you try to recreate the MV, does it fail as well?

even recreating it fails

A reader, August 11, 2005 - 5:42 am UTC

Hi

I recreated the view and it still fails. I noticed that the base table is a view, and this view has a union all between two tables, one of which has this predicate

WHERE ano || mes < TO_CHAR (ADD_MONTHS (SYSDATE , -1) , 'YYYYMM')

However, I still don't understand why this works in 8i but not in 9i???



Tom Kyte
August 11, 2005 - 10:03 am UTC

it would have been a bug in 8i.


expression not supported for query rewrite"
// *Cause: The select clause referenced UID, USER, ROWNUM, SYSDATE,
// CURRENT_TIMESTAMP, MAXVALUE, a sequence number, a bind variable

upgrading database 9.2.0.1 to 9.2.0.6

khater ali, October 13, 2005 - 5:23 am UTC

How do I apply a patch in Oracle? For example, currently our database version is 9.2.0.1.0 and we are going to upgrade our database to 9.2.0.5.0 or 9.2.0.6.0, so please tell me how to apply the patch, the procedures and the prerequisites. Our O/S is Sun Solaris 9. Also, how can I find out on the web which patchset is suitable for this upgrade?

Migration from Universe Database to Oracle

Pertisth Mankotia, December 03, 2005 - 5:07 am UTC

Please tell me, is there any way to migrate from a Universe database to an Oracle database?



Tom Kyte
December 03, 2005 - 10:28 am UTC

I am not familiar with Universe at all, sorry.

Moving data from one db to another db

thirumaran, December 19, 2005 - 9:24 am UTC

Hi Tom,

I have an Oracle 9i Table1 (source) containing 1 lakh (100,000) records. I need to move this data to two separate target databases (Oracle 9i), namely DB2 (Table2) and DB3 (Table3), which have the same table structure as the source. Both the source and the targets are located at remote sites.
The operating system is the same across all 3 DBs.

What is the best method to follow to move the data with minimum downtime?

1) Can I use a procedure or SQL*Loader with a db link?
2) What is your recommended approach?

Thanks in adv
thirumaran

Tom Kyte
December 19, 2005 - 9:33 am UTC

create table t as select * from t@remote_database;

or if the tables exist, just

insert into t select * from t@remote_database;


just use a dblink.
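(For completeness, the link itself is a single statement -- the TNS alias and credentials here are placeholders:)

-- hedged sketch: the database link the statements above would use
create database link remote_database
  connect to scott identified by tiger
  using 'remote_db';

insert into t select * from t@remote_database;
commit;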

Version Errors

Pradikan, December 21, 2005 - 3:33 am UTC

Hi TOM,

We are using two database versions, Oracle 9.2.0.5 and 9.2.0.7. Recently we took an export of a schema from version 9.2.0.7 and imported it into 9.2.0.5. Everything went fine, except that we could see that some package bodies compiled with errors. My question is why these package bodies are not compiling. Do we need to apply any patch? Please let me know at the earliest. Following is the error we get while compiling these package bodies.


SQL> ALTER PACKAGE PA_SA COMPILE BODY;

Warning: Package Body altered with compilation errors.

SQL> SHO ERR
Errors for PACKAGE BODY PA_SA:

LINE/COL ERROR
-------- -----------------------------------------------------------------
1234/13  PL/SQL: SQL Statement ignored
1235/44  PLS-00201: identifier 'NVL2' must be declared
1243/13  PL/SQL: SQL Statement ignored
1244/44  PLS-00201: identifier 'NVL2' must be declared
SQL> ALTER PACKAGE PA_ON_SA40_FVL COMPILE BODY;

Warning: Package Body altered with compilation errors.

SQL> SHO ERR
Errors for PACKAGE BODY PA_ON_SA40_FVL:

LINE/COL ERROR
-------- -----------------------------------------------------------------
312/9    PLS-00103: Encountered the symbol "INTO" when expecting one of
         the following:
         := . ( @ % ; <a SQL statement>

This is the line in the package body wher this error occurs,

MERGE INTO SA40_FVL A

  USING (SELECT pRec.SA40_FVL_ID SA40_FVL_ID FROM DUAL) B

     ON (A.SA40_FVL_ID = B.SA40_FVL_ID)

   WHEN MATCHED THEN



Thanks In Advance.

Pradikan
 

Tom Kyte
December 21, 2005 - 7:29 am UTC

does nvl2 work in 9205? (it was a function not visible in plsql for a long, long time - a workaround would be to use dynamic sql)
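(The workaround looks roughly like this -- a sketch, variable and bind names made up; the point is that the SQL engine, not the old PL/SQL compiler, parses the NVL2:)

-- hedged sketch: call NVL2 via dynamic SQL so older PL/SQL releases never parse it
declare
   l_result number;
begin
   execute immediate
      'select nvl2( :x, 1, 0 ) from dual'
      into l_result
      using 42;
   dbms_output.put_line( l_result );   -- prints 1
end;
/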

version errors

Pradikan, December 21, 2005 - 8:26 am UTC

Hi Tom,

Thanks for replying immediately. Using dynamic SQL we've rectified the first problem, relating to NVL2. Can you suggest what we should do for the MERGE statement? Can't we use MERGE in 9.2.0.5? Is there any other alternative for using the same?

Thanks

Pradikan

Tom Kyte
December 21, 2005 - 7:24 pm UTC

can you give me a small example? shouldn't be a problem - can you reproduce with a small complete example? (create table... and show error?)

version errors

Pradikan, December 22, 2005 - 2:38 pm UTC

Hi tom,

I am getting this error while compiling a package in version 9.2.0.5. This works fine in the 9.2.0.7 version of Oracle. Following is the section of the package body where the error occurs.

What we can see is an INTO statement preceded by the MERGE command. My question is: when this works fine with 9.2.0.7, why doesn't it work in 9.2.0.5? Is there any workaround for using the same statement as mentioned below in the error?



SQL> SHO ERR
Errors for PACKAGE BODY PA_ON_SA40_FVL:

LINE/COL ERROR
-------- -----------------------------------------------------------------
312/9    PLS-00103: Encountered the symbol "INTO" when expecting one of
         the following:
         := . ( @ % ; <a SQL statement>

This is the line in the package body wher this error occurs,

MERGE INTO SA40_FVL A

  USING (SELECT pRec.SA40_FVL_ID SA40_FVL_ID FROM DUAL) B

     ON (A.SA40_FVL_ID = B.SA40_FVL_ID)

   WHEN MATCHED THEN


Also let me know whether I should apply any patch to overcome this error.

Thanks in Advance

Pradikan 

Tom Kyte
December 22, 2005 - 5:20 pm UTC

did you see my comments above. right above.

DB migration from 9.2.0.5 (HP-UX 11.0) to 9.2.0.5 (Solaris 8.0)

Subbu, July 03, 2006 - 5:19 pm UTC

Tom,

Thanks for the MView based approach for DB migration.
Sorry for this lengthy question.
We have a migration project to move current database 9.2.0.5 to another data center.

Here is the environment detail:

Current source database: Oracle 9.2.0.5 hosted on HP-UX 11.0
New database (to be migrated to): Oracle 9.2.0.5 will be hosted on Solaris 8.0

The source database is around 150G in size and it is OLTP based. Most of the large tables are hash partitioned and have 100's of millions of records. The database does not need 24x7 availability, and we can afford a downtime of 15 hrs for this migration. However, in order to ease the DB migration, the data center, managed by a different DBA team, suggested using "Materialized Views (read-only replication)" to move the source data over to the new site. Given the cross-platform DB migration, and the DB size, no other choice was left to consider.

Before committing this project, we want to have clear understanding and be prepared for any issues we might encounter in the following key areas:

Over all system/database/network performance / security / locking behavior / possible Undo tablespace issues.

Having said that, i have the following questions on each topic.

1. Performance:
===============

Our DB schema contains 300 tables.
What kind of performance impact might the source database see if the remote database (the new DB using MVs) does a fast refresh every 5 minutes (a refresh interval decided by the data center DBAs)? We know the refresh interval needs to be set high. Our original idea was to refresh once a day during off-peak hours. On average, per day, the SOURCE DB expects a data volume (including inserts/updates/deletes) of approximately 200,000 records.

Are there any monitoring tools available, or a query on the data dictionary, to check the performance overhead?


2. Security:
============
The source database maintains MV logs on the master tables. So the question is, assuming restricted and minimal grants are in place for the remote MV site,
can the source database DBA control the MV refresh frequency?
We do not want remote-site people knowingly or unknowingly violating the agreed refresh schedule. I guess they can set up more than one remote DB instance on their side, with each doing multiple MV refreshes from the SOURCE.

Does Oracle have any mechanism/control in place to prevent this from happening?

3. How do we sync up DB schema changes (like adding/modifying/dropping a column on a table) between the master site and the remote site?

Let's say a read-only MV was created using "ON PREBUILT TABLE" and I need to add a column to some of the tables in the source database.

a) What sequence of steps do I need to follow before altering any source table structure?
b) How do I break the refresh before adding a column to the source table, and who has that control, the SOURCE or the REMOTE MV site?
c) Do I need to recreate the snapshot base table and its MView (which was using a prebuilt table) with a complete refresh, or can I just do a FAST REFRESH?
d) What kind of coordination should occur between the source DB team and the REMOTE DB team if there is a BULK data migration on the source database?

Thanks for your time, help and suggestions.


Tom Kyte
July 07, 2006 - 7:27 pm UTC

If you cannot move less than 150 gig in 15 hours using conventional means - we have a problem (if database is 150 gigs, we need not move indexes, temp, undo, system and such... it is maybe what - 100gig of data?)

I do not think I would use this "atom bomb" technique to "crack a walnut", you have a very generous window to move the data - no need to introduce the materialized view logs, the overhead of watching the replication (and doing it initially!).

100gig of data should be pretty easy to move in 15 hours, even with export import! Using a dump and reload - or sqlplus copy - or dblinks. I don't think I'd go the MV approach with such a big window of time and so little data.
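(For instance, the SQL*Plus COPY command can move a table directly between the two databases in batches -- a sketch; the connect strings and table are placeholders, arraysize/copycommit control the batch size, and COPY has datatype restrictions, e.g. no LOBs:)

-- hedged sketch: SQL*Plus COPY from the old database straight into the new one
SQL> set arraysize 1000
SQL> set copycommit 10
SQL> copy from scott/tiger@hp_db to scott/tiger@sun_db -
>      insert big_table using select * from big_table;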

Thanks

Subbu, July 07, 2006 - 10:04 pm UTC

Tom,

Thanks for your suggestion.

I agree with you, but unfortunately the people on the other side who are involved in the DB migration are pushing to do it with the MV approach.

Assuming MV is the approach, would you please answer the questions? I just want to be prepared for the worst.

Thanks for your time.

Tom Kyte
July 08, 2006 - 8:50 pm UTC

I don't assume squat.

Assuming you do it right, you won't use a materialized view. Period. I don't like telling people how to do it wrong. Ask them to tell us WHY they are pushing it to be done in a

o method that involves more work
o impacts the existing production system (hence NEEDS BE TESTED REALLY GOOD IN TEST FIRST - delay that migration by a long time for that please)
o makes extra work
o is harder

when they don't need to??

Subbu

A reader, July 10, 2006 - 4:09 pm UTC

Tom,

Thanks for your advice. I will work with the other team on the implications of using the MV approach for DB migration in our case.

Keeping the DB migration aside (the MV approach) for this discussion,

could you please consider the above questions as a "general knowledge sharing" topic and answer them?

Hope you do not mind.

Tom Kyte
July 11, 2006 - 7:33 pm UTC

search for "prebuilt table" on this site, I've examples.

Is there a chance of transaction getting lost in this scernerio?

Sunil Pant, November 18, 2006 - 1:32 pm UTC

Tom,
In this example
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:744625626787#15013618923218
you do
"create materialized ...refresh complete" and "create materialized view log..." . Is there a chance of transaction getting lost, if there are DML statements running against that table between those statements? I wonder if one should create materailized view log and do refreshes after that not to miss any transactions?
Thanks,

Upgradation from 9205 to 9207

Sivasankar.S, December 12, 2006 - 5:42 am UTC

Could you please tell me the steps we should follow to upgrade an Oracle 9.2.0.5 database to 9.2.0.7 in a Solaris environment.

Tom Kyte
December 12, 2006 - 7:13 am UTC

you would read the files that come with the software - they describe this step by step by step.

Need to Migrate from 9.2.0.6 to 10.2.0.1

Ram K, January 19, 2007 - 10:25 am UTC

Tom..

We have our current production database on 9.2.0.6 on Solaris 8, and we are planning on moving to 10gR2. This new upgrade will be entirely on a new Solaris 10 server. The questions I have are:

1. Once I have the new box ready with 10gR2, what is the best way to move my data from the 9.2.0.6 DB? I usually do an EXP/IMP. Is that the best method? I have about 30GB of data, with around 40 tablespaces and not many schemas (6 schemas).

2. Does 10gR2 offer a better method of migrating data from Oracle 9.2.0.6?

Any help is greatly appreciated..

ps: by the way, just purchased your new Book..fantastic one.


thanks again
Ram

Migrating 10.1.0.3

Hesham, February 01, 2007 - 4:44 am UTC

Which one is recommended and stable to migrate to, 10.1.0.5 or 10.2.0.2?

Is it recommended to use DBUA or EXP/IMP for the migration?
Tom Kyte
February 01, 2007 - 1:01 pm UTC

I would not really consider exp and imp for an upgrade.

What are the Alternatives ?

Ram K, February 01, 2007 - 6:41 pm UTC

Tom..

If the current DB is on Server1 (Sun 8 and Oracle 9.2), and we purchased a new server, Server2 (with Sun 10), and have already installed Oracle 10gR2, how do we move the data from Server1 to Server2? I've got about 30GB of data with 3-4 schemas.
Tom Kyte
February 02, 2007 - 10:36 am UTC

restore database on new server and upgrade it.

Migrate 8i to 10g ... @utlu102i.sql errors out

Karthikeyan, February 06, 2007 - 4:20 am UTC

hi,
when I run the script utlu102i.sql from a temporary directory, we get the following error:

01:02:40 SQL> spool Database_Info.log
01:03:04 SQL> @utlu102i.sql
DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 1560

Can you suggest us what might be going wrong here.

thank you.
Tom Kyte
February 06, 2007 - 11:13 am UTC

see support Note 407031.1

Cross Platform Data Migration

Su Baba, February 21, 2007 - 1:03 pm UTC

What's the best approach to migrate a database (~500 GB) from Windows to Linux? Would expdp/impdp work for a database this size?
Tom Kyte
February 21, 2007 - 1:16 pm UTC

if you have 10g, cross platform transport - in effect backup and restore across operating systems.
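(In 10gR2 that whole-database transport is RMAN's CONVERT DATABASE. A rough sketch, run against the source opened read only -- the database name, paths and platform string below are placeholders (see V$DB_TRANSPORTABLE_PLATFORM for the valid names), and the source and target must share the same endian format:)

-- hedged sketch: 10gR2 cross-platform database transport with RMAN
RMAN> convert database
2>      new database 'lnxdb'
3>      transport script '/tmp/transport.sql'
4>      to platform 'Linux IA (32-bit)'
5>      db_file_name_convert '/u01/oradata/windb/','/u01/oradata/lnxdb/';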


Cross Platform Transport

Su Baba, February 21, 2007 - 1:51 pm UTC

Do you mean I can just take a cold database backup on Windows and restore these files to a Linux system??!!

Tom Kyte
February 21, 2007 - 3:11 pm UTC

Su Baba, February 21, 2007 - 5:36 pm UTC

From the link you provided above, the doc says:

-----------------------------------------------------------
Oracle Database 10g Release 2 introduces database transport across the same Endian platforms which provides a fast and easy way to transport the whole database across platforms with the same Endian format. This feature uses RMAN to quickly convert the datafiles and create the new database. The newly created database is a copy of the source database but on a different operating system/hardware platform; for example, fast Windows to Linux or Solaris to HP-UX migration.
-----------------------------------------------------------

This means that if the operating systems have different Endian formats (e.g. between Solaris and Windows), the method will not work. This is contrary to what you said in

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7145231318014

Search for "Big and Little Endian February 24, 2005" on that page.
Tom Kyte
February 21, 2007 - 7:37 pm UTC

that is a cross platform TABLESPACE transport.


this feature I am pointing to is a cross platform DATABASE transport.


Hence, no "contrary" involved.

You can transport a TABLESPACE across different byte orders.

You can transport an entire DATABASE (including system and all) within the same byte order.

Wrong Oracle Patch

Deepak, March 15, 2007 - 12:43 pm UTC

Hi Tom,

Am in deep trouble hope you will help.

My OS is SOLARIS 64 bit
My Oracle Version is : 9.2.0.1 32 bit

By mistake one of the DBAs installed the 64-bit patch set (9.2.0.5) on the DB. Therefore linking is failing.

Please let me know how I can get around this issue without reinstalling the database software.

Can I install the 32-bit patch on the present installation?

Please help.
Tom Kyte
March 15, 2007 - 12:52 pm UTC

please utilize support

what about backup strategy

karthik, June 23, 2007 - 2:42 am UTC

Tom,
If we use export/import to migrate, won't the backups become invalid, since import deals with a new database incarnation?
Also, let me know whether using an Oracle patchset to upgrade will have any effect on backups taken before the day the patchset was applied (e.g. using a hot backup taken before the patchset was applied and applying archives from
pre-patch through post-patch).
Tom Kyte
June 23, 2007 - 9:02 am UTC

I don't ever suggest using exp and imp to ... oh wait, that is what I said originally way back when...

you can recover over an update like that - the update scripts that you ran in the database are logged, they are in the redo stream.

karthik, June 26, 2007 - 12:39 am UTC

Update scripts logged in the redo stream -- I think you mean those run after doing an upgrade. My question is about doing a point-in-time recovery to before the upgrade was carried out. Say I carry out an upgrade from 9206 to 9208 using exp/imp; when I need the database at a point in time before the import was carried out, all my archives generated pre-upgrade belong to a different database, so they are of no use for a recovery... right?
Also, you mentioned using the exp/imp method only to reorg/rebuild the database, but if the cost of doing such a reorg involves losing my ability to do a point-in-time recovery, what would be your suggestion?
Tom Kyte
July 02, 2007 - 9:09 am UTC

you do not carry out upgrades from 9206 to 9208 using export/import.

What you did was exported data from 9206 to a file.
You then imported that data into a 9208 database

they have no relation to each other (the 9206 and 9208 databases); they are separate and distinct.

You will not do a point in time recovery of anything here - exp/imp are data COPY tools, not backups.

Integration server is on 10.2.0.1.0 and Development server is on 9.2.0.1.0

Star Nirav, June 28, 2007 - 6:40 am UTC

Dear Tom,

My integration server is on 10.2.0.1.0 and my development server is on 9.2.0.1.0. I need to refresh data on demand, so I would like to automate the script. Can you assist me with this?

2) Doc # 132904.1 says that I cannot do a direct export/import from 9.2.0.1 to 10.2.0.1: first apply the 9.2.0.4 patchset (or any higher patchset release, such as 9.2.0.8). A full database export with the 9.2.0.4 (resp. 9.2.0.8) Export utility and a full database import with the 10.2.0.1 Import utility is a supported migration method.

Due to budget problems, we cannot go for an upgrade at this moment, so may I request you to suggest a simpler way of doing this?

Looking for your kind attention here..
Regards,
Star Nirav

Tom Kyte
July 02, 2007 - 10:17 am UTC

this is an impossible situation. How can you develop on a platform you are not deploying to?

you have entirely unpatched databases, at different release levels, you will not get anywhere fast here.

you want to save money? do the upgrade, then life becomes easy again and you stop wasting time trying to band-aid things together.

Gentle Reminder...

A reader, July 02, 2007 - 5:06 am UTC

::: Gentle Reminder :::

This is a very urgent requirement... It needs your kind attention, and I am expecting your input...

Dear Tom,

My integration server is on 10.2.0.1.0 and my development server is on 9.2.0.1.0. I need to refresh data on demand, so I would like to automate the script. Can you assist me with this?

2) Doc # 132904.1 says that I cannot do a direct export/import from 9.2.0.1 to 10.2.0.1: first apply the 9.2.0.4 patchset (or any higher patchset release, such as 9.2.0.8). A full database export with the 9.2.0.4 (resp. 9.2.0.8) Export utility and a full database import with the 10.2.0.1 Import utility is a supported migration method.

Due to budget problems, we cannot go for an upgrade at this moment, so may I request you to suggest a simpler way of doing this?

Looking for your kind attention here..
Regards,
Star Nirav

Tom Kyte
July 03, 2007 - 9:46 am UTC

and I gave you my input. You are wasting tons of money and time and resources - in the name of saving money, time and resources.

eg: you are fooling no one, except maybe yourself. you want to save money, time, and resources - get your development platform to be the same as your production platform - now.

you read the note, you know your 'options', I cannot change anything regarding those facts.



You are right but......

Star Nirav, July 06, 2007 - 5:41 am UTC

Hello Tom,

Thanks for your advice, but we want to mention that what you say is impossible is actually possible here, because:
1) We are doing a refresh only at the schema level, not at the database level --
testing is done with respect to the schema, and for that we cannot just upgrade the server to 10g!

2) Export can be done this way: I will connect with the 9i export utility using the 10g service name and take the export, and
for the import I shall connect with the same 9i IMP utility and the 9i service name.

Please don't assume things like cross-platform, etc. There are always workarounds and solutions.

We asked because we thought you are more experienced and expert than us and can better advise us.

Anyway, we didn't like the approach, but we don't have a choice other than to believe you.

You must agree that the response you are giving now is not like the responses you used to give.


God Bless you !!!
Star Nirav

Tom Kyte
July 06, 2007 - 12:58 pm UTC

but you asked how to work around a full export. so you "did not say what you meant to say", else why would you refer us to a note that only talks about full exports??!?!?!?

You must agree that what you said was not what you meant. or what you are doing.

You must agree that garbage in begets garbage out. I can only respond to that which is written. Assumptions - I try not to make any.


I didn't assume anything about cross platform.

I made NO ASSUMPTIONS

I only commented on the facts that

a) you develop on version X
b) you deploy on version Y
c) you say you have money problems that prevent you from doing version Y in A


and my comment was: you want to save money? go to version Y RIGHT NOW. You are wasting untold bits of money right now by being at version X and version Y. You will only waste more money that way.


9iR2 (32-bit on RHEL AS 3 on x86) -> 10gR2 (64-bit on RHEL AS 4 on x86_64)

Korsi, August 16, 2007 - 2:53 pm UTC

Greetings Tom,

We are planning to migrate one of our 32-bit Oracle Database production servers to a newly built 64-bit Oracle Database server. OS/DB version information is as follows:

Existing Production Server:
OS:
Red Hat Enterprise Linux AS release 3 (x86)
DB:
Oracle9i Enterprise Edition Release 9.2.0.8.0 (x86)
JServer Release 9.2.0.8.0 - Production

New Server being planned to migrate the above Database:
OS:
Red Hat Enterprise Linux AS release 4 (x86_64)
DB:
Oracle10g Enterprise Edition Release 10.2.0.1.0 (x86_64)

Our present database size is around 80GB.

As the server to be migrated is a Production server, we need to plan this migration to keep downtime as short as possible, be aware of the risks involved and estimate pros and cons of the migration method we choose.

I browsed through various documents on Metalink and OTN to see the possible options for this migration, but was unable to find any document that closely matches our requirement.

Request your suggestions to help us plan/implement this migration task in best possible way.

Also any links to appropriate documentation are appreciated.

Thanks in advance
Korsi

migration from windows

Shankar, January 26, 2008 - 12:38 pm UTC

Is exp/imp the only option to migrate a database from Windows to UNIX (AIX)? Our database version is 9.2.0.4, and it has to stay on this version because of a third-party application.

Thanks.
Tom Kyte
January 29, 2008 - 2:19 am UTC

you have database links

you have the ability to dump and reload the database using any tool you want to a)buy, b) download, c) invent

as well as export/import.
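
A minimal sketch of the database link route, for illustration (the link name, credentials, TNS alias and table are hypothetical):

-- created in the target (AIX) database, pointing back at the Windows 9.2.0.4 database
create database link windows_db
  connect to app_owner identified by app_owner_pwd
  using 'WINDB';

-- pull each table across, direct path
create table emp as select * from emp@windows_db;

-- or, for a table that already exists on the target
insert /*+ append */ into emp select * from emp@windows_db;
commit;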

clarification

amit poddar, August 04, 2008 - 8:45 pm UTC

Hi,

you mentioned the following steps above to migrate platforms using a materialized view on a prebuilt table:

a) create empty tables in new database....
b) put constraints etc on them (deferrable please, important for mview groups!)
c)
d) put mv logs on remote tables
e) keep fast refreshing after that (could switch d/c above to avoid possible double complete
refresh)
f) when ready to cut over -- kill access to old db, do a last refresh -- drop materialized views
and there you go.

shouldn't step "d" be the first one? I am asking because, if someone changes the table after step c has started but before step d, won't we miss those changes?

thanks
amit
Tom Kyte
August 05, 2008 - 9:08 am UTC

you won't miss changes (you left c blank?), it would know it needed to complete refresh.

therefore, it would make sense to create the logs, refresh complete, then refresh fast after that.
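
To make that flow concrete, here is a rough sketch of the prebuilt-table approach, assuming a primary-key based fast refresh (the table, columns and database link name are hypothetical):

-- on the OLD (source) database: a log so fast refreshes are possible
create materialized view log on orders with primary key;

-- on the NEW database: create the empty table first...
create table orders
( order_id   number primary key,
  cust_id    number,
  order_date date
);

-- ...then register it as a materialized view on that prebuilt table
create materialized view orders
  on prebuilt table
  refresh fast with primary key
  as select * from orders@old_db;

-- first refresh is complete, later refreshes are fast (incremental)
exec dbms_mview.refresh( 'ORDERS', 'C' )
exec dbms_mview.refresh( 'ORDERS', 'F' )

-- at cutover: stop access to the old database, do one last fast refresh, then
drop materialized view orders;   -- the table and its data stay behind

This is also why the constraints on the new tables are created deferrable when a set of these views is refreshed as a group, as noted in step b above.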

migration 9.2.0.1 to 10.2.0.1

haja, November 20, 2008 - 7:08 am UTC

Hi Tom,

I have a 9.2.0.1 32-bit Oracle database running fine. I want to migrate from 9.2.0.1 to 10.2.0.1. Is it possible or not?
Tom Kyte
November 24, 2008 - 11:31 am UTC

of course it is, we even wrote an entire document about it.


http://docs.oracle.com/docs/cd/B19306_01/server.102/b14238/toc.htm

migration of MF Cobol to oracle 10g

Reene, November 26, 2008 - 8:33 am UTC

Hi Tom

Can we migrate MF COBOL flat files to Oracle 10g?
Is there a tool to do this?
Any pointer on this would help.

We need to move an MF COBOL (on Linux) based legacy system to Oracle.
I tried Google, but to no avail.

Thanks
Tom Kyte
November 28, 2008 - 4:22 pm UTC

they are just files - you can use sqlldr if you convert them from ebcdic - you can use sqlldr even if you do not.

http://asktom.oracle.com/pls/ask/search?p_string=ebcdic


but, you would typically take this as a fine opportunity to re-engineer the application, I have a feeling a bunch of flat files loaded into a database and a bunch of cobol translated into php or java or whatever would just be about the ugliest looking thing ever.
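
If you do go the sqlldr route, a minimal control-file sketch for a fixed-width COBOL extract might look like this (the file, table and column layout are hypothetical; add a CHARACTERSET clause if the file is still EBCDIC):

LOAD DATA
INFILE 'customers.dat'
APPEND
INTO TABLE customers
( cust_id     POSITION(01:08)  INTEGER EXTERNAL,
  cust_name   POSITION(09:38)  CHAR,
  balance     POSITION(39:50)  DECIMAL EXTERNAL
)

and it would be run with something like: sqlldr scott/tiger control=customers.ctl direct=true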

migrate from 9i to 11g

A reader, October 31, 2011 - 1:01 pm UTC

Tom:

I have a few small 9iR2 Oracle databases running on an IBM AIX server at Location A. The DB sizes are small, about 5 GB - 15 GB.

I need to move those out of Location A to Location B (a completely different company), which will have a server running VMware.
The O/S on the new machine will be either Red Hat Linux or Windows Server 2008 (not sure yet).

I need to install a brand new oracle 11gR2 on the new machine, create new similar databases, and then import the 9i databases over.

What is the best thing to do this:

a) exp/imp (logical dump at the schema or DB level)
b) create a physical backup in 9i (data files ,control files, etc) and restore from it in 11g.
c) other ways?

Can you advise on the best method.

Someone is telling me you can't back up a 9i database and restore it into an 11g database (physical files).

We only have 1 or 2 schemas in each database, so should I opt for a schema-level export of all objects and data, then create a new database on the new 11g machine and import the 9i dump file?
Tom Kyte
October 31, 2011 - 2:27 pm UTC

While there is a cross platform full database transport with 10g and above, it only works if you have the same byte order.


You cannot go AIX to x86 directly - you can transport tablespaces, but not the entire database due to the byte ordering.


so, you can:

a) upgrade the 9i to 11g so you could data pump
b) export/import using the old fashioned, but very slow, export import tools
c) use transport to transport the data (that can go cross platform), but you'll still need to use exp/imp or data pump to get over the other stuff (procedures, views, sequences, etc)



Probably old fashioned export/import will be the path of least resistance.
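
For completeness, a rough sketch of option (c) -- cross-platform transportable tablespaces -- which assumes the source has first been brought up to 10g or later (the tablespace, directory, file names and platform string are hypothetical):

-- on the source: make the tablespace read only and export its metadata
alter tablespace app_data read only;

$ expdp system/manager directory=dp_dir dumpfile=tts.dmp transport_tablespaces=APP_DATA

-- convert the datafile byte order with RMAN on the source
-- (or use CONVERT DATAFILE on the target instead)
RMAN> convert tablespace app_data
        to platform 'Linux x86 64-bit'
        format '/stage/%U';

-- copy the converted datafile and the dump file over, then plug in on the target
$ impdp system/manager directory=dp_dir dumpfile=tts.dmp transport_datafiles='/u01/oradata/app_data_01.dbf'

Everything that does not live in the transported tablespaces (procedures, views, sequences and so on) still has to come across with exp/imp or Data Pump, as noted above.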

migration

A reader, October 31, 2011 - 3:45 pm UTC

Thanks Tom.

You made my day with your recommendation.

"Byte order" is a new term for me. What is it, and can you see it in Oracle dynamic views?

Also, would you do the export/import at the schema or the database level (assuming I have 1 or 2 schemas per database)? I am only interested in the application/data schemas and not the Oracle 9i stuff (i.e. SYS, SYSTEM, WMSYS, OUTLN, etc.).




Tom Kyte
November 01, 2011 - 2:46 am UTC

byte order has to do with the order the bytes in a word are stored on your computers architecture. Some systems store the bytes in a word (there are 2, 4 or 8 bytes in a word depending on whether you are 16, 32 or 64 bit) from hi to lo (sort of backwards if you look at it) and others from lo to hi.

See http://en.wikipedia.org/wiki/Endianness


I would use schema level exports definitely. Full database is too ugly for just two schemas - and by doing schema level you could achieve at least a tiny bit of parallelism easily by exporting two at a time and importing two at a time.
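
A minimal sketch of what that could look like with the old exp/imp (user names, passwords and file names are hypothetical); running two of these at a time gives you the small amount of parallelism mentioned above:

-- on the 9iR2 source
exp system/manager owner=APP_OWNER file=app_owner.dmp log=app_owner_exp.log

-- on the 11g target (pre-create the user, its tablespaces and grants first)
imp system/manager fromuser=APP_OWNER touser=APP_OWNER file=app_owner.dmp log=app_owner_imp.log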

Database migration

rizwan, December 09, 2011 - 4:11 am UTC

Hi Tom ,

We have a migration project wherein we are migrating databases from 10.2.0.3 on RHEL release 4 32-bit to 11.2.0.2 on Oracle Enterprise Linux update 7 64-bit. The database size is 80 GB and there are around 80 users in the database. For this example I am considering the database name to be ORCL.

Below are the approaches we are considering. Could you tell us which is the best approach according to you, and which will be the least time consuming? Also, is there anything critical I am missing that should be considered? Thanks in advance.

First option :- full exp/imp using datapump

1) Install oracle database 11.1.0.6 64 bit Oracle Enterprise Linux update 7 64 bit
2) Patch Oracle home to 11.1.0.7
3) Create Database named ORCL
4) shutdown and startup existing database in restricted mode to take consistent export
5) Take full database export using expdp
6) Copy dumpfile to new server using ftp in binary mode
7) Import dumpfile in new database
8) Resolve any errors in the import log (ignore errors like IMP-00015: following statement failed because the object already exists, and IMP-00061: Warning: Object type already exists with a different identifier)
9) Check count and status of all objects in all schemas (excluding default schemas ) and compare with source database objects ..


2nd option :- Schema level expdp/impdp

1) Install oracle database 11.1.0.6 64 bit Oracle Enterprise Linux update 7 64 bit
2) Patch Oracle home to 11.1.0.7
3) Create Database named ORCL
4) Create public synonyms, roles , nondefault profiles in new database
5) shutdown and startup existing database in restricted mode to take consistent export
6) Take full database export using expdp
7) Copy dumpfile to new server using ftp in binary mode
8) Import dumpfile using schemas=username option for all schemas excluding default schemas
9) Check count and status of all objects in all schemas (excluding default schemas) and compare with source database objects.

3rd Option :- Using COLD backup

1) Install 10.2.0.3 64 bit on new server
2) Install 11.1.0.6 64 bit on new server
3) Patch 11.1.0.6 to 11.1.0.7
4) Patch 10.2.0.1 to 10.2.0.3
5) Shutdown database ORCL on existing server
6) Copy all datafiles, redologfiles , controlfile , password file, parameter file to new server
7) Startup database in upgrade mode from 10g home (To convert database to 64 bit )
8) Run script utlirp.sql . (To convert database from 32 bit to 64 bit )
9) Shutdown database and startup in normal mode and run utlrp.sql
10) Upgrade database to 11g using normal upgrade method


4th Option :- Using Cold Backup

1) Install 11.1.0.6 64 bit on new server
2) Patch 11.1.0.6 to 11.1.0.7
3) Copy utlu111i.sql script from the new server's 11g home to the existing server.
4) Run pre upgrade script (utlu111i.sql) on existing server
5) Make any changes which needs to be done while database is running in 10g home.
6) Shutdown database ORCL in existing server and Copy all datafiles, redologfiles , controlfile , password file, parameter file to new server
7) Startup database from 11g home in upgrade mode
8) Run script utlirp.sql . (To convert database from 32 bit to 64 bit )
9) Run script to upgrade database to 11g and perform post upgrade tasks
10)Shutdown and startup database in normal mode





Tom Kyte
December 11, 2011 - 2:34 pm UTC

why all of the mentions of 11.1.0.6/7 if the goal was 11.2.0.2?


Database migration

rizwan, December 12, 2011 - 3:55 am UTC

Sorry Tom,
The intention was 11.1.0.7, not 11.2.0.2. Kindly give your feedback.

Regards,
Rizwan
Tom Kyte
December 12, 2011 - 6:11 am UTC

do it however you want in a manner that doesn't use exporting and importing and test it multiple times. having the 10.2.0.3 and 11.1.0.7 on the target machine sounds practical.


I don't know your requirements well enough to say which is "best", for example, if you told me "downtime is to be limited to the smallest window", I might suggest streams and a cutover - for example.

Data Migration Activity Help

ASHISH MISHRA, December 13, 2011 - 3:42 am UTC

Dear Sir's
Please help me with my database migration from the old database structure to a new database structure. We have developed a new application and want to switch from the old one, using the same data but not the same database structure (schema); the new database has a different structure from the old one.

What I did is below:

1. Created a new database with a different structure from the old one.
2. Created "insert into ... (select ...)" scripts to fetch the data from the old database (with its different structure) and put it into the new one.
4. Verified and tested that data and cross-checked it against the old data.

But since the old system is live and its data is changing, kindly suggest how to go about the data migration. The user cannot give more than 4 hours of downtime, and he only gives me a cold backup on a tape drive, which I restore on a Solaris server (the user has a Solaris-based DB server); then we run those scripts to migrate from their DB structure to ours.

How can we do all of this in 4 hours? The database is about 100 GB and it took 5 days when we did it the first time. We also have an old migrated DB that is current up to May 2011. Should I start from scratch, or use the old migrated DB and migrate incrementally (which I don't know how to do, as rows will have been added/updated/deleted since May 2011)? Either way, I want it to complete within 4 hours.

please help with your expert suggestions.



Furthermore, I am posting one of my insert-into-select scripts below. Kindly help me tune it, as it is taking hours or even days to complete.

*******************

INSERT INTO LABRESULTPARAMETER
(
LAB_RESULT_PARAMETER_ID,
PARAMETER_ID,
RESULTVALUE,
LAB_RESULT_ID,
INTERNALREMARKS,
CREATEDBY,
CREATEDDT,
UPDATEDDT
)
SELECT
SEQ_LABRESULTPARAMETER.NEXTVAL,
IP.INVESTIGATION_PARAMETER_ID,
VALUE,
LS.LAB_RESULT_ID,
DESCRIPTION,
1,
SYSDATE,
SYSDATE

FROM TRAN_05_07@CHIKIT_HISMIGR T

INNER JOIN TRAN_05_03@CHIKIT_HISMIGR LO ON LO.WORKORDERNO = T.TRAN_05_03_WORKORDERNO
INNER JOIN SERVICEMASTER SM ON SM.SERVICE_CODE = LO.MAS_TAB_08_TESTID
INNER JOIN LABSERVICEMASTER LSM ON LSM.SERVICE_ID = SM.SERVICE_MASTER_ID
INNER JOIN INVESTIGATIONPARAMETER IP ON
IP.PARAMETERSEQUENCE = SUBNO AND IP.INVESTIGATIONPARAMETERNAME = SUBNAME
AND IP.LABSERVICEMASTERID = LSM.LAB_SERVICE_ID
INNER JOIN LABORDER LORD ON LORD.LABORDERNO = T.TRAN_05_03_WORKORDERNO
INNER JOIN LABRESULT LS ON LS.LAB_ORDER_ID = LORD.LAB_ORDER_ID
WHERE T.VALUE IS NOT NULL AND LO.MAS_TAB_08_TESTID NOT IN
(SELECT H.MAS_TAB_08_TESTID FROM MIG_SUBPARAM_HISTOCYTO@CHIKIT_HISMIGR H);
*******************

Thanks & Regards

Ashish Mishra
Tom Kyte
December 13, 2011 - 7:33 am UTC

why are you creating a new database? why not just do it in a single database? 100gb is pretty small - using parallel direct path (insert /*+APPEND*/) after temporarily setting the database to be in noarchivelog mode (but do set back after you migrate) and then creating indexes should go pretty quick.



upgrade

lalu, December 13, 2011 - 6:48 am UTC

Approximately 100 GB and it takes 5 days!
Doesn't that seem too long? How about doing it in parallel --
updating multiple tables simultaneously, using parallel, etc.?

100 GB is not so big nowadays.
Otherwise, try using triggers to catch the DB changes during your upgrade process and apply them after the upgrade.

Migration Suggestion

Ashish, December 14, 2011 - 5:47 am UTC

Dear Sir
Would you kindly explain in more detail how to go about it in my case?

1. I am creating a new DB because I have created a new application which requires a new DB structure, so kindly suggest a technique by which I can run my scripts to fetch data from the old DB and put it into the new DB in the fastest way possible.


Thanks & Regards
Ashish
Tom Kyte
December 15, 2011 - 7:31 am UTC

I would still suggest a single database. You can create a new schema in there quite easily.

So, my answer stands. anything else is going to add a LOT of time to the entire process.

Migration Help

Aashish, December 16, 2011 - 4:14 am UTC

Dear Sir
OK, I created both schemas in the same database and on the same server (physically). Now I want to run my (insert into ... select ...) migration scripts to move data from the "schema A" structure to "schema B". Kindly suggest the best ways to reduce the time to a minimum, for example:

1. Can disabling indexes and constraints, or somehow putting the tables into NOLOGGING, improve performance in schema A (the source)?

2. Can disabling indexes and constraints, or somehow putting the tables into NOLOGGING, improve performance in schema B (the destination)?

3. Can using hints like /*+ append */ or parallel improve performance?

Or are there any other ways I can use to improve performance? Kindly suggest.


Thanks & Regards
Aashish

Tom Kyte
December 16, 2011 - 5:38 pm UTC

already answered that the first time:


why are you creating a new database? why not just do it in a single database? 100gb is pretty small - using parallel direct path (insert /*+APPEND*/) after temporarily setting the database to be in noarchivelog mode (but do set back after you migrate) and then creating indexes should go pretty quick.



put database in noarchive log mode

using parallel direct path insert /*+ append */ (no redo, no undo)

create indexes (and constraints) afterwards using parallel when appropriate (no redo, no undo)

then put database back into archivelog mode.
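
Putting those steps together -- a minimal sketch, with hypothetical schema, table and column names standing in for the real transformation query:

alter database noarchivelog;         -- once, while the database is mounted, before the load

alter session enable parallel dml;

insert /*+ append parallel(t 4) */ into new_schema.target_table t
select /*+ parallel(s 4) */ s.*
  from old_schema.source_table s;    -- or the full transformation join
commit;                              -- needed before this session reads the table

create index target_table_ix on new_schema.target_table(some_column)
  parallel 4 nologging;

alter database archivelog;           -- put it back once the migration is done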


Migration Help

Aashish, December 16, 2011 - 4:18 am UTC

Dear lalu
Can you suggest how I can run my scripts in parallel so they complete in less than 5 hours?

** I would like to point out that most scripts depend on the results of previous scripts.

Thanks & Regards
Aashish

Migration Help

Aashish Mishra, December 17, 2011 - 3:33 am UTC

Dear Sir
Thanks very much for your help. I am modifying my scripts according to your suggestion to see the difference. Kindly suggest one more thing: is there any way in Oracle to set some flag in the DB so that later I can identify the rows inserted/deleted/modified after the flag was set, so that I can do the migration incrementally?

* Anything apart from triggers -- the user won't allow us to put triggers on their live application, as their current DB is already slow.

Thanks & Regards

Aashish
Tom Kyte
December 17, 2011 - 9:24 am UTC

nope, not really.

There is a tiny chance to use flashback query - but it would require all of the undo to be maintained during the entire migration process - and even then, the flashback queries would be fairly expensive to run against the entire database at the end (meaning - they would take a long time and then you'd have to do them again and again and again because you'd never really get caught up)

If you are trying to do an online migration - something you actually *never* mentioned, then I suggest you investigate golden gate, use two databases and program the upgrade. It'll take a bit of work and time.


funny - you mentioned you had 4 hours of downtime to do this before...

migration

sam, December 17, 2011 - 10:09 am UTC

Tom:

I noticed people use the words "migration" and "upgrade" and "export" etc. synonymously.

I think in Oracle's terminology there is a difference. Can you clarify the meanings and when it is best to use each method?

Tom Kyte
December 17, 2011 - 12:17 pm UTC

depends on the context, use it in a sentence - and I'll let you know what they probably meant.


An upgrade is pretty darn clear - you are changing versions.

migration should involve moving from one machine to another

export is a tool you shouldn't be using anymore.

Migration

Aashish, December 18, 2011 - 11:31 pm UTC

Dear Sir
I am still saying that I will get 4 hours of downtime. The logic behind asking about recording incremental changes is that, if it were possible, I could run the migration in two parts: the first part would run up to a date 2-3 days before the final migration date, and then on the migration night I would only have 2-3 days of data to migrate, which is very easy to do in 4 hours.

Sir, this was my idea to make the 4-hour migration more practical and workable.

Thanks & Regards
Aashish
Tom Kyte
December 19, 2011 - 7:33 am UTC

you would need a tool like golden gate to help you with that.

...
If you are trying to do an online migration - something you actually *never* mentioned, then I suggest you investigate golden gate, use two databases and program the upgrade. It'll take a bit of work and time.
....


applying deltas from one schema to another totally different schema is a very tricky proposition.

Migration Help

Aashish, December 20, 2011 - 11:21 pm UTC

Dear Sir
I was facing an error while running insert /*+ append */, and then I came to know that I have to issue a commit after every insert /*+ append */, which has increased the time again. Sir, is there any solution to that?

And sir, I am not doing an online migration, but the downtime is so short (4 hours) that it makes me consider such alternatives.

Thanks & Regards
Aashish
Tom Kyte
December 21, 2011 - 7:41 am UTC

you do NOT have to commit after each one. You do need to commit before you can read that table in your session (other sessions can read it fine).

since you only use insert append for a large bulk insert operations - such as a data load - the need to actually query the table immediately should be very very small.

What is your use case that mandated you bulk load and then immediately read the table? Seems a bit 'unique'.

Also - how could this be a big performance issue? The time it takes to commit 1 row is about the same amount of time it takes to commit 1,000,000 rows (commit times are rather flat). Since you are doing LARGE bulk loads, the amount of time you spend committing is rather trivial as compared to the amount of time you spend direct path-ing. I cannot imagine a commit adding more than LESS THAN 1% to the overall runtime with insert /*+APPEND*/ - because, again, you are doing LARGE bulk loads by definition.
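
To illustrate with a hypothetical table T: the commit is only needed before the same session reads the table it just direct-path loaded.

insert /*+ append */ into t select * from t_source;

select count(*) from t;
-- ORA-12838: cannot read/modify an object after modifying it in parallel

commit;

select count(*) from t;    -- works fine now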

Migration Issue

Aashish, December 23, 2011 - 4:58 am UTC

Dear Sir
Thanks for the suggestion. Well, I added a commit after each insert /*+ APPEND */, as in most cases one query depends on the previous ones. And sir, I have not declared any constraints on the source and destination tables, and I am creating indexes only when they are needed to fetch records on the columns used in the conditions.

Sir, am I using the right approach to reduce the time?


Thanks & Regards
Aashish
Tom Kyte
December 23, 2011 - 8:45 am UTC

probably

MySQL to Oracle Migration

Ankit, January 24, 2013 - 4:25 am UTC

Hi Tom,

We are planning to migrate from a MySQL DB to an Oracle DB; our main DB size is around 150 GB. The code is in Java and uses Hibernate to generate the SQL queries, so we do not generate SQL with the purpose of exploiting the features of any particular DB like Oracle/MySQL, etc.

From your experience, can you explain whether we need to change anything in our code (which is not written with any target DB in mind) so as to fully exploit Oracle's functionality?

Regards,
Ankit
Tom Kyte
January 30, 2013 - 12:50 pm UTC

so we do not generate SQL with purpose of exploiting the features of any particular DB like Oracle/MySQL etc.

sadder words have never been written. never...


From your experience can you explain to me if we need to change anything in our code (which is not written with any target DB in mind) so as to fully exploit Oracle's functionality?

yes, especially if you've just had everything generated "database independent"

can I tell you what? of course not, I don't know what you do, what you need to do, how you do it, why you do it and so on. I could talk for days and days (weeks really) on the features of the database - without saying a thing that was relevant to you because I don't know what you do at all...

MySQL Features :(

Ankit, February 01, 2013 - 3:20 am UTC

Hi Tom,

It is indeed sad, but I do not think that MySQL offers any features which can be exploited by application design: there is no concept of bind variables like in Oracle, the query statistics are collected in real time, and there is a query cache which stores results but in most cases causes more trouble than benefit due to its cache-locking mechanism.

This is why I want to make sure that when we move to Oracle we exploit its scalability features fully.

Let me give you a bit of background - We use Hibernate to generate queries using MySQL Libraries i.e. by using the
Property org.hibernate.dialect.MySQLDialect in the Hibernate config parameters.

99.9% of the database task is done using SQLs and there are ~10 stored procedures written for non-code business purposes.

We now have a requirement to sustain the following DB update rates for the price and inventory columns of these product categories:

--- 0.8 million updates per hour for one of the categories.
--- 0.25 million updates per hour for the rest of the categories.

This data will arrive via feeds that are processed on an hourly basis.

We are not confident that MySQL can handle this type of traffic and are planning to move to Oracle.
I understand that to build a scalable application using Oracle DB we need to use bind variables.

Now I ask: what else (other than using bind variables and stored procedures) can we do to sustain this requirement using Oracle DB?
Tom Kyte
February 01, 2013 - 8:39 am UTC

mysql fully supports bind variables - are you kidding???

are you sure you are exploiting mysql as it stands?


Now I ask what else (other than using bind variables and using stored procedures) we can do to sustain this requirement using Oracle DB?


in order to be successful at something, you have to understand it - as do your developers.

suggest you start by reading:
http://docs.oracle.com/cd/E11882_01/server.112/e25789/toc.htm

from cover to cover - bookmark:

http://www.oracle.com/pls/db112/homepage

review:

http://docs.oracle.com/cd/E11882_01/server.112/e16638/toc.htm

learn SQL (really learn sql, really really truly learn sql, not just how to have it automagically (poorly) generated)


Confused

djb, February 01, 2013 - 8:56 am UTC

Not sure what to make of this :-

'mysql fully supports bind variables' - the previous poster doesn't say that, he says the opposite...
Tom Kyte
February 01, 2013 - 2:27 pm UTC

exactly, previous poster is wrong.

I was saying

"mysql fully supports bind variables" - "are you kidding, as in are you kidding me by saying they don't"


mysql does support bind variables

Chuck Jolley, February 01, 2013 - 10:07 am UTC

It seems mysql does support bind variables.
So it's puzzling what the question is talking about.

NB. Not that I know anything about MySQL, but google came up with page after page of examples of variable binding in mysql.

Bind variables

Ankit, February 04, 2013 - 10:47 pm UTC

Hi Tom,

I got that a bit incorrect; what I meant was that there is no performance incentive to using bind variables in MySQL as there is in Oracle.

Of course MySQL supports bind variables, which, as you say, need to be used to secure the system against SQL injection attacks.
Tom Kyte
February 06, 2013 - 8:06 am UTC

I got that a bit incorrect; what I meant was that there is no performance incentive to using bind variables in MySQL as there is in Oracle.


sure there is.

http://www.mysqlperformanceblog.com/2006/08/02/mysql-prepared-statements/


Needed help

A, April 03, 2013 - 12:22 pm UTC

Hello Tom,
Any suggestions on how to migrate an existing 10GB database from MySQL to Oracle 10g?

I have gone through the link below. Don't you think doing it through SQL Developer will take a very long time? To be honest, I'm not sure; that's why I thought of asking you.

http://www.oracle.com/technetwork/database/migration/mysql-093223.html


Thanks


Tom Kyte
April 22, 2013 - 1:49 pm UTC

10gb is pretty tiny today in 2013 - you'd be surprised. Most of your data in that 10gb isn't really data - it is indexes and metadata, your real data might only be 3-4gb in size (we'll just rebuild indexes).

you can always go with a dump and load - dump the data from mysql in a csv format and then use an external table to reload.


but you'll be surprised at how fast 3-4 GB can be moved these days.
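
A minimal sketch of that dump-and-reload via an external table (the directory path, file and column names are hypothetical):

create directory load_dir as '/stage/mysql_dump';

create table products_ext
( product_id    number,
  product_name  varchar2(100),
  price         number(10,2)
)
organization external
( type oracle_loader
  default directory load_dir
  access parameters
  ( records delimited by newline
    fields terminated by ',' optionally enclosed by '"'
    missing field values are null
  )
  location ( 'products.csv' )
);

-- then a direct path load into the real table; build indexes afterwards
insert /*+ append */ into products select * from products_ext;
commit;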