Question and Answer

Connor McDonald

Thanks for the question, Johan.

Asked: May 02, 2004 - 5:59 pm UTC

Last updated: June 28, 2016 - 1:03 am UTC

Version: 8.1.6.2

Viewed 10K+ times

You Asked

Tom

You mentioned that the AskTom site was recently upgraded to 10g. I take it, therefore, that you consider it to be of sufficient quality to run production code, even though the officially released version is only a couple of months old?

We have an in-house system that is currently running on Oracle 8.1.6.2 on Tru64 Unix, and we are in the process of planning an upgrade to 9iR2. Would you recommend that we upgrade directly to 10g, or should we rather consider going to 9iR2 first?

We will obviously be testing everything, but since the testing is quite intensive, I would like to know whether 10g can already be considered stable enough for running mission-critical production applications, before we invest time in testing with it.

and Tom said...

Oracle Corporation ran on Oracle10g for quite a while before releasing it to customers.

The bug database, email, many extremely large-scale production systems.

Yes, it is of "sufficient quality".


If you were faced with the choice today (may 2004) of

a) 9ir2
b) 10g

I would say b) 10g -- hands down. No question. I'm saying this not because we get "more" from you going to 10g (we don't; it's the same to us whether you go 9iR2 or 10g) -- but because you get "more".


Note: I'm saying "if you are going to upgrade" -- not "take every system you have and immediately upgrade". If an upgrade is going to happen, I would consider only 10g at this point.

Rating

  (316 ratings)


Comments

Timely suggestion

rk, May 04, 2004 - 1:03 am UTC

Your suggestion to go to 10g from 8.1.7 instead of 9i came at the right time. In fact, many like me must have been expecting a clear reply like this.

Is there any specific document that talks about what to take care of when migrating from 8.1.7 to 10g?

The reason for this question is that many parameters in the init.ora (like sort_area_size) are superseded by other parameters in 9i itself.

Any document that gives parameter-change information between 8i and 10g would be of great value for many people like me.

Tom Kyte
May 04, 2004 - 7:17 am UTC

the new features guides for 9iR1, 9iR2 and 10g are something you'll want to read -- the parameter changes are talked about in there, and they will give you a good overview of what you just installed.
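As a hedged illustration of the parameter consolidation rk asks about (the specific values below are made up, not recommendations): from 9i onward, the per-process sort and hash work-area parameters can be replaced by a single instance-wide PGA target.

```sql
-- 8i style (per-process work areas, set in the init.ora):
--   sort_area_size = 1048576
--   hash_area_size = 2097152
--
-- 9i/10g style: one instance-wide memory target for all work areas.
alter system set workarea_size_policy = auto;
alter system set pga_aggregate_target = 512M;

-- confirm the settings from SQL*Plus:
show parameter workarea_size_policy
show parameter pga_aggregate_target
```

With workarea_size_policy = AUTO, the old *_area_size settings are ignored for dedicated server sessions and the database sizes work areas itself within the target.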

But that's something new

Igor, May 04, 2004 - 4:41 am UTC

I heard Ellison say at the presentation of 9i that normally even Oracle waits 6 months before migrating to a new version. Or do you want to say that it's different because it is 10._1_? :-)



Tom Kyte
May 04, 2004 - 7:23 am UTC

I've never heard him say that. We were on 10g well before 10g was released. Same with 9i -- the bug database is usually the first to go. Relatively "simple" but huge and heavily used.

my advice

A reader, May 07, 2004 - 5:00 am UTC

Hi

My advice is NEVER EVER migrate to the first release of a new version, even if it's stable. Why? Because in a year or so you are forced to migrate again, to the later release, because the first release is unsupported.

So wait for the second release (as with 9i) or the third (as with 8i) if you don't want to waste time migrating.

agree

A reader, May 07, 2004 - 1:06 pm UTC

I agree with the last poster. DO NOT upgrade to the first release. Always migrate to the terminal release.

Tom Kyte
May 07, 2004 - 1:24 pm UTC

define terminal release.

that's not known until there is, well, a new release.


I respectfully disagree with this thought process (spoken as a person at a company that doesn't get an extra dime whether you are on 7.x or 10g for an upgrade).

the point is -- if you are *going* to upgrade, there is no point in upgrading to old versions, especially when the new release has many salient manageability features that make it more than worth considering.

the point is Oracle forces you to migrate several times

A reader, May 07, 2004 - 3:49 pm UTC

When 8.1.5 was released we upgraded from 7.3.4. After just 6 months (or even less) we were forced to upgrade to 8.1.6, then after around 8 months or so we were forced again to migrate to 8.1.7. We got burnt. A database migration is considered major; you can't play with that, and we spent tons of $$$ and time. But hey Tom, this is our opinion, just shared so others might think twice before proceeding to migrate to a first release. (I have already seen bugs in 10.1 that will only be fixed in 10.2!!!!)

The terminal release is whenever Oracle Corporation says so. Always wait until then if you don't want to waste money and time, and to hear from Metalink: "sorry sir, you must upgrade to the next release because the bug is only fixed there", or "sorry sir, but only the terminal release will be supported, so upgrade".

I remember a feature (was it Streams?) which was announced as a new feature in 9i but did not make it until Release 2, the terminal release. Another reason to wait for terminal releases!


Tom Kyte
May 07, 2004 - 4:05 pm UTC

I'm helping a customer today (and for the last 3 days) on 8.1.6.3. They have active open TARs, patches being delivered, and support being offered. No one *forced* you into anything. Many people are still running 7.3.4, 8.1.5, 8.1.6, and even earlier versions. As long as you aren't making major changes (new hardware, more capacity, new features), they are not really motivated to change.

I see bugs in 9iR2 that fall into the same category as you describe -- to get them fixed in 9iR2, we'll backport from 10.2 as well.


following "wait for the terminal release cause it'll have more features" gets you into "I'll never upgrade cause there is always another release with another set of functions coming", doesn't it?

You are right, all are entitled to their opinion, we just have to agree to have different opinions on this topic.

Moving from terminal to terminal

Mark J. Bobak, May 07, 2004 - 5:20 pm UTC

Tom,

I think the point people are trying to make is that 10.1 is
the initial release of 10g. There will be a 10.2, sooner or
later (and possibly a 10.3?).

When 9.0.1 came out, people who were on 8.1.7 and jumped to
9.0.1, were soon "forced" to move to 9.2.0. "Forced"
meaning that 9.0.1 was de-supported relatively quickly,
and to stay supported, they needed to move to 9.2.0.
In fact, if I remember correctly, 9.0.1 was desupported
*before* 8.1.7.

I think that is where people get the idea that
they ought to move from terminal release to terminal
release, where the support cycles are the longest.

My two cents,

-Mark

Tom Kyte
May 10, 2004 - 6:36 am UTC

As I said, 8.1.6.3 -- still being used, still being supported. 9.0.1, same thing (9.0.1 is still supported today, and until 2006). No one is "forced".


But in any case -- 10.1, supported till 31-JAN-2007 for error correction support (new patches) and 2010 for support.

There will be a 10.x, 10.y, 11.z, 12, 13, 14, whatever -- just about every year. (I've been at Oracle since 1993; there has been:

7.0
7.1
7.2
7.3
8.0
8.1.5
8.1.6
8.1.7
9ir1
9ir2
10gr1

11 years, 11 versions)

"terminal"

Connor McDonald, May 08, 2004 - 9:46 am UTC

Ah yes, but (from my own experience which may be an anomaly) I can count on one hand the number of times a request for backport has been denied, whereas "you'll need to upgrade to version X" is a far more common occurrence.

This is not a criticism per se (hell, if I was a software vendor I'd be pushing people toward current releases as well), but it does reflect the predicament of Oracle customers, who suffer the paradox of:

- you get the best support on current releases/patchsets
- current releases/patchsets come out frequently
- it's a lot of work to move to a new release

Cheers
Connor

Tom Kyte
May 10, 2004 - 6:47 am UTC

but -- as long as a product is in ECS (error correction support), that is not the right answer (moving to a patchset -- e.g. 9.2.0.4 -> 9.2.0.5 -- sure, but not major releases).

And 10g is slated to be in ECS until 2007

support Doc ID: 190435.1

whereas 9ir2 is 2005

Product & Product Version(s)   Platform(s)     End of ECS    End of ES     End of EMS    Migration Path
*Oracle Database 10.1.0.x      ALL Platforms   31-JAN-2007   31-JAN-2010   31-JAN-2009   TBD
*Oracle Database 9.2.0.x       ALL Platforms   31-DEC-2005   31-DEC-2008   31-DEC-2007   TBD


sooo, 10.1 will be around for much longer than 9iR2, support-wise.

Ever considered a job in marketing ?

Adrian, May 10, 2004 - 2:28 am UTC

>> Same with 9i -- the bug database is usually the first to
>> go. Relatively "simple" but huge and heavily used.

I'm not sure that you want to be advertising a huge bug database... ;o)

Tom Kyte
May 10, 2004 - 7:58 am UTC

it goes back forever....

it contains everything

including internal problem reports (eg: when one of our internal systems is misbehaving)

including all tars

including lots of stuff that isn't "a bug"

:)

Oracle Product Support

John Simpson, May 10, 2004 - 7:02 am UTC

Tom

Interesting comments regarding upgrading to 10g database, but when are the Oracle Development tools (Forms, Reports etc) going to be supported against 10g database? Seems odd that Forms 10g is (at the moment) certified against 9iR2, but not 10g database

John

asktom on Linux

Jeff Hunter, May 10, 2004 - 10:15 am UTC

I see from the home page that asktom.oracle.com is moving to Linux. I've been involved in several small migrations from Solaris to Linux and have found Linux to be less reliable than Solaris in a single host/single instance configuration. Could you explain a little (without divulging trade secrets, of course) about the architecture of the new asktom (OS version, no of nodes (RAC I assume), general config of nodes, etc).

Tom Kyte
May 10, 2004 - 11:24 am UTC

define less reliable?



running RHAS 3.0 on Dell, 2 cpus, single instance.

wait at least until this is supported

A reader, May 10, 2004 - 10:26 am UTC

10g OEM metadata is only supported in 9iR2, funny thing huh :-(

Wait till the new release, at least until the metadata is supported in 10g.

less reliable

Jeff Hunter, May 10, 2004 - 11:51 am UTC

Less reliable as in the linux box (RHEL 3.0, HP, 2 cpu) hangs about twice a month where you can't logon to the database, can't telnet to the box, can't even get a console connection. The box has to be powered off and back on which always leaves all sorts of recovery fun for the DBAs. We've experienced this behaviour on just about all our RHEL3 boxes, whether they run Oracle, JBOSS, or even plain old NFS.

The same services can run on a similarly configured Solaris box with virtually no interruption.

Tom Kyte
May 10, 2004 - 12:48 pm UTC

I've never experienced that. Running on Dell. I've even had my laptop experience more uptime than that.


Most all of our customer facing demo systems are running on that stuff. A hang like that would be unacceptable.

My internal linux server has been up for the last 48 days straight (10's of thousands of hits a day on that -- database hits, not webpages).

Sounds like something else is afoot. have you opened any service tars with RH, what have you done to diagnose the system hang? Perhaps you have some hardware issue/incompatibility.

I've rebooted linux as frequently as I would solaris (infrequently).

to jeff

Marvin, May 10, 2004 - 12:34 pm UTC

Hi

We have 9iAS R1 and Oracle 8.1.7.4 running on two Suse Enterprise 7 boxes, database uptime is so far almost 5 months. 9iAS 3 months!!!

Maybe it's an RHAS 3 bug? They released a patch CD a few months ago -- did you update?

Thanks.

Jeff Hunter, May 10, 2004 - 2:39 pm UTC

I've suspected it is hardware or OS related as it doesn't matter what app we run, but have had a hard time proving it. We've logged problems with RH and HP (2 months ago) but to no avail. The biggest problem seems to be that since we can't get to the box when it's hanging, we can't diagnose it. The condition doesn't leave any error messages that we can find. We have just applied a bunch of patches to the OS and hopefully this will solve the problems, but we're really gun-shy about moving heavy duty production stuff to Linux.

Development Tools

John Simpson, May 11, 2004 - 4:04 am UTC

Tom

Interesting comments regarding upgrading to 10g database, but when are the
Oracle Development tools (Forms, Reports etc) going to be supported against 10g
database? Seems odd that Forms 10g is (at the moment) certified against 9iR2,
but not 10g database

John

Tom Kyte
May 11, 2004 - 8:37 am UTC

the tools work and are supported against these databases.

the ias *infrastructure database* -- not yet.

infrastructure database seems certified

A reader, May 11, 2004 - 9:35 am UTC

from certification matrix for linux

OS                                 Product       Server Status   Addtl. Info.   Components
SuSE SLES8 31-bit                  10g (9.0.4)   10g Certified   None           Yes Yes
Red Hat Enterprise Linux AS/ES 3   10g (9.0.4)   10g Certified   None           Yes Yes

the issue is with OracleAS InterConnect is only certified with 9.2.0.4 metadata repository

What about the cool new stuff!

Tim Hall, May 13, 2004 - 8:24 am UTC

Hi.

I agree that the cost (time and money) of migrating can be high but nobody has mentioned all the new features. Oracle 10g includes some really nice stuff and it seems to be pretty stable. Anything that makes my job easier saves money in the long run as I can do more work per day.

As for App Server, 9iAS was a mess and AS10g actually works. The decision to migrate was simple :)

The biggest stumbling block for a full-scale 10g upgrade for me is that OCS is not certified against 10g yet, so I'm forced to keep 9i for some things.

Cheers

Tim...

Tom Kyte
May 13, 2004 - 10:45 am UTC

I would say b) 10g -- hands down. No question. I'm saying this not because we
get "more" from you going to 10g (we don't; it's the same to us whether you go
9iR2 or 10g) -- but because you get "more".
^^^^^^^^^^ that's what I meant there :)


there is a ton of truly useful stuff in 10g, useful to developers and DBAs alike.

10g adds unknown database objects to my schema?

Peter Tran, May 13, 2004 - 10:48 am UTC

Hi Tom,

I've upgraded one of our test databases to 10g to give it a spin with our applications before we upgrade production. I've noticed that 10g is inserting unknown database objects into my schema. I did the standard install, so I may have all the monitoring options turned on.

E.g.

peter@tst10g> select table_name from tabs;

TABLE_NAME
------------------------------
BIN$2dnSP5SNM9LgMBCsSwEllQ==$0
BIN$2dnSP5SYM9LgMBCsSwEllQ==$0
BIN$2dnSvV/6eX/gMBCsSwElmw==$0
BIN$2dnSvV/veX/gMBCsSwElmw==$0
BIN$2dnTCvfENbLgMBCsSwElnQ==$0
BIN$2dnTrJe9ToHgMBCsSwElnw==$0
...

peter@tst10g> select index_name from ind;

INDEX_NAME
------------------------------
BIN$2dnSP5SMM9LgMBCsSwEllQ==$0
BIN$2dnSP5SXM9LgMBCsSwEllQ==$0
BIN$2dnSvV/5eX/gMBCsSwElmw==$0
BIN$2dnSvV/ueX/gMBCsSwElmw==$0
BIN$2dnTCvfDNbLgMBCsSwElnQ==$0
BIN$2dnTrJe8ToHgMBCsSwElnw==$0
BIN$2dnTrJf/ToHgMBCsSwElnw==$0
BIN$2dnTrJf0ToHgMBCsSwElnw==$0
...

1) Are these objects to be expected?
2) What process is creating them?
3) Should I leave them or drop them periodically?
4) Should I be concerned with diskspace and performance?

Thanks for your help.

-Peter

Tom Kyte
May 13, 2004 - 11:24 am UTC

1) yes, they are recycle bin objects.

2) you are, when you drop things :)

3) you can purge them if you like, up to you

4) diskspace will be automagically reclaimed for you -- as you need space, these extents will be reused and the objects will disappear from the recycle bin


See
http://asktom.oracle.com/~tkyte/fb.htm

starting at slide 13 for a quick overview of what is happening here.
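For readers who want to see the mechanism Peter stumbled on, here is a minimal sketch of the 10g recycle bin in action (the table name T is hypothetical):

```sql
create table t ( x int );

drop table t;                       -- t is renamed to a BIN$... object

select object_name, original_name, type
  from recyclebin;                  -- lists the dropped table (and its indexes)

flashback table t to before drop;   -- 10g: undo the drop, restoring the name

drop table t purge;                 -- PURGE bypasses the recycle bin entirely
purge recyclebin;                   -- or reclaim space for everything dropped
```

This matches Tom's answer above: the BIN$ objects cost nothing until space is needed, and PURGE (per object or for the whole bin) reclaims it immediately.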

-

Scott, May 15, 2004 - 5:08 am UTC

The problems with RH on HP are HP-only problems. It seems you have a problem with your RAM chips.



> 10g OEM metadata is only supported in 9iR2, funny thing
> huh :-(

I think Oracle cleaned 10g down to only 1 CD. All the tons of *** have migrated to the 3-CD set with "greed control". :)))

What the heck?

Peter Tran, May 18, 2004 - 8:37 pm UTC

What the heck was that last post from Scott?

Tom -- can you delete it? It just looks like a copy-and-paste of the entire thread.

Anyway, regarding your latest (asktom) move to Linux with 10g: can you tell me how much physical memory you are running and what's the size of your SGA? If it's more than 8 GB, can you tell me how you enabled the memory configuration to get an SGA larger than 1.7 GB? With the default Linux configuration supporting only 4 GB (32-bit), you can only get a max 1.7 GB SGA. You have to mess with the kernel to enable setting DB_CACHE_SIZE that high; otherwise, you have to fall back on the old *_AREA_SIZE parameters.

Thanks,
-Peter

Tom Kyte
May 19, 2004 - 7:47 am UTC

why do you have to fall back to the old _area_size parameters? PGA in a dedicated server is in the process, nothing to do with the SGA.

ask_tom@ASKUS> show sga

Total System Global Area  838860800 bytes
Fixed Size                   781168 bytes
Variable Size             677647504 bytes
Database Buffers          159383552 bytes
Redo Buffers                1048576 bytes


OracleCSService failed to start after upgrade

Jo Sat, June 03, 2004 - 9:24 pm UTC

Tom

I am upgrading to Oracle 10g 10.1 from 9.2.0.3. I am on W2K SP4. After the upgrade, the database did not start because OracleCSService failed to start. The listener service also did not start. When I tried to start them, they reported an internal error. I could not find anything on Google on how to fix this. Do you have any idea?

Tom Kyte
June 04, 2004 - 7:32 am UTC

(support -- metalink.oracle.com -- would be the first place I would have gone...)

but the database isn't dependent on the CS service; that should not prevent it. The only service I run when testing is the Oracle%ORACLE_SID one (I start the listener from the command line when I need it).

Interesting

Greg Johnson, June 04, 2004 - 9:48 am UTC

After reading Mark's and Connor's comments and your replies regarding the support dates of 8.1.7 and 9.0.1, I had to go and check which release expired first:

Product Version(s): 9.0.1 (9i) & 9.0.1.x (9i)

Error Correction Support (ECS): 31-DEC-2003
Extended Support (ES): 31-DEC-2006

Product Version(s): 8.1.7 (8i) & 8.1.7.x (8i)

Error Correction Support (ECS): 31-DEC-2004
Extended Support (ES): 31-DEC-2007

I'll leave others to join the dots on the above, but personally I'm usually pretty sceptical about upgrading to the first release of a new Oracle DB version. The first release of a new Oracle version does seem to suffer from a reputation of being not "as stable" as subsequent releases.

Tom Kyte
June 04, 2004 - 10:55 am UTC

8.1.7 was originally scheduled to go in 2003, and it was going to, up until about 3 months before the scheduled end date. At the last minute, they extended it.


So, look at 9iR2 and 10gR1 and make *that* comparison.... That's what counts for the future.

Where can I find the documents?

Mick Jacker, June 04, 2004 - 1:51 pm UTC

Tom

You wrote that the new features guide for 9ir1, 9ir2, 10g are very useful to read. Where can they be found?


Tom Kyte
June 04, 2004 - 2:19 pm UTC

otn.oracle.com has all documentation.

http://otn.oracle.com/documentation/index.html

A reader, June 10, 2004 - 4:57 pm UTC

Tom,

Will 10g be available for Solaris 9 X86?

Thanks
Asif.

Tom Kyte
June 10, 2004 - 8:10 pm UTC

according to Metalink's Certify -- yes, it is projected to be on that platform. We skipped 9i, but it looks like it is back for 10g.

A reader, June 10, 2004 - 11:50 pm UTC

Any specific dates for the release?

Thanks.


Tom Kyte
June 11, 2004 - 7:55 am UTC

not that I could see. If you can, file an iTAR and they would be able to track it down.

"Oracle 10g upgrade", version 8.1.6.2

psperez, June 14, 2004 - 3:45 pm UTC

So out of all this discussion, was there a direct solution on how to migrate from 8i to 10g? It seems that working with Oracle products there is "always" going to be some issue, but I also look forward to the new stuff.

When I look for an article, I first like to get the resolution; then I read on about the hoops everyone had to jump in and out of.

So is there a white paper on this process?

Tom Kyte
June 15, 2004 - 8:03 am UTC

the upgrade guide covers all of the paths you may take -- from direct in place "upgrade" to "export/import" and so on.

http://docs.oracle.com/docs/cd/B13789_01/server.101/b10763/toc.htm



Migrating Oracle 8i to 10g

A reader, June 16, 2004 - 1:52 am UTC

Hi Tom

We are in the process of migrating 2 Oracle 8i (8.1.5 and 8.1.7) databases to Oracle 10g. Each 8i database resides on a separate box, i.e. 8.1.5 on boxA and 8.1.7 on boxB -- both Sun SPARC Solaris 2.6 32-bit.

We'd like to consolidate the 2 different versions to one new box, boxC, running Sun SPARC Solaris 2.9 64-bit.

That is,

Oracle DB Version   OS Version           Server Name
=================   ==================   ===========

Before:
8.1.5               Solaris 2.6 32-bit   boxA
8.1.6               Solaris 2.6 32-bit   boxB

After:
10.0.1              Solaris 2.9 64-bit   boxC

... i.e. both applications, previously on 8.1.5 and 8.1.7 on 2 separate boxes, now running 10g on the same box.

Could you please list the steps for how I should go about carrying out this migration/upgrade?

Thanks in advance.

Regards
Hien

Tom Kyte
June 16, 2004 - 12:30 pm UTC

looks like schema level exports to me....

depends on the size and scale of the databases, of course. there are many paths, from exp/imp, to transportable tablespaces, to upgrading the existing instances, to Data Pump, to ......
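A rough sketch of the schema-level export/import path Tom suggests. All names here (app_owner, app_data, file paths, passwords) are invented for illustration, and the exp/imp options are abbreviated:

```sql
-- On boxA/boxB (8i), from the OS prompt -- not SQL*Plus:
--   exp system/<pwd> owner=app_owner file=app_owner.dmp log=exp_app.log
--
-- On boxC (10g), pre-create the tablespaces and the user:
create tablespace app_data
  datafile '/u01/oradata/db10g/app_data01.dbf' size 4096m;

create user app_owner identified by secret
  default tablespace app_data
  quota unlimited on app_data;

grant create session, create table, create view to app_owner;

-- then load the dump, again from the OS prompt:
--   imp system/<pwd> fromuser=app_owner touser=app_owner
--       file=app_owner.dmp log=imp_app.log
```

Repeating this once per source database consolidates both 8i schemas into the single 10g instance on boxC.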

Excellent Feedback

Hien, June 16, 2004 - 9:07 pm UTC

Tom

There's a typo: the 8.1.6 should be 8.1.7.

Both databases' tablespaces are dictionary-managed -- NOT transportable. Each database is roughly 4 GB in size.

As I've read in some doco, you have to do exp/imp for 8.1.5, and with the terminal release 8.1.7 you can use the migration assistant tool to migrate the database.

However, the doco does not state whether or not the migration of the database has to be done on the same machine. Can you please shed some light? Thanks.

Regards
Hien



Tom Kyte
June 17, 2004 - 8:08 am UTC

they are transportable; I don't know why you say "not". Dictionary-managed tablespaces can be transported.

you would upgrade the database on the machine it is on (or back it up and restore it entirely to the other machine, get it up and running, and then migrate it)
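A hedged sketch of the transportable tablespace path Tom refers to; the tablespace name, datafile path, and dump file names are made up, and the full procedure (including the same-platform requirement for these versions) is in the upgrade guide:

```sql
-- On the 8i source: verify the set is self-contained, then freeze it.
exec dbms_tts.transport_set_check('APP_DATA', TRUE);
select * from transport_set_violations;   -- should return no rows

alter tablespace app_data read only;

-- From the OS prompt, export just the metadata:
--   exp system/<pwd> transport_tablespace=y tablespaces=app_data
--       file=tts_app.dmp
--
-- Copy the datafile(s) and tts_app.dmp to the target, then on 10g:
--   imp system/<pwd> transport_tablespace=y file=tts_app.dmp
--       datafiles='/u01/oradata/db10g/app_data01.dbf'

-- Finally, on the target:
alter tablespace app_data read write;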

My take on migrating to a newer version

Nick, June 17, 2004 - 8:52 am UTC

My current approach is to wait for the second patch set at least if possible. The start of this year (2004) we migrated our production databases from 8.1.7.2 to 9.2.0.4. We could have gone to 9.2 earlier but we chose to wait. I anticipate moving to 10g sometime next year.

The reason I want to wait until at least 2 patch sets is due to experience. We ran quite successfully on 8.1.7.2 for almost 3 years. When we migrated from 8.1.5 to 8.1.7 the current release was 8.1.7.1. We had problems that were corrected in the 8.1.7.2 patch set. (That patch set was released the week following our migration to 8.1.7.1. Timing is everything.)

I do not want to be at the "bleeding edge" of any technology again. I have lived through the worst-case scenario. Almost 20 years ago my company was a beta test site for Oracle 5.0. Yes, Oracle five. We were trying to develop a new system against an unstable Oracle code base that was being patched almost weekly. Now that was an interesting time. We wanted to use functionality that was not available in Oracle 4, so we didn't have much choice. Our project was several months late, but the application was rock solid when completed. And 20 years later that project still influences my thinking. No more bleeding edge for me.

Tom Kyte
June 17, 2004 - 10:20 am UTC

I had as many people with issues on 8.1.7.0 as on 8.1.7.1, as on 8.1.7.2, as on 8.1.7.3, as on 8.1.7.4, as ....................



Shall we upgrade or not

Anurag, June 19, 2004 - 2:23 am UTC

Tom, our company is using Oracle9i. We have a lot of data containing large images of 5 MB to 100 MB. It's basically a search-type system for spatial data. We've heard about Oracle 10g. Does it contain new features that will help us in retrieving and maintaining spatial data? Our vendor is pressing us to migrate. Please advise.


Tom Kyte
June 19, 2004 - 8:06 am UTC

You want to read about spatial and locator functionality. There are many new spatial features, but 99.99% of the functionality has been there for a long long time.

it is all on otn.oracle.com

Upgrade, can be a major problem with Oracle Applications

Lance Charbonneau, June 21, 2004 - 5:14 pm UTC

Hi Tom,

We are a shop that uses Oracle E-Business Suite. The issue of upgrading the database is much more complicated with Oracle Apps. We were on 11.5.5 Apps with an 8.1.7.4 database. We wanted to go to 9i, but it was only certified for 9.0.1.4; Oracle would not support us at 9.2. We upgraded the database to 9.0.1.4. Oracle then de-supported 9.0.1.4 for Oracle Applications, while 8.1.7.4 remained supported. We had upgraded ourselves into being de-supported!

We had to upgrade the applications to 11.5.9 to be supported on 9.2, and upgrading the database is easy compared to upgrading Oracle's E-Business Suite. If you are using Oracle Apps, you are pretty much barred from using any new Oracle technology for a while. The latest version of Oracle Apps still uses Forms and Reports 6i and iAS 1.0.2.2.2. You can finally 'glue' some parts of iAS 10g to Oracle Apps through an "Early Adopter Program" (which doesn't sound very stable to me). Depending on what applications you are running (especially Oracle's own applications), I would advise waiting until the terminal release of a database before upgrading.

10g tkprof deprecated feature?

A reader, July 01, 2004 - 11:13 am UTC

Hi

In the old tkprof we could use explain= to check execution plans; in 10g, however, I always see this:

Error in CREATE TABLE of EXPLAIN PLAN table: GOYA.prof$plan_table
ORA-00922: missing or invalid option

parse error offset: 1048
EXPLAIN PLAN option disabled.

Is this option no longer valid?

Tom Kyte
July 01, 2004 - 11:37 am UTC

explain plan stinks in tkprof, the tkprof report will show REALITY with the information from the trace file.

you do not need explain=, you haven't needed or wanted it for years.

oh yes it's useful

A reader, July 01, 2004 - 12:01 pm UTC

I need explain plan because autotrace doesn't show the real plan!!!!
But it seems it's a deprecated feature :-(

thx anyways

Tom Kyte
July 01, 2004 - 7:43 pm UTC

no,

in a tkprof -- you get the REAL plan

explain plan = the thing autotrace uses = autotrace and explain plan are the SAME (wrong at times).

explain= should not be used.

the real plan is already *there*

Dave

Dave, July 01, 2004 - 3:20 pm UTC

set sql_trace and timed_statistics = true

run your sql

tkprof tracefile output_file

You will have the REAL plan that was used by the statement. Did you know explain= shows the plan it WOULD use, not the one it actually did use in the query you ran?

of course I know

A reader, July 01, 2004 - 4:54 pm UTC

Yeah right, you see all these funny execution plans with object IDs; how about you make some script to decode that?

did you also know

A reader, July 01, 2004 - 4:56 pm UTC

That autotrace can sometimes show a wrong plan when in reality it uses something else. Not sure why it's not fixed :-0



Tom Kyte
July 01, 2004 - 8:01 pm UTC

autotrace and explain plan (guess what autotrace uses?) generate 'inaccurate' plans in very predictable and sensible cases:

a) bind variable peeking, they cannot do it, they don't have the bind variable values to peek at!

b) bind variable types -- "where x = :x" -- is that :x a number, a string or a date? to explain plan *it is ALWAYS a string*. this can and will change the plan.


a well-known phenomenon, unavoidable.

if you have a plan_table (create it using utlxplan.sql in $ORACLE_HOME/rdbms/admin), explain= on tkprof will give you the same information as autotrace.

Really?

Dave, July 01, 2004 - 6:58 pm UTC

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> create table test (dummy number);

Table created.

SQL> alter session set sql_trace = true;

Session altered.

SQL> alter session set timed_statistics = true;

Session altered.

SQL> select * from test where 1 = 0;

no rows selected

SQL> alter session set sql_trace = false;

Session altered.

SQL>  alter session set timed_statistics = false;

Session altered.

SQL> exit


C:\oracle\admin\DEMO\udump>tkprof demo_ora_3688.trc ds2.txt

select * 
from
 test where 1 = 0


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          0          0           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  FILTER  
      0   TABLE ACCESS FULL TEST 


looks very readable to me
 

Try that in 10g, Dave

A reader, July 02, 2004 - 2:53 am UTC

alter session set events '10046 trace name context forever, level 4';

SQL> r
  1    SELECT t_direccion_mail, t_destinatario_mail
  2      FROM grupo_mail
  3     WHERE t_nombre_grupo = 'OXF'
  4*      AND b_bloqueado    = 'N'

no rows selected

tkprof zazz_ora_24061.trc  zazz_ora_24061.trc.txt sys=no

  SELECT t_direccion_mail, t_destinatario_mail
    FROM grupo_mail
   WHERE t_nombre_grupo = 'OXF'
     AND b_bloqueado    = 'N'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.01          1          1          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.01       0.02          1          1          0           0

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 66

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID OBJ#(95464) (cr=1 pr=1 pw=0 time=14080 us) 
      0   INDEX RANGE SCAN OBJ#(95465) (cr=1 pr=1 pw=0 time=14074 us)(object id 95465)


very readable yes

and don't try it with DUAL, that's always shown

Tom Kyte
July 02, 2004 - 9:16 am UTC

it is however, 100% accurate and correct.  and if you exit your session, the object names should be there.  

Consider:

select *
from
 t t1 where x = :x
                                                                                            
                                                                                            
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.03          0          0          0           0
Fetch        2      0.00       0.00          2          4          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.03          2          4          0           1
                                                                                            
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 60  (OPS$TKYTE)
                                                                                            
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID T (cr=4 pr=2 pw=0 time=476 us)
      1   INDEX RANGE SCAN T_IDX (cr=3 pr=2 pw=0 time=467 us)(object id 53517)
                                                                                            
                                                                                            <b>that was reality... that is what really happened</b>

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
      1   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T' (TABLE)

<b>that is what explain plan says would happen.  but it didn't.</b>


If you really want to use explain=, you can use this workaround (they messed up the create table in tkprof)


Workaround:
Precreate the plan table using ?/rdbms/admin/utlxplan.sql 
and instruct TKPROF to use it using the TABLE option, e.g. 

tkprof .... explain=scott/tiger table=scott.plan_table

 

10g

Dave, July 02, 2004 - 5:10 am UTC

SQL*Plus: Release 10.1.0.2.0 - Production on Fri Jul 2 08:43:14 2004
 
Copyright (c) 1982, 2004, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning and Data Mining options
 
SQL> create table test (dummy number);
 
Table created.
 
SQL> alter session set sql_trace = true;
 
Session altered.
 
SQL> alter session set timed_statistics = true;
 
Session altered.
 
SQL> select * from test where 1 = 0;
 
no rows selected
 
SQL> alter session set sql_trace = false;
 
Session altered.
 
SQL> alter session set timed_statistics = false;
 
Session altered.



select *
from
 test where 1 = 0
                                                                                                                             
                                                                                                                             
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.01          0          1          0           0
                                                                                                                             
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 57
                                                                                                                             
Rows     Row Source Operation
-------  ---------------------------------------------------
      0  FILTER  (cr=0 pr=0 pw=0 time=11 us)
      0   TABLE ACCESS FULL TEST (cr=0 pr=0 pw=0 time=0 us)

Now admittedly, if I use event 10046 I get the object number (not very useful).

Use sql_trace and you get the real object name and the real execution plan; it really won't lie to you.

Tom Kyte
July 02, 2004 - 9:57 am UTC

I only use 10046; I get the names.

names in 10046

Dave, July 02, 2004 - 10:42 am UTC

Can you show us an example of getting the names in a 10046 trace so I can see where I am going wrong?

In 10g all the sys stuff has table or view names; my query doesn't.

Thanks

Tom Kyte
July 02, 2004 - 11:01 am UTC

my example with a 10046 trace is above...

ops$tkyte@ORA10G> create user a identified by a;
User created.
 
 
  1* grant create session, alter session, create table to a
ops$tkyte@ORA10G> /
Grant succeeded.
 
ops$tkyte@ORA10G> alter user a quota unlimited on users default tablespace users;
User altered.
 
ops$tkyte@ORA10G> @connect a/a
a@ORA10G> create table t ( x int );
 
Table created.
 
a@ORA10G> set echo on
a@ORA10G> @trace
a@ORA10G> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
a@ORA10G> select * from t;
 
no rows selected
 
a@ORA10G> select * from all_users where 1=0;
 
no rows selected



the tkprof shows:

select *
from
 t
                                                                                                        
                                                                                                        
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          4          0           0
                                                                                                        
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61
                                                                                                        
Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS FULL T (cr=3 pr=0 pw=0 time=64 us)
                                                                                                        
                                                                                                        
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************
                                                                                                        
select *
from
 all_users where 1=0
                                                                                                        
                                                                                                        
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          0          0           0
                                                                                                        
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61
                                                                                                        
Rows     Row Source Operation
-------  ---------------------------------------------------
      0  FILTER  (cr=0 pr=0 pw=0 time=7 us)
      0   HASH JOIN  (cr=0 pr=0 pw=0 time=0 us)
      0    HASH JOIN  (cr=0 pr=0 pw=0 time=0 us)
      0     TABLE ACCESS FULL OBJ#(16) (cr=0 pr=0 pw=0 time=0 us)
      0     TABLE ACCESS FULL OBJ#(22) (cr=0 pr=0 pw=0 time=0 us)
      0    TABLE ACCESS FULL OBJ#(16) (cr=0 pr=0 pw=0 time=0 us)


On stuff I'm allowed to see, I see it -- things I own. 
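For the cases where OBJ#(nnn) does show up, the decode the reader asked for can be scripted as a post-processing step. A rough sketch, assuming a DBA has spooled an id:name map from dba_objects; the map and report lines below are canned stand-ins, not real dictionary data:

```shell
#!/bin/sh
# Sketch: replace OBJ#(nnn) tokens in a tkprof report with object names.
# In real use the id:name map would be spooled by a privileged user from
#   select object_id || ':' || object_name from dba_objects;
# the two-line map below just stands in for that spool (hypothetical pairing).
decode_objs() {   # $1 = tkprof report file, $2 = "id:name" map file
  script=$(mktemp)
  while IFS=: read -r id name; do
    printf 's/OBJ#(%s)/%s/g\n' "$id" "$name"
  done < "$2" > "$script"
  sed -f "$script" "$1"    # apply all substitutions in one pass
  rm -f "$script"
}

cat > objmap.txt <<'EOF'
16:TS$
22:USER$
EOF
cat > report.txt <<'EOF'
      0     TABLE ACCESS FULL OBJ#(16) (cr=0 pr=0 pw=0 time=0 us)
      0     TABLE ACCESS FULL OBJ#(22) (cr=0 pr=0 pw=0 time=0 us)
EOF
decode_objs report.txt objmap.txt
```

The same idea works on the raw trace file before running tkprof, since the OBJ#(nnn) tokens appear there too.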

A reader, July 02, 2004 - 11:35 am UTC

what does your @trace do?


(by the way, I don't know if this is the problem, but your text box assumes I am using an American keyboard; my @ key produces ")

Tom Kyte
July 02, 2004 - 11:37 am UTC

a@ORA10G> @trace
a@ORA10G> alter session set events '10046 trace name context forever, level 12';

Dave, July 02, 2004 - 11:41 am UTC

Doh, just saw the set echo on

I did quit my sql plus session

A reader, July 02, 2004 - 3:03 pm UTC

In my 10046 trace I did quit the SQL*Plus session, then ran tkprof and saw no names. I will post the exact steps on Monday.

Cheers

ASM feature

P, July 09, 2004 - 11:50 am UTC

I was just reading about ASM feature and wanted to get more information.

To use ASM we need to create a separate ASM instance. This instance is shared by the normal databases as a repository (and is needed for other ASM operations) to store information about ASM disks and files. I want to know what happens in case of an ASM instance failure (I can imagine so many scenarios). I tried finding an answer on Metalink and all over the web but couldn't. Could you help clarify?



Tom Kyte
July 09, 2004 - 1:51 pm UTC

Same thing that happens if your UFS driver fails or your veritas software fails or whatever piece of software you are using to manage your disks fails.

the disks become unavailable until you restart the disk manager.

Think of ASM as a "file system driver for databases" -- it is just that. The same things that would happen if you kill your file system drivers would happen if the asm instance failed.

Support dates are not set in stone ...

Robert Howlett, July 15, 2004 - 1:25 am UTC

Tom,

I think the telling comment from you on this is

"817 was originally scheduled to go in 2003, was going to up to about 3 months
before the scheduled end date. At the last minute, they extended it."

And that's the point: terminal releases will get their support dates extended, probably because Oracle can't shift enough of its customer base off a stable terminal release, and cutting off support would cause too much of a backlash. I'm pretty sure the same thing happened with 7: people didn't want to budge off the terminal release because it was stable, so support was extended.

So, when you say:

"look at 9ir2 and 10gR1 and make *that* comparision.... That's what counts "

Sure, but although 10gR1 is slated for a longer support window NOW, chances are when desupport time rolls around, 9iR2 will be extended, because that's what a large percentage of the customer base will be on.

You can just about guarantee 10gR1 will be cut off on the proposed date, just like 9iR1 was.



Tom Kyte
July 15, 2004 - 12:10 pm UTC



Chances are not, and you cannot guarantee anything; there is no historical backing for that. And even supposing it did happen, they would be equals as far as that goes (a year's extension was exceptional).

there would be (is in fact) 0% chance I would consider spending the time to migrate to 9ir2 right now, today. Period. Look at the time frames here -- we are talking end of support years out -- not months, years.

I've been using Oracle as long or longer than most.
I was a customer long before I was an employee.

We get nothing extra for selling 10g over 9i (or vice versa).

I'm just telling you what I would do and why.

ASM-isnt it single point of failure

P, July 15, 2004 - 10:10 am UTC

tom,
Doesn't ASM become a single point of failure or a bottleneck? It is also one more database to back up and maintain.

thank you,
P

Tom Kyte
July 15, 2004 - 1:00 pm UTC

ASM is no more a SPOF than Veritas, UFS, or the OS is.

it is a file system driver for database files.

(you don't back up your Veritas configurations or whatever you use? you just remember it all in your head?)

10g - Wait a while longer...

Peter Clark, aka Angus McT., July 16, 2004 - 10:10 am UTC

Tom,

You are bound to recommend going to 10g rather than 9i, but you are probably not the right person to ask, being an Oracle employee. The fact that Oracle used 10g themselves before releasing it means nothing; it would be rather alarming if they hadn't been using it!

My experience as a DBA, and that of most other DBAs I know, is that v7 wasn't really stable and reliable until 7.3, v8 wasn't quite what it was cracked up to be until 8.1.7, and neither was v9 until 9.2. Why should 10g be any different? I don't see this as a problem; new versions are always likely to need modifying and fixing for some time after they are unleashed onto the world at large and used in many different scenarios, for numerous different types and sizes of application and database.

Thankfully there are plenty of people who like to be at the cutting edge and act as guinea pigs for us more cautious types, who prefer to sit back and let others do the hard work and hit all the problems. If you have the luxury and the time to play with a new version then do it, but I wouldn't recommend upgrading critical databases to the first release of a new version of Oracle; experience has shown it to be too risky.

Tom Kyte
July 16, 2004 - 1:32 pm UTC

why would I, being an Oracle employee, be the wrong person to ask?

I build systems.
I run systems.
I use systems.
I'm an Oracle user first and foremost.

I've been the DBA.
I've been the developer
Heck -- I still am, the very fact that I'm an Oracle employee has no bearing or relevance on that *at all*.

sorry - but I wholeheartedly disagree with virtually everything you say timeline-wise above. 7.0 was very stable -- as was 7.1 (heck, lots of people still *run* 7.1.6 today).

806 was an extremely popular release -- still have people running it.

8.1.6 was huge.

this is not guinea pig land.

My experience has categorically shown *different*.

10g has fixes for 9ir2 features that got stressed. 9ir2 might not even have them (they get fixed in the current release, backported as needed). 10g is a better 9ir2 -- the features that are most likely to cause issues are not the pre-existing features (they are actually stronger and more bullet proof in 10g) but rather the brand new stuff that hasn't been banged on for a bit.

So, you want a better 9ir2? Use 10g.

First problem of 10G

Yong Wu, July 16, 2004 - 1:56 pm UTC

I just tested data pump export on 10G for Linux and got this problem

expdp system/manager dumpfile=emp.dmp logfile=emp.log schemas=scott

Export: Release 10.1.0.2.0 - Production on Friday, 16 July, 2004 10:47

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** dumpfile=emp.dmp logfile=emp.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 768 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [PROCACT_SCHEMA:"SCOTT"]
ORA-04063: view "SYS.KU$_CLUSTER_VIEW" has errors

I searched Metalink. There are 3 entries regarding the issue. I followed the instructions and still have the same problem. This is a very basic utility.

Hope to get help from Ask Tom.

Thanks

Tom Kyte
July 16, 2004 - 2:13 pm UTC

this would be a clear "open a tar" issue, don't you think?

although I might try seeing what the errors on that view were; that would be a reasonable starting point. Log in as sysdba and either "show errors view ku$_cluster_view" or query user_errors.

Yong Wu, July 16, 2004 - 4:30 pm UTC

Here is what is in Metalink:

This will be fixed in the 10.1.0.3 patchset when it comes available. 

SQL> select * from user_errors where name='KU$_CLUSTER_VIEW';

NAME                           TYPE           SEQUENCE       LINE   POSITION
------------------------------ ------------ ---------- ---------- ----------
TEXT
--------------------------------------------------------------------------------
ATTRIBUTE MESSAGE_NUMBER
--------- --------------
KU$_CLUSTER_VIEW               VIEW                  1          0          0
ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors
ERROR                  0
 

Tom Kyte
July 16, 2004 - 11:00 pm UTC

not an acceptable answer to me - bug or note number please?

kirtish, July 19, 2004 - 11:14 am UTC

Hi Tom,
I have installed 10g on my staging server but am currently having a problem with it. I am not able to connect to this server from the network after the upgrade. I am getting the following error:
ORA-12526: Message 12526 not found; product=RDBMS; facility=ORA

Could you tell me what exactly this error is? The message reads as if it is not able to find Oracle itself on my server.

Tom Kyte
July 19, 2004 - 11:30 am UTC

your oracle home environment variable is not set.

[tkyte@tkyte-pc tkyte]$ oerr ora 12526
12526, 00000, "TNS:listener: all appropriate instances are in restricted mode"
// *Cause: Database instances supporting the service requested by the client
// were in restricted mode. The Listener does not allow connections to
// instances in restricted mode. This condition may be temporary, such as
// during periods when database administration is performed.
// *Action: Attempt the connection again. If error persists, then contact
// the database administrator to change the mode of the instance,
// if appropriate.


still some assistance required

Kirtish, July 19, 2004 - 12:13 pm UTC

Hi Tom,
Thanks for such a fast response, but I am not able to solve the problem yet. I am working on Windows 2000 Server; could you tell me what action is needed from my side? I have checked the registry entry but am not able to find anything wrong with the ORACLE_HOME entry. It correctly shows it as e:\oracle.

Please let me know what should i do.

Thanks
Kirtish

Tom Kyte
July 19, 2004 - 12:16 pm UTC


are you using a 9i client to point to this server? the message is new with 10g -- if your oracle home is in fact set up correctly, an older client not knowing the message text could cause this as well.

and if so, the message above tells you what is wrong.

Thanks

Kirtish, July 19, 2004 - 12:25 pm UTC

Hi Tom,
Thanks for such a quick reply. It's amazing how fast you respond. In fact, I am trying to connect from a 9i client.
So will I have to use a 10g client or go back to 9i? Is there any other workaround?

Thanks once again for such a quick and accurate response.



Tom Kyte
July 19, 2004 - 1:47 pm UTC

no, the 10g client would not work either -- the database is in restricted mode, alter it out of that mode, then you can connect.

Export DataPump: ORA-39125 Fatal Error in Worker while Calling DMSYS.DBMS_DM_MOD

Yong Wu, July 19, 2004 - 1:12 pm UTC

The metalink id/DOC ID are 272874.1 and 277972.1

I followed the instruction. But I have the same problem.

ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while
calling DBMS_METADATA.FETCH_XML_CLOB [PROCACT_SCHEMA:"SCOTT"]
ORA-04063: view "SYS.KU$_CLUSTER_VIEW" has errors

thanks

Tom Kyte
July 19, 2004 - 1:55 pm UTC

neither of those reference that view.

so again, what article tells you "wait"?

and what happens if you

alter view KU$_CLUSTER_VIEW compile;
show errors view KU$_CLUSTER_VIEW



Yong Wu, July 21, 2004 - 2:39 pm UTC

In Metalink, paste this:
ORA-39125: Worker unexpected fatal error

it comes back with three entries. The second one:

DOC ID 277972.1
 
Fix

This will be fixed in the 10.1.0.3 patchset when it comes available. 

Here is what I did

SQL> alter view KU$_CLUSTER_VIEW compile;

Warning: View altered with compilation errors.

SQL> show errors
No errors.
SQL> select * from user_errors where name='KU$_CLUSTER_VIEW';

NAME                           TYPE           SEQUENCE       LINE   POSITION
------------------------------ ------------ ---------- ---------- ----------
TEXT
--------------------------------------------------------------------------------
ATTRIBUTE MESSAGE_NUMBER
--------- --------------
KU$_CLUSTER_VIEW               VIEW                  1          0          0
ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors
ERROR                  0

I have no clue what the problem is from this.

Thanks
 

Tom Kyte
July 21, 2004 - 6:46 pm UTC

I see no relation between your error and that note.


Have you filed a tar with support? That patch you are waiting for would do nothing for you.

10g migration

reader, July 27, 2004 - 4:38 am UTC

Tom,

We would like to migrate from 9iR2 to 10g. To understand the impact on the migrated environment, as you always say, the application needs to be tested, tested... I fully agree with you. We will do that.

As we do not want to use any new features of Oracle 10g at this moment and prefer a straightforward migration, we would like to know things such as: 9i features that are desupported in 10g, new reserved keywords, and other application-related items like this. Can you give some details on this or point to another source for them?

Regards

9i listener

Sam, July 31, 2004 - 1:24 pm UTC

Can a 9i client tool such as SQL*Plus be used to connect to a 10g database? On the server where my 10g database is located, should I have a 10g listener process, or would a 9i listener process work? Thanks.

Tom Kyte
July 31, 2004 - 1:50 pm UTC

sure, 9i clients can and do connect to 10g (as do 8i clients)

you would definitely be using a 10g listener. That single listener can service requests for any lower-versioned database on that same server (eg: your 10g listener can listen for 10g requests, 9i database requests and so on).
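As a rough illustration of one 10g listener fronting both a 10g and a 9i database on the same box, a listener.ora sketch (the host name, SIDs and paths are made-up placeholders):

```text
# listener.ora in the 10g home -- one listener, two databases
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ORA10G)
      (ORACLE_HOME = /u01/app/oracle/product/10.1.0))
    (SID_DESC =
      (SID_NAME = ORA9I)
      (ORACLE_HOME = /u01/app/oracle/product/9.2.0)))
```

Each SID_DESC points at its own ORACLE_HOME, which is what lets the single 10g listener hand off connections to the older database.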

virtual book

A reader, July 31, 2004 - 2:30 pm UTC

Hi

Where is that virtual book thing? I went to tahiti.oracle.com but can't find anything about it.

Tom Kyte
July 31, 2004 - 2:40 pm UTC

when you search, it is one of the checkboxes "Format results into a virtual book"

ASM in 10g

Robert, July 31, 2004 - 4:16 pm UTC

I was just wondering where the metadata related to disk groups is stored, because the ASM instance is not open; it just mounts the disk groups. I am talking about the metadata that results from issuing commands in the ASM instance like CREATE/ADD/ALTER DISKGROUP. Thanks.

Tom Kyte
July 31, 2004 - 5:28 pm UTC

in the devices mounted by asm themselves. you have a string of "mounts" in the initialization file, that gets it to the metadata stored in the devices themselves. (and why you might have to dd over a disk that was in a disk group once before -- if the meta data remains behind...)
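The "string of mounts in the initialization file" is the ASM instance's parameter file. A sketch of the relevant parameters (the disk string and group names are placeholders):

```text
# init file for the ASM instance (not a regular database instance)
INSTANCE_TYPE  = ASM                 # marks this as an ASM instance
ASM_DISKSTRING = '/dev/raw/raw*'     # where to probe for member disks
ASM_DISKGROUPS = DATA, FRA           # disk groups to mount at startup
```

Everything else about the groups (membership, extents, mirroring) lives in the headers of the disks themselves, which is why the instance only needs to know where to look.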

metadata on the disk itself

Robert, July 31, 2004 - 5:49 pm UTC

Does it mean the V$ASM% views get info by reading the disk? Interesting!

If I lose a disk, is the metadata also lost? Maybe I am confused. Also, could you clarify what you meant by "and why you might have to dd over a disk that was in a disk group
once before -- if the meta data remains behind"? Thanks.

Tom Kyte
July 31, 2004 - 6:58 pm UTC

if you lose a disk, you restore it from backups.


the meta data is stored on the disk, if you just wipe out an ASM instance (you were testing) and build a "real one", you might get "sorry, that disk already belongs to group yadda yadda" -- if so, that just means the metadata is still there and we need to remove it.

upgrade assistant

Sam, August 01, 2004 - 9:37 am UTC

Tom, if I use the upgrade assistant GUI to upgrade my 8.1.7.3 database to 10g, does it check the old init parameters like db_block_buffers and so on and automatically change them to new ones like db_cache_size? Or will it give an error and ask us to change them? Thanks.

Tom Kyte
August 01, 2004 - 11:11 am UTC

you'll be doing 8174 to 10g but that aside -- only if a parameter was obsoleted would it do anything about it. db_block_buffers is still valid in 10g.
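For reference, the two styles of sizing the buffer cache look like this in the init file. A sketch only; the sizes are illustrative, and note that the old and new parameters are mutually exclusive, so you set one or the other:

```
# old style: cache sized in database blocks (still valid in 10g)
db_block_buffers = 10000

# new style: cache sized in bytes; do not set both at once
# db_cache_size = 80m
```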

8.1.7 to 10 G Reading guide

Alvin, August 19, 2004 - 5:47 am UTC


Our company plans to migrate from Oracle 8i running on solaris to 10G running on linux to take advantage of the grid capabilities.

I'm making a reading guide..

1. Upgrade guide
1.1 New Features
2. Concept Guide
3. Administration Guide
4. Data guard
5. Performance tuning guide
6. Grid computing chapters

Am I making the right reading guide for myself?

Btw, i want to thank you for writing the expert one on one and the Effective oracle by design book.

Tom Kyte
August 19, 2004 - 9:48 am UTC

well, it is always a good idea to catch up on backup and recovery :) so I'd add 3.5 -- the backup and recovery/rman guides.

but yes, that outline follows what I have laid out roughly in Effective Oracle by Design.

Solaris to linux

Alvin, August 20, 2004 - 1:12 am UTC

Chapter 3 of the Upgrade guide states :
--8<--
Migrating Data to a Different Operating System
When using the Database Upgrade Assistant or when performing a manual upgrade, you cannot migrate data in a database on one operating system to a database on another operating system. For example, you cannot migrate data in an Oracle9i database on Solaris to an Oracle Database 10g database on Windows 2000 using the Database Upgrade Assistant. However, you normally can use Export/Import to migrate data between databases on different operating systems.
--8<--

Are Solaris and Linux considered different despite the fact that they are both UNIX-based OSes?


Tom Kyte
August 20, 2004 - 10:47 am UTC

they are as different as night and day. different byte orders and everything (totally different hardware architectures).

yes, they are different.
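The byte-order point is easy to see outside the database. A small sketch in Python (nothing Oracle-specific): the same 32-bit value laid down in SPARC (big-endian) order versus x86 (little-endian) order:

```python
import struct

value = 0x0A0B0C0D

# SPARC (Solaris) is big-endian: most significant byte first on disk.
big = struct.pack(">I", value)
# x86 (Linux) is little-endian: least significant byte first.
little = struct.pack("<I", value)

print(big.hex())     # 0a0b0c0d
print(little.hex())  # 0d0c0b0a

# A datafile block written in one order is not byte-for-byte readable
# in the other, which is why DBUA cannot carry datafiles across.
```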

migration 10g SE vs EE

Jelena, August 24, 2004 - 6:10 am UTC

Hi Tom,
we are considering upgrading to 10g (at the moment we use 9.2.0.4 SE on Solaris). From your answers I guess that by now 10g should definitely be mature enough for production...
But I have a couple of questions:
1. as we consider switching to 10g on Linux and using RAC with up to 4 Linux servers, my question is how we can have cache fusion if we stay with SE (as there is no Oracle Streams in SE). Is there any other option that comes with SE to keep the caches on all servers in sync?
2. do you know if anyone is using RAC in prod with SE? And in case we need to switch later to EE - I read that we don't need to make any changes to the db, just upgrade the software - so do you think the migration SE->EE would be possible with only a couple of hours of downtime (shutdown, install, start up)?

Thanks for your help,
Jelena

Tom Kyte
August 24, 2004 - 7:52 am UTC

1) what does RAC have to do with streams?!?!? not sure what you are asking by:

my question is how we can have cache fusion if we stay with SE (as
there is no Oracle Stream in SE)

the two have nothing whatsoever to do with each other.

2) going from SE to EE is a simple script upgrade, yes.

RAC in 10g

Jelena, August 24, 2004 - 8:26 am UTC

Hi Tom, thanks for your answer.
My first question was actually how good the mechanism is that keeps the caches on those multiple servers in sync. I heard that in 9i there were some problems with cache synchronisation, so I was just wondering if it is reliable in 10g.
(I thought that maybe Oracle Streams could also be used for that, but obviously that's not the case.)
The problem is that we don't really have a way to test it, as it's difficult to do in a test environment, and we would certainly not like to install the whole RAC architecture on production and then figure out that the cache is not synced as we need it.
Jelena

Tom Kyte
August 24, 2004 - 8:57 am UTC

"i heard there were problems" ummm, no not really. there are many people in production on 9i RAC today.

streams is for data sharing across databases -- like replication.




Bugs in Initial Releases

Richard, August 24, 2004 - 9:42 am UTC

Hi,

Peter Clark (the infamous Angus McT) does have a point.

A while back, I and others ( http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:711027832625 ) drew attention to bugs in previous Oracle releases. If I'd gone live with the version I harped on about, I'd have been miffed - TKPROF is an important thing in Oracle DBA land.

To be fair, I have to say that it's not an Oracle-specific trait - all gigantic applications suffer from initial release nasties; however, I'm in agreement with Peter - the wise DBA will ALWAYS hang back, and see what the more adventurous members of the clan stumble across!

Tom Kyte
August 24, 2004 - 10:25 am UTC

did you know, there are bugs in all releases of Oracle? (change Oracle to "your favorite piece of software", it is universal)

*all* of them?

even the terminal releases?

they are not restricted or even particularly unique to "the first release".

tkprof was made *immediately* available to anyone that wanted it. And it could just as easily have been left out of 9204 as out of 901.

Bugs... again

Richard, August 24, 2004 - 11:53 am UTC

Hi,

Yes, I know all *favourite applications* have bugs (some, like several M$ applications, have more bugs than a bum's vest), but hanging back is still the way to go, because that way you at least have a fighting chance of finding out from others what they are and how to avoid them.

As for initial releases being just as bug-ridden as later releases, I don't see how that's possible. If application ABC has (unbeknown to anyone) 100 bugs, then, upon initial release (v1r1), it has 100 bugs. If, after User/vendor checks, amendments are made and the spiffed-up application is released as v1r2, then surely v1r2 has fewer bugs? Or should one assume that by fixing one bug, the development team invariably add more of 'em? Perhaps I don't want to hear the answer to that!

Tom Kyte
August 24, 2004 - 1:05 pm UTC

Ok -- you say "9ir1 -- avoid because tkprof was missing and tkprof is vital". 9ir2 (all of the way up to 9205) has a bug that makes sql_trace timings go through the roof -- something that, if you did not know about it, would perhaps make you look at the wrong stuff.

but 9ir1 didn't have that problem.

hmmm.


the database is an evolution. there were as many changes (more actually, if you count them) between 9ir1 and 9ir2 as between 8ir3 and 9ir1. there are perhaps FEWER changes between 9ir2 and 10gr1 than between 9ir1 and 9ir2.

so, which is the v1?

the "r2" is a major release. the patch sets in between -- they are not.


"r2" is as V1 as "r1" is.

being afraid

Jeff Hunter, August 24, 2004 - 12:29 pm UTC

If you are afraid of upgrading you should ask yourself if you are in the right field. Sure, as a DBA it's your job to be cautious. At the same time, it is your responsibility to the business to explore Oracle features that would enhance the business and save money/time/processing power. I don't think anyone is advocating upgrading your Production database to the latest and greatest release the day it comes out. Upgrade in Test/Development a couple times. Learn how to use the new software. Backup a database. Then test your apps against the new version. If they break, fix them. If they don't, upgrade.

8i to 10g

Alvin, August 25, 2004 - 2:23 am UTC

After having read the 10g Upgrade guide, export/import is the only way we can migrate. My questions would be...

1. Can I do incremental export/import?
2. Assuming #1 is a NO... do I have to restart the db in restricted mode and export with consistent=y?


Tom Kyte
August 25, 2004 - 7:35 am UTC

why is export/import the only way?

1) would not make sense for an upgrade? the incremental grabs every row in every TABLE that has had a single byte changed in it.

2) if you started the db in single user mode, consistent= would not be necessary.


but why is export/import the "only way"? direct upgrade from 8i is certainly supported and would be preferred.

Direct upgrade.

Alvin, August 26, 2004 - 12:47 am UTC

Currently we have 817 on Solaris 5.9. We'll be upgrading to 10g on SuSE Linux on a different machine. Can that be done using the Database Upgrade Assistant?

As I understand it, direct upgrade is only possible if you're planning to use the same OS, and possibly the same machine.

We're going to use a different OS on a different machine. Maybe I missed something... I will re-read the upgrade guide.



Tom Kyte
August 26, 2004 - 9:39 am UTC

you would need to either:

dump and reload (exp -> imp for example)

or

upgrade the 817 on solaris to 10g using the dbua and then transport the tablespaces from solaris to linux.

downtime issues...

Alvin, August 26, 2004 - 11:03 pm UTC

---8<---
you would need to either:

dump and reload (exp -> imp for example)

or

upgrade the 817 on solaris to 10g using the dbua and then transport the
tablespaces from solaris to linux.
---8<---

A couple of consideration questions.

Will the latter option (upgrade to 10g via DBUA on Solaris, then transport the tablespaces to Linux)
1. Entail more costs for the company?
2. Have a smaller downtime window ?
3. have less probable errors encountered ?

Tom Kyte
August 27, 2004 - 7:38 am UTC

everything "depends"

1) depends on how you measure cost and what planning you do and how "good" you are

2) depends on the database size

3) depends on how much you practice each one



If I had a very very large database, upgrade to 10g and transport would be something I would seriously consider to minimize downtime.

If I had a small database, exp->imp would be it.

If I'm somewhere in the middle, I'd have to test and time each one (since you must practice this a couple of times to see how long to schedule the downtime window for, to make sure you know how to do it, to test your applications on the new system, to find any issues you will have and correct them, etc, etc etc)


If downtime minimization is your ultimate goal -- you could read:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:744625626787

and specifically:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:744625626787#15013618923218




8.1.7.3 to 10G

Susan, August 27, 2004 - 5:51 pm UTC

We're planning an upgrade from 8.1.7.3 to 10G on Windows 2000. I understand that we'll first have to apply the patchset for 8.1.7.4; however, I've seen references to 8.1.7.4.12 being the terminal release for the Windows platform. Is it possible to upgrade directly from 8.1.7.4 to 10G? I can't seem to find the 8.1.7.4.12 patchset, and it appears to introduce a bug for Failsafe, which we are running. I've looked at the certify & availability matrix at metalink and it's not clear what the terminal release is. Thanks for any info.

Tom Kyte
August 27, 2004 - 7:26 pm UTC

8174 -> 10g, yes.

an iTAR would be called for to find a patch you cannot find.

10g listener

J, August 29, 2004 - 11:11 pm UTC

Hi Tom,

I am in the process of upgrading 9i to 10G manually. I did everything up to running the upgrade script in the new database, following the step-by-step upgrade procedure described in the database upgrade guide. I am able to start the 10G database service and OracleCSService, however I couldn't start the listener. Actually, nothing I have done so far relates to the listener, and nothing shows up in the services list (I am working on Windows XP). How do I know that I have the listener for 10g installed correctly? I thought it should have been done automatically when I installed the software. Please advise or direct me to the correct documentation. Thanks for your help, as always.

Tom Kyte
August 30, 2004 - 8:26 am UTC

just run

c:\> lsnrctl start

from the command line, it'll create the service.

8.1.7.3 to 10G

Susan, August 30, 2004 - 12:54 pm UTC

Tom,

Oracle support has stated to me that I can perform an upgrade directly from 8.1.7.3 to 10G. Can you elaborate on why you advise patching to 8.1.7.4. Thanks.

Tom Kyte
August 30, 2004 - 1:11 pm UTC

if support has

a) said OK
b) given you the step by steps
c) will support you

go for it. It is normally done from the terminal (last supported) release, which in this case is 8174. according to what I've seen (since you prompted me to poke deeper :) 8.1.7.x is the needed release. I've never done anything other than 8.1.7.4

anyone else have any experiences to relate regarding this?

A reader, August 30, 2004 - 4:37 pm UTC

Found the following comments in the thread:
"If it's more than 8G, can you tell me how you enabled the memory configuration to get an SGA larger than 1.7G? With default LINUX configuration supporting only 4G (32 bit), you can only get a max 1.7G SGA."

I didn't understand this. Is there any restriction for max SGA to be 1.7 G in Linux (I am planning to move from Solaris to Redhat advanced server 3.0)?



In response to Tom a couple of comments up...

Nick, August 31, 2004 - 8:32 am UTC

We successfully migrated 2 production databases from 8.1.7.2 to 9.2.0.4. This doesn't answer the question about migrating to 10, but it does suggest that 8.1.7.2 was sufficiently mature to handle upgrades.

A reader, August 31, 2004 - 7:45 pm UTC

Thanks for the info, but it doesn't really answer my question. My question was: why is the max 1.7 GB when you can create a 4GB SGA?

Tom Kyte
September 01, 2004 - 7:42 am UTC

begging your pardon but you wrote:

<quote>
I didn't understand this. Is there any restriction for max SGA to be 1.7 G in
Linux (I am planning to move from Solaris to Redhat advanced server 3.0)?
</quote>

and I responded with a link that shows how to set up a larger SGA?!?


so, that sort of shows that 1.7 gig (as posted by another reader) isn't the hard limit at all. sorry if that was not clear.

As that reader stated "with the DEFAULT configuration...."

A reader, September 01, 2004 - 1:58 pm UTC

Thanks for the details! Now the issue is clear for me.

I guess that I didn't present my question properly.

I just needed a 2-3 GB SGA and I didn't understand the above thread, so I asked you a question.

Thanks for sharing this info.

Oracle Developer

Dawar, September 02, 2004 - 12:30 pm UTC

Tom,

Destination OS: SuSE Linux Enterprise Server 8.
Source OS : Sun Microsystems Inc. SunOS 5.5.1

I have installed Oracle 10.1.0.2 on SuSE Linux Enterprise Server 8.
My legacy database version is Oracle 7.3.4 and it is installed on SunOS 5.5.1.

I would like to exp/imp data from Oracle 7.3.4 to Oracle 10.1.0.2.

I noticed some utilities in Oracle 10g (dbconsole) under maintenance tab.

Utilities

Export to Files
Import from Files
Import from Database
Load Data from File
Gather Statistics
Reorganize Objects
Make Tablespace Locally Managed


Can I use the above utilities to achieve my goal?

Is the "Import from Database" utility a solution for me?
If yes, how does it work?

I will appreciate your feed back.

Please note: this is my first time using exp/imp from one database version to another.

Regards,
Dawar

Tom Kyte
September 02, 2004 - 1:45 pm UTC

you will use the 7.3 export (exp) tool against 7.3 to create the DMP file. you will use the command line.

then ftp the file to the target machine.

you will use the 10g import (imp) against the result file. You can use dbconsole to import if you like, but it would be as easy to just "run the command"

Oracle Developer

Dawar, September 02, 2004 - 4:08 pm UTC

Source OS : Sun Microsystems Inc. SunOS 5.5.1
This is connected to your feedback above.

Tom,
The actual size of my database is as follows:

SQL> select sum(bytes) from dba_data_files;

SUM(BYTES)
----------
3227516928

But size of my dmp file before FTP is as follows:

dhs3 41: ls -ltr dawar.dmp
-rw-r--r--   1 oracle7  dba      51577856 Sep  2 12:07 dawar.dmp

Here what I got in Linux server (Destination) after FTP

oracle@dhshrwits:~/Documents> ls -ltr dawar.dmp
-rw-r--r--    1 oracle   oinstall 51563371 2004-09-02 12:38 dawar.dmp

All sizes have some differences.

Case Study:
My goal is to export the entire database (Oracle 7.3.4 on SunOS 5.5.1) and put the dump on the Linux server through ftp.
The file name is dawar.dmp, which contains all the data.

I tested exp and ftp.

Please confirm all my steps.
I did telnet to my Sun box from the Linux server.
Logged on to sqlplus on SunOS via telnet, where Oracle 7.3.4 exists.

Connected to: Oracle7 Server Release 7.3.4.5.0 - Production
With the parallel query option
PL/SQL Release 2.3.4.5.0 - Production
Enter array fetch buffer size: 4096 >

Export file: expdat.dmp > dawar.dmp

(1)E(ntire database), (2)U(sers), or (3)T(ables): (2)U > 1

Export grants (yes/no): yes > yes

Export table data (yes/no): yes > yes

Compress extents (yes/no): yes >

Export done in US7ASCII character set

About to export the entire database ...
. exporting tablespace definitions
....................
.................

Export terminated successfully without warnings.

dhs3 41: ls -ltr dawar.dmp
-rw-r--r--   1 oracle7  dba      51577856 Sep  2 12:07 dawar.dmp

exit from sqlplus and telnet sun OS.

Now, from the Linux server, I run ftp from the directory to which I'd like to transfer dawar.dmp:
oracle@dhshrwits:~/Documents> ftp ipaddress of Sun OS
Log on as the Oracle user.

ftp> get dawar.dmp
local: dawar.dmp remote: dawar.dmp
227 Entering Passive Mode (159,225,155,29,146,23)
150 ASCII data connection for dawar.dmp (159.225.140.75,36190) (51577856 bytes).
226 ASCII Transfer complete.
51879696 bytes received in 19:54 (42.41 KB/s)
ftp> exit

The file has been transferred to the Linux server.

LINUX SERVER (DESTINATION)
oracle@dhshrwits:~/Documents> ls -ltr dawar.dmp
-rw-r--r--    1 oracle   oinstall 51563371 2004-09-02 12:38 dawar.dmp
oracle@dhshrwits:~/Documents>

Am I missing anything in the whole process?
If not, why is there such a big difference in size?

Any feedback will be appreciated.

Regards,
Dawar 

Tom Kyte
September 02, 2004 - 4:29 pm UTC

ftp> bin

you want to do a BINARY TRANSFER!
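What went wrong is visible without ftp at all. A small sketch in Python (the bytes are hypothetical): ASCII-mode transfer rewrites line-ending bytes, which is fine for text but corrupts, and resizes, a binary .dmp file:

```python
# Binary data that happens to contain LF (0x0A) bytes.
payload = bytes([0x00, 0x0A, 0xFF, 0x0A, 0x41])

# One direction of ASCII-mode translation: rewrite each LF as CRLF.
ascii_mode = payload.replace(b"\n", b"\r\n")

print(len(payload))     # 5
print(len(ascii_mode))  # 7 -- the "text" conversion changed the size
                        #      and the bytes, exactly as seen above
```

Binary mode (`ftp> bin`) transfers the bytes untouched, so the sizes match.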

Oracle Developer

Dawar, September 02, 2004 - 5:40 pm UTC

Tom,

ftp> bin

works for me.
Now I have no difference in my dmp files before & after ftp.

oracle@dhshrwits:~/Documents> ls -ltr dawar*
-rw-r--r--    1 oracle   oinstall 51577856 2004-09-02 14:23 dawar.dmp


What about the difference from the database size?
Isn't it very big?

SQL> select sum(bytes) from dba_data_files;

SUM(BYTES)
----------
3227516928

Regards,
Dawar
 

Tom Kyte
September 02, 2004 - 9:49 pm UTC

dump files have "create index..." -- but not the index data.

dump files have 'create tablespace temp size 500 gigawads' -- but not the temp tablespace itself

and so on....

a dmp file is not datafiles, it is just rows in a file.

Oracle Developer

Dawar, September 03, 2004 - 1:06 am UTC

Tom,
Thanks for your feedback.

Here is the situation.
I do not have sys or system password for Oracle 7.3.4 which is installed on SunOS 5.5.1
In 1994 or 1999 consultants installed oracle 7.3.4 database. And they left.
Few months back I joined this project.

I found one user which has more privileges than any other user. Its password is manager and the user name is the same as the project name.

I logged on as that user prior to exp.

I did not see any errors while I exported the data.
I just wanted to make sure that all the data has been exported.

1) Is there any way to check that this user has all necessary role or privileges to exp data?

2) Do I need to create same user name in 10g database?

Regards,
Dawar



Tom Kyte
September 03, 2004 - 9:54 am UTC

1) if you did a full database export AND this export didn't say "no, you cannot do that, you don't have the privs" then it exported.

2) you can use fromuser touser, if you like. so 'do you have to', no. 'do you want to', probably.

Oracle Developer

Dawar, September 03, 2004 - 11:40 am UTC

I am fine with the sys and system passwords now.
I am able to alter both users and assign new passwords to them.
So the password issue is solved.

Now I will do a full database exp again.
I will create the same tablespace names in the target database as in the source Oracle 7.3.4 DB.
Afterwards I will do a full imp into 10g.

Regards,
Dawar

Oracle Developer

Dawar, September 03, 2004 - 1:14 pm UTC

Tom,

I am doing exp/imp from Oracle 7.3.4 to 10.1.0.2.0.

Source OS: Sun Soaris 5.5
Target OS: SuSE Linux Enterprise Server 8.


Here I got feedback from Oracle TAR Support:

You should connect as system when performing the export:

and she also said:

Full database export will export all schemas including SYSTEM. The dictionary objects owned by the SYS user are NOT ever exported and are generated by running the appropriate CATALOG/CATPROC scripts, possibly CATPATCH.SQL in 10g.

1) As whom should I connect when performing the exp?
SYS or SYSTEM?

2) Do I need to do imp with the same user name?

3) Are system objects exported?

Regards,
Dawar

Tom Kyte
September 03, 2004 - 1:46 pm UTC

1) seems you were told already? any DBA account would do for a full export.

2) any DBA account would do

3) seems you were told already?

Oracle Developer

Dawar, September 04, 2004 - 1:12 pm UTC

Source OS,DB:Sun Solaris 5.5, Oracle 7.3.4.5
Target OS,DB:SuSE Linux Ent. Server 8,Oracle 10.1.0.2.0

Tom,

I am doing exp/imp from Oracle 7.3.4 to Oracle 10.1.0.2.0.
I have done the export successfully and put the dmp file on the Linux server through ftp,
under the /opt/oracle/documents directory.

>/opt/oracle/documents
I ran imp from command line.
>imp --- as a system/password

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 -
Production
With the Partitioning, OLAP and Data Mining options

Import file: expdat.dmp > dn.dmp

Enter insert buffer size (minimum is 8192) 30720>

Export file created by EXPORT:V07.03.04 via conventional path
List contents of import file only (yes/no): no >


Ignore create error due to object existence (yes/no): no >


Import grants (yes/no): yes >

Import table data (yes/no): yes >


Import entire export file (yes/no): no >

Username: system

Enter table(T) or partition(T:P) names.
Null list means all tables for user
Enter table(T) or partition(T:P) name or . if done:


-----------------------------------------------------
---------------------------------------------
------------------------------------------


very last row I got message as follows

: Import terminated successfully with warnings.

Note: I selected every thing as default on prompt after imp.
is it ok?

Prior to imp I created same tablespace name in target DB as source DB.

Do I need to create schemas in target DB as well OR more?

Regards,
Dawar


Tom Kyte
September 04, 2004 - 1:42 pm UTC

did you bother to read the warnings and attempt to determine "if they mean anything relevant to you at all"??????

TOP PRIORITY IS Minimal downtime. OLTP 24x7

Alvin, September 07, 2004 - 4:36 am UTC

Migrating from Solaris 8i to Linux 10g x86 ( Different machines )
--
--Approach 1 :
--

Is it POSSIBLE for me to
1. Get incremental exports from 8i and do incremental imports into 10g?
2. While the Solaris 8i would still be in production?
3. Then, before making the Linux 10g the production database, apply the last incremental export from 8i? (A concept similar to that of standby databases.)

Is that a feasible way of upgrading from 8i to 10G (across platforms)?

**** VS ****
--
--Approach 2 :
--

Somehow my boss is convinced that
1. Patching 817 to 8174 on Solaris
2. DBUA 8174 to 10g on Solaris
3. Transportable tablespaces from 10g Solaris to 10g Linux SuSE (doing some endian conversion as well)

will result in more downtime than the incremental export/import (if that's even possible).

Is it correct if I tell my boss that the latter approach would be much, much faster because:

1. from 8i to 10G Oracle makes some minimal changes, specifically on tablespace headers, so they become OS independent, making transportable tablespaces a mere 'plug' operation?

2. there is minimal physical IO, compared to the IO of export (scans all blocks in all segments) + import IO + the IO of recreating all indexes?

Tom Kyte
September 07, 2004 - 9:06 am UTC

incremental export gets *any table that has changed, the entire table, even if one byte was modified*

the incremental exports will not get you very far I'm afraid.

A potential path for you could be read only replication (materialized views). It won't work with LONGs, but for "most average tables", it could be used. See

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:744625626787#15013618923218

for the technique, it uses the "prebuilt table" clause -- meaning when you drop the materialized view, the tables conveniently stay behind! intact!

So, you could:

a) create a new 10g database with the schema (empty)
b) create mv's on these prebuilt tables
c) refresh them complete, then refresh incrementally (that'll keep them in sync)

to "cut over", you would

o stop connections to the 8i instance (except for this dblink of course)
o do a final refresh
o drop the mv's

the tables on 10g are ready to go at that point.


I would strongly encourage you to

a) set up a test solaris instance (restore your production instance somewhere)
b) test this procedure out, load up your test instance, see what impact it will have, what issues you might encounter

since you have to setup the 10g test instance to load test, functionality test anyway -- suggest you do the same to the solaris side to gauge what adding the MV logs and refresh process will do to you for a while -- while you are transitioning.
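The refresh-and-cut-over steps above can be sketched in SQL. A minimal sketch only: the table and dblink names (emp, old8i) are hypothetical, and indexes/constraints are omitted:

```sql
-- On the 8i source: a snapshot log so the MV can fast refresh.
create materialized view log on emp;

-- On the 10g target: precreate the segment, then layer the MV on it.
create table emp ( empno number primary key, ename varchar2(30) );

create materialized view emp
  on prebuilt table
  refresh fast on demand
  as select empno, ename from emp@old8i;

exec dbms_mview.refresh( 'EMP', 'C' )   -- initial complete refresh
-- ...periodic fast refreshes keep it in sync while 8i stays live...

-- Cut over: stop 8i connections, do a final refresh, drop the MV.
exec dbms_mview.refresh( 'EMP' )
drop materialized view emp;
-- the prebuilt table (and its data) stays behind, ready to go.
```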

Oracle Developer

Dawar, September 07, 2004 - 1:24 pm UTC

Source OS,DB:Sun Solaris 5.5, Oracle 7.3.4.5
Target OS,DB:SuSE Linux Ent. Server 8,Oracle 10.1.0.2.0

Tom,

I am doing exp/imp from Oracle 7.3.4 to Oracle 10.1.0.2.0.
I have done the export successfully and put the dmp file on the Linux server through ftp,
under $ORACLE_HOME/bin:

oracle@dhshrwits:~/product/10.1.0/db_1/bin> ls -ltr dn*
-rw-r--r-- 1 oracle oinstall 51577856 2004-09-07 09:59 dn.dmp


I ran imp from command line.
>imp --- as a system/password

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 -
Production
With the Partitioning, OLAP and Data Mining options

Import file: expdat.dmp > dn.dmp

Enter insert buffer size (minimum is 8192) 30720>

Export file created by EXPORT:V07.03.04 via conventional path
List contents of import file only (yes/no): no >


Ignore create error due to object existence (yes/no): no >


Import grants (yes/no): yes >

Import table data (yes/no): yes >


Import entire export file (yes/no): no >

Username: system

Enter table(T) or partition(T:P) names.
Null list means all tables for user
Enter table(T) or partition(T:P) name or . if done:

IMP-00015: following statement failed because the object already exists:
"CREATE TABLE "SRW_CMD_NAMES" ("CMD_NUM" NUMBER(5, 0), "CMD_NAME" VARCHAR2(3"
"0)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 10240 NE"
"XT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50 FREELISTS 1 FREELIST GR"
"OUPS 1)

IMP-00017: following statement failed with ORACLE error 1917:
"GRANT DELETE ON "MENU_V_APPL_GRP" TO "WITS" WITH GRANT OPTION"
IMP-00003: ORACLE error 1917 encountered
ORA-01917: user or role 'WITS' does not exist

IMP-00003: ORACLE error 1917 encountered
ORA-01917: user or role 'CASEDICT' does not exist
IMP-00015: following statement failed because the object already exists:
"CREATE FORCE VIEW "SYSTEM"."MENU_V_OPTION" ("MENU_"
"NAME","APPLICATION_NAME","OPTION_NUMBER","SHORT_NAME","DISPLAYED","OPTION_T"
"EXT","OBJECT_TEXT_ID","COMMAND_TYPE","COMMAND_LINE") AS "
"SELECT "MENU_NAME","APPLICATION_NAME","OPTION_NUMBER","SHORT_NAME","DISPLAY"
"ED","OPTION_TEXT","OBJECT_TEXT_ID","COMMAND_TYPE","COMMAND_LINE" FROM MENU_"
"B_OPTION "
IMP-00017: following statement failed with ORACLE error 1917:
"GRANT DELETE ON "MENU_V_OPTION" TO "WITS" WITH GRANT OPTION"

IMP-00003: ORACLE error 1917 encountered
ORA-01917: user or role 'CASEDICT' does not exist


IMP-00015: following statement failed because the object already exists:
"CREATE FORCE VIEW "SYSTEM"."SRW_STE" ("OWNER","APP"
"ID","ITEMID","TYPE","NAME") AS "
"select "OWNER","APPID","ITEMID","TYPE","NAME" from system.srw__ste where ow"
"ner = user or appid in"
" (select appid from srw_granted)"
" with check option"

------
------
Any feed back will be appreciated.

Regards,
Dawar


Tom Kyte
September 07, 2004 - 2:11 pm UTC

Ignore create error due to object existence (yes/no): no >

answer "yes" instead????

it seems you precreated some objects. (this stuff is right on the screen??)

do you have a DBA that can help you out -- this is pretty standard stuff for them.

Oracle Developer

Dawar, September 07, 2004 - 2:19 pm UTC

We do not have any DBA.

I need to fix it.

Please give me some guidelines and I will try to fix the problem.

Thanks

Dawar

Tom Kyte
September 07, 2004 - 2:45 pm UTC

did you read above?

"answer yes" since you apparently have pre-created some of these objects.

Please bear in mind, this isn't really "Oracle support Lite" here -- this is pretty basic sort of stuff. Very basic actually.......

Oracle Developer

Dawar, September 07, 2004 - 2:53 pm UTC

Tom,
This time I entered yes at "Ignore create error due to object existence (yes/no): no >".

But I am getting the following error.

IMP-00034: Warning: FromUser "" not found in export file
Import terminated successfully with warnings.

Regards,
Dawar

Tom Kyte
September 07, 2004 - 2:59 pm UTC


could be time to check out documentation? to see what things mean/are/do??

seems you hit "enter" and gave it a blank name to import. hmm.

FromUser "" <<<=====


please, read the documentation. It does really exist and could be quite enlightening.

nuff said on this topic.

MV approach to migration.

Alvin, September 07, 2004 - 11:08 pm UTC

Hmmm... isn't that similar to advanced replication?

1. Would AR be easier, since I'd have to replicate all the table objects and could just handle them as a group?
2. Can I propagate changes from Solaris 8i to Linux 10g?



Tom Kyte
September 08, 2004 - 9:06 am UTC

no, advanced replication is very different. it is transactional, has a larger impact on the source system.

You have very simple requirements here -- setting up and using basic replication (read only replication) is very simple.

1) you can handle them as a group with MV's as well?
2) yes

Use of replication manager.

Alvin, September 08, 2004 - 11:27 pm UTC

--- 8< ---
You have very simple requirements here -- setting up and using basic replication
(read only replication) is very simple.

1) you can handle them as a group with MV's as well?
2) yes

--- 8< ---

I see. The read only replication seems most viable. Sorry for the wrong terminology; I meant replication, not advanced replication.

1. I'd have to use replication manager, right? Since I'm only shooting for basic replication (single master + 1 read only snapshot). Would the use of replication manager be easier?

To clarify:
2. Does advanced replication = multimaster settings + conflict resolution?

3. If I create an MV, it will automatically create a TABLE SEGMENT for me, right? Can I, after the initial refresh, create an index on the base TABLE SEGMENT?

4. Or, if I create an index on my MV, will the index automatically switch to the BASE TABLE SEGMENT after I drop the MV? Which do I index, the MV or the base table?

5. Migration of sequences/packages/triggers would simply use the Export/Import utility, right?



Tom Kyte
September 09, 2004 - 7:59 am UTC

1) i didn't use any GUIs in my example. you don't HAVE to use anything you don't want to.

I've not used the GUI personally. It would be up to you. You have to set up the prebuilt tables -- creating snapshot logs and MVs after that is rather straightforward.

2) yes.

3, 4, 5) we are using *prebuilt tables* -- that is the crux of this implementation. without a prebuilt segment -- when you drop the mv on the new site -- bamm, there goes your data too. using the pre-built table is what makes this "work"

You will
o create that table
o index
o either not put the constraints on OR use deferrable constraints

then create the mv's (see the example -- it goes step by step by step over what you would do)



sequences are something you need to do manually -- after you shut down the source (old) database, you'll need to select out the sequence name, cache size, etc -- and the current value -- and generate "create sequence s ... start with that_value" statements.
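The "select out and generate" step can be scripted. A minimal sketch (columns are from the standard USER_SEQUENCES view; the cache padding is a judgment call, not a prescription):

```sql
-- Run on the old database after application shutdown; spool the output
-- and run it on the new database. Padding the start value by one
-- cache's worth of increments guards against cached values that were
-- handed out but never written back to the dictionary.
select 'create sequence ' || sequence_name ||
       ' start with '   || (last_number + cache_size * increment_by) ||
       ' increment by ' || increment_by ||
       ' cache '        || cache_size || ';'
  from user_sequences;
```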

I would hope you would have the code in source code format somewhere -- this would be a good time to get it all in order, actually. Set up your test copy of production, your test copy of "new", and do this migration say 30 times -- so you know exactly what to do. Test the old apps on the new database, make sure everything is there. Test the old apps on the old database (copied) and make sure you get everything synced up.

Thanks.

Alvin, September 09, 2004 - 10:45 pm UTC

--- 8< ---
without a prebuilt segment -- when you drop the mv on the new site -- bamm, there goes your data too. using the pre-built table is what makes
this "work".
--- 8< ---

Thanks. I will probably build the MV's with refresh on Demand so i can synch them during off peak hours.

upgrade 10g

sam, September 15, 2004 - 11:55 am UTC

Hi Tom,
I upgraded my 9.2 database to 10g. The upgrade looks good. I am able to connect to the database through sqlplus, but when I try connecting through iSQL*Plus it errors with ORA-12500: TNS:listener failed to start a dedicated server process. Toad also returns the same error.
Here is my TNSNAMES
Here is my TNSNAMES

abc=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = mycomputer)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = abc.DOMAIN)
)
)

EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)

and LISTENER

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = abc)
(ORACLE_HOME = C:\oracle10g\product\10.1.0\Db_1)
(PROGRAM = extproc)
)
)

LISTENER10G =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = mycomputer)(PORT = 1521)
)
)
)
)

Also getting the same error for Enterprise Manager. Can you direct me to where I might start looking for the problem?

Thanks,
Sam

Tom Kyte
September 15, 2004 - 11:59 am UTC

are you running the 10g listener?

did you use the tools to create the configuration files?


[tkyte@xtkyte-pc tkyte]$ oerr ora 12500
12500, 00000, "TNS:listener failed to start a dedicated server process"
// *Cause: The process of starting up a dedicated server process failed.
// The executable could not be found or the environment may be set up
// incorrectly.
// *Action: Turn on tracing at the ADMIN level and reexecute the operation.
// Verify that the ORACLE Server executable is present and has execute
// permissions enabled. Ensure that the ORACLE environment is specified
// correctly in LISTENER.ORA. The Oracle Protocol Adapter that is being
// called may not be installed on the local hard drive. Please check that
// the correct Protocol Adapter are successfully linked.
// If error persists, contact Oracle Customer Support.
[tkyte@xtkyte-pc tkyte]$



asm

reader, September 16, 2004 - 2:41 pm UTC

If I chose ASM striping and mirroring, is it like software striping and mirroring? If that is the case, would there be more load on the CPU? Is this correct? If so, why would one want to use ASM rather than hardware striping and mirroring? Thanks.

Tom Kyte
September 16, 2004 - 3:18 pm UTC

if you have excellent hardware-based stripes and mirrors, you can very well use those. It is not an either/or -- you can use both.

but software striping is not "cpu intensive" at all -- we have already been doing it forever (we call them extents)

Oracle10g and TOAD

Praveen, September 18, 2004 - 11:32 am UTC

Hi Tom,
I am not sure if I can ask this question in this thread or whether you are aware of the following problem (as you are very fond of SQL*plus... ;) ).
I was happily using Toad 7.6 to work with Oracle 9i. But now an upgrade to 10g has caused Toad to fail to connect. Error: "OCI version 10.1.0.2.0 is not supported". But if I install the 9i client and try to connect, then it works fine. Is there any workaround?
Thanks and regards

Praveen

Tom Kyte
September 18, 2004 - 12:38 pm UTC

you'd have to ask the makers of toad if they have a 10g client version or not.

Basically what is happening is that you have a 9i compiled piece of client software (toad).

it uses -- it depends on -- the 9i client software.


in order to use the 10g client software -- you'd need a new binary from the makers of toad.

SQLPlus would be the same -- the 9i sqlplus needs the 9i client, the 10g sqlplus needs the 10g client and so on.


As it is -- you are OK right now, you have installed the software needed by toad and a 9i client connecting to a 10g database is "ok, supported and normal"

To Pravin and others who need to learn how to rate the response.

A reader, September 18, 2004 - 12:58 pm UTC

Pravin,
Just curious, on what basis did you give a rating of just two stars to Tom's response(?) above? I think the ratings dropdown is meant for the response Tom gives in answer to the question asked in the original thread and his subsequent follow-ups for related questions.

Seems to me that you (like many others who have done so elsewhere) just wanted to rate the response *low* so that you could ask a new question on the same thread, which obviously is not fair.

Just my $0.02.





I agree with the comment made above

reader, September 18, 2004 - 1:42 pm UTC

Those who give "low" ratings to Tom do not even have a fraction of knowledge about Oracle products that Tom has. Hope people like Praveen realize that.

A reader, September 19, 2004 - 8:02 pm UTC

I guess something like "Do you think the response would be helpful to you?" would solve Praveen and *many others'* problem

A reader, September 19, 2004 - 8:23 pm UTC

Tom,

Is it necessary to upgrade to any other version while upgrading from 8.1.7.0 to 10g (the OS would be upgraded from 32-bit to 64-bit)? Is there any reason why it should be upgraded to 9i? Any reason why it should not be upgraded to 10g?

Tom Kyte
September 19, 2004 - 9:50 pm UTC

direct upgrade from 8ir3 (817) is supported. no reason to stop at 9i.

A reader, September 19, 2004 - 10:12 pm UTC

Could you please suggest some document from oracle?
Thank you

Tom Kyte
September 20, 2004 - 7:51 am UTC

http://www.oracle.com/pls/db10g/portal.portal_demo3?selected=1

Look for the box entitled "upgrade information"

Download

APL, September 23, 2004 - 2:15 am UTC

One of my friends told me that the 10g we are downloading from the OTN site is a trial version; its evaluation period will be over after 30 days. We can download all earlier Oracle releases from the net and there is no trial period for those versions. Why this exception for Oracle 10g?

Tom Kyte
September 24, 2004 - 7:44 am UTC

no exception, the trial eval license has always been that way for *evaluation* software since the beginning of downloads as far as I know.

There are eval licenses
There are the 'otn' agreement 'i agree to use this software to develop a prototype to show it can be done but not for development or production'

they are both 'in force' and enforced in a 'we trust you to follow this' manner (e.g., the software will not cease functioning on day 31)

A reader, October 11, 2004 - 9:48 pm UTC

Tom,

Sorry, I am unable to find from documentation if there is a patch between 8.1.7.0.0 and 10g. Could you please clarify if 8.1.7.0.0 should be upgraded to 8.1.7.4 before moving to 10g?

Thanks for the help.


Tom Kyte
October 12, 2004 - 7:41 am UTC

you need not have patched up, no.

you can upgrade from 8170

A reader, October 12, 2004 - 10:55 am UTC

Tom

Thank you. What's the current (best) release version that can be upgraded to? If the upgrade is planned for the middle of next year, which is better: upgrade to 10.1.0.2 or wait until 10.2.0.3?

Thanks again.

Tom Kyte
October 12, 2004 - 11:22 am UTC

10.1.0.3 is shipping, available, and would be the only choice if I was asked "upgrade us"

A reader, October 12, 2004 - 11:30 am UTC

Tom,

If the upgrade is planned for aug next year, should we plan for 10.1.0.3?

Thank you.

Tom Kyte
October 12, 2004 - 11:37 am UTC

you should plan on the current patch set of 10.1.0.x

starting today that would be 10.1.0.3, tomorrow it might be patchset .4

just like there was 920, 9201, 9202, 9203, 9204, 9205, and so on....

8170, 8171, 8172, 8173, 8174.... and so on

"Going to 10g will be more of a risk than going to 9i"

Mike S, October 19, 2004 - 10:18 am UTC

- the above apparently being the view of senior management here, in respect of upgrading our mission-critical 8.1.7.4 1Tb OLTP customer account management database. This is going to get significantly bigger (will at least double in size) over the next couple of years, due to our taking on the customer base of a company we have merged with. Any upgrade would probably take place in summer 2005.

I disagree with the management view - not least because I've just completed the 10g New Features for Administrators course ;-) - but also because I suspect the impact of taking on this mass of new data will be to pose more tuning and configuration issues for us than our small DBA team can handle. I think we'll need all the assistance we can get from ADDM in particular and the extra functionality we get from 10g in general.

Any thoughts on how best to present this argument to senior management? What do you think are the most important features that 10g has to offer for the above scenario, that 9i doesn't have?

Tom Kyte
October 19, 2004 - 10:44 am UTC

mgmt is all about cost and risk right?

o cost -- hey, if you goto 9i, you'll be in the SAME BOAT 1 year from now as far as support goes. You'll have to pay for this migration twice

o risk -- hey, if we discover any issues during the course of testing (yes, we'll have a test), we have a fallback plan. In fact -- if we are at such and such a date -- we'll have to make a "do or don't do" decision. At that point in time, we would have sufficient information to know whether we will ultimately be successful. Remember, in test there is no such thing as a down production system.



A reader, October 28, 2004 - 11:25 am UTC

Hi Tom,

With reference to the above post, would 9i support end next year? In that case, why should anyone migrate to 9i? Also, you talked about "risk" -- is there any risk involved with 10g that isn't with 9i? Why would anyone say "no, only to 9i"?

Tom Kyte
October 28, 2004 - 7:04 pm UTC

you can see all of the details on metalink. Error Correction Support does end -- not total support (9i will be in the same mode of support on 01-jan-2006 as 8i is on 01-jan-2005)

A reader, November 01, 2004 - 12:10 pm UTC

Earlier post:

And 10g is slated to be in ECS until 2007

support Doc ID: 190435.1

whereas 9ir2 is 2005

Product & Product Version(s)   Platform(s)     End of ECS    End of ES     End of EMS    Migration Path
Oracle Database 10.1.0.x       ALL Platforms   31-JAN-2007   31-JAN-2010   31-JAN-2009   TBD
Oracle Database 9.2.0.x        ALL Platforms   31-DEC-2005   31-DEC-2008   31-DEC-2007   TBD

-----------------------------

Latest from 190435.1:

Oracle Database 10.1.0.x       ALL Platforms   01-FEB-2009   01-FEB-2012   01-FEB-2011   TBD
Oracle Database 9.2.0.x        ALL Platforms   01-JUL-2007   01-JUL-2010   01-JUL-2009   TBD

------------------------------

Could you please clarify these two (from last post)?


9i will be in the same mode of support on 01-jan-2006 as 8i is on 01-jan-2005

How? Because 9iR2 ECS ends in 2007. It seems 9iR2 is fine until 2007. In that case, how are they the same?


if you goto 9i, you'll be in the SAME BOAT 1 year from now as far as support goes. You'll have to pay for this migration twice


Same as above. Could you please clarify?






Tom Kyte
November 02, 2004 - 6:06 am UTC

they moved my cheese :)

that is why i put the note number in -- looks like they moved it out.

My position, however, remains the same. If faced with upgrading an 8i instance, there is exactly one version I would consider, and it is not 9i.

  1  select months_between(to_date('01-feb-2009'),sysdate)/12,
  2  months_between(to_date('01-jul-2007'),sysdate)/12
  3* from dual
ops$tkyte@ORA9IR2> /
 
MONTHS_BETWEEN(TO_DATE('01-FEB-2009'),SYSDATE)/12
-------------------------------------------------
MONTHS_BETWEEN(TO_DATE('01-JUL-2007'),SYSDATE)/12
-------------------------------------------------
                                       4.24611842
                                       2.66278509

10g gives me almost 2 more years of life from that upgrade.  And if you are running 817 now, you've probably been running it for nearly 4years as it is -- so the timeframe feels right. 

A reader, November 03, 2004 - 12:42 pm UTC

Tom,

It appears 9i stays solidly supported for 2.66 years. Given the fact that Oracle itself is not sure when the next version will be (because of bugs in the current one), why would you still advocate 10g? I might have to go through a painstaking migration again in 6 months. You may even desupport multiple times in 4.4 years -- a migration every time? Coming to migration to a new version, could you please suggest some documents? Is it as complicated/time-consuming as a migration to a new release?

Thank you

Tom Kyte
November 05, 2004 - 10:33 am UTC

HUH??????

what the heck are you saying? "because of bugs in current"? did you know there are some in 9iR2? and 10gR1? and 8iR3? there are some in 10gR2 undergoing beta.


I am firm in "if I'm going to upgrade, I'm sure as heck going to the latest production version". Period.

A reader, November 05, 2004 - 12:19 pm UTC

Thanks very much for your patience

Drop table

A reader, November 05, 2004 - 5:45 pm UTC

"4) diskspace will be automagically reclaimed for you -- as you need space, these extents will be reused and the objects will disappear from the recycle bin"

OK so if I do 'drop table t;' (without purge) and later some other tables in that tablespace grow and need the space, Oracle will silently "purge" that table so then 'flashback table t to before drop' will return an error?

So this doesn't exactly follow the "traditional" Windows "recycle bin" metaphor? i.e. unless I explicitly 'Empty Recycle Bin' or open the Recycle Bin and permanently delete an item, it will always stay in the Recycle Bin. Windows will NEVER silently, permanently delete something.

I have seen that, in Windows, if I try to delete really large files (bunch of ISO images), I get a dialog box 'These files are too large for the recycle bin, do you want to permanently delete them?" Is there something like this in Oracle 10g?

Tom Kyte
November 06, 2004 - 10:25 am UTC

Windows does cycle things out in a FIFO queue manner.

At least windows XP pro does.


I just tested it, my properties lets windows use 10% (2gig in my case) of the disk for the recycle bin. I dragged an 817 stage area into the recycle bin (about 650meg) and then a 9iR2 one (about 1.4 gig) -- an older directory of 500meg disappeared out of the recycle bin. I then dragged a 10g stage into there (had to clean them up anyway :) and the 817 went away as did all old files.


Think about what would happen if they did not -- eventually the recycle bin would fill up and what then? No one would be able to delete files without getting the dialog "sorry -- too big for the recycle bin".

So, not that we are trying to perfectly imitate windows (hardly) -- but the analogy holds up.
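The behavior being discussed can be seen directly in 10g. A minimal sketch (the table name T is hypothetical):

```sql
drop table t;                      -- goes into the recycle bin; its extents
                                   -- stay reusable by other objects

select original_name, object_name  -- the renamed BIN$... object
  from recyclebin;

flashback table t to before drop;  -- works only while the object is still in
                                   -- the bin; once space pressure has purged
                                   -- it, this raises ORA-38305

drop table t purge;                -- the "permanently delete" variant:
                                   -- bypasses the recycle bin entirely
```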

upgrading standby

Samuel, November 06, 2004 - 11:58 am UTC

(1) I have a standby database. If I upgraded my 9iR2 database to 10g, what steps do I have to do with regard to standby database?

(2) I was reading the upgrade doc and it said somewhere (I can't find exactly where I read it) that before the upgrade, it will place the database in noarchivelog mode? Why is that? If I have a standby database, after I upgrade my primary how does the standby behave? Do I have to recreate the standby after I upgrade my primary? Should I upgrade my primary and standby separately? As you see, I am confused! Please help with the best practice for upgrading a 9iR2 database to 10g in a standby configuration. THANKS.

Tom Kyte
November 06, 2004 - 12:17 pm UTC

please use support for things like this, upgrade and migration -- they are really "good" at it.

In fact, the migration guide for 10g (and other releases) does have a section "update your standby database"

cannot connect to 9i DB from 10g DB

A reader, November 06, 2004 - 4:34 pm UTC

Hi Tom,

I just installed an oracle 10.1.0.2.0 database on a redhat ES/3 machine. After installation, I can successfully connect to it from a 9i Box which was built on a windows machine, but I CANNOT connect to this 9i box from this newly-built 10g box via sqlplus? The error I get is:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Have you experienced this problem before? Can 10g connect to 9i using sqlplus?

thanks,



Tom Kyte
November 06, 2004 - 6:07 pm UTC

[tkyte@tkyte-pc-isdn admin]$ sqlplus scott/tiger@ora9ir2.us.oracle.com

SQL*Plus: Release 10.1.0.2.0 - Production on Sat Nov 6 17:40:04 2004

Copyright (c) 1982, 2004, Oracle. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

scott@ORA9IR2>



(10.1.0.2 on RH AS3)

[tkyte@tkyte-pc-isdn admin]$ oerr ora 12514
12514, 00000, "TNS:listener does not currently know of service requested in connect descriptor"
// *Cause: The listener received a request to establish a connection to a
// database or other service. The connect descriptor received by the listener
// specified a service name for a service (usually a database service)
// that either has not yet dynamically registered with the listener or has
// not been statically configured for the listener. This may be a temporary
// condition such as after the listener has started, but before the database
// instance has registered with the listener.
// *Action:
// - Wait a moment and try to connect a second time.
// - Check which services are currently known by the listener by executing:
// lsnrctl services <listener name>
// - Check that the SERVICE_NAME parameter in the connect descriptor of the
// net service name used specifies a service known by the listener.
// - If an easy connect naming connect identifier was used, check that
// the service name specified is a service known by the listener.
// - Check for an event in the listener.log file.

[tkyte@tkyte-pc-isdn admin]$


A reader, November 11, 2004 - 9:36 am UTC

Should we be on the latest patchset (8.1.7.4) to receive Extended support?

Tom Kyte
November 11, 2004 - 10:34 am UTC

you need to be on 8174 to be supported.

A reader, November 15, 2004 - 10:51 pm UTC

Tom,

While updating to a terminal patchset (8170 to 8174), should any application-related things be taken care of? Will there be any potential issues with applications that are using Oracle? If so, could you please elaborate on some details or suggest a document?

Tom Kyte
November 16, 2004 - 6:37 am UTC

this is a perfect example of a question that is great for metalink.

I could answer with "no", but not be aware of some issue specific to your situation on your platform.

So, i would say "in general, no" -- but this is perfect for support.

10g

Venkat, November 24, 2004 - 12:38 am UTC

Tom,

We want to have a consolidated list of 10g features (especially SQL and PL/SQL related) that are desupported/deprecated/changed/enhanced compared to the 9i releases. Kindly help.



A reader, November 24, 2004 - 2:59 pm UTC

Tom,

Apart from the upgrade guide, should someone be concerned about ASM when migrating to 10g? The upgrade assistant does the job. However, I don't know what ASM is. Is it necessary for the upgrade? What level of understanding should one have about ASM and other things such as RAC, filesystems, and grid control before and while upgrading?

Tom Kyte
November 24, 2004 - 3:39 pm UTC

ASM is a totally different way of managing storage -- totally. No files.

Definitely read the Admin Guide on this -- 100%, no questions.

A reader, November 28, 2004 - 8:30 pm UTC

Tom,

Couple of questions regarding upgrade.

1) Any idea which of the following upgrades is widely used and for what reasons?

a) Database Upgrade Assistant b) Export/Import c) Manual upgrade

2) This page (7) mentions "Upgrade the Production Site" and this appears in "After Upgrading a Database" document. It's slightly confusing. Does that mean it's been discussing upgrading a test database till then?

http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10763/afterup.htm




Tom Kyte
November 29, 2004 - 7:59 am UTC

it was discussing how to update a standby database in that section.


but -- yes, in general, upgrade production comes after you have upgraded test a couple of times (practice makes perfect)

startup migrate or upgrade

Jianhui, February 22, 2005 - 4:14 pm UTC

Tom,
Are migrate and upgrade interchangeable here? If not, what's the difference? I have tried both, and v$instance.status says "OPEN MIGRATE" for both startup commands. A little bit confusing here. I don't even see a MIGRATE option in the STARTUP command in the SQL*Plus online documentation ( http://oraclesvca2.oracle.com/docs/cd/B14117_01/server.101/b12170/ch13.htm#i2699631 ), but it worked in my test and I saw it somewhere in a book or on the internet.

Best,


Tom Kyte
February 22, 2005 - 5:21 pm UTC

then don't use migrate if you don't see any documentation for it? it is not documented...

10g

Bipin, March 02, 2005 - 9:32 am UTC

Hi Tom,
I read all the documents on 10g, but nowhere could I find anything on Pro*C. We are planning to go to 10g, but one of our applications is running Pro*C code. The developer who generated the code is no longer with the organization, and the code was never handed over to the new developers. Now, I would like to know whether the Pro*C code has to be compiled in the new Oracle Home?
Secondly (I know this is not related to Oracle), how can I get the source files from the existing application?

Tom Kyte
March 02, 2005 - 10:09 am UTC

http://otn.oracle.com/pls/db10g/portal.portal_demo3?selected=3#index-PRO

if all you have are binaries, and you do not have the .pc files, you have been what is known as "robbed"


if you just have the executables and no source code, well -- you are out of luck.

are you saying you have no source code?

Not of 10g

Bipin, March 02, 2005 - 1:59 pm UTC

Hi Tom,
this is related to the 9i database. I'm asking you about 10g, not 9i.

Yeah, you are right i do not have source code.

Warm Regards
Biping

Tom Kyte
March 02, 2005 - 4:54 pm UTC

you are confusing me then.

you mentioned 10g, not me. but anyway, otn.oracle.com -> documentation. all there.


but, if you have no source code and the author of said code has left the business, the only things you might have going for you would be:

a) they are really nice, kept the source code and will give it back to you.
b) they are not nice, have source code, won't give it back to you -- in which case
you could litigate.
c) they haven't kept the source code, in which case -- you are just totally out of
any sort of luck, you have a binary and nothing else. You lost the program.

8.0 to 10g upgrade & Licensing

Yogesh, March 21, 2005 - 10:25 am UTC

We are planning a database upgrade from 8.0 to 10g. This will be a two-step upgrade, i.e. 8.0.4 to 8.0.6 / 8.1.7 and then to 10g. In this scenario, do we have to buy licenses for the intermediate versions like 8.0.6 or 8.1.7? Or can I download the evaluation version and use it for this purpose?

Tom Kyte
March 22, 2005 - 10:44 am UTC

with support, you get upgrades as part of the deal, you have them all.

10g client problem after upgrade

Jelena, March 30, 2005 - 8:31 am UTC

Hi Tom,
we upgraded recently to 10g and now I want to set up some clients to connect to the 10g database. Both client and server are SLES 9 boxes. I'm trying to connect with sqlplus, but in the end we need to connect an application with OCI drivers.

I installed all 4 instant client packages (unzipped them), added a tnsnames.ora file, and set the LD_LIBRARY_PATH variable.
This is the error i get:

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

We did the same on one windows box - and it worked.

I also tried to set TNS_ADMIN to point to this location, but it also didn't help.

Any idea what is missing there, or any link how to set up oci client?

Thank you,
Jelena

Here is my tnsnames.ora:
DEV3 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = devora)(PORT = 1521))
)
(CONNECT_DATA = (SID = dev3)(SERVER = DEDICATED))
)
and here is the ls of LD_LIBRARY_PATH directory:
libclntsh.so.10.1 libociei.so orai18n.jar
libheteroxa10.so libocijdbc10.so sdk
README_IC.htm libnnz10.so libsqlplus.so sqlplus
classes12.jar libocci.so.10.1 ocrs12.jar tnsnames.ora
glogin.sql libocci10_296.so.10.1 ojdbc14.jar


Tom Kyte
March 30, 2005 - 10:00 am UTC

first, just try:

$ sqlplus 'scott/tiger@(description=(.......)'

I usually set up a test.sh like this:

sqlplus 'x/y@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=xxx.us.oracle.com)(PORT=15253)))(CONNECT_DATA=(SERVICE_NAME=cxx)))'


that'll test if "everything is OKEY DOKEY"

then, to see where it is pulling the tnsnames.ora from, I:


[tkyte@xtkyte-pc tkyte]$ csh
[tkyte@xtkyte-pc ~]$ (strace sqlplus scott/tiger@abc) |& grep -i tnsnames
access("/home/tkyte/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/home/ora9ir2/network/admin/tnsnames.ora", F_OK) = 0
stat64("/home/ora9ir2/network/admin/tnsnames.ora", {st_mode=S_IFREG|0644, st_size=976, ...}) = 0
open("/home/ora9ir2/network/admin/tnsnames.ora", O_RDONLY) = 7
read(7, "# TNSNAMES.ORA Network Configura"..., 4096) = 976



(hey -- a question for all out there from me :) I used csh to get stdout and stderr using |& -- very old fashioned. what would you use in bash?)


and then see where it is reading from. Also, check your sqlnet.ora to see if you have a default domain being added or something.

bash

Greg, March 30, 2005 - 11:05 am UTC

Bash can do stdout and stderr the same as sh or ksh:

use ">" to redirect stdout, if desired, and "2>&1" to send stderr to wherever stdout is going.
e.g. trace the tnsping command and check standard output and error output for "tnsnames":

bash-2.03# truss tnsping ora10g 2>&1 | grep tnsnames
access("/.tnsnames.ora", 0) Err#2 ENOENT
access("/var/opt/oracle/tnsnames.ora", 0) Err#2 ENOENT
access("/h/data/10g/network/admin/tnsnames.ora", 0) = 0
stat("/h/data/10g/network/admin/tnsnames.ora", 0xFFFFFFFF7FFFC038) = 0
open("/h/data/10g/network/admin/tnsnames.ora", O_RDONLY) = 3
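To answer Tom's question directly: in bash you merge stderr into stdout before the pipe; bash 4 and later also accepts a csh-style `|&` shorthand. A tiny sketch:

```shell
# "2>&1 |" sends both streams down the pipe (works in sh/ksh/bash);
# in bash 4+, "cmd |& grep x" is equivalent shorthand for "cmd 2>&1 | grep x".
{ echo from-stdout; echo from-stderr 1>&2; } 2>&1 | sort
```

So Tom's csh one-liner becomes `strace sqlplus scott/tiger@abc 2>&1 | grep -i tnsnames` in bash.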


Tom Kyte
March 30, 2005 - 11:06 am UTC

excellent, thanks

Procedure compile problem

Bhaskar, April 02, 2005 - 7:42 am UTC

Hello Tom,

I have two databases, say A and B. When I compile a procedure on machine B, the compilation hangs looking at some objects on machine A.

But I have the same objects on machine B, and I am able to execute the procedure successfully. I need to recompile it with a small modification.

Unless the compilation gives an error, how do I find out what is happening to that procedure and what resources it is waiting for?


Tom Kyte
April 02, 2005 - 9:25 am UTC

the v$ tables are useful, you can see what sessions are "waiting on" using things like v$session_wait
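As a sketch of that approach (the SID value 42 is hypothetical -- you would look up the hung session in v$session first):

```sql
-- What is the compiling session waiting on right now?
-- A compile hanging on remote/other objects typically shows an event
-- such as 'library cache pin' or 'library cache lock' here.
select sid, event, state, seconds_in_wait
  from v$session_wait
 where sid = 42;
```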

9i RAC to 10gASM

Jay, April 21, 2005 - 9:49 pm UTC

Tom,

Currently we are on 9i (9204) RAC on Solaris 8.0 with Veritas, using an EMC frame. We are planning to upgrade to 10g and are also thinking about ASM.

First, is it a good idea to look into ASM from the current configuration, and if yes, could you point me towards links or notes?

Thanks.

Tom Kyte
April 22, 2005 - 10:21 am UTC

otn.oracle.com would be a great place to start

and yes, I would seriously consider evaluating it -- if you use it and CRS you can remove many bits of software that you have now.

Oracle Application Server 10g Installation hang?

Shabbir Hussain, May 29, 2005 - 7:52 pm UTC

Hi Tom,

First of all, sorry -- this is not the right place, but I have an urgent problem. My installation hangs while installing Oracle Application Server 10g. I had installed it in the past on a stand-alone server, but this time on my PC with 1GB of RAM, all tests pass and then the installation hangs.

Please pass me the link where I can post the message correctly.

Tom Kyte
May 29, 2005 - 8:54 pm UTC

Please utilize support for installation/configuration issues

especially regarding the application server, something I've never personally installed.

10g release 2

Dan, June 03, 2005 - 2:26 pm UTC

I'm looking forward to 10g release 2, because the people I
develop for are under the impression that any release 1
will be full of bugs as some previous posters suggested.

When is it coming out?
(I couldn't find any mention of a release date anywhere online).

Thanks.

Tom Kyte
June 03, 2005 - 5:13 pm UTC

spring 2005...

but remember, it is spring somewhere in the world all year around ;)


"soon"

Spring

Huy, June 09, 2005 - 10:35 am UTC

Surely Spring 2005 can only be Feb-Apr 2005 (Northern hemisphere) or Aug-Oct 2005 (Southern hemisphere) plus or minus 1 month?
E.g., it is not reasonable to call June or December "Spring" anywhere in the world.

Tom Kyte
June 09, 2005 - 5:56 pm UTC

sure it is, spring is in the air.

Take it for what it was -- namely "a joke"

A reader, June 11, 2005 - 8:56 pm UTC

Tom,

For those folks who are using 9i RAC (9205) with Sun Clusters, is it required to install CRS, or can we just upgrade to 10g and use Sun Clusters without CRS?

Thanks.

Tom Kyte
June 11, 2005 - 9:36 pm UTC

crs will be installed as part of 10g

A reader, June 12, 2005 - 1:24 pm UTC

Tom,

Maybe I wasn't clear enough: I can skip the separate CRS installation (separate ORACLE_HOME, etc.) and just install 10g, which will install CRS as part of the installation, correct?

Thanks.

Tom Kyte
June 12, 2005 - 2:00 pm UTC

Not sure, I do not typically install everything on every platform. It'll be there one way or the other -- I haven't reviewed the install screens in any sort of depth.

install/configuration questions are best asked of support.

10g RAC and CRS

reader, June 12, 2005 - 3:08 pm UTC

My understanding is that CRS is a MUST regardless of whether you use the OS vendor's cluster manager or not. CRS must be installed.

Tom Kyte
June 12, 2005 - 3:41 pm UTC

(that is what I said -- you want 10g rac, crs is going to be there, period)

Latest date for 10gR2 ????

reader, June 29, 2005 - 11:19 pm UTC

Will appreciate your guess because it will be the best ....

The release date for 10gR2 on linux

Thanks



Tom Kyte
June 30, 2005 - 9:27 am UTC

soon, like the end of this month soon I believe.

Two phase upgrade

Yogesh Bhandarkar, July 14, 2005 - 9:43 am UTC

I'm planning an upgrade from Oracle 8 to 10g. Along with the new version I'll be changing the physical box too. As I am upgrading from Oracle 8, it will be a two-phase upgrade, i.e. 8 > 8i > 10g. The database size is approximately 200GB, so the IMP/EXP option is not feasible.

Data is loaded every day by a nightly batch process. I want to minimize the downtime involved in the upgrade process. What would be the best approach for this upgrade?

Will I have to install all three versions on the new box?


Tom Kyte
July 14, 2005 - 11:13 am UTC

practice, that is the best way to minimize the amount of time it takes. practice it 100 times. You'll know down to the second how long it will take.

You would need to install the two new versions and restore the old database to that machine.

A reader, July 14, 2005 - 12:11 pm UTC

Practice is fine, but what alternative do you suggest? Hope you won't reply saying try all the options and pick whichever suits you best :-)

Tom Kyte
July 14, 2005 - 12:58 pm UTC

if you don't practice it, it will take days. Mistakes will be made.

if you do practice it, you'll know before hand precisely how long it will take and step by step what to do.

I see no other middle grounds here.

oracle 10g on windows2003

Oct, July 16, 2005 - 12:51 am UTC

Dear Tom
Thanks for all your help.

Now, with the release of Oracle 10gR2, we are planning to move to Oracle 10gR2 Enterprise Edition on Windows 2003 Server -- a fresh installation on a new server.
Kindly let us know if there are any issues to take care of before proceeding, or any special planning needed. It will be a time saver for us.
Regards
Oct

Tom Kyte
July 16, 2005 - 9:16 am UTC

build an extensive test plan so when you cut over to the new system, your end users don't even know it happened.

Migration

Sandy, July 20, 2005 - 10:08 am UTC

Hi Tom,

Sorry to pop in a question like this, but this is becoming a hot topic in here.. so wanted to get your opinion!

We plan to migrate our data warehouse from Unix to Linux.
It's currently a single-instance 9iR2 DB.

I don't see an issue in making it 10g on the destination Linux box. However, the problem is how to migrate the huge amount of data itself. What do you think would be the fastest method?

Any chance of using Datapump to export out of 9i and import into 10g? or... RMAN cross platform??

Thanks,
Sandy

Tom Kyte
July 20, 2005 - 12:56 pm UTC

you'd have to upgrade to 10g on unix first in order to transport or data pump the data over.

can you do that? even if you never use 10g on that machine except to move it off?

Migration

Sandy, July 20, 2005 - 1:19 pm UTC

Hi Tom,

Assuming that we can upgrade the Unix DB to 10g, would RMAN be an option? I might be getting confused, but I thought I read somewhere that 10g RMAN supports cross-platform backup and restore(?). Would that be faster/easier than expdp/impdp?

Thanks,
Sandy

Tom Kyte
July 21, 2005 - 7:30 am UTC

10gr1 allows for cross platform transportable tablespaces (similar to a backup/restore but not really). Rman would be involved if the files need to be converted.
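For context, a rough sketch of the 10g cross-platform transport flow; the tablespace name (dw_data), platform string, and paths below are illustrative assumptions, not details from this thread:

```sql
-- Hypothetical sketch of cross-platform transportable tablespaces in 10g.
-- dw_data, the platform string and the staging path are made-up examples.

-- 1. Compare endian formats of the source and target platforms:
SELECT platform_name, endian_format
  FROM v$transportable_platform;

-- 2. Make the tablespace read only on the source:
ALTER TABLESPACE dw_data READ ONLY;

-- 3. Export the metadata with Data Pump:
--    expdp system/... TRANSPORT_TABLESPACES=dw_data DUMPFILE=dw_data.dmp

-- 4. Only if the endian formats differ, convert the datafiles with RMAN:
--    RMAN> CONVERT TABLESPACE dw_data
--            TO PLATFORM 'Linux IA (64-bit)'
--            FORMAT '/stage/%U';
```

If source and target share the same endian format, the RMAN conversion step can be skipped and the datafiles copied as-is.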



10g tablespace threshold

abc, August 01, 2005 - 6:46 pm UTC

I explicitly specified tablespace usage threshold values while creating a tablespace.
Now I want to revert to the database-wide default tablespace usage threshold for that tablespace, so which package should I use?
dbms_stats, dbms_alert, dbms_space, dbms_monitor, dbms_server_alert, dbms_space_admin

Tom Kyte
August 01, 2005 - 9:33 pm UTC

sorry, not sure what you mean - is this a GUI thing? (dbconsole maybe?)

10g ocp

abc, August 02, 2005 - 12:29 am UTC

This is one of the OCP question which I had in my exam :)

Please reply.

Tom Kyte
August 02, 2005 - 7:26 am UTC

well, the only one that would do it would be
http://docs.oracle.com/docs/cd/B14117_01/appdev.101/b10802/d_salt.htm#999594

and truly the answer should have been "I would use the GUI to undo what I did in the GUI"
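As a hedged illustration of that package: per the 10g documentation, passing NULL threshold values for a specific tablespace should revert it to the database-wide defaults (the tablespace name USERS here is just an example):

```sql
-- Sketch: revert one tablespace to the database-wide default
-- usage thresholds by passing NULL warning/critical values.
BEGIN
   dbms_server_alert.set_threshold(
      metrics_id              => dbms_server_alert.tablespace_pct_full,
      warning_operator        => dbms_server_alert.operator_ge,
      warning_value           => NULL,
      critical_operator       => dbms_server_alert.operator_ge,
      critical_value          => NULL,
      observation_period      => 1,
      consecutive_occurrences => 1,
      instance_name           => NULL,
      object_type             => dbms_server_alert.object_type_tablespace,
      object_name             => 'USERS');
END;
/
```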

:) DBMS_SERVER_ALERT IS ANSWER Thanks!

abc, August 02, 2005 - 7:09 pm UTC


one more please

abc, August 03, 2005 - 11:34 am UTC

The database is currently in noarchivelog mode.
What would be the first step to configure flashback database?
a. enable flashback logging
b. start the database in archivelog mode
c. issue alter database flashback on
d. set fast_mttr_target

Please reply

Tom Kyte
August 03, 2005 - 12:09 pm UTC

IF (you have OCP questions)
AND
(you want to do it in the spirit of the OCP)
THEN
you would have a 10g database to play with and you
would answer the questions by testing and researching
END IF;

My demo script has:

connect "/ as sysdba"
shutdown immediate
startup mount
alter database archivelog;
alter system set db_flashback_retention_target = 1440 scope=both;
alter database flashback on;


So, I guess the first step would actually be "none of the above" - I need to find out if the database is already running and, if so, shut it down. Actually, the first step would be "connect" :)

but that would be mincing terms.
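For what it's worth, after running a script like the one above, the resulting state can be confirmed with a quick query:

```sql
-- Confirm archiving and flashback are enabled:
SELECT log_mode, flashback_on FROM v$database;
```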

:) That's funny...anyways Thanks a lot

abc, August 03, 2005 - 1:44 pm UTC


10g database buffers

Rajendran Durairaj, August 29, 2005 - 3:39 pm UTC

Tom - we recently upgraded our database to 10g. It shows Database Buffers as 0 bytes - is that right?

SQL> sho sga

Total System Global Area  419430400 bytes
Fixed Size                  1302224 bytes
Variable Size             417784112 bytes
Database Buffers                  0 bytes
Redo Buffers                 344064 bytes 

Tom Kyte
August 30, 2005 - 12:58 am UTC

use db_cache_size, not db_block_buffers as you are.
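A minimal sketch of what that suggestion looks like (the 256m figure is an arbitrary example, and SCOPE=BOTH assumes an spfile is in use):

```sql
-- Size the buffer cache with the 10g parameter rather than the
-- deprecated db_block_buffers; after this, "show sga" should report
-- a non-zero "Database Buffers" figure.
ALTER SYSTEM SET db_cache_size = 256m SCOPE = both;
```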

Packaging Oracle

Sanji, September 08, 2005 - 5:22 am UTC

Hello Tom,

We are in the process of developing an application that uses ORACLE as the database.
We would be using Embedded Server Licenses to package Oracle along with our application.
Now the requirement is such that the client should not be able to explore the Oracle binaries or install/deinstall Oracle as a separate component.
Oracle should only be called from the application installers.
What is the best way to achieve this?
(We are using Oracle 10g on Win XP)

Thanks
Sanji

Tom Kyte
September 08, 2005 - 8:44 am UTC

likely by creating a response file for OUI (Oracle Universal Installer) and have the installer "auto run" using the response file.

Packaging Oracle

Sanji, September 08, 2005 - 8:54 am UTC

But Tom, the issue is that the Oracle binaries shouldn't be exposed. I have developed the OUI response file and the application installer is indeed working as desired.
The problem is the Oracle binaries; they are not to be exposed or used directly by the client. Is there any way we can "stealth" the Oracle software binaries?

Regards
Sanji

Tom Kyte
September 08, 2005 - 8:56 am UTC

I don't know what you mean by "not exposed" - it is not like the zeros and ones go by, just the messages from OUI.

A reader, September 08, 2005 - 8:59 am UTC

The Oracle Enterprise Edition CD contents, for instance...
That's what I meant by Oracle binaries. When we package the application along with the Oracle CD contents, we do not want the client to explore the Oracle component of the package or install/deinstall Oracle on their own. I wanted to know if that's possible.

Rgds
Sanji

Tom Kyte
September 08, 2005 - 9:02 am UTC

You'll want to talk with support to see if such a thing is possible -- but it is doubtful (it would lead to support issues of your own that you don't even want to think about).

They would have to pop in the CD to install. And they would be able to get the software right from OTN so it is not like you are hiding anything they couldn't get otherwise.

thanks

A reader, September 08, 2005 - 9:03 am UTC

serves my purpose :) thanks

regards

Upgrade from 9ir2 to 10gr2

jp, September 22, 2005 - 10:36 am UTC

Dear Tom,
This is the first time that I will be doing an upgrade. I have a PC running 9iR2 and I want to upgrade to 10gR2. Do I just run the 10g installation - will it detect that a 9i version is already installed and then proceed to upgrade the database automatically? Or is there special software for the upgrade?
thanks


Tom Kyte
September 22, 2005 - 5:46 pm UTC

You want to read this:
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14238/toc.htm

it tells you everything you need to know about this process - everything.

10g client installation issue

Deepak, September 28, 2005 - 7:33 am UTC

Hi Tom,

Trying to install 10G R2 client (Administration) on W2K box. It is hanging at the statement

D:\oracle\product\10.2.0\client_1\jdk\jre\\bin/java.exe -Dsun.java2d.noddraw=true -Duser.dir=D:\oracle\product\10.2.0\client_1\bin -classpath ";D:\oracle\product\10.2.0\client_1\jdk\jre\\lib\rt.jar;D:\oracle\product\10.2.0\client_1\jlib\ldapjclnt10.jar;D:\oracle\product\10.2.0\client_1\jlib\ewt3.jar;D:\oracle\product\10.2.0\client_1\jlib\ewtcompat-3_3_15.jar;D:\oracle\product\10.2.0\client_1\network\jlib\NetCA.jar;D:\oracle\product\10.2.0\client_1\network\jlib\netcam.jar;D:\oracle\product\10.2.0\client_1\jlib\netcfg.jar;D:\oracle\product\10.2.0\client_1\jlib\help4.jar;D:\oracle\product\10.2.0\client_1\jlib\jewt4.jar;D:\oracle\product\10.2.0\client_1\jlib\oracle_ice.jar;D:\oracle\product\10.2.0\client_1\jlib\share.jar;D:\oracle\product\10.2.0\client_1\jlib\swingall-1_1_1.jar;D:\oracle\product\10.2.0\client_1\jdk\jre\\lib\i18n.jar;D:\oracle\product\10.2.0\client_1\jlib\srvmhas.jar;D:\oracle\product\10.2.0\client_1\jlib\srvm.jar;D:\oracle\product\10.2.0\client_1\oui\jlib\OraInstaller.jar;D:\oracle\product\10.2.0\client_1\lib\xmlparserv2.jar;D:\oracle\product\10.2.0\client_1\network\tools" oracle.net.ca.NetCA /orahome D:\oracle\product\10.2.0\client_1 /orahnam OraClient10g_home1 /instype custom /inscomp client,oraclenet,ano /insprtcl tcp,nmp /cfg local /authadp NO_VALUE /nodeinfo NO_VALUE /responseFile D:\oracle\product\10.2.0\client_1\network\install\netca_clt.rsp /silent

Any clues regarding the problem?



Tom Kyte
September 28, 2005 - 10:01 am UTC

nope - install/configuration issue; I suggest support.

"order by null" behaves differently in 10g than 9i

ht, September 30, 2005 - 1:51 pm UTC

Hi Tom,
I recently upgraded to 10.2.0.1 from 9.2.0.5 and I'm wondering if the test case below demonstrates a bug in the way "order by=null" is handled.

I have a few hundred PSPs in my app and most of them call "?orderby=null" at a top level because the ordering of columns in the PSP is driven by "?orderby=xyz".

As you can see below, selecting from a 9i view with "order by=null" displays results using the view's "order by " clause. However, selecting from a 10g view with "order by=null" displays results using the table's order.

If there isn't a workaround, I'll have to edit my PSPs to explicitly state "order by " clauses.

As always, thanks for your assistance.
ht

Connected.

-----------------------------------------------------
-- connect to 9i, create a view, select from the view
-----------------------------------------------------

select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
PL/SQL Release 9.2.0.5.0 - Production
CORE 9.2.0.6.0 Production
TNS for Solaris: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

5 rows selected.


drop view emp_v1;

View dropped.

create view emp_v1 as select * from emp order by ename;

View created.

select ename from emp;

ENAME
----------
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER

14 rows selected.

select ename from emp_v1 order by null;

ENAME
----------
ADAMS
ALLEN
BLAKE
CLARK
FORD
JAMES
JONES
KING
MARTIN
MILLER
SCOTT
SMITH
TURNER
WARD

14 rows selected.


set echo off;
Connected.

-----------------------------------------------------
-- connect to 10g, create a view, select from the view
-----------------------------------------------------
select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Solaris: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

5 rows selected.


drop view emp_v1;

View dropped.

create view emp_v1 as select * from emp order by ename;

View created.

select ename from emp order by null;

ENAME
----------
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER

14 rows selected.

select ename from emp_v1 order by null;

ENAME
----------
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER

14 rows selected.



exit;
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options


Tom Kyte
September 30, 2005 - 2:33 pm UTC

wha???

that is

select ename from ( select * from emp order by ename ) order by null;


you sort by ename, then you sort by NULL (basically you are saying "please randomize")

It was purely an "accident" that it "worked" in 9i -- order by null is a data scrambler.

10gr2 has an optimization that recognizes "this data need not be sorted, since you are sorting by a constant" and removes the sort step from the plan altogether.

Every release, the optimizer gets 'smarter'


(waiting for the flood of "group by broke in 10g" too. group by doesn't sort, never had to and does much less often in 10gr2)


The bug here is that 9i and 10g r1 could at any time they like scramble your data up as your inner order by is overridden by the outer order by.
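In other words, the only dependable pattern is to put the ORDER BY on the outermost query; against the same emp_v1 view that would be:

```sql
-- Reliable: the outermost query imposes the order.
SELECT ename FROM emp_v1 ORDER BY ename;

-- Not reliable: the view's inner ORDER BY is an implementation detail
-- the optimizer is free to discard (as 10gR2 does above).
SELECT ename FROM emp_v1;
```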

A reader, September 30, 2005 - 2:46 pm UTC


Oracle 10g express edition install

A reader, October 31, 2005 - 4:11 pm UTC

Sorry Tom, I didn't know where else to ask this.
I can't install Oracle 10g Express Edition because I get a message that I don't have enough memory, and the process stops.
Is there some way to skip that check? (I have 256 MB; I think this is enough for Express Edition.)

Tom Kyte
November 01, 2005 - 5:38 am UTC

I just verified what you say is true - with 256m of RAM, linux reported back "250m" of ram to top and other applications - the 6m of ram went missing.

Looking for a workaround, will let you know if I find one.

-----------------------------------------------------------------------------

I talked to the installer people and they will be relaxing the check by several megabytes to accommodate memory taken up by video cards and other things - they are thinking 16 MB or so (down to 240 MB). So, no current workaround, but they will be changing this.

Sqlplus : libsqlplus.so permission denied

NR, October 31, 2005 - 5:17 pm UTC

I just upgraded from 9i to 10g. Everything went smoothly - no issues - but when I try to connect to database instances as a different OS user I now get the following error message. Prior to the upgrade (with Oracle 9i) this was not an issue:
$ sqlplus username/password@instance_name
ld.so.1: sqlplus: fatal: /ora/app/oracle/product/10.2/lib/libsqlplus.so: Permission denied
Killed

I can connect to the database instances as an OS user.

Tom Kyte
November 01, 2005 - 5:41 am UTC

does the OS user have the privilege (in the right group on that system) to see that library? The permissions on the server are much more "locked down".

Dave, November 01, 2005 - 7:55 am UTC

Tom said:

"I just verified what you say is true - with 256m of RAM, linux reported back
"250m" of ram to top and other applications - the 6m of ram went missing.

Looking for a workaround, will let you know if I find one.
"

perhaps you/they have a video card which uses 6 MB of RAM (for those that don't have on-board memory on their video cards)

Tom Kyte
November 01, 2005 - 10:57 am UTC

I was using vmware....

A reader, November 01, 2005 - 11:19 am UTC

Thanks Tom

How do you minimize downtime ?

Randy, November 01, 2005 - 12:14 pm UTC

We are running Oracle9i RAC and planning to upgrade to 10g. We have a 3-node Sun cluster and redundant/mirrored storage. But I am having a hard time convincing my management why I still have to bring the database down even though there is so much redundancy in place. I wish Oracle had some better way to upgrade to 10g without shutting down a 24x7 operation. Tom, don't you think the purpose of RAC is defeated if we still need to bring the database down for maintenance like an upgrade?

Tom Kyte
November 02, 2005 - 4:46 am UTC

rolling upgrades from major release to major release using dataguard is a new 10g sort of feature.

To me, an upgrade is a huge thing; it'll always involve some amount of downtime, even when you do a rolling upgrade (you have to migrate users from production to the upgraded standby - and then back - and the switchover involves some downtime)

A reader, November 03, 2005 - 7:30 am UTC

Thanks Tom

10g express edition free ?

A reader, November 04, 2005 - 10:41 am UTC

Is production use of 10g express edition free ?

Tom Kyte
November 04, 2005 - 5:15 pm UTC

it will be, yes.

A reader, November 07, 2005 - 10:43 am UTC

Thanks! So can we convert an existing production 9i Enterprise Edition database to Oracle 10g Express Edition once it becomes free?

Tom Kyte
November 08, 2005 - 10:21 pm UTC

do you really have an EE database with

a) less than 4gb of data
b) using a single cpu
c) with only one instance on the machine (this should be true :)
d) and using 1gb or less ram

????

I think Tom made a typo: is it not up to 1 GB of RAM?

pasko, November 08, 2005 - 11:28 am UTC


Tom Kyte
November 08, 2005 - 10:20 pm UTC

that I did, I've corrected it right above so as to not cause future confusion. thanks!

10g Express Edition Licensing

Luis Soto, November 17, 2005 - 9:28 am UTC

The fine print of the licensing agreement says not to use the beta version for more than 3 months and to stop using it after that; it also revokes any other rights over the software. Is it safe to plan inexpensive basic training on the Oracle DB using this software as a basis, and to use it in a small commercial POS or retail app? I have many customers on other databases, but being very small they can't afford the software cost, so they use a less powerful free DB. I'd like to move them to Oracle in order to have a better and more stable system.

Thanks

Tom Kyte
November 18, 2005 - 7:14 am UTC

when the production release comes out, it'll be more of a "use as you like" license. They do not want the beta to be used "forever"

64-Bit install

Yogesh, November 21, 2005 - 11:52 am UTC

I’m facing one problem when upgrading from 8.0.4 to 10.2.

Currently I have a production AIX 4.3 / 8.0.4 box. This box will be replaced by a new physical AIX 5.2 / 10.2 box. To migrate the 8.0.4 database, I tried to install 8.0.4 on the AIX 5.2 box, but it failed with a version error.

It looks like Oracle 8.0.4 can't be installed on 64-bit AIX. What path should I take to upgrade?


Tom Kyte
November 21, 2005 - 12:25 pm UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14238/preup.htm#sthref52

You could upgrade to 8174 on the old box, and then move it over to the new box and upgrade that database to 10gr2. (you need really only move the datafiles - not the software)


You could use export/import as well

Yogesh, November 21, 2005 - 1:50 pm UTC

<quote>
(you need really only move the datafiles - not the software)
</quote>

In this case, will I require the datafiles of 8.1.7, or will the datafiles of 8.0.4 work?



Tom Kyte
November 21, 2005 - 2:24 pm UTC

8174 - see the above link, upgrade to 10gr2 is possible only from the terminal releases of the old software (and not from 8.0 at all)

Yogesh, November 22, 2005 - 5:49 am UTC

But my problem remains the same. I can't upgrade the existing box because of downtime and hardware configuration constraints.

I want to find a way of upgrading 8.0.4 to 8.1.7.4 without installing the 8.0.4 software. (From your quote, I think you are suggesting this, but I'm not sure if it is applicable to these versions.)

Is there any way of doing this? Or will I have to use some intermediate box to upgrade to 8.1.7.4 and then finally to 10.2?


Tom Kyte
November 22, 2005 - 8:39 am UTC

you don't have to install 804 software, but you do have to install 8174 software.


You would upgrade the 804 to 8174 on one machine, move datafiles, upgrade 8174 database to 10g on another machine.

or export/import
or dump/reload

Yogesh, November 22, 2005 - 11:09 am UTC

So if I want to upgrade from 8.0.4 to 8.1.7.4 without installing the 8.0.4 software, what should I do?

As I will not have any database or instance created in 8.1.7.4, how can I use the data and control files from 8.0.4? Can you please give me the steps for performing this upgrade? I don't think this approach is documented.

I cannot use exp/imp as the DB size is 200G.


Tom Kyte
November 22, 2005 - 11:55 am UTC

you would install the 8174 software on the machine with 804, upgrade 804 to 8174, move that newly upgraded database to the new machine and proceed to upgrade that set of 8174 datafiles to 10gr2

If you want to upgrade you WILL have a database in 8174.

Else, you will not be upgrading, you will be doing something else, like export/import.


You can certainly export/import 200g, just requires a bit of "do it yourself parallelism". 200g is pretty common these days, not huge.

eg:

export these five SCHEMAS in parallel.
then for the last schema, export these list of tables in this process, this list in that process and so on.


Then, import them in parallel (in fact, you can be importing into the target database while still exporting parts of the old database, a "do it yourself data pump" if you will)

Yogesh, November 22, 2005 - 12:52 pm UTC

Just to confirm: if I'm using a large filesystem on AIX, where the maximum allowed file size is 32G, will exp support creating files up to 10G (the average size of my tables)?

I can also use mknod (a named pipe) to compress the files, but I just want to confirm whether there is any file-size limitation for exp in 8.0.4.

Tom Kyte
November 22, 2005 - 4:23 pm UTC

if you have 64bit - which you don't. Your export with 804 likely doesn't have the large file support.


but exp can export to multiple files (using parameters, don't need mknod).




Yogesh, November 23, 2005 - 8:36 am UTC

I can't see any parameter for exporting to multiple files..

I found one thread on metalink, talking about FILESIZE attribute. But I got following error, when I was using that.

TABLES=LRC
LOG=export2g.log
FILE=(LRC1.dmp, LRC2.dmp, LRC3.dmp, LRC4.dmp) filezise=800m

I got the following error:

LRM-00112: multiple values not allowed for parameter 'file'
LRM-00113: error when processing file 'exp.par'

I think these are not allowed in 8.0.4. How should I do this?

Can I use your script from the following link?

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:122612348063

Tom Kyte
November 23, 2005 - 10:04 am UTC

oh, ok - that parameter must have been added in 8.1, and you have 8.0. It has been a very, very long time since then.

yes, you can test any script to see if it functions.
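For the record, the parfile above also misspells the keyword ("filezise"); on an 8.1+ exp, the multi-file parfile would look like this (values carried over from the example above):

```
TABLES=lrc
LOG=export2g.log
FILE=(lrc1.dmp, lrc2.dmp, lrc3.dmp, lrc4.dmp)
FILESIZE=800m
```

As Tom notes, neither multiple FILE values nor FILESIZE is available in 8.0, so this only helps after the upgrade to 8.1.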

TNS-12518: TNS:listener could not hand off client connection

Shailesh, November 30, 2005 - 11:27 am UTC

Hello Tom,

RDBMS Version:: 10.2.0.1.0
Operating System and Version:: Windows 2000 SP4
Client Operating System and Version:: Windows XP SP2
Client Net Version:: 8.1.6/8.1.7

Today we observed this error.

30-NOV-2005 17:44:28 * (CONNECT_DATA=(SERVICE_NAME=qo762)(CID=(PROGRAM=D:\ITBMed4\i1W.exe)(HOST=PC-RUPALI-XP)(USER=rupali))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.83.3.67)(PORT=2180)) * establish * qo762 * 12518
TNS-12518: TNS:listener could not hand off client connection
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
32-bit Windows Error: 233: Unknown error

We did set following parameter in listener.ora and restarted listener.

DIRECT_HANDOFF_TTC_LISTENER=OFF

But this workaround doesn't work. Could you please help us to solve this issue.

Thanks & Regards,

Shailesh


10g & Winfows NT

Yogesh, December 06, 2005 - 11:36 am UTC

Can I install 10g client on windows NT 3.51?

Tom Kyte
December 06, 2005 - 3:00 pm UTC

no idea.

likely "no"

Yogesh, December 07, 2005 - 9:40 am UTC

As part of the 10g upgrade process, I received a pack of 8.1.7.0 CDs. How do I get to the 8.1.7.4 terminal release from here? Do I have to ask Oracle for the 8.1.7.4 media again, or can I download the patch set from Metalink?

Tom Kyte
December 08, 2005 - 1:02 am UTC

metalink

Yogesh, December 08, 2005 - 11:54 am UTC

Can you please tell me the patch number? I can't find one on Metalink.

Tom Kyte
December 08, 2005 - 12:05 pm UTC

just search patchsets for your product family/os... I don't have the patch number handy myself.

Yogesh, December 08, 2005 - 12:09 pm UTC

Got the patch number - 2376472

Yogesh, December 09, 2005 - 3:38 pm UTC

When I was upgrading from 8.0.4 to 8.1.7, the upgrade failed while the listener upgrade was in progress. When I tried upgrading again, it came back saying "Database is already 8.1.7 - can't upgrade same version."

When I query v$version, it is says

Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for IBM/AIX RISC System/6000: Version 8.1.7.0.0 - Developmen
NLSRTL Version 3.4.1.0.0 - Production 5 rows selected.

But there are many invalid SYS-owned packages and views. Is this a successful migration?

Even after changing the compatible parameter to 8.1.7, it is still showing 8.0.4!!

Can you give me a tip for handling this problem?


Tom Kyte
December 10, 2005 - 5:06 am UTC

please utilize support for something like this.

they will be able to figure out where you "were".

Installation Problem for Oracle 10g Express Edition Beta Version 2

Dulal, December 20, 2005 - 1:08 am UTC

Tom,
I am having a problem installing Oracle 10g Express Edition Beta 2 on Windows 2000 Pro. I downloaded OracleXE.exe, but when I double-click on it, the installation does not start. OracleXE.exe was downloaded completely. What could the problem be?
Note:
1. I previously removed Oracle 10g R1, but Developer 6i is still installed on my machine.
2. I wanted to try Oracle 10g Express Edition, so I deleted the 10g R1 Enterprise Edition.
Please help me.

Tom Kyte
December 20, 2005 - 8:45 am UTC

define "it is not working/starting" - what are the symptoms? does the InstallShield work at all?

Installation Problem for Oracle 10g Express Edition Beta Version 2

Dulal, December 20, 2005 - 10:59 pm UTC

Thanks Tom,

Symptoms - the InstallShield does not start at all.

When I double-click on OracleXE.exe, a black window starts and then suddenly closes, i.e. it is visible for only about a second.

Tom Kyte
December 21, 2005 - 7:15 am UTC

what is the byte count on the downloaded file - sounds a lot like a corrupt binary.

Installation Problem for Oracle 10g Express Edition Beta Version 2

Dulal, December 21, 2005 - 11:15 pm UTC

Size: 150 MB (157,482,880 bytes)
Size on disk: 150 MB (157,483,008 bytes)

Oracle Express - (local) connection issues using MDAC/ASP applications

Phil, December 22, 2005 - 6:15 am UTC

Hi Tom

Oracle Express is going to make life great for providing a demo version of systems on sales folks laptops. I have been tasked with looking at this for an ASP application but there is one small thing I appear to be missing.
The application (objects and SPs) and SQL*Plus access is all fine and the install was a breeze, however access to the database from the ASP application is via MDAC. My question is, how or what extra do I need to install/configure so that access is possible from this route? The ASP IIS application is installed on a Windows XP Professional machine which is the same box as the Oracle Express database.

Many, many thanks in advance,

Phil

ORA-04030 & ORA-04045

Yogesh, January 13, 2006 - 11:02 am UTC

I was upgrading Oracle 8.1.7.4 to 10.2.0.1. At the end of the upgrade, the database upgrade status was INVALID. In the spooled log file I got the following errors:

ORA-04045: errors during recompilation/revalidation of SYS.DBMS_SWRF_REPORT_INTERNAL
ORA-04030: out of process memory when trying to allocate 4128 bytes (PLS non-lib hp,pdzdM80_Allocate_Block)

I made the changes in /etc/security/limits according to Metalink article 1017016.102 (ORA-04030: On IBM RS6000), and this solved my problem.

I have a question on this. We define parameters in the init.ora file to restrict Oracle to a specific amount of memory for the different areas. So when compiling a PL/SQL package, why should Oracle go beyond the defined parameters and ask the OS for more memory?

Tom Kyte
January 13, 2006 - 11:21 am UTC

ora-4030 is out of PROCESS memory - the OS refused to give us memory in your case because the ulimits were set too low.


The OS is the only place we can GET memory from. You limited us to less than was truly needed to perform your task.

ORA-04030

Yogesh, January 13, 2006 - 12:26 pm UTC

The output of SHOW SGA gives me details of the memory allocated for the different memory areas. If Oracle could not allocate memory for itself, how is it starting the database?

Are we talking about memory used on top of the SGA?

I'm confused!!

Tom Kyte
January 13, 2006 - 12:30 pm UTC

ora-4030 is PROCESS memory, it is not SGA memory.

There is an SGA
There is a PGA

ora-4030 is process memory, not from the SGA.

The sga is for "shared" things, not private memory things in the process.

[tkyte@me ~]$ oerr ora 4030
04030, 00000, "out of process memory when trying to allocate %s bytes (%s,%s)"
// *Cause: Operating system process private memory has been exhausted
// *Action:
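A small illustration of the distinction, using two standard dynamic views:

```sql
-- Process (PGA) memory is accounted for separately from the SGA:
SELECT name, round(value/1024/1024) AS mb
  FROM v$pgastat
 WHERE name IN ('total PGA allocated', 'maximum PGA allocated');

-- The SGA ("shared" memory) is what SHOW SGA reports:
SELECT round(sum(value)/1024/1024) AS sga_mb FROM v$sga;
```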


Where is Pro*C in Oracle10g Express?

ry, February 07, 2006 - 2:35 pm UTC

Hi Tom,
I installed Oracle 10g Express, but I cannot find Pro*C.
Is it available in Oracle 10g Express?

Thanks

Tom Kyte
February 08, 2006 - 1:42 am UTC

no, it is not.

Anything desupported in 10gR2?

Ben, February 08, 2006 - 8:14 am UTC

Tom,
Several of our databases are migrating from 32-bit 10g Release 1 to 64-bit 10g Release 2, on Windows 2003.
Anything we should be wary of? Is anything desupported between those two releases, or features that are now obsolete? I couldn't find anything in the documentation.
Thanks!!

sqlplus issue in 10g?

Reader, March 03, 2006 - 10:32 pm UTC

Tom,
We just installed Oracle 10.2.0.1 on HP-UX 11i. A non-oracle user is trying to use sqlplus and gets following error.

$ sqlplus username/password
SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly

I looked it up on Metalink and Oracle says it is base bug 4516865 - permissions at the OS level aren't set properly after the Oracle 10gR2 install. One of the solutions given is a recursive permission change, "chmod -R 755 $ORACLE_HOME". Do you think this is a good solution from a security perspective?

Metalink says, "There is a one-off patch for the base bug 4516865, but this fixes the permissions problem only on the server side. Currently, there is still no patch for the client side." They don't say what the one-off patch is :-(

Have you come across this and what do you suggest us to do?

My last question is if this is a bug for which there isn't any patch out yet then how come everybody is running their production databases in Oracle 10gR2 without coming across this problem for more than 2 years now?

Thank you.


Tom Kyte
March 04, 2006 - 7:07 am UTC

please utilize support for this.

Most people would not let a "normal" user onto their database server.

that chmod would give r-x to everyone; that is basically what you are asking for - for anyone to be able to run this stuff (not just oracle, not just the dba group)

Reader, March 04, 2006 - 10:29 am UTC

Tom,
You said, "Most people would not let a "normal" user onto their database server".

I don't understand what you mean. The developers are in this case "normal" users? They are not part of the dba group, hence they can't use sqlplus, sqlldr, etc. Are you saying this would work if they connected from a different server using a tnsnames entry, with the same permissions? Right now they log in using the ORACLE_SID and ORACLE_HOME variables on the same server where the database is installed.

Also, do you think giving a r-x to world is okay? I don't mind the 'x' part but what about the 'r' part? Do you think r-x is a proper solution? Okay... I am going to call support. I just wanted to know your opinion.

Thank you for your help!


Tom Kyte
March 05, 2006 - 1:30 pm UTC

the developers are the most normal users in the world and the last people I would let onto the server itself. They don't need that. In fact, the fewer people that can actually log into the OS (DBAs included - no reason most of them need that sort of access either), the better.

I'll reiterate: please work with support on this, my response is - I don't want people logging into my server. You can do far fewer files with r-x (some libraries and sqlplus itself), but I'd rather have the developers accessing using their own clients on their own machines.

10g r2 Bug?

steve, March 06, 2006 - 11:27 pm UTC

I have the following stored procedure. It works fine on 10.1, but on 10.2 (Oracle XE) it generates a NULL for date_expired when I run the procedure.

procedure update_expired_date(domain_id varchar2, period number) is
begin
   update t
      set date_expired = date_expired + (period * 365)
    where t.id = domain_id;

   commit;
end update_expired_date;

thanks!

Tom Kyte
March 08, 2006 - 4:23 pm UTC

need example.

as always.



I suggest you really start prefixing your PL/SQL parameters with p_ and local variables with l_ - it would be a bummer if your table had a column named period or domain_id, wouldn't it?


nonetheless, I cannot reproduce:

ops$tkyte@XE> create table t ( date_expired date, id varchar2(1) );

Table created.

ops$tkyte@XE>
ops$tkyte@XE> insert into t values ( sysdate, '1' );

1 row created.

ops$tkyte@XE>
ops$tkyte@XE> create or replace
  2  procedure update_expired_date(domain_id varchar2, period number) is
  3      begin
  4
  5         update t
  6         set date_expired = date_expired + ( period * 365)
  7         where t.id = domain_id;
  8
  9        commit;
 10
 11      end update_expired_date;
 12  /

Procedure created.

ops$tkyte@XE> select * from t;

DATE_EXPI I
--------- -
08-MAR-06 1

ops$tkyte@XE> exec update_expired_date( '1', 4 );

PL/SQL procedure successfully completed.

ops$tkyte@XE> select * from t;

DATE_EXPI I
--------- -
07-MAR-10 1



Also, suggest you stop putting "commit" into stored procedures - only the invoking client KNOWS when the transaction is over.
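To illustrate why the p_/l_ prefix advice above matters - a hypothetical sketch (the extra DOMAIN_ID column below is invented for demonstration): if the table ever gains a column with the same name as a parameter, SQL resolves the unqualified name to the column, and the update silently targets the wrong rows.

```sql
-- Hypothetical: suppose T also had a column named DOMAIN_ID.
create table t ( id varchar2(10), domain_id varchar2(10), date_expired date );

create or replace procedure update_expired_date( domain_id varchar2, period number ) is
begin
   -- Here "domain_id" resolves to the COLUMN t.domain_id, not the parameter;
   -- every row where id = domain_id is updated, regardless of the input value.
   update t
      set date_expired = date_expired + ( period * 365 )
    where t.id = domain_id;
end;
/

-- With prefixes there is no ambiguity:
create or replace procedure update_expired_date( p_domain_id varchar2, p_period number ) is
begin
   update t
      set date_expired = date_expired + ( p_period * 365 )
    where t.id = p_domain_id;
end;
/
```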

Precision

Michel Cadot, March 08, 2006 - 5:03 pm UTC

Tom,

The question was asked on comp.databases.oracle.server with all the needed information:
http://groups.google.fr/group/comp.databases.oracle.server/browse_frm/thread/14cfcc3284bc07d7?hl=fr
Regards
Michel

Su Baba, March 08, 2006 - 5:36 pm UTC

Questions regarding upgrading 9iR to 10g2:

- The database has a large volume of BLOBs. Are there any special considerations with regard to BLOBs during the database upgrade?

- As mentioned above, the database is BLOB heavy. The size of the database is 1 TB and the database server does not use a logical volume manager. Do you recommend the use of ASM? Would implementing ASM be part of the upgrade or a post-upgrade task?

Tom Kyte
March 09, 2006 - 12:54 pm UTC

- no

- entirely up to you - are you having issues managing that much storage?

Su Baba, March 09, 2006 - 5:30 pm UTC

No trouble managing 1 TB, but customers who bought our software get terrified by the size because they have inexperienced DBAs. Would using ASM help in this case?

Tom Kyte
March 10, 2006 - 11:46 am UTC

ASM is just a database file system

you still have 1tb of data to "manage", backup, restore, and so on.

10gR2 upgrade - Impact on "Select DISTINCT" et al

Steve Hankin, March 14, 2006 - 12:31 pm UTC

Here's a new little feature I've just come across when upgrading to 10gR2.

Try this SQL:

select distinct object_name from dba_objects;

Now, ignore the content since it's going to be different between releases.
But notice that in 9i the output is ordered and in 10gR2 it is random.

This appears to be a change in execution plan from "SORT (GROUP BY)" to "HASH UNIQUE"

Appears to be a performance boost in terms of LIOs, but a bit of a nightmare if you'd _accidentally_ had code dependent on the sorted output.

Is there a list of any other such changes in 10gR2? I didn't see this listed under the New Features section.

Tom Kyte
March 15, 2006 - 9:08 am UTC

it could have always been random in ALL RELEASES OF ORACLE, FOREVER.

If you have no order by, you have quite simply

NO REASON TO EXPECT THE DATA TO BE IN ANY ORDER

PERIOD.

nothing else can be said. no order by = no order.


this is not new with 10g, this has ALWAYS BEEN TRUE FOREVER.


One of my very first "internet usenet flamefests" way way back in time was about this very subject.

distinct != order by
group by != order by
/*+ INDEX(...) */ != order by

order by is the ONLY THING equal to order by
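The safe version of the query above is simply to say what you mean:

```sql
-- If the application needs sorted output, ask for it; the optimizer is
-- then free to use HASH UNIQUE, SORT UNIQUE or anything else underneath.
select distinct object_name
  from dba_objects
 order by object_name;
```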

information required

Raktim, April 18, 2006 - 6:42 am UTC

hi tom,
just wanted to know the distribution of space required for the installation of an Oracle Applications release 11.5.10.0 'vision' instance on the Sun Solaris SPARC platform.

kindly let me know the total space required for the same.

Tom Kyte
April 18, 2006 - 9:43 am UTC

I would check out the installation guide for Oracle Applications - have you tried that resource?

some more information

Raktim, April 18, 2006 - 11:02 am UTC

hi tom,
yes i have tried it out but it does not mention anything about the space distribution for ORACLE_HOME, COMMON_TOP, APPL_TOP, DATA_TOP etc. separately.

can u please help.

Tom Kyte
April 18, 2006 - 2:21 pm UTC

"u" isn't around, I can never find them - do you have contact information for them?

I'm not sure it breaks it out - I'm not an apps person (never have I installed or configured it).

Installation question,

A reader, April 18, 2006 - 4:31 pm UTC

I am trying to Install Oracle 10g Release 2 on a 64 bit Windows 2003 server. The server does not contain any oracle version so far.

When I started to run "autorun.exe", I get the following error message: "Unable to load Resource DLL".

Do you know what this error is and how to overcome it?

Thanks,


Oracle 10g XE

Narendra, April 19, 2006 - 1:04 am UTC

Tom,

Installed oracle 10g XE last week on my laptop (Windows XP, 512 MB RAM). To my surprise, I found out that it does not support bitmap indexes, partitioning etc.
Is it true? I intend to use it for learning oracle (especially new features of 10g)

Tom Kyte
April 19, 2006 - 8:31 am UTC

why to your surprise? Neither does SE (standard edition).

XE <= SE <= EE


XE is a subset of standard, standard is a subset of Enterprise.


Trust me - there are thousands, hundreds of thousands of things to be learned about Oracle using XE. If you believe you've so totally mastered XE beyond being able to learn anything new - let me know.

information required

Raktim, April 19, 2006 - 3:34 am UTC

hi tom,
sorry, had no idea that u r not an APPS person......can u give me some information regarding installing multiple instances on a single server.

Tom Kyte
April 19, 2006 - 8:35 am UTC

"u" isn't here, when they show up (I've been waiting for a long long time for them to show up - they never seem to come to work) I'll ask them.


In my opinion, the maximum number of instances on a single server is 1.

However, you do not "install" instances, you use a tool like DBCA to configure and create them. A single install of Oracle Database supports the creation of more than one instance.

need some more information

Raktim, April 19, 2006 - 9:47 am UTC

hi tom,
thanx for the important piece of information.will get back to u for further assistance.

Tom Kyte
April 19, 2006 - 10:28 am UTC

"u" has never come into the office, so likely best not to try to follow up with them here. I don't know how to contact them.

XE Features

Narendra, April 20, 2006 - 12:51 am UTC

Tom,

I just wanted to know whether XE supports them or if I need to change any settings. As far as mastering goes, I know I am far from mastering even writing a SELECT.
But I could not find any document (on the internet) that described what is and is not in XE. Also, I tried twice to register for the XE forum but failed.

Tom Kyte
April 20, 2006 - 7:57 am UTC

can you describe "what failed" exactly ? "but failed" is "very vague"

goto where you got XE from:

http://otn.oracle.com/xe

and then "See all documentation" and the licensing information, specifically:

http://docs.oracle.com/docs/cd/B25329_01/doc/license.102/b25456/toc.htm#BABJBGGA



URL incorrect

Lance Hoover, April 20, 2006 - 11:36 am UTC

How odd. When I click on the URL you provide I get a 404 error. I figured you made a typo in the URL and I was going to correct it. But when I pasted in the URL, it ended up looking just like yours. For some reason, upon submission the '#' is being converted into "". I can't seem to find a way to get the "correct" URL to display because it keeps getting converted.

RE: Incorrect URL

Lance Hoover, April 20, 2006 - 11:51 am UTC

Sorry, make that the "# B" (with no space) is being replaced with "". (If I type in "# B" with no space, it displays empty double quotes, like this "#B")

Tom Kyte
April 20, 2006 - 12:33 pm UTC

indeed there should be a pound-sign-B there in that url, but my code removes it ;)

Oracle 10.2.0.2

Su Baba, May 09, 2006 - 2:23 am UTC

Is 10.2.0.2 the latest version of Oracle? I tried to do a search on metalink but wasn't sure exactly what to search for.

On OTN, the latest Oracle Database download is 10.2.0.1. Is there no download for 10.2.0.2? Does it only come as a patch set?

Another question I have is regarding upgrading 9iR2 to 10gR2. The 9iR2 database has the latest Oracle Critical Patch Update April 2006. When the database is upgraded to 10gR2, do I need to apply this patch again?

Tom Kyte
May 09, 2006 - 7:57 am UTC

You would utilize support to determine the latest patchset available on your platform. (and you found it already ;)

the x.x.x.N patches are just that - patches. you get the base install and patch it up.


I suggest you use support to ask these sort of (very much) support questions for your software on your platform. I don't use every platform/release/etc myself.

Upgrade to 10g....gave hard time

Syed F. Hashim, May 19, 2006 - 11:59 am UTC

Hi Tom,

Can't upgrade after 3 tries.

existing env: Oracle 9.2.0.4 on Solaris 9
upgrade to: Oracle 10.2.0.1 on Solaris 10

1st try: metalink: Complete checklist for manual upgrades to 10gR2

2nd try: dbua

3rd try: dbua without OEM option

and all three attempts got stuck at this location:

-- Load all the Java classes
begin if initjvmaux.startstep('CREATE_JAVA_SYSTEM') then
2 initjvmaux.rollbacksetup;

and alert.log looks like this:

SERVER COMPONENT id=CATPROC: timestamp=2006-05-19 09:55:35
Fri May 19 09:56:02 2006
Incremental checkpoint up to RBA [0x4f2.13999.0], current log tail at RBA
[0x4f2.1a114.0]
Fri May 19 09:56:40 2006
SERVER COMPONENT id=RDBMS: status=VALID, version=10.2.0.1.0,
timestamp=2006-05-19 09:56:40
Fri May 19 09:57:32 2006
create rollback segment MONSTER2 storage (initial 2 m next 2 m maxextents
unlimited)
Fri May 19 09:57:32 2006
Completed: create rollback segment MONSTER2 storage (initial 2 m next 2 m
maxextents unlimited)
Fri May 19 09:57:32 2006
alter rollback segment MONSTER2 online
Completed: alter rollback segment MONSTER2 online

still couldn't get good help from Oracle Support either; I've been
communicating with them since:

SR Number 5436801.992 Open Date 18-MAY-06 14:35:03
Support Identifier 1547029 Name Syed Hashim
Severity 4 Last Update 19-MAY-06 08:42:23
Product Oracle Server - Enterprise Edition Product Version 10.2.0.1.0
Platform Solaris Operating System (SPARC 64-bit) Detailed Status Modified via MetaLink
SR Reference n/a BUG Reference n/a

I need your help to successfully upgrade to 10gR2 on the Solaris 10 box.

Thanks a lot...

Tom Kyte
May 19, 2006 - 12:24 pm UTC

please utilize support for something like this.

If I may be so unbold

A reader, May 19, 2006 - 1:22 pm UTC

#b ah, that's better.

oracleXE

A reader, June 19, 2006 - 3:47 pm UTC

on oracleXE download page it says you run a forum for OracleXE

what is the URL to register/use that ?

Please let us know.

Thanks,

Tom Kyte
June 19, 2006 - 6:49 pm UTC

download XE and you'll have that!

9iR2 on HP UNIX to 10g on Red Hat Linux

Yoav, July 13, 2006 - 3:40 pm UTC

Hi Tom
we plan to move our data warehouse db from 9iR2 on UNIX to 10gR2 on Linux.

The data warehouse size is 500 GB.
1. Is there a problem moving the data between those versions/platforms using exp/imp?
2. Could you suggest a faster way to move the data?
3. Do you recommend upgrading to 10g and then changing platform, or not?

Regards

Tom Kyte
July 13, 2006 - 5:27 pm UTC

1) speed.... of course

2) perhaps upgrade the HP database to 10g and use datapump OR cross-platform transports. Heck, depending on the HP/Linux platforms you are talking about, you might be able to transport the ENTIRE database (including system)


PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ---------------------------------------- --------------
4 HP-UX IA (64-bit) Big
1 Solaris[tm] OE (32-bit) Big
16 Apple Mac OS Big
3 HP-UX (64-bit) Big
9 IBM zSeries Based Linux Big
6 AIX-Based Systems (64-bit) Big
2 Solaris[tm] OE (64-bit) Big
18 IBM Power Based Linux Big

17 Solaris Operating System (x86) Little
12 Microsoft Windows 64-bit for AMD Little
13 Linux 64-bit for AMD Little
8 Microsoft Windows IA (64-bit) Little
15 HP Open VMS Little
5 HP Tru64 UNIX Little
10 Linux IA (32-bit) Little
7 Microsoft Windows IA (32-bit) Little
11 Linux IA (64-bit) Little


You can transport within the endian sets using a FULL database transport (not just tablespaces)

3) perhaps...
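The platform list above comes straight from the data dictionary. As a sketch of checking feasibility before attempting a full database transport (run on a 10g instance; source and destination must share an endian format):

```sql
-- Endian format of the current database's platform (10g and later).
select d.platform_name, tp.endian_format
  from v$database d, v$transportable_platform tp
 where tp.platform_name = d.platform_name;

-- All platforms and their endian formats, to check the destination.
select platform_id, platform_name, endian_format
  from v$transportable_platform
 order by endian_format, platform_id;
```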

10g and update view from another schema

Renat, July 31, 2006 - 4:50 am UTC

I got some problems when upgrading to Oracle 10.2.0.2.0

this is an example:

> connect schema_1

schema1> create table test_t (n number);

schema1> create view test_v as select n from test_t;

schema1> grant select,update on test_v to schema2;

schema1> connect schema2

schema2> update schema1.test_v set n = 1 where n = 2;

0 rows updated.

schema2> create or replace function test_f return number is begin return 2; end;
/

schema2> update schema1.test_v set n = 1 where n = test_f;
update schema1.test_v set n = 1 where n = test_f
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

I have two questions:
1) why?
2) what can I do to update the view from schema2 with a function call in the where clause?

Tom Kyte
July 31, 2006 - 8:26 am UTC

This is a "feature" now - it is not possible to update a non-mergeable view, and it is no longer 'safe' to merge a plsql function into this view (or any view, for that matter). This used to work (your example) but will not in 10gR2.

you will have to bind that

variable n number
exec :n := test_f;
update v set n = 1 where n = :n;



Non mergable view

Michel Cadot, August 04, 2006 - 7:59 am UTC

Tom,

I understand the previous view can't be updated due to the function, but why is this possible for the owner of the view?

MICHEL> create table t (id number);

Table created.

MICHEL> insert into t values (1);

1 row created.

MICHEL> commit;

Commit complete.

MICHEL> create view v as select * from t;

View created.

MICHEL> create function f return number is begin return 2; end;
2 /

Function created.

MICHEL> update v set id=2 where id=f;

0 rows updated.


MICHEL> @v

Version Oracle : 10.2.0.2.0

MICHEL> grant select, update on v to scott;

Grant succeeded.

MICHEL> grant execute on f to scott;

Grant succeeded.

MICHEL> connect scott/tiger
Connected.
SCOTT> update michel.v set id=1 where id=michel.f;
update michel.v set id=1 where id=michel.f
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

Can you point me to where this new feature is documented or where I can get some information about it?

Regards
Michel

Tom Kyte
August 04, 2006 - 8:32 am UTC

it has to do with a fix for a "security related issue" - that is why the ownership is relevant.

Aaargh!

Michel Cadot, August 04, 2006 - 9:43 am UTC


Read, thought, tested and been scared for my oldest db.
Thanks for having corrected that!
Michel



dbms_assert annoyance

khairil, August 04, 2006 - 11:02 am UTC

Hi Tom,

Just on the enhanced "security features" in 10gR2, I've read about dbms_assert being introduced to tighten things up. Sounds great. 

However, I've since found out that one of the supplied packages/procs affected by this change is dbms_stats.export_table_stats, which I found to be rather inconvenient. 

See, the parameter statid is declared as a varchar2, but if one passes in a value of '123', we receive an error. Why the extra validation now?

We raised an SR to get more info, and were told it's not a bug - work around it by passing the parameter in like this: '"123"'. We caught this when we were testing the upgrade and (lucky for us) it doesn't affect much code at all, but who knows how many other supplied packages/procs dbms_assert could be affecting in this way. Has dbms_assert been written into everything like this?

Test script...

SQL> create table test1 as select * from all_objects;

Table created.

SQL> 
SQL> exec dbms_stats.gather_table_stats(USER, 'TEST1');

PL/SQL procedure successfully completed.

SQL> 
SQL> exec dbms_stats.create_stat_table(USER,'MY_STATS');

PL/SQL procedure successfully completed.

SQL> 
SQL> BEGIN
  2  --this works
  3  dbms_stats.export_table_stats(
  4      ownname => USER
  5     ,tabname => 'TEST1'
  6     ,stattab => 'MY_STATS'
  7     ,statid  => 'A123');
  8  
  9  END; 
 10  /

PL/SQL procedure successfully completed.

SQL> 
SQL> BEGIN
  2  --this doesn't
  3  dbms_stats.export_table_stats(
  4      ownname => USER
  5     ,tabname => 'TEST1'
  6     ,stattab => 'MY_STATS'
  7     ,statid  => '123');
  8  
  9  END; 
 10  /
BEGIN
*
ERROR at line 1:
ORA-20001: 123 is an invalid identifier
ORA-06512: at "SYS.DBMS_STATS", line 6515
ORA-06512: at "SYS.DBMS_STATS", line 7157
ORA-06512: at line 3

SQL> 
SQL> BEGIN
  2  --this does
  3  dbms_stats.export_table_stats(
  4      ownname => USER
  5     ,tabname => 'TEST1'
  6     ,stattab => 'MY_STATS'
  7     ,statid  => '"123"');
  8  
  9  END; 
 10  /

PL/SQL procedure successfully completed.
 

Tom Kyte
August 04, 2006 - 12:17 pm UTC

dbms_assert is being widely used, yes.
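You can see the rule dbms_stats is applying by calling dbms_assert directly - a quick sketch (10gR2; dbms_stats catches the assertion failure and re-raises it as the ORA-20001 shown above):

```sql
select dbms_assert.simple_sql_name('A123')  from dual;  -- accepted
select dbms_assert.simple_sql_name('"123"') from dual;  -- accepted (quoted identifier)
select dbms_assert.simple_sql_name('123')   from dual;  -- raises ORA-44003: invalid SQL name
```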

after upgrade to 10g, CPU usage went up

J, August 08, 2006 - 1:15 am UTC

I recently upgraded a database to 10g. One thing I noticed is that the total CPU usage for the database is going up, with a similar workload.

I know part of this might be caused by the queries themselves, as some queries that worked in 9i blew up, especially subqueries with group by. But is it true that 10g itself will cost more CPU, as there are new processes in place?

I ask this because we bill customers by CPU seconds and they are getting bigger bills now, when people were expecting less CPU consumption with 10g's new technology.

Thanks!


Tom Kyte
August 08, 2006 - 7:40 am UTC

actually, 10g is designed to use it - it is a resource, use it up. Things like HASH GROUP BY are designed to make use of the larger memory and faster CPUs (instead of sort group by).

I would not expect it to go down necessarily unless you got a query plan to change in some wonderful way, taking a query that ran for a long time down to running in no time.

Alexander the ok, August 10, 2006 - 9:25 am UTC

Tom,

Do you know how I can find out what version of the Oracle client someone has installed on their machine? Thanks.

Tom Kyte
August 10, 2006 - 9:42 am UTC

run the installer

RAC 10.1 to 10.2 on Linux

Charlie Zhu, August 11, 2006 - 12:42 pm UTC

We have been running a 4 node 10.1.0.4 RAC on Linux x86 for one year.
Upgrading the clusterware to 10.2 plus the patchset is really a problem.

How can we keep downtime as short as possible?

What do you think of it?

Thanks for your time.

Tom Kyte
August 11, 2006 - 12:53 pm UTC

guess I'll refer you to support.

so they can gather relevant information. About you, about your configuration, about why you think it is "really a problem".



Any way to get list of known bugs for a particular release?

John, August 22, 2006 - 1:40 pm UTC

Tom,
Is there any way to get a list by release of known bugs? Our dba group says that this is not possible on Metalink (I do not have access to it). We recently upgraded to 10.1.4 (I know, not the latest release - but our corporate IT guys are always behind in "allowed" releases) and ran into the TABLE function and BULK COLLECT issue. Now we have to upgrade again to 10.1.5 to address. It would have saved me a lot of time if I had a list of known bugs - would have avoided 10.1.4. Any help is appreciated.

Tom Kyte
August 27, 2006 - 3:07 pm UTC

there is no "10.1.5"

There is no list of "bugs by version", there is a bug database.

Any way to get list of known bugs for a particular release?

John, August 30, 2006 - 12:01 pm UTC

Sorry,
I meant to say we went from 10.1.0.4 to 10.1.0.5 (left out a zero :-) )

Anyway, I am surprised there is no way to search by release, as it could have helped me as application owner to be aware of a potential problem being introduced. Do you know of any third party info that could have helped here? I appreciate any info you can provide. Thanks.

Tom Kyte
August 30, 2006 - 5:43 pm UTC

metalink is completely searchable - with each patch release there is a document of "what it fixed", and the bug database is searchable (in fact, you could probably search on "10.1.0.4" - but you would miss bugs filed against 9.2.0.7 *that apply to 10.1.0.4* that way).

therein lies the problem - every bug would have to be updated with a "list of affected releases".

Alexander the ok, September 12, 2006 - 2:51 pm UTC

Hey Tom,

Do you know where we can find the differences between releases within a full release of Oracle? I want to know what's new in 10gR2 versus 10gR1.

Tom Kyte
September 12, 2006 - 5:41 pm UTC

in the new features guide?

comes with every release.

all are available on otn.oracle.com -> documentation

Precompiler pro*C in Oracle 10g Release 2

Sergio, September 19, 2006 - 7:49 am UTC

Hi Tom

I have as S.O Linux and bd Oracle 10g R2. I am trying to precompile a *.pc file with the proc command and it gives me an error.
I execute:
proc iname=ima_client.pc sqlcheck=full userid=$USER/$PASS@$BBDD

The command's output:
proc: error while loading shared libraries: libclntsh.so.10.1: cannot open shared object file: No such file or directory

Could you orient me on the above mentioned error?
Greetings and thanks.


Tom Kyte
September 19, 2006 - 2:34 pm UTC

is your LD_LIBRARY_PATH set?

what is S.O?
what is bd?
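If LD_LIBRARY_PATH is the culprit, a minimal sketch of the fix - the ORACLE_HOME value below is an assumption, substitute your own:

```shell
# Assumed install location -- substitute your actual ORACLE_HOME.
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_HOME

# proc is linked against libclntsh.so.10.1, which lives in $ORACLE_HOME/lib;
# the runtime linker locates it through LD_LIBRARY_PATH.
LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH:-}
export LD_LIBRARY_PATH

echo "$LD_LIBRARY_PATH"
```

After that, rerun the proc command; `ldd $ORACLE_HOME/bin/proc` will show whether any library remains unresolved.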

listener and parameter troubles

Dan, September 28, 2006 - 1:25 pm UTC

I was using this pl/sql block to set cache_size variables in 9i:

DECLARE
   v_cache_size NUMBER (10, 0);
BEGIN
   SELECT SUM (to_number (VALUE)) / 2
     INTO v_cache_size
     FROM v$parameter
    WHERE NAME LIKE '%cache_size';

   IF v_cache_size = 0 THEN
      v_cache_size := 159383552;
   END IF;

   EXECUTE IMMEDIATE 'ALTER SYSTEM SET db_cache_size = ' || v_cache_size || ' SCOPE=SPFILE';

   EXECUTE IMMEDIATE 'ALTER SYSTEM SET db_16k_cache_size = ' || v_cache_size || ' SCOPE=SPFILE';
END;
/

Now, in 10g, the query returns 0 into v_cache_size, so it falls into the IF branch,
which, it looks like, has set the value too high.

Following this statement was a shutdown immediate, then startup.

When I ran it, I had a couple of problems. 1. The listener wasn't finding the database. In order to
get it working, I added the database name to SID_LIST_LISTENER in listener.ora, which I've seen
you advise against. For some reason, the automatic registration wasn't working, and it looks like
there's nothing else I can do to get it working.
2. Now when I try to start the db, I get this:

idle> startup
ORA-00821: Specified value of sga_target 316M is too small, needs to be at least 320M
idle> alter system set sga_target = 320M;
ERROR:
ORA-01034: ORACLE not available

idle> startup mount;
ORA-00821: Specified value of sga_target 316M is too small, needs to be at least 320M
idle> startup nomount;
ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0

Then I tried pointing it to the pfile, like this:
idle> startup pfile=C:\oracle\admin\dansdb\pfile\init.ora

ORACLE instance started.

Total System Global Area 331350016 bytes
Fixed Size 1289304 bytes
Variable Size 100664232 bytes
Database Buffers 226492416 bytes
Redo Buffers 2904064 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced

At this point, I am stuck; I can't even delete the database.

Help!


Tom Kyte
September 29, 2006 - 7:52 am UTC

I have no idea why you would even think of doing that in plsql. but anyway.


given I don't have any idea what your pfile might actually have, no comment from me other than

a) read your alert log, it'll have something useful
b) utilize support, they'll gather the needed information from you to unravel what you've done.

Will there be a problem....?

A reader, October 05, 2006 - 6:38 am UTC

Hi tom,
We are going to be upgrading our database to 10g soon. The application here is heavily dependent on rule-based optimization (don't ask!). They do things like <col_name> || '%' to make sure an index is not used for example!
Can you think of any problems we may encounter with this scenario?

Thanks

Tom Kyte
October 05, 2006 - 8:20 am UTC

laughing out loud. Yes, I can. Given that the RBO is not officially "supported" (eg: run into a problem and the answer will be "please try it with the cbo")

I agree

A reader, October 05, 2006 - 8:31 am UTC

That it is laughable. Unfortunately, that is how things work here at the moment. And though I've told them about the problems regarding desupport of the RBO and recommended we move to the cost-based optimizer, they are of the opinion that 'if it ain't broken, don't fix it!' and are adamant that they aren't going to change it.
How would you get them to change their minds in this situation, Tom?
Looking forward to your reply
Thanks



Tom Kyte
October 05, 2006 - 9:30 am UTC

tell them not to upgrade, as they would not be in a supported configuration (using the rbo).

I mean, what can I say? They are of set mind and it would seem nothing would change it. so...

great thread

A reader, October 05, 2006 - 8:56 am UTC

"How would you get them to change their views?"

do nothing
it will break


must upgrade to 10g esp for datawarehouse - performance is superb

A reader, October 11, 2006 - 5:14 pm UTC


Compatiblity

Anurag, October 25, 2006 - 4:29 pm UTC

Is the 9iR2 client supported with 10g server?

Tom Kyte
October 25, 2006 - 4:40 pm UTC

yup

No downtime

Gabriel, November 24, 2006 - 11:36 am UTC

Hello Tom,

Is it possible to migrate from 9i to 10g without any downtime? In our case we have to migrate a life support application to 10g but we cannot take the db offline. What do you suggest as an approach? Of course, in cases like this where lives are in the balance, money and resources are not an issue.


Tom Kyte
November 24, 2006 - 6:47 pm UTC

not 9i to 10g, there are techniques to MINIMIZE, but not "remove 100%"

Thank you,

A reader, November 27, 2006 - 11:13 am UTC


OUI Upgrade to 10.1.0.5

ron, November 29, 2006 - 2:50 pm UTC

tom...

I have a 9.2.0.6 DB running on Solaris SPARC 64. I am trying to install the patch set to bring the machine up to 9.2.0.8. The README doc says "You must use OUI release 10.1.0.5 or later to install this patch".

I checked my OUI and it is 10.1.0.3. How do we upgrade the OUI to 10.1.0.5 before installing this patch?

thanks
Ron

Tom Kyte
November 30, 2006 - 9:23 am UTC

please utilize support for something like this.

Oracle 10g rel1 to rel2 upgrade

sunil kumar, December 27, 2006 - 11:20 am UTC

Hi,

This is useful - we are also planning to migrate Oracle 10g rel1 to rel2. Please let me know: can we create a separate home directory for the clusterware instead of using the existing CRS home?


Upgrade oracle10g rel1

sunil, January 12, 2007 - 9:59 am UTC

Hey tom,

I want to know what the value of the compatible parameter should be for Oracle 10g 10.2.0.1.0 in the init.ora file.

I have upgraded my Oracle 10g rel1 to rel2, but I still have the earlier compatible value in the init.ora file.


Thanks

datapump dependencies

Wayne Allison, February 07, 2007 - 4:14 pm UTC

Tom

We are trying to use a datapump import in our migration from release 1 to release 2.*

When we did a conventional import, we experienced errors pertaining to unused Oracle enterprise add-on components.

When we did a datapump import, we received errors regarding our logon function.

I remember in the old days we could run an import with rows=n to cause dependent 'code' to be imported. Is there a process like that we can use for release 2 with datapump?

Datapump is FAST. I just have to find the parameters we need.

Thanks

Wayne

* we are running impdp / as sysdba full=y

We imported once with errors, and then began a second import and still got dependency errors:

ORA-39083: Object type PROFILE failed to create with error:
ORA-28004: invalid argument for function specified in PASSWORD_VERIFY_FUNCTION FTDTL_VERIFY_FUNCTION Failing sql is:
CREATE PROFILE "FTDTL_DEFAULT" LIMIT COMPOSITE_LIMIT UNLIMITED SESSIONS_PER_USER UNLIMITED CPU_PER_SESSION UNLIMITED CPU_PER_CALL UNLIMITED LOGICAL_READS_PER_SESSION UNLIMITED LOGICAL_READS_PER_CALL UNLIMITED IDLE_TIME 60 CONNECT_TIME UNLIMITED PRIVATE_SGA UNLIMITED FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LIFE_TIME 7776000/86400 my_password

Tom Kyte
February 07, 2007 - 7:16 pm UTC

I don't see how "rows=n" would fix anything? (in the past)

question: would that function actually create in 10gr2? the failing statement - would it ever execute?

followup

Wayne Allison, February 07, 2007 - 5:03 pm UTC

should we be using:

system/manager full=y ignore=y

Follow up on upgrade with datapump

Wayne Allison, February 08, 2007 - 11:03 am UTC

Tom

Because the function failed up front, users and other objects were not created during the datapump import.

As fast as it is, datapump is the way to go.

But since it does not export the sys objects, wherever the code depends on objects built in sys I need to 'find' those objects.

Of course, it would have been nice if they had been stored in an orderly fashion in source safe.

I have searched for a code snippet that would bring back our 'new' objects in sys, but have not found one yet. Do you have one?

Thanks

Wayne

Object type as input/output parameter

A reader, February 12, 2007 - 10:26 am UTC

Dear Tom,

Sorry to ask my question here. I have been searching this site for "object type" and did not find an answer to my question.

We have developed a package with an object type as an in/out parameter in Oracle 8.1.7.

An application on Oracle 10g wants to call our package and exploit the data in the output object type parameter.

Our DBAs told us that exchanging object types between two different databases is not possible. Is this true?

Could you please advise how we can make this 10g application access our data in the other database?

Thanks in advance


object type

Wayne Allison, February 12, 2007 - 11:52 am UTC

if you will execute

sql> select object_type from user_objects where rownum < 10;

I think you will see that object_type will include tables, packages, etc.

So, if you are selecting from a table using a database link, the data will return.

This article will cover the restrictions for a database link: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96521/ds_concepts.htm#16056

Transport tablespace from 8174 to 10g?

JL, March 08, 2007 - 1:09 pm UTC

We are planning to upgrade our 8.1.7.4 DB running on Solaris 7 to Oracle 10gR2 running on Solaris 10.

Is it possible to transport a tablespace from 8.1.7.4 on Solaris 7 to a 10g DB on Solaris 10?
Tom Kyte
March 08, 2007 - 1:50 pm UTC

that is not a cross platform transport, just a cross Oracle version one - so yes, it should work. Have I done it? No - but a read only tablespace from 8i should be attachable to the 10g database.

9.0.1.4 to 10g using RMAN

Steve, March 23, 2007 - 12:51 pm UTC

We're planning on upgrading from 9i to 10g and upgrading our server at the same time. It will still be running W2K, however. If we install 10g on the new server, could we use RMAN to "recover" the (9i) database to the new server? Or do we need to upgrade 9i to 10g on the existing box, or install 9i on the new box and then upgrade to 10g? It's not a big database - maybe 3/4 GB.
Tom Kyte
March 26, 2007 - 6:49 am UTC

you can restore and upgrade.

9i to 10g2,

A reader, April 02, 2007 - 6:10 pm UTC

We are in the process of upgrading the 9iR2 2 node database to 10gR2 on Linux.

Q1. the current storage is raw file system. How can it be converted to ASM?

Thanks,
Tom Kyte
April 03, 2007 - 11:03 pm UTC

alter table T move....
alter index I rebuild ....

or - an rman restore of a backup...


are two opportunities (transportable tablespaces are another..)
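A sketch of the RMAN route mentioned above (the '+DATA' disk group name is an assumption; online redo logs, temp files, and the SPFILE/controlfile need their own steps):

```sql
-- Copy every datafile into the ASM disk group, then switch to the copies.
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
RMAN> SWITCH DATABASE TO COPY;   -- run with the database mounted,
                                 -- then RECOVER DATABASE and OPEN
```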

Migrating 9i to 10g on Solaris

prakash, April 04, 2007 - 6:44 am UTC

HiTom,
what is the best way to migrate 9i to 10g on Solaris?
export and Import or using migrate utility,
my database is less than 5GB.
Tom Kyte
April 04, 2007 - 10:34 am UTC

5gb, you could almost type that back in these days :)

since probably 30-40% of that is index data, another 10% of that is rollback, temp... you likely have about 2gb of "data"

export would likely work just fine, albeit not blazingly fast, but it would likely be more than sufficient.

Problem with Column Default and Trigger (10.2)

Ralf, April 27, 2007 - 8:44 am UTC

Hi Tom,

I just ran into the following problem. I don't know whether it's a bug or a feature, but I suppose it must be a bug:
somehow the column default for column MODIFIED_USER is not respected when a trigger exists that might update that column?!

Great site, keep it up!

SQL> SELECT * FROM V$VERSION;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> 
SQL> CREATE TABLE test (
  2    test_id        NUMBER    (8, 0)   NOT NULL    CONSTRAINT PK_TEST PRIMARY KEY,
  3    test_text      VARCHAR2  (40)     NOT NULL,
  4    modified_date  DATE               DEFAULT SYSDATE NOT NULL,
  5    modified_user  VARCHAR2  (30)     DEFAULT USER    NOT NULL
  6  );

Table created.

SQL> 
SQL> 
SQL> CREATE OR REPLACE TRIGGER TG_TEST
  2  BEFORE INSERT ON TEST
  3  FOR EACH ROW
  4  BEGIN
  5    IF FALSE THEN
  6      :NEW.modified_user := USER;
  7      :NEW.modified_date := SYSDATE;
  8    END IF;
  9  END;
 10  /

Trigger created.

SQL> 
SQL> INSERT INTO test(test_id, test_text) VALUES (0, 'X');
INSERT INTO test(test_id, test_text) VALUES (0, 'X')
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("NEDOCSADM"."TEST"."MODIFIED_USER")


SQL> 
SQL> CREATE OR REPLACE TRIGGER TG_TEST
  2  BEFORE INSERT ON TEST
  3  FOR EACH ROW
  4  BEGIN
  5    IF FALSE THEN
  6      --:NEW.modified_user := USER;
  7      :NEW.modified_date := SYSDATE;
  8    END IF;
  9  END;
 10  /

Trigger created.

SQL> 
SQL> INSERT INTO test(test_id, test_text) VALUES (0, 'X');

1 row created.

Tom Kyte
April 27, 2007 - 11:14 am UTC

contact support, reference Bug 4547458
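Until the patch for that bug is available, one common workaround (a sketch, not an official fix) is to supply the value inside the trigger itself whenever it is NULL, so the insert no longer relies on the column default:

```sql
CREATE OR REPLACE TRIGGER tg_test
BEFORE INSERT ON test
FOR EACH ROW
BEGIN
  -- once the trigger references :NEW.modified_user at all,
  -- backfill the default explicitly rather than trusting DEFAULT USER
  IF :new.modified_user IS NULL THEN
    :new.modified_user := USER;
  END IF;
  IF :new.modified_date IS NULL THEN
    :new.modified_date := SYSDATE;
  END IF;
END;
/
```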

Why we should upgrade?

Kalpesh, April 30, 2007 - 3:11 pm UTC

Hi Tom,
We are running on Oracle 8i. My question is should we upgrade to Higher version like 9i or 10g?

If yes then Why.
Thank You


Tom Kyte
April 30, 2007 - 3:49 pm UTC



because 8i isn't supported anymore.

because there are thousands of manageability features in 10g that didn't exist in 8i.

do not upgrade to 9i, 10g would be the only logical choice.

Which Platform?

Deepak, July 20, 2007 - 9:09 am UTC

Hi Tom,

Have a question regarding server planning.

Let's suppose an organization is planning to use an Oracle database for their enterprise use, and they are in the process of planning the hardware server and the OS.

The cost of the hardware and OS software is not a factor; the only factors are reliability, performance, etc.

Which OS should they choose from among those supported by Oracle?

Linux / HP-UX / Solaris / ...?

What should we consider before making such decisions?

I would also appreciate it if you could point to any good document, or help, on doing capacity planning in terms of hardware resources.

Please guide me.


Tom Kyte
July 20, 2007 - 4:14 pm UTC

whatever one you want - we don't really care.

Make the decision based on whatever factors matter to you.

Upgrade from 8i to 10g

shailu, August 07, 2007 - 11:42 am UTC

Hey Tom,

I have been reading a lot of stuff on this site, and I feel you are doing a really good job helping others.

Anyway, we have two databases, 8.1.6 and 8.1.7, and we are planning to upgrade both of them to 10g Release 2.
I have some questions regarding this.

1) My understanding is that to upgrade 8.1.6 to 10g I have two options:

a) 8.1.6 ----> 8.1.7 ----> 10g
b) 8.1.6 ----> 9.0.1 ----> 10g

Is my understanding right? If yes, which one would be better?

2) Obviously I am going to go through the document, but I want your suggestion on this too. What are the things I need to take care of before the upgrade, since there will be a lot of parameters which would be deprecated in 10g?

3) Right now we are using dictionary managed tablespaces. After upgrading to 9i and then 10g, would that create any problem, since LMT is the default in 9i, I think?

Thanks in Advance

Shailu
Tom Kyte
August 07, 2007 - 2:08 pm UTC

1) http://docs.oracle.com/docs/cd/B19306_01/server.102/b14238/preup.htm#sthref53


2) I would suggest by using the MINIMAL set of non-default parameters - clean out that init.ora, empty it out. control files, memory settings and not too much else.

3) if you upgrade, you'll end up with exactly what you started with - dictionary managed tablespace, no problem
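If you later decide to move off dictionary managed tablespaces anyway, a sketch of the usual approach (names and sizes here are hypothetical):

```sql
-- Create a locally managed tablespace with the settings Tom
-- recommends elsewhere on this page:
CREATE TABLESPACE users_lmt
  DATAFILE '/u01/oradata/users_lmt01.dbf' SIZE 2G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

-- Then relocate segments into it, table by table:
ALTER TABLE t MOVE TABLESPACE users_lmt;
ALTER INDEX t_idx REBUILD TABLESPACE users_lmt;
```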

Upgrade from 8i to 10g

shailu, August 07, 2007 - 2:51 pm UTC

Hey Tom,

Thanks You very much for your response.

For the first question's answer I have gone through that link and read the document, but that document doesn't say anything about version 8.1.7.0 (my database version).

1) It talks about 8.1.7.4... initially it says 8.1.7 only, so does that mean if I have 8.1.7.0 I can directly upgrade to 10g?

2) Also, same in the case of 8.1.6... do I have to upgrade it to 8.1.7.0 or 8.1.7.4?

3) I have read one document in Metalink according to which I can directly upgrade 8.1.6 to Oracle 9i Release 1. Can you give some input on that? In that case I think it would be a better idea to upgrade directly to Oracle 9i, then 10g.

thanks
shailu
Tom Kyte
August 07, 2007 - 3:54 pm UTC

1) 8174 was the terminal release, get the 8i upgrade material (you'll need the 8i install and the 8174 patchset).

2) see document - 8174

3) so, you don't want 9i, you want 10g



what media can you get your hands on, that'll determine what you do ultimately.

performance after 10g upgrade from 9.2

Lisa, August 14, 2007 - 9:11 am UTC

Hi Tom,

My application uses PRO*C for all database work. We upgraded to 10g from 9.2 and our performance has gone down. I tested my application, creating an oracle 9 and oracle 10 instance and processing the same data. I had the linux box to myself and had 1 instance up at a time. Oracle 10g was consistently slower.

I then wrote a PRO*C and PL/SQL program that does 20 million of the same select stmt using 2 host variables. The select was for an existing row.( same query for both tests ).

I found that PL/SQL performance increased in 10g, but PRO*C decreased.

Results from 3 tests:

            PRO*C                  PL/SQL
Oracle 10:  16:52 16:35 16:46      6:43 6:45 6:40
Oracle 9:   15:38 15:48 15:53      8:04 7:36 8:13

This was my query (unique index on key_field_id, key_value, typeid,partid) :
select EID, PID into eid, pid from ENTMAP
where KEY_FIELD_ID = :fieldId and KEY_VALUE = :keyValue and TYPEID=1 and PARTID=2 and ROWNUM<2;

Have you heard anything about PRO*C performance in 10g?


Tom Kyte
August 20, 2007 - 11:01 am UTC

tkprof it - see if the difference is visible there.


(if you have a statement you actually execute that many times - I would say your performance issues are mostly impacted by the design of the code - as I like to say "JUST JOIN", seems you might be doing lots and lots of procedural code where a single SQL statement should have been used)
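The "JUST JOIN" idea, sketched against the tables in the question (the staging table staged_input is a hypothetical stand-in for wherever the 20 million host-variable pairs come from):

```sql
-- One set-based statement replaces 20 million single-row round trips:
SELECT s.field_id, s.key_value, e.eid, e.pid
  FROM staged_input s
  JOIN entmap e
    ON e.key_field_id = s.field_id
   AND e.key_value    = s.key_value
 WHERE e.typeid = 1
   AND e.partid = 2;
```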

Upgradation

Sam, October 22, 2007 - 11:56 am UTC

Hi tom,

I have created a 10g database and now I want to move my tables and other content from 9i to 10g. Please suggest a method to do so. I am new to this.

Thanks a lot!

Alexander the ok, November 27, 2007 - 10:16 am UTC

Hi Tom,

We have a vendor that requires the Oracle 10g client to be eligible for support. We have not done any 10g client upgrades yet. Could you recommend the closest thing to a plain vanilla 10g client install from OTN? I looked, and all the instant client packages list different features; I don't know which is more inclusive than the others.

9i to 10r2

ibnadam, February 04, 2008 - 12:56 am UTC

Hi Tom,

Thanks for such a wonderful stuff.

Q1. I want to migrate my database from 9.2.0.4 on Solaris to 10gR2 on HP-UX. My application is Oracle EB 11i, and this is also supposed to be upgraded to 12i.
Now, as a DBA, I am responsible for the database migration and upgrade. I have read the upgrade guide for the Oracle ERP.
We can upgrade our database in a separate downtime, but what are the things I should take care of related to the application, or will it be a simple database upgrade?
Q2. Can I directly import an exported dump file from 9i on Solaris into 10gR2 on HP-UX, or shall I upgrade it first to 10g and then use cross-platform tablespaces to move to HP-UX?

Please Clarify
Tom Kyte
February 04, 2008 - 4:01 pm UTC

q1) I am the wrongest person on the planet to ask about an applications upgrade - you'll want to converse with someone that does "apps" which is not me :)

q2) you'll be working in the apps infrastructure - you'll be restricted by them to some degree.

about materialized views on prebuilt tables

James Su, February 11, 2008 - 2:45 pm UTC

Dear Tom,
You mentioned these steps:
a) create a new 10g database with the schema (empty)
b) create mv's on these prebuilt tables
c) refresh them complete, then refresh incrementally (that'll keep them in sync)

Is it possible to replace the complete refresh step by using imp? Thanks.
Tom Kyte
February 11, 2008 - 10:40 pm UTC

you can do offline instantiation yes, search for that in the documentation.

about offline instantiation

James Su, February 13, 2008 - 11:33 am UTC

Hello Tom,
I read this document:
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96568/rarmanmv.htm#12817

My question is:
1. It requires creating a temporary MV at the master site. If tables are big this will cost some overhead.
2. It doesn't use the prebuilt method. What I want is when the migration is done I can drop the MVs and let them become tables, then the new DB can replace the old one.

Is it possible to use exp/imp and prebuilt MV in data migration? Thank you.
Tom Kyte
February 13, 2008 - 1:13 pm UTC

1) overhead is by definition "something that can be removed without loss of functionality". eg: having a personal assistant is probably overhead. having employees that make your product work is probably not overhead. but both cost money.


2) just because the example did not use a prebuilt table does not mean you cannot - did it explicitly say "you cannot do that"?

My suggestion: test this procedure on a small table, get comfortable with it, learn it, see how it works, what it does and what you can accomplish with it.
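A minimal sketch of the prebuilt-table approach (object and database-link names are hypothetical; fast refresh assumes a materialized view log exists on the master table):

```sql
-- 10g side: table EMP already exists and was loaded (e.g. via imp)
CREATE MATERIALIZED VIEW emp
  ON PREBUILT TABLE
  REFRESH FAST WITH PRIMARY KEY
  AS SELECT * FROM emp@prod9i;

-- keep it in sync until cutover, then:
DROP MATERIALIZED VIEW emp;
-- for a prebuilt-table MV, the underlying table EMP remains,
-- and the new database can take over from the old one
```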

Oracle 10g upgrade client

baskar, March 18, 2008 - 1:06 pm UTC

Tom,
We have an Oracle 10g database with Forms 6 (character based) running against it. We now want to migrate our client to 10g. My question is whether character-based forms are still available in the Oracle Forms 10g client.

Thanks
Tom Kyte
March 24, 2008 - 9:12 am UTC

please utilize support for upgrade and configuration issues. Forms 6 is not a 10g client, you'll need to use the client software that forms was compiled and linked against.

slower performance in 10g

J, July 15, 2008 - 7:36 pm UTC

Hi Tom,

While we all expect the 10g database to be better in performance, in reality we experienced slowness after the upgrade. In our OLTP environment, application batch processing of 2500 records used to take 30 seconds in the 9i database; in 10g it takes over 1 or 2 minutes, so the application user sees increased idle time waiting for processing results. I tried everything I could think of, plus suggestions from Oracle support (enabling the 9i optimizer); none has worked so far:

1. cache one of the table reported in AWR.
2. increase open_cursors (from 300-> 400) and session_cached_cursors (from 50 -> 100) as
suggested in ADDM report during running of the test group (1).
3. enable 9i optimizer on instance level (during this testing, for first 3 batch of 2500 records, it is done within 1 min 10 seconds, and next 3 batch of 2500 records increased to 2 min 5 seconds. result is not stable.)

Business user is thinking to go back to 9i database due to this performance issue. The database is quite small (< 15GB), but is heavily in update/insert when find records match in database. Application is located on same server as database server, so it should eliminate the network issue. Oracle support can't give us more feedback.

Is there anything you could think of to speed it up in OLTP from your experience?

Thanks!
Tom Kyte
July 15, 2008 - 8:17 pm UTC

did you try the simplest of approaches?

what changed between 9i and 10g? did you take a tkprof to see what is DIFFERENT between the two runs?

that will almost certainly point to a single sql statement, which you can look at.
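A minimal way to capture comparable traces in both instances (the trace file name below is hypothetical):

```sql
-- run in both the 9i and the 10g session, around the identical batch:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- ... run the batch ...
ALTER SESSION SET sql_trace = FALSE;

-- then, at the OS prompt (trace file written to udump):
-- tkprof orcl_ora_12345.trc run10g.prf sort=exeela aggregate=no
-- and compare the top statements between the 9i and 10g reports
```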

I had similar issues ... but YMMV

Khalid Rahim, July 16, 2008 - 12:16 am UTC

I faced similar issues, but overall glad I didn't revert back to 9i. It appeared to be some optimizer bug in 10203 or 10202, in my case trace revealed a lot of spinning on Mutex waits, which of course I didn't know how to deal with.

For the most part, the queries were much faster but there were a few queries that were inexplicably slow after upgrade to 10gr2. Almost all issues went away after I disabled the automatic scheduler job, and replaced with my own custom gather_stats job per schema.

select owner,job_name,enabled from dba_scheduler_jobs
where job_name='GATHER_STATS_JOB';

As job owner, I disabled the job
1) exec dbms_scheduler.disable('GATHER_STATS_JOB');
2) Deleted the existing stats on all related tables or set optimizer_mode=Rule for the session
3) Tested the queries again with success
4) Replaced the default gather_stats with my own, tested then it worked fine.

Talking to support they had suggested 10204 doesn't have the same problem, so you might want to check if 10204 is available for your platform
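The replacement job described above could be sketched with dbms_scheduler like this; the job name, schema, and schedule are hypothetical:

```sql
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'MY_GATHER_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            dbms_stats.gather_schema_stats(
                              ownname          => 'APP_OWNER',
                              estimate_percent => dbms_stats.auto_sample_size,
                              cascade          => true,
                              options          => 'GATHER STALE');
                          end;]',
    repeat_interval => 'FREQ=DAILY;BYHOUR=22',
    enabled         => TRUE);
END;
/
```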



7.3.4 to 10G

A reader, July 16, 2008 - 9:04 am UTC

Tom, above someone asked a lot of questions regarding export from Oracle 7.3.4 to 10g, but I still have a doubt.

I have read in the "Oracle Database Upgrade Guide 10g/11g" that import/export from 7.3.4 to 10g/11g is possible without going through any intermediate versions.

But in metalink note "Subject: Compatibility Matrix for Export And Import Between Different Oracle Versions
Doc ID: Note:132904.1

This defines that FULL=Y export and import is not supported in case of 7.3.4. Below is the extract of metalink.

7.6. Example: export from 7.3.4 and import into 11.1.0

use the 7.3.4 Export client to export the data from the 7.3.4 source database,
use the 11.1.0 Import client to import the data into the 11.1.0 target database,
Note that a migration with export FULL=Y & import FULL=Y is not supported.

My question is for 7.3.4 to 10G with FULL=Y for import and and export is supported or not.

This is for cross platform from Digital UNIX to HP-UX

Tom Kyte
July 16, 2008 - 10:58 am UTC

well,
..
Note that a migration with export FULL=Y & import FULL=Y is not supported.
....

seems unambiguous - typically, I would be doing schema level exports - in the 11g database, create the users and tablespaces anew (using locally managed tablespaces, automatic segment space management, system allocated extents). Create the users, grant them the MINIMUM amount of privileges they need (use this opportunity to refactor your privileges, to reduce them)

then export each user in turn from 7 (in parallel if you like) using version 7 export and then import them into 11.

7.3.4 to 10G

A reader, July 16, 2008 - 11:00 am UTC

Tom,

I asked above question for some purpose because I am not sure and I need to provide the project plan otherwise I will take the approach of 7.3.4 --> 8i --> 10G
Tom Kyte
July 17, 2008 - 11:17 am UTC

the note was un-ambiguous.

first - I would never use full=y for an upgrade. You want to do what I described if you are going to use exp/imp

....
typically, I would be doing schema level exports - in the 11g database, create the users and tablespaces anew (using locally managed tablespaces, automatic segment space management, system allocated extents). Create the users, grant them the MINIMUM amount of privileges they need (use this opportunity to refactor your privileges, to reduce them)

then export each user in turn from 7 (in parallel if you like) using version 7 export and then import them into 11.
........


full=y is going to pull a bunch of junk across versions you don't want/won't be able to deal with. schema level exports are the way to go since you want to move everything to locally managed tablespaces and you need to get a good handle on the privileges assigned to users which has almost certainly gotten out of hand in the 10 years this database was around.
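From the command line, the schema-level approach might look like this; the schema name, connect strings, and file names are hypothetical, and per the note above the export should be run with the version 7 exp client:

```shell
# One pair of commands per schema; run several schemas in parallel if desired.
exp userid=system@v7db owner=APP_OWNER \
    file=app_owner.dmp log=exp_app_owner.log

# Target users and locally managed tablespaces are pre-created first.
imp userid=system@newdb fromuser=APP_OWNER touser=APP_OWNER \
    file=app_owner.dmp log=imp_app_owner.log ignore=y
```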


7.3.4 to 10G

A reader, July 17, 2008 - 4:45 am UTC

But is this supported for Oracle 8i with FULL=Y ?

TKPROF from OLTP

j, July 22, 2008 - 2:48 pm UTC

Hi Tom,

Last time I asked the question on performance after the 9i upgrade to 10g on OLTP. The same application run only takes 30 seconds in 9i, while in 10g it goes above 2 minutes, which causes delays in record processing.

I did SQLTRACE for all application processes during one batch run (processing of 2500 records), and checked the trace file which generated the most log (as the application uses pre-spawned processes). During the 3-minute batch run, one of the trace files indicates the longest CPU time, 43 seconds (see below), of which the top ten select SQLs use 24 seconds overall. Those tables are small. The top CPU query is listed below. Even if we cache the table, it won't help much, as the CPU is in EXECUTE.

Application is written in perl 5.6. do you think it might have communication issue between Perl and DB? as there is many wait in SQL*Net message from client.

I am creating 9i database now for testing as well.

If you see any tuning we could do, please let me know. Thanks!

----partial of trace file (the top CPU query and overall)

select NUMBER_OCCURENCES
from
CT_C3T_RCP_STATS where ACCOUNT_ID = :p1 and PHONE_POSITION = :p2 and
RESULT_CODE_ID = :p3 and DIALER_TYPE = :p4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     8040      0.36       0.28          0          0          0           0
Execute   8040      5.15       5.21          0          0          0           0
Fetch     8040      0.28       0.37          0      16118          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    24120      5.79       5.86          0      16118          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 38 (ONQ)

Rows Row Source Operation
------- ---------------------------------------------------
0 TABLE ACCESS BY INDEX ROWID CT_C3T_RCP_STATS (cr=3 pr=0 pw=0 time=210 us)
0 INDEX RANGE SCAN CT_C3T_RCPSA_IDX (cr=3 pr=0 pw=0 time=199 us)(object id 3347037)

Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait  Total Waited
----------------------------------------   ------------  ----------  ------------
SQL*Net message to client                         16080        0.00          0.02
SQL*Net message from client                       16080        0.00         11.89
************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call      count       cpu    elapsed       disk      query    current        rows
------- -------  -------- ---------- ---------- ---------- ----------  ----------
Parse     44179      2.01       1.85          0          0          0           0
Execute   49746     39.22      39.47         32      15148      90323       12053
Fetch     37693      2.23       2.13          0      79910          0        4997
------- -------  -------- ---------- ---------- ---------- ----------  ----------
total    131618     43.46      43.45         32      95058      90323       17050

Misses in library cache during parse: 172
Misses in library cache during execute: 142

Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait  Total Waited
----------------------------------------   ------------  ----------  ------------
SQL*Net message to client                         84363        0.00          0.13
SQL*Net message from client                       84363       12.14        108.18
log file sync                                        91        0.09          1.18
SQL*Net more data to client                          27        0.00          0.00
db file sequential read                              32        0.10          0.46
latch: shared pool                                    1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call      count       cpu    elapsed       disk      query    current        rows
------- -------  -------- ---------- ---------- ---------- ----------  ----------
Parse       564      0.04       0.04          0          0          0           0
Execute     852      0.55       0.50          0        191        440         103
Fetch      1474      0.14       0.10          3       2844          0         966
------- -------  -------- ---------- ---------- ---------- ----------  ----------
total      2890      0.73       0.65          3       3035        440        1069

Misses in library cache during parse: 3
Misses in library cache during execute: 3

Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait  Total Waited
----------------------------------------   ------------  ----------  ------------
db file sequential read                               3        0.00          0.01

44296 user SQL statements in session.
736 internal SQL statements in session.
45032 SQL statements in session.
0 statements EXPLAINed in this session.
********************************************************************************
Trace file: onqrst_ora_19352.trc
Trace file compatibility: 10.01.00
Sort options: execpu
1 session in tracefile.
44296 user SQL statements in trace file.
736 internal SQL statements in trace file.
45032 SQL statements in trace file.
218 unique SQL statements in trace file.
1782167 lines in trace file.
181 elapsed seconds in trace file.

Tom Kyte
July 22, 2008 - 6:51 pm UTC

what I see in the trace is most of the time is spent waiting for the client to tell us to do something


SQL*Net message from client 84363 12.14 108.18

eg: not the database. Of the time - it seems that 1/3 was spent in database, 2/3rds in client.

TKPROF elaspsed time

j, July 22, 2008 - 3:16 pm UTC

Hi Tom,

in previous post, I still have one question:

I understand that TKPROF will not reflect the actual time for a fast query, but shouldn't we expect close numbers for elapsed/CPU time between the TKPROF report and the actual run time? I tried adding up the CPU/elapsed time from the TKPROF reports of a couple of session trace files, and I only get to about 1 minute, though the actual elapsed time is about 3 minutes. How do we explain the other 2 minutes, which definitely would be a concern for the app?

Thanks!
Tom Kyte
July 22, 2008 - 6:52 pm UTC

sqlnet message from client

we were bored waiting for the slow client to tell us to do something.

that 108 seconds accounts for that.

now comes back to the question of MTS

J, July 23, 2008 - 6:10 pm UTC

Hi Tom,

thanks for pointing out the wait event from client side. Now the question comes: how can we reduce the wait from client?

here is the situation:

The database was once 9i, with MTS enabled. The reason for enabling MTS was to get extreme speed on the batch process. It is said that it only took 30 seconds to process 2500 call records at that time.

Not long ago, the database was upgraded to 10g. MTS is disabled. The application code was recompiled against the Oracle 10g libraries (they need access to lib32 specifically). No other changes on the application side. However, it takes 2 min 5 sec to process 2500 call records. The trace file you reviewed indicates the wait is from the client.

When the application processes call records, most of the time it uses bind variables in the statements. It grabs one record at a time, checks if it exists in one set of tables, then does an update or insert accordingly until all records are processed. No joins, and the tables are small.

Given this workflow, will MTS play a role here in reducing the wait from the client? It somehow seems to multithread the requests (the dispatcher was set to 3) and process call records in parallel.

I throw out the question here as I am still waiting for user to test on 9i database, which I would like to get baseline from.

I appreciate all your time to look into this issue.

THANKS!
Tom Kyte
July 24, 2008 - 10:51 am UTC

... The reason to enable MTS is to get
extreme speed on batch process. ..

that is absolutely backwards - shared server is entirely inappropriate for long running batch processes.

understand that but ...

J, July 24, 2008 - 12:38 pm UTC

This particular batch process consists of quick, short operations (INSERT ... VALUES, UPDATE, SELECT), as it is designed to process records one by one. The biggest thing that has changed between 9i and 10g is MTS. When the elapsed time in 9i for 2500 records was 30 seconds and is over 2 minutes in 10g, it makes me think that MTS might play its 'multithread' role here.

Thanks!
Tom Kyte
July 29, 2008 - 9:21 am UTC

if you have hundreds/thousands of client connections now, and you have only a small number of shared servers before, it could be (eg: the machine could be swamped)

but I rather think "one plan changed and is now taking 0.036 seconds longer than it used to" (that is all it would take: 0.036 * 2500 = 90 seconds).

this is more an artifact of your slow by slow processing in my opinion (knowing what I know about your process, your setup, which is very little)
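For what it's worth, the check-then-update-or-insert flow described above can usually be collapsed into one set-based statement per batch; all table and column names below are hypothetical:

```sql
-- "call_staging" stands in for the batch of 2500 incoming records
MERGE INTO call_records c
USING call_staging s
   ON (c.account_id     = s.account_id
   AND c.phone_position = s.phone_position)
WHEN MATCHED THEN
  UPDATE SET c.number_occurences = c.number_occurences + 1
WHEN NOT MATCHED THEN
  INSERT (account_id, phone_position, number_occurences)
  VALUES (s.account_id, s.phone_position, 1);
```

One MERGE per batch removes most of the client/server round trips that show up as "SQL*Net message from client" in the traces above.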

upgrade from 10.2.0.1 to 10.2.0.3 with physical standby database in place

A reader, September 08, 2008 - 4:25 pm UTC

Current environment:
production database is 10.2.0.1
physical standby database is 10.2.0.3

I'm trying to upgrade our production machine to 10.2.0.3. Is there any known issue about this upgrading?

Thanks
Tom Kyte
September 09, 2008 - 7:21 am UTC

please utilize support

TNS Error

prashant, September 24, 2008 - 8:11 am UTC

We are facing same error on WinXp

TNS-00530: Protocol adapter error 32-bit Windows Error: 55: Unknown error

If you have any solution please send me .

Prashant
Tom Kyte
September 24, 2008 - 8:01 pm UTC

interesting "we are facing the same error" yet you are the only one to ever have used the number 55 in the error message?

same as what error (and you do have your tar open right, with support)

What is the best approach

Chinni, February 13, 2009 - 2:18 am UTC

Hi Tom,
I have a database of 750 GB , 9i with 4K block size.
I would like to upgrade this to 10.2.0.4 with 16K block size.
Could you suggest what could be the best way to do this?

Is export / import the only option?
Tom Kyte
February 16, 2009 - 10:18 am UTC

There are other options.


9i supports multiple blocksizes and 10g does too. You could transport and then alter table T move new_10g_tablespace, alter index i rebuild new_10g_tablespace (move the datafiles from 9i to a 10g database with 16k blocks, then using direct path operations move the data from the 4k non-default-size tablespace to the new tablespaces, and then drop the transported tablespaces)
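That first option could be sketched roughly as follows; tablespace, file, and segment names are hypothetical, and the 10g database is assumed to have a 16k default block size, so it needs a 4k buffer cache just to read the transported files:

```sql
-- allow the 10g instance to mount 4k-block datafiles:
ALTER SYSTEM SET db_4k_cache_size = 64M;

-- after plugging in the transported 4k tablespace, build the target:
CREATE TABLESPACE new_16k
  DATAFILE '/u01/oradata/new_16k01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

-- direct-path the data across, then discard the 4k tablespace:
ALTER TABLE t MOVE TABLESPACE new_16k;
ALTER INDEX t_idx REBUILD TABLESPACE new_16k;
DROP TABLESPACE old_4k INCLUDING CONTENTS AND DATAFILES;
```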

or upgrade 9i to 10g and then use data pump to move the data *faster* and in parallel.



Another would be just to upgrade and let it stay at 4k (any reason in particular for going from a non-standard 4k block to a non-standard 16k block??)


I'd probably be going with the last option.

Thank you

Chinni, February 18, 2009 - 9:02 am UTC

Thanks Tom ..

There is no specific reason for moving to a higher block size, but we have big tables in my DB (a DW), and many reports run on those tables with full table scans. So I was thinking it might be helpful to go for a higher block size.

And you know... we actually adopted the last option you listed!
Thanks a lot

Confused

anupam pandey, February 25, 2009 - 7:00 am UTC

Hi Tom,
I have a basic question; I don't know whether this thread is the right place for it or not.
The question is: Oracle has two different installation executables for 32-bit Linux and 64-bit Linux. So why does it matter to Oracle whether the OS is 32-bit or 64-bit? What all depends on this capability?

Thanks And Regards,
Anupam Pandey
Tom Kyte
February 25, 2009 - 7:18 am UTC

tell you what

if you have 32bit linux, use the 32bit install
if you have 64bit linux, use the 64bit install

do not use 64bit on 32bit
do not use 32bit on 64bit.


The extra 32bits do sort of matter, there are different binary versions for each. They are completely different operating systems.
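Standard Linux commands will tell you which install image matches the machine before you download anything:

```shell
# word size of the running kernel/userland
uname -m          # e.g. x86_64 for 64-bit, i686 for 32-bit
getconf LONG_BIT  # prints 32 or 64
```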

Question

anupam pandey, February 26, 2009 - 4:21 am UTC

Hi Tom,
Surely I will do what you said, but I just wanted to know how it matters to Oracle whether it is Linux 32-bit or Linux 64-bit.

Thanks And Regards,
Anupam Pandey
Tom Kyte
March 03, 2009 - 7:28 am UTC

kidding right?

You do know there is a bit of a difference in generated code and the libraries and all that must be used if you change the size of a word on the machine?

a 32bit pointer, passed to a routine expecting 64bits, would cause a bit of a problem - going the other way would be an issue too.

They are *different* operating systems entirely.

Upgrade from 10.2.0.3 to 10.2.0.4

Ankit, March 03, 2009 - 3:17 pm UTC

Hi Tom

we have to upgrade a DB server from existing version 10.2.0.3 to 10.2.0.4.

The process we have planned is
1. Apply the required patches on 10.2.0.3 in order to apply the 10.2.0.4 patch set

2. After applying 10.2.0.4, apply the required patches on that, followed by the Jan CPU patch

3. Then upgrade the DB by running scripts (manual method)


Could you please tell me if there are any basic errors in the above plan.

Thanks
Tom Kyte
March 03, 2009 - 9:16 pm UTC

step 0: read the patch set directions from start to finish and make sure you follow them.


not sure what your step 1 is. 10.2.0.3 is 'patchable' to 10.2.0.4 out of the box.

not sure what your step 2, part 1 is. I get the jan cpu, but what are you considering the 'required patches'


I would

a) apply the 10.2.0.4 patchset in its entirety (upgrade scripts and all)
b) apply the cpu's in order that you want to apply, following to the letter the directions.


Re: 10.2.0.3 to 10.2.0.4

Ankit, March 04, 2009 - 8:45 am UTC

To be precise, I first plan to:
> upgrade the Oracle binaries to 10.2.0.4,

> apply the Jan CPU patch, and

> then upgrade the database manually by running catupgrd.sql and utlrp.sql

The doubt I have is: should I apply the Jan CPU patch before upgrading the DB (like I described above),
or
should I upgrade the DB first and then apply the Jan CPU patch?

Does such a change of order make any difference?


I hope I have made my problem clear.
Tom Kyte
March 04, 2009 - 1:29 pm UTC

I've already given you my response, it is right above.

The Reason

Ankit, March 05, 2009 - 6:16 pm UTC

With all due respect Tom

I also asked if it makes any difference... I believe it is better to know what the result of something can be.

I am sorry if this sounds rude, but I would like to know
"if it would work this/that way."


Tom Kyte
March 05, 2009 - 8:09 pm UTC

You asked me "what would you do"

I answered? very clearly.

with all due respect.

Why would you do things out of natural order?


You want to

a) patch 10.2.0.3 to 10.2.0.4
b) apply CPU

Well, do (a) then do (b). Do not do half of (a) and then (b) and then other half of (a). Why would you want to do that?


I don't care if it "would or could work", the steps are: you do (a) then you do (b).



I told you - "step 0: read the patch set directions from start to finish and make sure you follow them. "


If you do that, you'll see that patching 10.2.0.3 to 10.2.0.4 includes

1) installing patchset (binaries)
2) upgrading database(s)

Then you have something to apply the CPU to.

Thank you

Ankit, March 06, 2009 - 10:16 am UTC

Thanks Tom
I saw an upgrade from 10.1.0.5 to 10.2.0.4 in which the DBA prepared a new Oracle home of 10.2.0.4 (i.e. the Oracle binaries), applied all the patches including the Jan CPU, and then upgraded the DB manually.
That is what prompted me to ask this question.

I will do as you suggest.

Ankit, March 21, 2009 - 8:06 pm UTC

Tom,

You said upgrade steps for 10.2.0.3 to 10.2.0.4 are

1) installing patchset (binaries)
2) upgrading database(s)

Then you have something to apply the CPU to



I thought that patching means updating the Oracle binaries, so my thinking is: if we upgrade the 10.2.0.3 Oracle home binaries to 10.2.0.4 and apply the latest CPU patch (Jan 2009)

And

then upgrade the database manually using the scripts catupgrd.sql and utlrp.sql, it should give us the same result.


The reason I was thinking of doing it this way is that I would have to run utlrp.sql only once for recompiling.

Please tell me if there is any gap in the above strategy.



P.S.
I know I have dragged this out way too long, but I have an upgrade coming. It would be my first upgrade and I really want to be sure about it.


Tom Kyte
March 24, 2009 - 10:58 am UTC

you can do whatever you have thoroughly tested. I have already responded and I won't really change my mind


I never run utlrp - it is completely optional, the code will recompile itself as it is used.
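
If you do want to see what is left invalid (or recompile eagerly anyway, which is optional), a quick check might look like this:

```sql
-- objects left INVALID after the upgrade; they recompile
-- automatically on first use
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_name;

-- optional: recompile everything up front instead of on first use
@?/rdbms/admin/utlrp.sql
```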

upgrade

A reader, April 12, 2009 - 11:16 am UTC

Tom:

How do you usually make a decision on:
a) upgrading an existing server from 9.2.0.2 to 11g R2
b) installing a new 11g server and doing a full export/import of the data from the 9i database

I think option b is much more reliable in this case, but one comment above says you can't do a full export/import of the database and have to do one schema at a time.
Tom Kyte
April 13, 2009 - 4:50 pm UTC

I would not use export/import to perform an upgrade.

JUST UPGRADE.


and if you were to add "well, I want to change platforms too", I would still not export/import (or data pump). I'd upgrade on the old platform and transport it to the new platform.

upgrade

A reader, April 13, 2009 - 11:03 pm UTC

Tom:

Most DBAs seem to favor INSTALLING A NEW 11g SERVER and EXPORT/IMPORTING the 9i database into the new 11g database

INSTEAD OF

running the UPGRADE script,

since UPGRADE may cause a lot of issues. Do you agree?
Tom Kyte
April 14, 2009 - 10:03 am UTC

not the DBA's I know.


I entirely and vehemently DISAGREE. Wasn't that blatantly obvious from my previous followup?????? I tried to be very unambiguous.


what "issues" do you foresee??????????


with export/import - I see hours and hours and hours of wasted time exporting, and then importing (about the slowest way to do an upgrade, I cannot think of a slower way). I see lots of opportunities for mistakes (an index goes missing, a grant gets lost, whatever). I see no "upside" to this, only drawbacks.



upgrade

A reader, April 15, 2009 - 1:47 pm UTC

Tom:

You are a common-sense guy. Does option a) sound reasonable to you?

I want to upgrade from 9i to 11g.

a) I set up a test machine. I install 9iR2 and then export my production DB to it. Then I run one upgrade script from 9iR2 to 10g and another script from 10g to 11g (I assume you need two unless Oracle has one script from 9i to 11g).

b) I install a new 11g server on the test machine. Export the 9i database from the prod machine and import it into 11g on the test server.

Also, this is a 5 GB database; the export/import does not take much time.
Tom Kyte
April 15, 2009 - 2:21 pm UTC

you missed step 0

read the upgrade guide.

come back after you've done that.


and you know, you know because I've already told you (SMK), that I would not consider using export, regardless.

Oh wait, I said that right above.



option a) ouch, why would you even consider that???? If you exported a 9i database and the goal was "get to 11g" why would you import it into a 9i database and then upgrade? why wouldn't you just import into 11g??

but - don't do that, just upgrade.


http://docs.oracle.com/docs/cd/B28359_01/server.111/b28300/toc.htm

upgrade

sam, April 15, 2009 - 3:46 pm UTC

Tom:

Thanks i will read the upgrade manual.

I lost you on this one:

<<option a) ouch, why would you even consider that???? If you exported a 9i database and the goal was "get to 11g" why would you import it into a 9i database and then upgrade? why wouldn't you just import into 11g?? >>

Because you want to test the process of upgrading an existing 9i production database. You want to test
a) whether the Oracle scripts/upgrade process work.
b) whether the application works properly under 11g.

So first you have to set up a similar 9i environment on the test server (a different machine) by installing a 9i server (not installed now) and then export/import the production database to it. After that, you have identical environments on the PRODUCTION SERVER and the TEST SERVER.

When we do the upgrade in PRODUCTION, we are not going to install a new 11g server. We are upgrading to 9.2.0.8 and then to 11g based on the Oracle path. So if I import directly into 11g, I am not really testing what I will do in production.

To do what you just said, I would have to install a new 11g server on the production machine when we go live (the same way we do it in TEST) instead of running upgrade scripts.

Tom Kyte
April 15, 2009 - 4:41 pm UTC

you didn't tell me what your goal was there. I didn't know you wanted to test the upgrade. Why would you export/import. Why would you not just restore your backups.

Forget about export/import, you don't want them.

upgrade

A reader, April 15, 2009 - 7:57 pm UTC

Tom:

I assumed it was pre-known. You do not migrate before you test the process and the application. You always say that.

We will restore from backup instead of exp/imp.

I think you see now that a brand new 11g install is easier than applying patches to 9.2.0.2. Plus you have to reinstall all the products that come with 11g (not sure if those will be installed by the upgrade scripts).
Tom Kyte
April 15, 2009 - 9:52 pm UTC

I did not know you were testing "upgrade from 9i to 11g", I naturally assumed you were testing "export from 9i, build a new database"

which would be *wrong*

just restore the backup, I cannot imagine why you would involve export/import - you were not clear in your intentions, you never stated "what I want to test is...", you stated "this is what I'm going to do...."

So, I had to assume you were testing your 9i to 11g upgrade
AND IT INVOLVED EXPORT AND IMPORT
and that is wrong.


... I think you see now that a brand new 11g install is easier than applying
...

why do you make this stuff up? where did that come from?????

stop doing that SMK, really - do not put words in my mouth. If you have 9.2.0.2 you would (according to the upgrade manual)

a) patch up to 9.2.0.4
b) upgrade to 11.1

no clue what you mean by "plus you have to reinstall...", no idea.

Is it necessary to upgrade oracle 10.2.0.1?

tony wang, June 11, 2009 - 1:39 pm UTC

Is it necessary to upgrade from oracle 10.2.0.1 to oracle 10.2.0.4? advantages? disadvantages?
Tom Kyte
June 11, 2009 - 3:18 pm UTC

nothing is necessary.

some things are advisable.

advantage: you would be on a supported release

CRS

A reader, June 19, 2009 - 9:13 am UTC

Tom,

Thanks for your time.


We have the Oracle client installed in an application cluster. The application cluster has many cluster resources apart from the Oracle client resources, and cluster management is done using VCS.


a) Can we use CRS to replace VCS and manage the application cluster resources?
Any docs/references would be a great help.

Tom Kyte
June 19, 2009 - 3:57 pm UTC

you'd have to list out what features of VCS you are using as the two things intersect and are not subsets of each other.

start here:

http://docs.oracle.com/docs/cd/B28359_01/rac.111/b28255/intro.htm#insertedID0

10g upgrade

Deba, June 24, 2009 - 6:56 am UTC

Hi Tom,

After upgrading to 10g, I find that some DBA_HIST tables are not getting populated, as below:

SQL> select * from DBA_HIST_LATCH_CHILDREN;

no rows selected

SQL> 
SQL> select * from DBA_HIST_LATCH_PARENT;

no rows selected

SQL> 
SQL> select * from DBA_HIST_SESSMETRIC_HISTORY;

no rows selected

SQL> 
SQL> select * from  DBA_HIST_SYSMETRIC_HISTORY;

no rows selected

SQL> 
SQL> select * from DBA_HIST_WAITCLASSMET_HISTORY;

no rows selected

SQL> select name from v$database;

NAME
---------
STPROD1

SQL> select * from dba_hist_wr_control;

      DBID
----------
SNAP_INTERVAL
---------------------------------------------------------------------------
RETENTION
---------------------------------------------------------------------------
TOPNSQL
----------
3121470426
+00000 01:00:00.0
+00365 00:00:00.0
DEFAULT


SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
PL/SQL Release 10.2.0.3.0 - Production
CORE    10.2.0.3.0      Production
TNS for Solaris: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production

SQL> select platform_name from v$database;

PLATFORM_NAME
--------------------------------------------------------------------------------
Solaris[tm] OE (64-bit)

Is there any bug or I missed out some settings ?

Thanks
Deba

Tom Kyte
June 26, 2009 - 9:48 am UTC

it would mean you are not experiencing waits in the referenced areas, nor exceeding any metrics (as set in enterprise manager).

10g upgrade

Deba, June 24, 2009 - 10:56 am UTC

Hi Tom,

In addition to the above thread (the problem with some dba_hist tables not getting populated): the value of statistics_level is set to TYPICAL.

Thanks
Deba

10g with other clusterware

A reader, June 24, 2009 - 9:50 pm UTC

Tom,

Currently we have :
Oracle 9i RAC ( 02 Node) with VCS as clusterware software.

When we upgrade to 10g:

a) Can CRS provide all the clustering capability (all - I mean here Oracle and non-Oracle resources)?

I have seen in a Metalink document (397460.1) that Oracle 10g supports CRS with Symantec SFRAC. So

(b) if CRS can manage everything, then why is there a need to have other clusterware installed and in use?




Tom Kyte
June 26, 2009 - 10:34 am UTC

a) you would have to list out what services you need - is CRS identical to VCS - no, not at all.

b) probably there is none, that is the point, in 10g everything you need is something you already own, no need to spend money on other bits.

Upgrade to 10g

A reader, July 07, 2009 - 6:55 am UTC

Tom

Thanks for helping entire oracle community across the world.

I know you won't be doing upgrades on every platform and every release, but I would like your expert comments on the approaches below, their feasibility, etc.

Current state:

Oracle - 9.2.0.8
OS - Hp Ux 11.23
02 Node RAC ( hostA, hostB)
Cluster Management : VCS 4.1
File System/Disk Management : VxFS/VxVM 4.1


Upgrade Plan :
Oracle : 10.2.0.4
Cluster Management : CRS
File System/Disk management : VxFS/VxVM 4.1


Listing the different options available ( will prefer the plan with
minimum downtime)


Approach A :
( say hostC and hostD are new server available)
1. Integrate disk storage equal to the current disk size on hostC
2. Install 9i on hostC
3. Create physical standby on hostC from hostA.
4. Install 10g with CRS on hostC
5. Create a "shell" of an Oracle 10g database on the standby system
using the same metadata as the primary but without data (export
metadata from the live 9i database using rows=n)
6. When switchover planned, activate the standby database on the
hostC.
7. Import the tablespaces in hostC from 9i database to 10g databases
( TRANSPORT_TABLESPACE)
8. Install 10g & CRS on hostD
9. Add hostD into cluster.
10. hostA & hostB now becomes dormant and hostC & hostD as live.

Questions :
a) Is this approach workable?
b) Can VCS and CRS coexist on a host (step #4)? VCS supporting the
9i database and CRS the 10g "shell" database? (at this stage, since there
is a single node, hostC)
c) How do we add the second host to the cluster? Can steps #9, #10, #11 be
clubbed together?


Approach B :
(Remove the second node from the live cluster and use it for the
upgrade process, in the scenario where there are no new servers hostC and
hostD. We can run all the services on hostA itself;
i.e. the live database will have only hostA. hostB will be free and will
be used for steps #1 to #7 of approach A.)

Questions :
a) Will this approach be workable?
b) Comments on adding hostA back to the cluster with 10g & CRS? At what stage?


Approach C:
(Plain approach - upgrade as it exists; directly upgrade the databases on the
cluster of hostA & hostB.)
There will be more downtime here, and the steps would be:
a) Install 10g into a separate Oracle home. Install CRS.
b) Upgrade the existing databases to 10g.

Questions
a) any comments ?


Regards

Tom Kyte
July 07, 2009 - 6:31 pm UTC

approach c would not have "more downtime"

the steps would be (could be, should be)

a) install 10g etc
b) then downtime for upgrade

and just understand that this will be the last time you need to take downtime for major upgrades since you can (as of 10.1.0.4) perform rolling upgrades with a standby database.


The other approaches are far too convoluted if you ask me.

Approach A is missing bits (you say to create a 'shell' with rows=n, but you have sequences that are advancing in production that aren't in this shell, you have tables that are empty that you'll have to drop before you move over, the entire shell will be filled with invalid code that you'll probably want to compile and so on and the transport takes a measurable amount of time - I would not suggest this approach)

Approach B - not sure where you are going with that. You only have one database - I don't see it getting upgraded.

Upgrade to 10g

A reader, July 09, 2009 - 6:36 am UTC

Thanks Tom
for your valuable inputs.

I now understand the complexity involved in the approaches other than C.

....Approach A is missing bits (you say to create a 'shell' with rows=n, but you have sequences that are advancing in production that aren't in this shell, you have tables that are empty that you'll have to drop before you move over....

We house 10 databases in the 2-node RAC (5 are bigger databases and 5 are small JMS databases).

There are:
a) around 650 tables with num_rows=0 out of 1010 tables in total.
b) 2 sequences in each big database.

So approach A is certainly going to be complex.


On approach B, what I was referring to was:
if we don't have an additional server, then spare the 2nd host from the RAC (compromising HA!) and add a new set of disks to hostB. Use that hostB to create the physical standby & "shell" DB as described in approach A.

On approach C:
Yes we will follow this.

Regards

Tom Kyte
July 14, 2009 - 2:22 pm UTC

Approach A won't work. Forget complex, it doesn't work.


And since B seems to incorporate A, B won't work.

Database upgrade query

Jayadevan, July 20, 2009 - 7:52 am UTC

Dear Tom,
Our client has a few Oracle 9i databases running on a server (same Oracle path). We have to upgrade just one of those databases to 10g on the same machine. Which is the best approach to manage this?
Thanks
Jay
Tom Kyte
July 24, 2009 - 11:35 am UTC

you will need a new oracle home to store the new software, so you'll now have "two paths"

the only correct number of production instances on a host is one, you have already violated the basic principle....

Ian, July 24, 2009 - 12:36 am UTC

You will need to install the Oracle 10g software into a new ORACLE_HOME, and then upgrade your 9i database using this new ORACLE_HOME.

Refer to the Oracle 10g Database Upgrade Guide for detailed instructions.

Oracle performance issue

Paul Yim, August 03, 2009 - 2:46 pm UTC

Dear Tom,

We're running Oracle RAC 10g 10.2.0.3 with SLES 9 SP4.

After applying the SP4 patches over SP3, Oracle performance dropped by 2 to 3 times. For example, a 3000-row delete used to take less than a minute with SP3, but with SP4 it now takes anywhere from 2 to 3 minutes.

Any idea why Oracle is slower with the SLES 9 SP4 patches?

Thanks,
Paul
Tom Kyte
August 04, 2009 - 1:56 pm UTC

No, I do not have any knowledge of anything outstanding.

Hopefully you have performance/profiling information from before the change so you can quickly and immediately define what is different (IO, CPU, whatever) and drive down that path.

Else, you will be stuck trying to tune "everything" as it could be anything.



upgrade from 9.2.0.7 to 10.2.0.4

reader, August 24, 2009 - 10:30 pm UTC

The concern that our application team has is that the 10g optimizer will change the execution plans and performance will be horrible. However, they want to upgrade to 10.2.0.4. They want to ensure that the execution plans we see in 9.2.0.7 are the ones they see after the upgrade. Would setting optimizer_features_enable = 9.2.0 on 10g help? Thanks.
Tom Kyte
August 25, 2009 - 9:55 am UTC

... Would setting
optimizer_features_enable = 9.2.0 on 10g help? ...

not really.

they might consider generating stored outlines in 9i and using them in 10g if that is what they really think they want.

they might be interested in using SPA (sql performance analyzer) rather than "freeze plans" to see what plans change, what changes for the better, what stays the same and what plans need investigation before going production.
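
A sketch of the outline-capture idea in the 9i database (the category name UPG_CAT is just an illustration):

```sql
-- in 9i: capture stored outlines for the running workload into a category
ALTER SESSION SET create_stored_outlines = UPG_CAT;

-- ... run the representative application workload here ...

ALTER SESSION SET create_stored_outlines = FALSE;

-- review what was captured
SELECT name, category, used
  FROM user_outlines
 WHERE category = 'UPG_CAT';
```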

Oracle Forms 10g

TH, November 05, 2009 - 12:14 pm UTC

First, thanks for the effort.

I am a software developer with around 10 years of experience, basically on Microsoft platforms. Since the technology there changes often, I decided to check out Oracle Forms.

I know the Oracle database extensively (SQL & PL/SQL).
I know VB, VB.Net and Delphi (I don't know Java).

If I want to start learning Oracle Forms 10g, how should I start? Are there any video tutorials on the Oracle web site? Any step-by-step instructions? Or what is the best way?
Tom Kyte
November 11, 2009 - 1:09 pm UTC

Not sure I'd be starting out to learn forms today - mostly it is used in maintenance mode.

But if you wanted to - I'd start out over in otn.oracle.com -> forums, there is a developer forum for forms there.

Migration of execution plans

Gaurav, December 17, 2009 - 8:06 am UTC

Hello Tom
We have here a well-tuned OLTP database running on 9iR2 that is scheduled to be upgraded to 11gR2 next month. This is a database used for business-critical applications, and the response-time requirements are extremely stringent.

My concern is: should there be a performance regression due to a change of execution plans, is there any way we can save/store our current execution plans in 9i and use them (if it comes to that) in the database migrated to 11.2? How do we go about this, if there is a way? An Oracle white paper says:
"In Oracle 9i, all statements including their execution plans can be collected and stored into a SQL Tuning Set (STS). However, the SQL Tuning Set cannot be loaded directly into the SQL Plan Baseline in the upgraded database."

So how do we go about this, then? If you think this is unworkable, can you suggest an alternative approach?
Tom Kyte
March 10, 2011 - 10:38 am UTC

you can now migrate stored query outlines (collectable in 9i) to sql plan baselines (what 11g uses)


http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:18064818873846#3145155100346004071
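
In 11g, the migration Tom mentions is done with the DBMS_SPM package; a minimal sketch (migrating all stored outlines, non-fixed) might look like this:

```sql
-- in the upgraded 11g database: convert stored outlines
-- into SQL plan baselines and print the summary report
SET SERVEROUTPUT ON
DECLARE
  l_report CLOB;
BEGIN
  l_report := DBMS_SPM.MIGRATE_STORED_OUTLINE(attribute_name => 'ALL');
  DBMS_OUTPUT.PUT_LINE(l_report);
END;
/

-- verify the resulting baselines (migrated plans should show
-- an outline-derived origin)
SELECT sql_handle, plan_name, origin, enabled, accepted
  FROM dba_sql_plan_baselines;
```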

Shared pool impact during oracle upgrade from 9i to 10g

Reader, December 18, 2009 - 7:09 am UTC

Hi Tom,

When an Oracle database is upgraded from 9i to 10g, what exactly is the state of the shared pool? Is it flushed/empty? If yes, that means all the queries would have to go through a hard parse. Can this be one of the performance issues during the early stages? Please enlighten me. Thanks.

Tom Kyte
December 18, 2009 - 12:41 pm UTC

it would be the same as any shutdown/startup would be. The shared pool is ALWAYS empty upon instance startup and emptied when you shutdown the instance.

So, even in 9i you were doing this every time you shutdown and started up. Yes, the shared pool, buffer cache, in fact the ENTIRE SGA will be empty upon startup

created_by for an oracle user account

A reader, December 18, 2009 - 4:15 pm UTC

Tom,
I couldn't find any existing topic for my question... hence I landed here in the 10g world.

I was posed a question in an audit review: is it possible to find the created_by for a user account in Oracle 10g?

My immediate feeling was no. Later I searched some dictionary tables, which also did not help.

Do you have any idea to help me with this?
Just wondering, is that not necessary information for Oracle? Why is it not available straight away in dba_users?

Regards,



Tom Kyte
December 18, 2009 - 6:11 pm UTC

... is that not a necessary information for Oracle....

no, it isn't. public things (public database links, rollback segments, public synonyms, users, etc.) are not tracked by creator.

You would need to enable auditing and or use a system event trigger if you wanted to monitor this information yourself.
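
A minimal sketch of the system-event-trigger approach (the table and trigger names here are made up for illustration):

```sql
-- a table to hold the audit trail (illustrative name)
CREATE TABLE user_creation_log
( username   VARCHAR2(30),
  created_by VARCHAR2(30),
  created_on DATE
);

-- record who creates each new user from now on
CREATE OR REPLACE TRIGGER trg_log_user_creation
AFTER CREATE ON DATABASE
BEGIN
  IF ora_dict_obj_type = 'USER' THEN
    INSERT INTO user_creation_log (username, created_by, created_on)
    VALUES (ora_dict_obj_name, ora_login_user, SYSDATE);
  END IF;
END;
/
```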

Migrate from SE One to SE or EE

Marat Tolgambayev, December 06, 2010 - 5:55 pm UTC

Dear Tom,

If I had my database running on 11g SE One and were planning to migrate to SE or EE (the database is in a production environment and can't be stopped), what would be the easiest way to do that? Is there any "upgrade tool", other than installing a new database > export > import > synchronise etc., that would allow me to just install the new edition (same version) on top of the existing one without much effort?
Thank you!
Tom Kyte
December 07, 2010 - 10:47 am UTC

Hi Sir

shankar, December 07, 2010 - 1:20 pm UTC

Hi Tom sir, I am Shankar.

Sorry for posting this in this thread.

I know you are familiar with the Oracle Discoverer reporting tool; if my guess is right, then please help me.

I am a Discoverer admin (newbie).

I have a small doubt. I have a user who cannot run his reports (when he logs into Discoverer using his Discoverer username) if he has logged into the machine using his own network ID.
But when he logs into Discoverer using his Discoverer username on any machine into which others have already logged in using their network IDs, he can run the reports successfully.

Please help me to resolve this issue.

Your help is highly appreciated.

Thanks




Thank you!

Marat Tolgambayev, December 13, 2010 - 9:39 pm UTC

Thank you, Tom!

minimizing upgrade downtime

A reader, March 05, 2011 - 11:50 am UTC

How would one achieve a minimal (ideally zero) downtime upgrade from 10g to 11g without using Data Guard/RAC/EBR?

Assume you can customize OUI to call your own scripts during the upgrade.
Tom Kyte
March 07, 2011 - 10:53 am UTC

Ok, so this is akin to asking "how can I juggle with both arms tied behind my back and a blindfold on?"

One question for you: why? Why do you have these restrictions.

A logical sensible way might be:

a) create a logical standby database (if you have enterprise edition, you have this capability)

b) upgrade the logical standby

c) switch over to it (seconds in duration)

And you are done...


Short of doing something like that (golden gate, streams, logical standby) - you are looking at an outage.

I'm not sure why calling a script from OUI (Oracle Universal Installer) would help.

minimal downtime

A reader, March 12, 2011 - 7:49 pm UTC

Just trying to explore if there are other alternatives to achieve minimal downtime!

Concerned.

Aru, March 17, 2011 - 11:29 pm UTC

Hi Tom,
We have chosen to use expdp/impdp for our 200GB, 10gR2 to 11gR2 database upgrade as we are migrating from one server to another and upgrading the OS at the same time.

My question is:
1) If the SYSTEM and SYS schemas are not exported and imported, how will the new 11gR2 database get all the information that exists in the SYS schema in the SYSTEM tablespace of the 10g database? For example: statistics, AWR baselines, etc... (I may be missing something obvious here).

2) We can also just export the few necessary users and import them. Which is the better way: full=y or schema-based?

Are we unduly worried? What really happens to the SYS and SYSTEM objects during an expdp and impdp, then?
Regards,
ARU.


Tom Kyte
March 18, 2011 - 7:42 am UTC

If it is the same base OS - even if you are upgrading it - I'd just do an in place upgrade. Lots less pain.


1) statistics should be gathered in the new database - data pump can bring them along, but if you are rebuilding everything, just gather afresh in the new instance - the data will be organized differently, it deserves new statistics. There are database packages to export/import AWR information as well.

But this does get me back to my original point, just in place upgrade the database and you don't even have to THINK about this stuff.

2) see my original point.


we don't move sys/system stuff with expdp. I'd recommend a regular old fashioned upgrade.

Install 11g
Restore the datafiles
Upgrade the database
Open it for business.

Thanks

Aru, March 20, 2011 - 4:45 pm UTC

Thanks Tom,

I am giving your suggestion of a 'good old upgrade' a go in my test environment now.

Also: it was a true privilege to attend your seminar in Auckland.

Regards,
Aru.

A reader, March 21, 2011 - 11:49 pm UTC

Install 11g
Restore the datafiles
Upgrade the database
Open it for business.

When you say restore the datafiles, do you mean copy the 10g datafiles into 11g? Don't you first need to transform them to be 11g compatible?
Tom Kyte
March 22, 2011 - 7:26 am UTC

If you were to upgrade the database on the existing machine -what would you do?

A) install 11g
B) use 11g to upgrade it


All we are doing here is

a) put datafiles in place so it LOOKS like the database has always been there
b) install 11g
c) use 11g to upgrade it


The datafiles are already 11g, 12g, etc compatible, they are Oracle datafiles - the higher version knows how to upgrade them.

A reader, March 22, 2011 - 10:38 am UTC

Hello Sir,

sorry for asking a dumb question

"put datafiles in place so it LOOKS like the database has always been there"

So where do we have to put the datafiles? In a different location from where they are currently located?

Thanks


Tom Kyte
March 22, 2011 - 11:33 am UTC

it would be easiest if the named file locations were the same - less to change.

If you put them in a new place, follow the directions of your backup tool to do a restore to a different location. RMAN documentation has a chapter on doing that if you are using that.

where could I post new question?

lila, September 15, 2011 - 9:17 pm UTC

Thanks a lot for the valuable information you provided at the Philadelphia conference, but it brought up a question I could not easily find an answer to:

What SQL command could be used to find out whether Oracle was upgraded from 10 to 11, or whether Oracle 11 was installed fresh without an upgrade?

Tom Kyte
September 16, 2011 - 2:12 pm UTC

I don't know of a reliable way, we don't really keep track of that.

Alexander, September 16, 2011 - 3:24 pm UTC

Try DBA_REGISTRY_HISTORY.
Tom Kyte
September 16, 2011 - 4:39 pm UTC

thanks - new views every release - that one was added in 11g :) appreciate the followup.
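
For example, something along these lines shows the recorded upgrade and patch actions (the exact column list may vary slightly by version):

```sql
SELECT action_time, action, namespace, version, comments
  FROM dba_registry_history
 ORDER BY action_time;
```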

OK

Rama, September 19, 2011 - 6:05 am UTC

Hi Tom,
currently I am using Oracle 10g Release 2 on Windows 7, a 64-bit OS. I want to upgrade to 11g Release 2.
Could you please point me to the OTN link for downloading the upgrade software for 11g Release 2?
Thank you
Tom Kyte
September 19, 2011 - 5:57 pm UTC

go to my oracle support - whatever URL is in fashion these days - that is where you get upgrades.

if you just want to download the base - unpatched - software from otn, the way to downloads is pretty clearly marked. there is a big old "downloads" tab right on the front page?

oracle

KUMAR, September 20, 2011 - 5:34 am UTC

I CREATED ONE TABLE ON ORACLE.I INSERT THE DATA ONE TIME IN TABLE.BUT THE CONTAIN FOUR RECORDS.WHY?
Tom Kyte
September 20, 2011 - 6:33 pm UTC

MAYBE BECAUSE YOUR CAPSLOCK KEY IS STUCK ON?

in any case, you get back from a table only that which you put in, so - given you do not show your work - I'll just have to say "you have made a mistake"

To: Kumar from India

A reader, September 20, 2011 - 12:07 pm UTC

Please do not use all uppercase. It does not raise the severity level of your question. In fact, it makes it harder to read your question and might actually reduce the chances of its getting answered.

Thanks...

Oracle DB Creation stop

Tariq, December 06, 2011 - 1:13 am UTC

Hi Tom,

I'm creating a database instance on 10g and Windows 2003 64-bit Standard Edition. The Database Creation Assistant hangs at the 2% mark. It creates the DB service and then can't connect to it.
I tried installing Oracle 11g on the same server and it worked successfully.

One more note: I have 32 GB of RAM on this server. Can this have an effect?


I'd appreciate any help. Thanks!

Oracle releases

Mark, January 11, 2012 - 3:55 pm UTC

Tom, I recall you once said something to the effect of "10gR1 is to 10gR2 as 9iR2 is to 10gR1". Do I have this correct? I ask because there seems to be some confusion among coworkers of mine.

In my view, 10gR2 should not be considered an "update" that replaces 10gR1 any more than 10gR1 did for 9iR2. In other words, for all intents and purposes (especially application testing), one should think of all instances of "xRy" as distinct, major releases. They each have a big set of new features, after all.

This page in the Oracle docs SEEMS to disagree with what I'm saying - http://docs.oracle.com/cd/B19306_01/server.102/b14238/intro.htm#i1008567 - but in my opinion, it is only a superficial disagreement in terminology. They say 10 is the major release number, 2 is the maintenance number, which makes it SOUND like 10gR2 is just a bunch of bug fixes to 10gR1. But we all know it's much more than that.

Do you agree? Thanks.
Tom Kyte
January 11, 2012 - 9:53 pm UTC

well, I don't remember saying exactly that - not saying I did or did not - I'd need a bit more context.


But it sounds about right to me.


In the past - we had 8.1.5, 8.1.6, 8.1.7 - and I'd call them all major releases (we called them 8i release 1, 8i release 2, 8i release 3). they had big differences in features.

Nowadays when we have a change in the 2nd digit - 11.1 to 11.2 - i would call that a major release. One only needs to look at the new features guide to see that.


Mark, January 12, 2012 - 10:42 am UTC

I had it mentally filed away as one of those useful tidbits I had learned from you once upon a time, but I could not find a source. It's possible my memory is wrong. :) At any rate, thanks for your opinion.

upgrade from 8i to 11g on diff OS

Nikhilesh, January 27, 2012 - 5:11 am UTC

Dear Tom,
We are upgrading from 8i (Windows 2000) to 11g (HP-UX) and the server box is also different. What we are planning is: on the new hardware and OS, install 11g, then export the full 8i database using exp and import it into the LIVE database using imp.
I would be grateful if you could tell me whether this is the best possible way to do it or whether there is another way. Is it safe to use EXP/IMP on production?

Thanks in advance.
Tom Kyte
January 31, 2012 - 4:54 pm UTC

how big is this database?

you would use the 8i exp tool and the 11g imp tool - that is a fully supported operation.
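
A sketch of that sequence (credentials and file names below are placeholders):

```shell
# on the 8i / Windows 2000 server, using the 8i exp tool:
exp system/<password> full=y file=full8i.dmp log=exp_full.log

# copy full8i.dmp to the HP-UX server in BINARY mode, then,
# using the 11g imp tool against the new 11g database:
imp system/<password> full=y file=full8i.dmp log=imp_full.log
```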

upgrade from 8i to 11g on diff OS

Nikhilesh, February 03, 2012 - 12:33 am UTC

Dear Tom,
Thanks a lot for your confirmation. The DB size is almost 50 GB. I tried it on the dev box and it worked perfectly. Just one query: the full export is exporting SYSTEM's tables as well.

. about to export SYSTEM's tables via Conventional Path ...
. . exporting table DEF$_AQCALL 0 rows exported
. . exporting table DEF$_AQERROR 0 rows exported
. . exporting table DEF$_CALLDEST 0 rows exported
. . exporting table DEF$_DEFAULTDEST 0 rows exported
. . exporting table DEF$_DESTINATION 0 rows exported
. . exporting table DEF$_ERROR 0 rows exported
. . exporting table DEF$_LOB 0 rows exported
. . exporting table DEF$_ORIGIN 0 rows exported
. . exporting table DEF$_PROPAGATOR 0 rows exported
. . exporting table DEF$_PUSHED_TRANSACTIONS 0 rows exported
. . exporting table DEF$_TEMP$LOB 0 rows exported
. . exporting table HELP 817 rows exported
. . exporting table SQLPLUS_PRODUCT_PROFILE 0 rows exported
. about to export OUTLN's tables via Conventional Path ...
. . exporting table OL$ 0 rows exported
. . exporting table OL$HINTS 0 rows exported

Is it OK? I read somewhere that export excluded SYSTEM tables.

Thanks for your help.
Tom Kyte
February 03, 2012 - 10:00 am UTC

50gb is pretty tiny.

it is just exporting stuff that needs to be moved over.

Duplicate object names in 10.2.0.4

Suresh. R, May 31, 2012 - 12:45 am UTC

Hi Tom,

Good morning:)
Thanks for your support..
I have encountered a bug in Oracle 10.2.0.4.

I saw one package body (PK1_BODY) in INVALID status.
So I tried recompiling it:

alter package PK1_BODY compile body;

After issuing this command it changed to VALID status, but it created another object (a package body with the same name), pk1_body.

Of the two objects, one is in Valid status and the other is in Invalid status.
The only difference I found is the upper and lower case of the object names.

The original object was in upper case and the duplicate (the second one created) was in lower case.

Please find the below details:

Oracle RDBMS
Version: 10.2.0.4
RAC: No

Could you please help me fix this issue?
Kindly revert if there are any concerns.
Thanks Very much for your support.

Regards
Suresh.R
Tom Kyte
May 31, 2012 - 2:09 am UTC

Of the two objects, one is in Valid status and the other is in Invalid status.
The only difference I found is the upper and lower case of the object names.


there is no bug here, you or someone you work with created that second package.


someone used a quoted identifier and created that second object in lowercase


create package body pk1_body ...

create package body "pk1_body" ....


the first will create PK1_BODY, the second will create pk1_body.


But, there is no bug here - you or someone you work with did this, not us.

Query up the created time for these objects and you can see when they were created; you'll see that the lower case one was not created at the instant you did the alter.



You can get rid of the lower case one just by quoting it.

drop package body "pk1_body";
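
To check those created times, a query along these lines (run as a suitably privileged user) shows both package bodies side by side:

select object_name, object_type, status, created
  from dba_objects
 where object_type = 'PACKAGE BODY'
   and upper(object_name) = 'PK1_BODY'
 order by created;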


Oracle upgrade to 11g

Jayadevan, June 22, 2012 - 12:42 am UTC

Hi Tom,
"Oracle Database 11gR2 Upgrade Companion (Version 2.70)" provides the query to find out the non-default initialization parameters -
col name format a30
col value format a60
set linesize 130
set pagesize 2000
SELECT KSPPINM "Name", KSPFTCTXVL "Value"
FROM X$KSPPI A, X$KSPPCV2 B
WHERE A.INDX + 1 = KSPFTCTXPN
AND KSPFTCTXDF <> 'TRUE'
ORDER BY 2;

Along with a few other parameters, I get db_block_size = 8192.
Isn't 8192 a default value for db_block_size?
Tom Kyte
June 22, 2012 - 7:17 am UTC

if you set it, it isn't "default" even if you set it to the default value. You didn't let it default.




sys%ORA11GR2> startup force pfile=initora11gr2.ora
ORACLE instance started.

Total System Global Area  267825152 bytes
Fixed Size                  1344344 bytes
Variable Size             213912744 bytes
Database Buffers           46137344 bytes
Redo Buffers                6430720 bytes
Database mounted.
Database opened.
sys%ORA11GR2> select name||'='||value from v$parameter where isdefault='FALSE';

NAME||'='||VALUE
-------------------------------------------------------------------------------
memory_target=268435456
control_files=/home/ora11gr2/app/ora11gr2/oradata/ora11gr2/control01.ctl, /home
/ora11gr2/app/ora11gr2/fast_recovery_area/ora11gr2/control02.ctl

<b>db_block_size=8192</b>
compatible=11.2.0.0.0
db_recovery_file_dest=/home/ora11gr2/app/ora11gr2/fast_recovery_area
db_recovery_file_dest_size=4196401152
undo_tablespace=UNDOTBS
optimizer_features_enable=11.2.0.2
db_name=ora11gr2
diagnostic_dest=/home/ora11gr2/app/ora11gr2

10 rows selected.

<b>now you see it....</b>
sys%ORA11GR2> !cat initora11gr2.ora
*.compatible='11.2.0.0.0'
*.control_files='/home/ora11gr2/app/ora11gr2/oradata/ora11gr2/control01.ctl','/home/ora11gr2/app/ora11gr2/fast_recovery_area/ora11gr2/control02.ctl'
*.db_block_size=8192
*.db_name='ora11gr2'
*.db_recovery_file_dest_size=4196401152
*.db_recovery_file_dest='/home/ora11gr2/app/ora11gr2/fast_recovery_area'
*.diagnostic_dest='/home/ora11gr2/app/ora11gr2'
*.optimizer_features_enable='11.2.0.2'
*.memory_target=256m
*.undo_tablespace='UNDOTBS'










sys%ORA11GR2> startup force pfile=initora11gr2_2.ora
ORACLE instance started.

Total System Global Area  267825152 bytes
Fixed Size                  1344344 bytes
Variable Size             213912744 bytes
Database Buffers           46137344 bytes
Redo Buffers                6430720 bytes
Database mounted.
Database opened.
sys%ORA11GR2> select name||'='||value from v$parameter where isdefault='FALSE';

NAME||'='||VALUE
-------------------------------------------------------------------------------
memory_target=268435456
control_files=/home/ora11gr2/app/ora11gr2/oradata/ora11gr2/control01.ctl, /home
/ora11gr2/app/ora11gr2/fast_recovery_area/ora11gr2/control02.ctl

compatible=11.2.0.0.0
db_recovery_file_dest=/home/ora11gr2/app/ora11gr2/fast_recovery_area
db_recovery_file_dest_size=4196401152
undo_tablespace=UNDOTBS
optimizer_features_enable=11.2.0.2
db_name=ora11gr2
diagnostic_dest=/home/ora11gr2/app/ora11gr2

9 rows selected.

<b>and now you don't....</b>

sys%ORA11GR2> !cat initora11gr2_2.ora
*.compatible='11.2.0.0.0'
*.control_files='/home/ora11gr2/app/ora11gr2/oradata/ora11gr2/control01.ctl','/home/ora11gr2/app/ora11gr2/fast_recovery_area/ora11gr2/control02.ctl'
*.db_name='ora11gr2'
*.db_recovery_file_dest_size=4196401152
*.db_recovery_file_dest='/home/ora11gr2/app/ora11gr2/fast_recovery_area'
*.diagnostic_dest='/home/ora11gr2/app/ora11gr2'
*.optimizer_features_enable='11.2.0.2'
*.memory_target=256m
*.undo_tablespace='UNDOTBS'

sys%ORA11GR2> 

Upgrade to 11g

Jayadevan, June 22, 2012 - 12:44 am UTC

Here is a list of all non-default parameters fetched by the query
log_archive_format             %t_%s_%r.dbf
dispatchers                    (PROTOCOL=TCP) (SERVICE=petracXDB)
db_create_file_dest            +DATA3
control_files                  +DATA3/petrac/controlfile/current.260.767505761
spfile                         +DATA3/petrac/spfilepetrac.ora
db_recovery_file_dest          +DATA4
control_files                  +DATA4/petrac/controlfile/current.256.767505761
__oracle_base                  /project/oracle/server/product
diagnostic_dest                /project/oracle/server/product
audit_file_dest                /project/oracle/server/product/admin/petrac/adump
__shared_io_pool_size          0
instance_number                1
thread                         1
processes                      1000
__db_cache_size                1023410176
db_recovery_file_dest_size     10485760000
compatible                     11.2.0.0.0
sessions                       1528
__large_pool_size              16777216
__shared_pool_size             2550136832
open_cursors                   300
__streams_pool_size            33554432
__sga_target                   3875536896
__pga_aggregate_target         6526337024
__java_pool_size               67108864
memory_target                  6878658560
db_block_size                  8192
audit_trail                    DB
remote_login_passwordfile      EXCLUSIVE
cluster_database               TRUE
undo_tablespace                UNDOTBS1
remote_listener                pet-cluster-scan:1521
db_name                        petrac

default value for parameters

Jayadevan, June 25, 2012 - 10:22 pm UTC

Hi Tom,
Thank you for clarifying that.
Jay

Upgrade/Migration from the OLD 9i with GG?

Andre, April 10, 2013 - 10:33 am UTC

Hi Tom,

I NEED YOUR ADVICE:
-------------------

In this thread above you had advised:

>>>
Followup March 7, 2011 - 10am UTC:

-- Ok, so this is akin to asking "how can I juggle with
-- both arms tied behind my back and a blindfold on?"

-- One question for you: why? Why do you have these
-- restrictions.

-- A logical sensible way might be:

-- a) create a logical standby database (if you have
-- enterprise edition, you have this capability)

-- b) upgrade the logical standby

-- c) switch over to it (seconds in duration)

-- And you are done...

-- Short of doing something like that (golden gate,
-- streams, logical standby) - you are looking at

<<<

+ + +

Well,

We have an old Win-2003 with 9i installed in 2006.

The local disks are failing and we got ORA-01578 / ORA-01110 corruption errors and had to recover.

There is no Data Guard or Stand-by installed.
Just hot backups done with SAP scripts.

With 9i no longer being supported (just extended support until July), I do not think we have the option to install 9i on a new machine to create a standby DB ... right?

The App had been already tested with 11gR2 and the TEST-Training platform is working OK.

Is the following a correct approach..?
(a) Install 11gR2 on a supplied new server which will most likely be AIX rather than Win-200x (this AIX is also a bit old decommissioned box)
(b) Install GG on both
(c) take SCN consistent schema export and import into 11gR2
(d) Sync the two with GG
(e) Perform extensive tests to ensure App users sign-off
(f) Switch over to the new platform

or:

(f2) Leave both platforms running as Primary + DR
(g) Deploy new platform - when it is available
(h) Replicate into this new target PROD with GG
(i) Switch over

With the above extended scenario - the old AIX box would only be used in the interim as a breathing space and serve as a DR platform should we get another disk failure.

Is this OK - or you would approach this in a different way?

Thanks
Andre
Tom Kyte
April 22, 2013 - 2:42 pm UTC

you could install 9i on another machine and create a standby HOWEVER, you cannot do rolling upgrades in 9i, that was new in 10g.


I would prefer to see you at 11gr2 personally - not 9i.

I'm confused by the two ideas above though - you cannot really do primary + DR with version 9 to version 11 - you can only do DR from like hardware/software to like hardware/software.

upgrade 9i into 11gR2 - cross platforms

Andre, April 23, 2013 - 9:46 am UTC

Hi Tom
- welcome back

You said
you could install 9i on another machine and create a standby HOWEVER, you cannot do rolling upgrades in 9i, that was new in 10g.


I would prefer to see you at 11gr2 personally - not 9i.

I'm confused by the two ideas above though - you cannot really do primary + DR with version 9 to version 11 - you can only do DR from like hardware/software to like hardware/software.

+

Let me clarify

(1) If users agree to several days of down-time, which will be used to migrate the database from 9i on Win-2000 to 11gR2 on AIX and to test that all is OK (BTW, such tests are their standard requirement), then:
(a) we could set up RMAN to clone the 9i database into a new platform
(b) upgrade DBF files into 11gR2
(c) connect Apps server to the new environment and let users test

(2) If however users would not agree to a downtime longer than within one hour - AND they would want to be able to test the new 11gR2 while still using the old 9i Win2000
then the solution seems to be with GG
(a) deploy GG on both servers - 9i Win and 11gR2 AIX
(b) clone the PROD 9i database into the new platform - this could easily be done with an SCN-consistent export, as the data content is under 25GB; the export takes 55 mins and the import on AIX takes a bit over an hour
(c) synchronize the two databases
(d) allow users to test to the fullest - and as long as they want
(e) execute switch over when there is a sign-off

My point about the interim - i.e. using a decommissioned AIX box - was that with GG deployed it could serve as a DR platform. In this case GG would be deployed instead of an Oracle Data Guard logical standby, as DG could not be deployed in this heterogeneous environment.

Am I missing something - or is this a good approach?

Thanks
Andre

Tom Kyte
April 23, 2013 - 1:14 pm UTC

1a) no, you are crossing endianness - you can transport tablespaces, but not databases. But even then, 9i didn't do cross-platform transport yet, so that isn't possible unless you upgrade 9i to 11g FIRST and then transport tablespaces (but not the entire database)

2) would work.




if you ask me, DR is never ever done in a mixed environment. You want that DR day to go smoothly, do you *really* want to switch platforms and have everything that will go wrong go wrong *that day*????

DR costs money, it just does. It isn't something you can do on the cheap.

and to me, replication is not DR. DR is supposed to be easy, failure-proof, simple. Failing over to a different platform with replication software isn't that.
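
For the SCN-consistent export in option (2), the usual shape is as follows (the sapr3 schema, passwords and file names are hypothetical):

on the 9i source, capture the current SCN:
  select dbms_flashback.get_system_change_number from dual;

export the application schema consistent as of that SCN (exp supports flashback_scn in 9i):
  exp system/manager owner=sapr3 flashback_scn=<scn from above> file=sapr3.dmp log=sapr3_exp.log

move the dump file to the AIX box and import with the 11g imp tool:
  imp system/manager fromuser=sapr3 touser=sapr3 file=sapr3.dmp log=sapr3_imp.log

GoldenGate capture would be started before the export SCN, so that changes made during and after the export are picked up and applied once the import completes.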

GG is not DR

Andre, April 24, 2013 - 10:30 am UTC

Hi Tom,

In your response
>>>>
-- 2) would work.

-- if you ask me, DR is never ever done in a mixed
-- environment. You want that DR day to go smoothly,
-- do you *really* want to switch platforms and have
-- everything that will go wrong go wrong *that day*????

...

-- and to me, replication is not DR, DR is supposed to be
-- easy, failure proof, simple. failing over to a different
-- platform with replication software isn't that.

<<<<

YES - I do understand and I am sorry for not explaining it better.

Let me try to do it better:

The most likely scenario is that the Group-IT infrastructure team will opt for AIX platform to host the newly upgraded Oracle 11gR2 from the existing 9i on Win2000

The rationale behind using GG is:
(a) very simple database - as it is hosting a tiny SAP App
(b) this single schema (under 25GB) can be transferred into AIX 11gR2 via exp+imp within a short period (under 2 hours)
(c) GG sync + on-going replication can be done easily (POC done without any issues)
(d) When the 2 environments are in sync - users can do various tests on AIX - reports and queries only to justify the sign-off
(e) If users want to use the target AIX to do some UPDATES via the SAP App then the GG replication would need to stop - and upon completion of such tests the process could be repeated (b) + (c)

(f) SWITCHOVER could then be done at any point when sign-off is obtained from the users

NOW - comes DR component - providing the budget is there

(1) A NEW AIX 11gR2 platform is deployed as the DR
(2) Oracle Data Guard is then used - most likely in Maximum Performance configuration

To clarify: GG will NOT be used here - only Data-Guard

If there is no budget for DR then we will just use RMAN, and we need to consider some spare platform into which any of the six SAP-Oracle Apps could be recovered should there be a major server meltdown - otherwise just DB recovery in case of data corruption or other problems.

GG would be used ONLY as an interim phase to facilitate migration from 9i into 11gR2 with minimal down-time.

Are you OK with this plan = or you still see some issues.?

Thanks
Andre

P.S.
Yes - RMAN could not be used to clone the entire database, but it could be used for these 2 tablespaces that store the actual SAP App data.

However - I thought that simple exp-imp would work just fine.


Tom Kyte
April 24, 2013 - 1:06 pm UTC

golden gate would be perfect for a conversion/migration. yes.
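
A minimal one-way GoldenGate sketch for such a migration (the process names, the ggadmin user and the sapr3 schema are all hypothetical, and a full setup also needs supplemental logging, ADD TRANDATA and the GGSCI ADD EXTRACT / ADD REPLICAT steps around it):

extract parameter file on the 9i source (ext1.prm):

EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
RMTHOST aixhost, MGRPORT 7809
RMTTRAIL ./dirdat/aa
TABLE sapr3.*;

replicat parameter file on the 11gR2 target (rep1.prm):

REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin
ASSUMETARGETDEFS
MAP sapr3.*, TARGET sapr3.*;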

GG to migrate / upgrade

Andre, April 25, 2013 - 9:24 am UTC

Thank you Tom for your confirmation

Andre


configure tools for grid failed

leo, June 27, 2016 - 9:09 am UTC

Hi Tom,

Sorry that I could not find a similar thread to my problem, so I posted here.

Basically my problem is with configToolAllCommands (the plug-in Automatic Storage Management Configuration Assistant has failed its perform method) after a successful installation.

I googled, and most of the results are about /dev/shm, which is not my case. Actually my log file did not say much - it just told me 'failed'.

Could you please help me out?

Thank you very much.


Below are the steps of my scenario.


I used the command below to install:
su - grid -c "bash /iso_rsp_repository/grid/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -showProgress -ignorePrereq -responseFile /iso_rsp_repository/grid_silent.rsp"


Below is the content of log file

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 38274 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-06-26_07-31-28PM. Please wait ...You can find the log of this install session at:
 /u01/app/oraInventory11g/logs/installActions2016-06-26_07-31-28PM.log

Prepare in progress.
..................................................   9% Done.

Prepare successful.

Copy files in progress.
..................................................   14% Done.
..................................................   21% Done.
..................................................   26% Done.
..................................................   31% Done.
..................................................   36% Done.
..................................................   42% Done.
..................................................   50% Done.
..................................................   56% Done.
..................................................   61% Done.
..................................................   66% Done.
..................................................   71% Done.
........................................
Copy files successful.

Link binaries in progress.
..........
Link binaries successful.
..........
Setup files in progress.
..................................................   76% Done.
..................................................   89% Done.

Setup files successful.
The installation of Oracle Grid Infrastructure 11g was successful.
Please check '/u01/app/oraInventory11g/logs/silentInstall2016-06-26_07-31-28PM.log' for more details.
..................................................   94% Done.

Execute Root Scripts in progress.

As a root user, execute the following script(s):
 1. /u01/app/oraInventory11g/orainstRoot.sh
 2. /u01/app/grid/11.2.0/root.sh


..................................................   100% Done.

Execute Root Scripts successful.
As install user, execute the following script to complete the configuration.
 1. /u01/app/grid/11.2.0/cfgtoollogs/configToolAllCommands RESPONSE_FILE=<response_file>

  Note:
 1. This script must be run on the same host from where installer was run.
 2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).


Successfully Setup Software.


I used the command below to run the script:
su - grid -c "/u01/app/grid/11.2.0/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u01/iso_rsp/grid/cfgrsp.properties"


Below is the console output:

Setting the invPtrLoc to /u01/app/grid/11.2.0/oraInst.loc

perform - mode is starting for action: configure


perform - mode finished for action: configure

You can see the log file: /u01/app/grid/11.2.0/cfgtoollogs/oui/configActions2016-06-26_07-55-39-PM.log



Below is the content of the log file
###################################################
The action configuration is performing
------------------------------------------------------
The plug-in Update Inventory is running


/u01/app/grid/11.2.0/oui/bin/runInstaller -nowait -noconsole -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true "CLUSTER_NODES={}" ORACLE_HOME=/u01/app/grid/11.2.0
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4021 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory11g

The plug-in Update Inventory has successfully been performed
------------------------------------------------------
------------------------------------------------------
The plug-in Oracle Net Configuration Assistant is running


Parsing command line arguments:
    Parameter "orahome" = /u01/app/grid/11.2.0
    Parameter "orahnam" = Ora11g_gridinfrahome1
    Parameter "instype" = typical
    Parameter "inscomp" = client,oraclenet,javavm,server,ano
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/grid/11.2.0/network/install/netca_typ.rsp
    Parameter "silent" = true
    Parameter "silent" = true
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0

The plug-in Oracle Net Configuration Assistant has successfully been performed
------------------------------------------------------
------------------------------------------------------
The plug-in Automatic Storage Management Configuration Assistant is running


The plug-in Automatic Storage Management Configuration Assistant has failed its perform method
------------------------------------------------------
The action configuration has failed its perform method
###################################################

Connor McDonald
June 28, 2016 - 1:03 am UTC

That's one for Support.

configure tools for grid failed - 2

leo, June 27, 2016 - 9:14 am UTC

Sorry, I forgot to mention that I also successfully executed the below 2 scripts:

/u01/app/oraInventory11g/orainstRoot.sh
/u01/app/grid/11.2.0/root.sh