
Question and Answer

Tom Kyte

Thanks for the question, Tony.

Asked: November 06, 2002 - 7:37 am UTC

Last updated: August 04, 2011 - 7:56 am UTC

Version: 8.1.7

Viewed 100K+ times!

You Asked

Hi,
1. I use Autotrace/Explain Plan to tune SQL. During this process, do I have to flush the shared pool (alter system flush shared_pool) every time before executing my query to find the correct response time? What will be the impact if I don't do so?

2. My production database is in dedicated server mode and has hundreds of concurrent users. Will the performance improve if it's changed to MTS mode?








and Tom said...

1) no, in the real world -- queries WILL be soft parsed -- never hard parsed once the system is running.

In fact, you need to run the query once without tracing to get the accurate response time -- the real world one!

2) I doubt it -- if you have my book "Expert One-on-One Oracle" -- all of the details are there. Basically MTS is *slower* than dedicated server. The code path is a lot longer. MTS is used when you run out of resources (processes, RAM, whatever) on the machine and can no longer get people logged in. Then you drop down to MTS to let more people connect at the same time.
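A minimal SQL*Plus sketch of that "run it once first" workflow from point 1 (the table name here is hypothetical):

set timing on
-- first run: hard parse, cold caches; discard this timing
select count(*) from big_table;
-- second run: the representative, real-world timing
select count(*) from big_table;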


Comments

What about the data? Is it fetched afresh every time?

Tony, November 07, 2002 - 12:03 am UTC

When I first execute a query, data for the query is fetched and kept in the buffer cache. Now I slightly modify the same query and run it. This time the query appears fast, as the required data is already in the buffer and need not be fetched afresh from disk. Hence, to know the correct performance of a query, we need to flush the data out of the buffer. I think you'll agree with my argument.




Tom Kyte
November 07, 2002 - 7:35 am UTC

Hence, in the real world you'll have many users, and data will either be buffered or not.

Your goal is to make the query

o use as little CPU time
o do as few consistent gets

as possible -- beyond that, it doesn't matter for testing whether the data is buffered or not. In the real world it may well (or may well not) be buffered. If you've minimized the CPU and consistent gets (logical IOs), there you go, you have it tuned.


(besides, flushing the shared pool would have no effect on physical io with respect to the query in question -- it would just make you hard parse again)
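For example, SQL*Plus autotrace reports exactly those statistics; a minimal sketch (the query and table are hypothetical):

set autotrace traceonly statistics
select count(*) from big_table;

In the statistics section, compare "consistent gets" and CPU across runs rather than elapsed time; "physical reads" will simply vary with what happens to be cached.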

flush

mo, December 12, 2002 - 5:37 pm UTC

TOm:

Do I need to log in as a system manager to flush? I am using a developer account.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges 

Tom Kyte
December 13, 2002 - 7:26 am UTC

you need to use an account that has the ALTER SYSTEM privilege, yes. I never use system...
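For reference, granting that privilege is a one-liner from a suitably privileged account (the grantee name is hypothetical; note that ALTER SYSTEM allows far more than flushing the pool):

grant alter system to dev_user;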

alter system

mo, December 13, 2002 - 11:57 am UTC

Tom:

Would this privilege be risky for a developer? It seems the DBA is worried about it. Can it be granted only for flushing the pool in a development instance?

Tom Kyte
December 13, 2002 - 1:00 pm UTC

DBA is correct to be worried.

Developer should be using this only on their own personal test instance they have running on their own personal machine (desktop).

This command should not be used on a system others are using -- no point in doing that.



flush

mo, December 13, 2002 - 3:10 pm UTC

Tom:

Well, how do you do query optimization and tuning if you cannot flush? Let us say, as in your book, you want to time a PL/SQL program with bind variables and without bind variables. You run the query, which puts it in the shared pool, so the second time your timing test would not be accurate: it is run from the cache (soft parse) rather than doing a hard parse, which is what happens the first time it is used in a production environment.


Tom Kyte
December 13, 2002 - 3:38 pm UTC

You do it on your OWN personal machine is how.

You do not do destructive testing like that on a shared machine.

It is as I said:


Developer should be using this only on their own personal test instance they
have running on their own personal machine (desktop).



flush

mo, December 13, 2002 - 4:02 pm UTC

Tom:

You misunderstood me. I want to learn from your book and apply the examples in real life systems.

What I am saying is: let us say I have a query in a real production system that is taking too long. So I set timing on in my development instance to measure exactly how long it will take the user the first time he runs it. I cannot do that unless I flush the shared pool, because all subsequent tests would be soft parses?

BTW, I do not know why you use "SET TIMING ON" and "dbms_utility.get_time" to get two time counts?

Thanks,


Tom Kyte
December 13, 2002 - 4:11 pm UTC

so, the hard parse will be a small, irrelevant fraction of the total runtime of the query.

Besides, in the real world, your end user is also doing a soft parse. In fact, you should run the query -- ignore the results and then run it again. That second run is what you want to work with.

You, the developer (anyone really), shouldn't be flushing the shared pool on a shared machine. Flushing the shared pool isn't necessary here -- ESPECIALLY in the case you describe. It would introduce artificial work that quite simply is not being done in the real world (that guy's shared pool in production is full of all kinds of useful stuff; if you empty yours, you have just influenced the test in the wrong way).

The examples in the book -- all done on single user machines. 99% of them do not have a flush shared pool. The ones that do -- showing you the effect of parsing thousands of literal sql statements without binds.


I generally do not set both timing on and use dbms_utility.get_time. get_time returns timing in hsecs (hundredths of a second), which is nice. TKPROF is the only source of truth for timing, though.
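A minimal sketch of timing a block with get_time (hsecs = hundredths of a second):

set serveroutput on
declare
    l_start number := dbms_utility.get_time;
begin
    -- the work to be timed goes here
    dbms_output.put_line( (dbms_utility.get_time - l_start) || ' hsecs' );
end;
/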


flushing

mo, December 13, 2002 - 4:46 pm UTC

Tom:
<the hard parse will be a small, irrelevant fraction of the total runtime of the query>

If this is correct why then use bind variables. I thought you always said to use bind variables to get better performance. Is it irrelevant?

<Besides, in the real world, your end user is also doing a soft parse. In fact, you should run the query -- ignore the results and then run it again. That second run is what you want to work with>

Not true if you are not using bind variables. The query could be hardcoded, so you are always hard parsing. Why also not worry about the first run, since the user will run it a first time? The shared pool can also get flushed by restarting the instance, so the user may be running it for the "first time" on several occasions?

thanks,


Tom Kyte
December 13, 2002 - 6:28 pm UTC

sigh, you haven't read the book then...

or, I didn't explain it very well.

don't know which one.

Think about it -- look at your question. You said "I have a query in a real production system that is taking too long"... Too long to me means more than 1 second. A query should parse in less than 1 second; your query is taking many seconds. As a pct of the run time -- that hard parse is NOTHING in your case (that is simple math, just think it thru. simple math)....

Now, parsing -- especially hard parsing -- takes LATCHES on the library cache, the shared pool (you know, those things I keep pointing out time and time again with runstats). Latches are locks, locks are serialization devices, serialization devices inhibit scalability.... Ok.


In the real world -- your end user IS doing a soft parse -- why? because you read my book, you followed the cardinal rule "use bind variables" -- hence, your user IS doing a soft parse -- ok?....

Don't mince words. Just remember this:

o you can flush the shared pool on your own personal instance running on your own personal machine.

o you cannot flush the shared pool on a shared instance, why -- because i said so. that is something for the DBA to do and I cannot even imagine why they would. You flush the shared pool to do runstats type testing but that CAN ONLY be done on a single user machine -- else all of the other users influence the latch counts and the test is USELESS

o in a system where you run the same query over and over and response times for that query are small (eg: 99% of all systems -- you are OLTP'ish) -- you WILL USE BIND VARIABLES. How many articles on this site have you personally read, Mo, that start with "my system is going really bad, lots of latching, ora-4031's, I have to flush the shared pool 15 times an hour, what's wrong"? The answer is always, constantly, consistently BIND VARIABLES. So, if you run many queries per second -- BIND VARIABLES = YOUR ONLY CHANCE OF SUCCESS.

o if you are building a data warehouse where you take MANY SECONDS per QUERY (the total REVERSE) you don't have to and may not WANT to use bind variables. Here you have used histograms and other features and you want the optimizer to come up with different plans for the query:

select * from t where x = 5;

then for

select * from t where x = 6;

one should use an index, the other a full scan. if you use bind variables, that isn't going to happen.
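To make the literal-vs-bind distinction concrete, a small sketch using the table t from the example above (the values are hypothetical):

declare
    l_cnt number;
    l_val number := 5;
begin
    -- literals: a distinct statement, and a hard parse, for every value
    execute immediate 'select count(*) from t where x = ' || l_val into l_cnt;

    -- bind variable: one shared statement, soft parsed on reuse
    execute immediate 'select count(*) from t where x = :x' into l_cnt using l_val;
end;
/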

Hard Parse/Soft Parse

Sormi, December 14, 2002 - 6:00 am UTC

Would you please explain Hard Parse and Soft Parse?

Is it practical?

Tony, December 14, 2002 - 6:24 am UTC

Tom, you say that we need to use a personal, single-user database to tune queries. Here in our company, around 50 developers are working on a project. Is it advisable/feasible to install Oracle on all 50 machines? What if the centralized development database and the developers' client machines have different configurations? Won't the difference in configuration give different results? Just curious to know.





Tom Kyte
December 14, 2002 - 8:28 am UTC

Ok, let me be a little more clear.


I have a database on every computer I work on. It is a scratch database. If it gets toasted, big deal. I run DBCA and create a new one. This database is what I use to test different approaches on (mini snippets of code). I use things like runstats
http://asktom.oracle.com/~tkyte/runstats.html
on it (something that you CANNOT use on a multi-user machine). I do destructive things on it that I would not want to do on a development instance (the "what if's" of the world). I feel free to play with any system setting I want. I can do whatever.

(as a side note, I've been taking polls at seminars. Did you know -- informally -- almost every DBA started as a developer? Wonder how they got to know the workings of Oracle if they didn't run Oracle. Just a thought for the developers aspiring to be DBAs some day: how much of a leg up would it be if you actually ran the database, even in this modest fashion....)


Do I develop on this machine? Typically no. I use it to test ideas; sometimes I export the system I'm working on and import it locally to work on it or tune some really gnarly queries (you know -- if you can get them fast on your laptop, they generally go even faster on your big box). But at the end of the day, the developed code doesn't exist until it is running on the development box with all of the other code written by the other people in that environment.

Then there is the TEST box, which must be upgraded using the upgrade scripts produced from the DEV box. Then there is PROD, which uses these fully tested upgrade scripts....


So, the point I think most are missing in this thread is that the database on the developer's machine doesn't count. It is just there for learning, testing, trying out different approaches, finding out what works and what doesn't -- and breeding the next generation of DBAs.


Licence issue

Tony, December 15, 2002 - 2:24 am UTC

I agree with your argument Tom, I've some practical issues.

1. My company doesn't allow developers to install Oracle on all developers' machines due to licence issues. Is there really any licence issue for Oracle used for development purposes?

2. All databases on all developers' machines would have to be kept in sync with the centralized development database, as a large number of objects are added frequently.



Tom Kyte
December 15, 2002 - 10:23 am UTC

If you go to otn.oracle.com and click on download/database and read the license (as you must before you download), you'll find:

<quote>
License Rights
We grant you a nonexclusive, nontransferable limited license to use the programs only for purposes of developing and prototyping your applications, and not for any other purpose. If you use the applications you develop under this license for any internal data processing or for any commercial or production purposes, or you want to use the programs for any purpose other than as permitted under this agreement, you must contact us, or an Oracle reseller, to obtain the appropriate license. We may audit your use of the programs. Program documentation is either shipped with the programs, or documentation may be accessed online at http://otn.oracle.com/docs

Ownership and Restrictions
We retain all ownership and intellectual property rights in the programs. The programs may be installed on one computer only, and used by one person in the operating environment identified by us. You may make one copy of the programs for backup purposes.

You may not:
·use the programs for your own internal data processing or for any commercial or production purposes, or use the programs for any purpose except the development and prototyping of your applications;
·use the applications you develop with the programs for any internal data processing or commercial or production purposes without securing an appropriate license from us;
·remove or modify any program markings or any notice of our proprietary rights;
·make the programs available in any manner to any third party;
·use the programs to provide third party training;
·assign this agreement or give or transfer the programs or an interest in them to another individual or entity;
·cause or permit reverse engineering or decompilation of the programs;
·disclose results of any program benchmark tests without our prior consent; or,
·use any Oracle name, trademark or logo.
</quote>

grab it, use it. just don't deploy with it.


2) the database on the developers machine would NOT be the database to do your primary development on -- it is there to test with, play with, do things like runstats with. Sure, you'll export DEV objects every now and again to test/add new functionality -- but the code isn't code until it is checked into the development environment.

FIRST_LOAD_TIME

Pushparaj Arulappan, July 16, 2003 - 2:45 pm UTC

Tom,

We flushed the shared pool early this morning at 3:00 am, when there was no activity on the database.

But today I queried the V$SQLAREA view and found that many SQLs had a first_load_time from earlier dates.

I had thought "alter system flush shared_pool" would erase all the SQL from the shared pool, and hence the V$SQLAREA view would only show SQL loaded after the flush.

SELECT DISTINCT FIRST_LOAD_TIME FROM V$SQLAREA
/
2003-07-13/15:11:41
...
2003-07-14/07:40:49
...
2003-07-15/07:40:49
...
2003-07-16/07:40:49

Please correct me.

Tom Kyte
July 16, 2003 - 4:15 pm UTC

it cannot get rid of everything. flush flushes what it can. sql being used -- stays.
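One way to see what is explicitly kept -- a sketch (v$db_object_cache also shows what is merely in use via its locks/pins columns):

select owner, name, type
  from v$db_object_cache
 where kept = 'YES';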

flush shared pool

Venkat, October 07, 2003 - 7:01 am UTC

Tom,

When does v$sqlarea get truncated? How does this table grow in a 24x7 environment? When the instance has been running for quite a long time, how do we find the latest sql statements in this table?

Regards,
Venkat

Tom Kyte
October 07, 2003 - 8:06 am UTC

it never gets "truncated", it is not even a "real table" -- it is a data structure, it is the shared pool!

suggest you use v$sql -- v$sqlarea is just a (slower performing) aggregation of v$sql.

v$sql has a first_load_time you might use to find "latest"
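For example, a sketch (FIRST_LOAD_TIME is a string in the form YYYY-MM-DD/HH24:MI:SS, so it sorts correctly as text):

select first_load_time, sql_text
  from v$sql
 order by first_load_time desc;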

License Issue - Please Clarify

Sunitha, February 02, 2004 - 4:15 am UTC

Hi Tom,
You remarked:
"grab it, use it. just don't deploy with it."

Does this mean that for an organisation providing solutions by building applications for clients, the OTN license and software are sufficient for development, and that only the client (who will be hosting the application) needs the licensed software (not the OTN one)?
Please correct me if I have understood wrongly.

Thanks.


Tom Kyte
February 02, 2004 - 7:50 am UTC

look above the "grab it" comments. the license is available on otn as well.

New Developer License on OTN

Dale Ogilvie, March 05, 2004 - 5:31 am UTC

The otn license seems to have changed a bit since your reference; it now says:

Ownership and Restrictions
We retain all ownership and intellectual property rights in the programs. The programs may be installed on one computer only, and used by one person in the operating environment identified by us. You may make one copy of the programs for backup purposes.

You may not:
- use the programs for your own internal data processing or for any commercial or production purposes, or use the programs for any purpose except the development of a single prototype of your application;
- use the application you develop with the programs for any internal data processing or commercial or production purposes without securing an appropriate license from us;
- continue to develop your application after you have used it for any internal data processing, commercial or production purpose without securing an appropriate license from us, or an Oracle reseller;

In our development setup we have a database on a server that is shared by all the developers in the dev team, for deploying stored procs & new tables for our web application. Once testing is complete we then roll out our app and associated oracle objects to production, for which we have paid up oracle licenses. Does the dev license cover this shared server in development, or even the scratch databases on the individual developer boxes? Obviously no commercial work other than development is done on this dev server.

I find it hard to believe that we would have to purchase production-class CPU licenses for every development/test server, but a plain reading of the new license would seem to suggest this, and possibly even rule out the individual scratch databases you have mentioned on this site. I certainly would have difficulty matching up my own scratch database with the requirement for "a single prototype". Now, I don't know exactly how our Oracle setup is licensed by management, but I'm trying to justify going to a second development db server so we can mirror production (RAC) more closely; if we have to fork out extra licence fees for a two-node dev setup, I'm going to have a hard road.


Oracle development licensing

Dale Ogilvie, March 10, 2004 - 6:31 pm UTC

Here's what I think the situation is regarding the current OTN license and development. This is just my take on the situation, talk to Oracle for the real story as it applies to YOU.

You can only develop "a single prototype of your application" under the otn license. I think this relegates the otn license to evaluation only. For general development work you need to go to the Oracle Personal license, which gives you, and only you, carte blanche on a single machine. This is the dev sandbox license, I think.

A development database server used by a team of individuals is not covered under the otn or the oracle personal licenses and therefore would have to be covered under an Oracle processor or named user license.


first_load time

A reader, April 15, 2004 - 12:15 pm UTC

Hi

I have to flush quite frequently because the application provider is not using bind variables.

The problem is that after flushing the shared pool I still see SQL from 2 months ago!!!! How can that be??? The database is almost idle at night, and I flush during that time!

Also, the database uses cursor_sharing=FORCE, but in v$sql there are tons of child cursors with the same parsing id and parsing schema... How so???

Oracle 8.1.7.4 on Solaris 64 bits

Tom Kyte
April 15, 2004 - 12:56 pm UTC

You can use v$sql_shared_cursor to see why you have child cursors.
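A sketch of that check (the exact columns of v$sql_shared_cursor vary by release; each 'Y' flag names a mismatch that forced a new child cursor, and the sql_text filter below is just a placeholder):

select c.*
  from v$sql_shared_cursor c, v$sql s
 where c.address = s.child_address
   and s.sql_text like 'SELECT /* target */%';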

if you are using cursor_sharing=force, why are you still flushing?

os flush

A reader, April 15, 2004 - 10:46 pm UTC

Can we flush the shared pool from an OS command?


Tom Kyte
April 16, 2004 - 7:18 am UTC

echo 'alter system flush shared_pool;' | sqlplus -s /


but, if you are doing that, you really have a problem.....

first_load time

A reader, April 16, 2004 - 3:17 am UTC

You asked why I flush the shared pool when I am using cursor_sharing=FORCE. I would like to know too! I do it, but it does not free the shared pool at all! After flushing, I query v$sql and still see sql statements from two months ago, when I was expecting a clean library cache (and I do this when the instance is idle).

If I don't flush, I get ORA-04031 constantly (yes, even with cursor_sharing set to FORCE -- which is why I asked why I have so many child cursors even with this parameter set).



Tom Kyte
April 16, 2004 - 7:34 am UTC

flush flushes only that which is able to be flushed.

you will never see a totally clean library cache. Those "two month old" sql statements must still be "in use". Time of first parse is meaningless -- they are still being used now, today.


have you checked v$sql_shared_cursor to see WHAT is different about them (and have you sized your shared pool for the concurrent workload you are doing)?

first_load time

A reader, April 16, 2004 - 7:57 am UTC

What can be "still in use"? I mean, there is only my session in the database when I issue flush shared pool; no other users are executing queries (the application server is shut down).

I am looking at v$sql_shared_cursor to see what the differences are.

cheers

Tom Kyte
April 16, 2004 - 9:06 am UTC

look in v$sql, there are lots of columns there telling you what is there and why. Also, someone could have pinned (kept) those objects.

echo 'alter system flush shared_pool;' | sqlplus -s /

A reader, April 27, 2004 - 11:16 am UTC

what do u meant by having a problem

Tom Kyte
April 28, 2004 - 12:54 pm UTC

(your keyboard appears to be failing, vowels have gone mysteriously missing)

if you are flushing the shared pool.

on a recurring basis.

you have a bug in your code..... it is not using bind variables and you are parsing like mad and you are the cause of your own problems....

so, if you need a way to do it from the OS, you must be trying to script a recurring flush, therefore you have no bind variables.

Flush pool runs every 30 minutes

Branka, May 27, 2004 - 2:46 pm UTC

Our senior DBA is running a flush of the shared pool every 30 minutes. What documentation can I send to my manager to prove that it is wrong?

Tom Kyte
May 27, 2004 - 8:38 pm UTC

ask SR DBA to educate you as to why it is "right"

we don't document things like "please don't use a lawn mower to trim hedges, you might cut your fingers off"

A reader, May 27, 2004 - 11:43 pm UTC

LOL!

A reader, May 27, 2004 - 11:47 pm UTC

To Branka from VA, USA: just send a note to your manager saying your Sr. DBA is an idiot, and link to Tom's website. :-)


Flush pool runs every 30 minutes

Branka, June 16, 2004 - 9:26 am UTC

The Sr. DBA gave me the following answer:
"
I had a situation some time ago that unused sql was fragmenting the shared_pool. After consulting with Mike Ault, one of the leading authorities on Oracle, he said it usually is better in that case to periodically flush the shared pool. The extra re-parsing required for the frequently used sql is minimal. I usually keep all the packages in memory to allay some of that.

"

Tom Kyte
June 16, 2004 - 12:55 pm UTC

the extra re-parsing is HUGE.

"some time ago", hmmmm... some time ago.... in a galaxy far far away... (star wars theme building up....)


I always say "Question Authority". Question me, question them. Question everyone.

this is a standard operating procedure that

o negatively impacts performance
o is totally un-necessary
o negatively impacts performance
o could be killing your performance
o causes a large cpu spike every half hour
o causes a slowdown every half hour

the proper correction for the problem outlined would have been:

a) make the shared pool smaller
b) fix the bind variable problem you must have been having.


A reader, August 03, 2004 - 10:04 am UTC

Hi Tom...
We are using Siebel CRM version 7.5. We got an ORA-4031 error, and here is the response from Siebel support...

*************************************


When attempting to get shared buffers, Oracle does the following:

If LARGE_POOL_SIZE is set, then Oracle attempts to get memory from the large pool. If this value is not large enough, then Oracle does not try to get buffers from the shared pool.

If LARGE_POOL_SIZE is not set, then Oracle attempts to get memory from the shared pool. Fragmentation of the shared pool can lead to memory not being allocated. From the screenshot of the behavior in your environment, it does appear that memory could not be allocated from the Shared pool. This could be because the Shared Pool size is not optimal. This will depend on the Production environment Transactional requirement and should be carefully set after consultation from the DBA. A DBA should adjust this value upward based on the available physical memory of the hardware and performance, whether connections are dedicated, or run Multi-Threaded Server (MTS) and the application type. You may want to increase the Shared pool size (for example to 300000000; please note that as mentioned this will depend on your specific environment requirements)

Siebel eBusiness Applications make heavy demands on the dictionary cache for columns. In Oracle, you cannot explicitly set the size of the column cache. Instead, column cache is set as a fixed percentage of the shared pool size.


By setting a large shared pool size, you set a large column cache size. The number of repositories active in your Siebel schema also adds to dictionary overhead since Siebel eBusiness Applications maintains a record for each column in each table for each repository. As a result, if you have six active repositories, the Siebel dictionary will be six times larger than it needs to be.


I suggest to kindly monitor the Shared Pool Size and consider setting the Large pool size or increasing the Shared pool size

**********************************


Any comments..
I would appreciate your response

Thanks




Tom Kyte
August 03, 2004 - 10:09 am UTC

are you using shared server connections or dedicated server?

select server, count(*) from v$session group by server;




A reader, August 03, 2004 - 10:19 am UTC

Thanks for quick response

We are using dedicated server. Siebel does not support the MTS configuration.

SQL> select server, count(*) from v$session group by server;

SERVER      COUNT(*)
--------- ----------
DEDICATED        235

Thanks 

Tom Kyte
August 03, 2004 - 10:22 am UTC

then I don't know why they even mentioned it.

the large pool is used:

o by parallel query, for PQ messages
o rman backup buffers
o storing the UGA of shared server connections

Oracle does not use the large pool for "shared buffers" -- there is no such thing as a "shared buffer", that is a made up term.


is the application making use of bind variables? (if not, setting the shared pool higher and higher just pushes the wall back a bit -- but when you hit it, you'll be moving even faster -- meaning you really will smack into it good and hard)
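A quick check of whether the large pool is even configured and being used:

select pool, name, bytes
  from v$sgastat
 where pool = 'large pool';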

Regular Flushing

Richard, April 09, 2005 - 4:39 am UTC

There are books out there, right now, which advocate (strongly advocate) flushing the Shared Pool regularly (i.e. every 30 minutes, or so) in a mixed ad hoc & reuse environment. To be more precise, the advice given in one book states that it's best to:

(a) pin the reusable packages
(b) establish a *comfort zone* above that
(c) regularly flush the Pool, to rid it of the ad hoc SQL
that (theoretically) won't be used again.

This technique is, supposedly, a real winner.
The point about pinning certainly seems sensible to me.
However, I have doubts about the regular flushing advice. Surely, even if a small percentage of the ad hoc SQL turns out to be reusable, such regular flushing would effectively nullify the entire purpose of the Shared Pool, because it's far worse to parse (a big deal which could involve disk I/O) than simply *overwrite* existing SQL in memory (which I'd have thought was a super-fast operation anyway)?

Is there *any* sanity in flushing regularly in a Production database?

Tom Kyte
April 09, 2005 - 7:47 am UTC

care to name the book? that cannot be construed as a bad thing on your part, I mean someone wrote it and published it.

that technique is a sure way to

a) increase the need to buy more hardware
b) introduce a huge spike in processing every N minutes
c) introduce large shared pool contention every N minutes, reducing response times




The Book

Richard, April 09, 2005 - 8:54 am UTC

The book is "Mike Ault's Oracle Internals Monitoring & Tuning Scripts"
ISBN 0-9727513-8-6, (Rampant Press).

I don't know if you recall (I expect not), but I posted something on AskTom a short time back describing how I'm currently putting myself through a fairly intensive self-study program. Although I was an Oracle DBA at my last place of work, I was always very conscious of my shortcomings, and so I've taken the opportunity to become as knowledgeable as I can before applying for another Oracle DBA job.

I bought the above-mentioned book in good faith (I know all about DKB, but had respect for Mr. Ault, so it being a Rampant Press book didn't frighten me off per se) and am now slightly worried that if it contains other *gems* such as periodically *cleansing* the Shared Pool, then I may

(a) learn rubbish
(b) waste my time
(c) have wasted my money

I've not actually read the book in full, as it was to form part of my Tuning study in 2 weeks' time. I only came across the Shared Pool item because I am doing a lot of cross-referencing as part of my study.

Tom Kyte
April 09, 2005 - 9:03 am UTC

Periodically flushing a shared pool might be something you do in an emergency. But if any Oracle expert were to suggest it to me as the correct and appropriate course of long-term action, well... I don't know what to say there.

I don't have the book, so I'll reserve any other comments, I cannot read the entire thing in context.

I perhaps should have added: Page 76 "Guideline 2"

Richard, April 09, 2005 - 8:57 am UTC


Tom Kyte
April 09, 2005 - 9:06 am UTC

I'm curious -- did he give any factual evidence, any "reasoning" for this?

Richard, April 09, 2005 - 9:41 am UTC

To paraphrase (rather than quote, as I'm really not too sure about the legality of quoting verbatim from a book of hard-earned *insights*):

<Pseudo Quote>
If the Shared Pool contains SQL that cannot be reused, then future SQL (similar or not to anything in the Pool) will always be compared to the SQL in the Pool (actually the hash values will be compared).

It's this comparison of hashes that can affect performance, if nothing of use is in the Pool.

i.e. if the Pool is currently full of SQL that won't ever be reused, then whenever a new SQL statement is issued, all of the existing garbage SQL will still have to be examined, with no beneficial net result; just wasted resources.
</Pseudo Quote>

That's pretty much the gist of it (I have striven to be accurate in my interpretation of the Author's words).

The book does have 3 charts which are interpreted as meaning that performance increases when *flushing* is used. Also, the book states that performance gains of up to 50% have been observed when employing *flushing*.

Tom Kyte
April 09, 2005 - 12:05 pm UTC

It would be fascinating to see that QUALIFIED, wouldn't it?

Well, we could even measure it, tkprof would show us time spent parsing.

The time spent RE-HARD parsing the good sql (you see, you would not only have to "pin packages", you would have to then execute all of the good sql, find it, and pin it) would more than offset, by many orders of magnitude, any perceived payback from not having to search for a hash.

The wasted resources here would be:

a) increase the need to buy more hardware
b) introduce a huge spike in processing every N minutes
c) introduce large shared pool contention every N minutes, reducing response
times

(that is provably true)

Forget tkprof - I'm taking no chances: the book's now in the bin.

Richard, April 09, 2005 - 1:38 pm UTC


Oh, dear

Doug, April 09, 2005 - 7:49 pm UTC

Gosh, I have to be careful what I post - only yesterday I said this in another thread :-

"I'm afraid there have been a couple of sites where, rather than wade through the nightmare that is a particular database's shared pool problems (application-generated, of course!), I've gone with flushing the shared pool periodically to get around those problems. I'm fully aware that this isn't the best approach, and I wouldn't write articles suggesting others do this ..."

Then I read the book's suggestions summarised above. Just to be clear, I would *never* recommend this as a strategy (!), just that I've felt forced into it on a limited number of occasions. When I said periodically, it was the old 'do it at night when it's quiet' period and was only a band-aid fix for a troublesome application.

Tom Kyte
April 09, 2005 - 8:02 pm UTC

I thought about you when reading this. I understood what you meant, that it is not the permanent solution.

I've been there, when you need do this (usually because someone kept saying 'make the shared pool bigger' so now we have a N gig shared pool and when the wall is hit, it is hit brain smacking *hard*). I'd rather have made it smaller (let them age out faster, easier to manage) while fixing the underlying issue.

but for an ad-hoc system, to "not have to look at a hash", if that is what was stated, no.

Interesting

A reader, April 10, 2005 - 12:09 pm UTC

Well, I have been through this issue.

Once I was called and told that a customer was doing a data load using "Blue Martini"'s data load utility, and that when they started the process it was loading about 8000 records per minute, but it had since come down to about 2000 records per minute. They had to load millions of records!

Well, when I checked their database and the SQL the application was using, I found that
a) their shared pool size was 800 MB.
b) the app was not using bind variables.

That made me believe that since their shared pool was full, any new SQL had to be compared to 800 MB worth of SQL, and that's where more time was going than in actually executing the SQL. At that time I flushed the shared pool, and immediately afterwards the customer noticed that their process was loading again at 8000 records/minute. After about 10 minutes the speed went down to 5000 rec/min, and after about 25 minutes it was about 2200 rec/min.

At that time, I had two suggestions:
1. Reduce shared pool size since app is not even using it the way its supposed to be.
2. flush the shared pool periodically

Since (1) was not an option (it required bouncing the instance in 8.1), I made a cron job to flush the shared pool every 20 minutes.

I think I also learnt that day that oversizing the shared pool can sometimes kill the system.

Tom Kyte
April 10, 2005 - 12:22 pm UTC

during the flushing of the shared pool, what happened there as well.... (how long did the flush take?)

But this is the extreme, atypical, one-time case. (cursor_sharing=force for the load tool would have been infinitely better; you probably would have seen a many times INCREASE in performance. In fact, I know it.)

If this had been your system to run this sort of stuff for the next two years, you would not have set up a script to periodically flush the shared pool I hope?


Test, multi-cpu machine (yup, i use them too from time to time).... varied from 1 to 10 users simultaneously doing their inserts.

Single user is probably the only thing of interest here as that is what you had.


Using NO BINDS, you would have used (on this machine) 4 times as much CPU to parse (probably even more, as you hit the wall not on comparing everything to the SQL in the 800 MB shared pool, but rather in MAINTAINING and FLUSHING stuff from an 800 MB shared pool) as you would with cursor_sharing=force.

It would have run many times faster just with cursor_sharing in this case. But a data load utility that doesn't bind for *any* database (all of them like binds) is not much of a load tool.



Flush shared pool

Vinayak, April 10, 2005 - 1:29 pm UTC

Tom, you should take some rest at least on Sundays.

during the flushing of the shared pool what happened there as well.... (how long did the flush take)
-- I don't remember exactly, but it took a couple of seconds.

But this is the extreme, atypical, one time case. (cursor_sharing=force for the load tool would have been infinitely better, you probably would have seen a many
times INCREASE in performance. In fact, I know it.
-- Yes, you are right. I didn't know at that time.

If this had been your system to run this sort of stuff for the next two years, you would not have set up a script to periodically flush the shared pool I hope?
-- This was only one-time load job.
-- Actually in my previous job, I'd seen very similar behaviour and at that time, they were using 7.3 (it was in 1998) and since there was no cursor_sharing parameter at that time, we had to flush shared_pool and that was when my mindset was just set to use flush shared pool.


Test, multi-cpu machine (yup, i use them too from time to time).... varied from
1 to 10 users simultaneously doing their inserts.

Single user is probably the only thing of interest here as that is what you had.


Using NO BINDS, you would have used (on this machine) 4 times as much CPU to
parse (probably even more, as you hit the wall not on comparing everything to
the SQL in the 800 mb shared pool, but rather in MAINTAINING and FLUSHING stuff
from an 800 mb shared pool) as you would with cursor_sharing force.
--- just a hypothetical question: the shared pool was 800MB, and performance went down from 8000 rec/min to 2000 rec/min in about 25-30 minutes. Had the shared_pool been only 80MB (10 times less), would the performance impact have been as bad?

It would have run many times faster just with cursor_sharing in this case. But
a data load utility that doesn't bind for *any* database (all of them like
binds) is not much of a load tool.
-- That's true. In Blue Martini I'd seen several other instances where the code was written very, very badly. 'tkprof' was a real help in finding that. In fact, I was able to make some of their processes 40 times faster.




Tom Kyte
April 10, 2005 - 1:43 pm UTC

(hey, I'm working on the 2nd edition of expert one on one and this provides a much needed distraction :)

Waiting

Vinayak, April 10, 2005 - 10:08 pm UTC

<<hey, I'm working on the 2nd edition of expert one on one and this provides a much needed distraction >>

I am eagerly waiting for that book.

does flush flush db buffers?

Doug, August 22, 2007 - 4:02 pm UTC

Tom.. I have a query that users are complaining takes 5 minutes. I run it in a test instance and, sure enough, the first time it runs it takes about 5 minutes. I run it again, however, and it's more like 18 seconds. So I figured it was cached data, and I flushed the shared pool -- and it STILL runs in 18 seconds. You mentioned a pool flush doesn't flush everything; could it be leaving a lot of data around? (Maybe this is at the OS level.) I'm using 9.2.0.4 on AIX 5.2.
Tom Kyte
August 23, 2007 - 10:46 am UTC

shared pool doesn't have block buffers, it has query plans.


you could flush the buffer cache in 10g - but it almost always is pointless. Because most people use a buffered file system and a physical IO isn't a physical IO the second time around.

In 9i you can alter tablespace T offline/online, that'll flush that tablespace from the buffer cache - but - unless you are using RAW or forcedirectio mounted disk - it likely won't do anything as the OS will read from ITS cache.
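For reference, sketches of both commands (the tablespace name is hypothetical):

-- 10g and later
alter system flush buffer_cache;

-- the 9i workaround
alter tablespace users offline;
alter tablespace users online;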

Buffer Cache

Vikram, September 13, 2007 - 6:17 pm UTC

Hi Tom,

I could not quite understand your comment.

I am facing the same issue with the buffer cache: the first time around, the query takes a lot of time (1-2 minutes), and from then on it takes much less time (less than 3 seconds or so).

Now, can this query be considered good given the fact that it is running fine except for the first time?

Regards,
Vikram
Tom Kyte
September 15, 2007 - 8:16 pm UTC

which comment, this is a big page.

can what be considered good???

flushing shared pool when dropping partitions

Sid, September 17, 2008 - 10:55 am UTC

I have an Oracle database that has ended up with 5000 partitions on a table. There is a program to drop these partitions in a loop, but it ends in an ORA-4031 after dropping about 200 partitions.

I increased the shared_pool (a stupid solution); same results.

So I flushed the shared_pool at the start, same results. Now my solution is to keep flushing shared_pool while the program is running. Any other insights?

Tom Kyte
September 17, 2008 - 11:53 am UTC

sounds like a possible leak, sounds like it might be limited to a call duration.

suggestion: instead of dropping all 5k in a single loop, break the loop up into multiple calls - write a procedure that drops, say, 25 partitions and then exits. Then schedule this job Number_partitions/25+1 times - each will be a separate call, running in the background.
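A minimal sketch of that suggestion (the table name, partition-selection predicate, and schedule are all hypothetical; remember to remove the job once the partitions are gone):

create or replace procedure drop_partition_batch
as
begin
    -- drop at most 25 partitions per call, keeping each call short
    for x in ( select partition_name
                 from user_tab_partitions
                where table_name = 'BIG_TABLE'
                  and rownum <= 25 )
    loop
        execute immediate
            'alter table big_table drop partition ' || x.partition_name;
    end loop;
end;
/

-- then schedule it as repeated, separate calls, e.g. with dbms_job:
variable n number
exec dbms_job.submit( :n, 'drop_partition_batch;', sysdate, 'sysdate + 5/1440' )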

Yes..But

Sid, September 17, 2008 - 12:38 pm UTC

Of course, that is very evident. What I am interested in finding out is why Oracle cannot flush the shared pool in between and always ends up fragmented (there is no other activity on the system apart from a number of select statements (no bind variables)) after dropping some 200 partitions, even if it starts with a clean slate (else it abends even earlier).

The database in question is Oracle 10g, and I was wondering why it could not have been more efficient and realized that the shared pool was so fragmented it needed a cleanup.
Tom Kyte
September 17, 2008 - 1:14 pm UTC

I told you my theory in line #1 above. I would suspect a leak, for the duration of the call. EG: it shouldn't have happened.

Looks like you were right..

Sid, September 17, 2008 - 6:14 pm UTC

After abending more than a couple of times, the program ran for more than 2.5 hours and managed to drop 4000+ partitions before it timed out again. But in the next run, it dropped all of the remaining partitions.

It may have been a memory leak, as you said.

Thanks a lot, Tom. You rock !

Shared_pool flush

Balakrishna, October 02, 2008 - 5:02 am UTC

Dear Tom,

1. As per my understanding, when we flush the shared pool only the pinned packages will remain; all the shared sql will be aged out, and the next time users execute those sql statements they will be hard parsed.

2. When you bounce the database, how do we make sure that already-shared sql will be present even after bouncing? Will that sql be shared as usual, or will it be hard parsed? Can you please correct me if I am wrong.

Regards

Balakrishna.
Tom Kyte
October 02, 2008 - 7:46 am UTC

1) pinned objects survive the flush (cursors, sequences, code - they all could be pinned), whether the pinning is done manually (the dbms_shared_pool keep procedure) or implicitly (we cannot flush some things because they are currently in use).

2) you don't, the shared pool warms up just like the buffer cache does. They begin life devoid of anything and fill up as you use them.

ora4031 in warehouse

JATIN, May 27, 2009 - 4:11 am UTC

Hi Tom

I am facing some ORA-4031 errors in a warehouse environment:

/ora_edwhubti/dump/edwhub/edwhubti_j005_1866.trc
Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.7.0 - Production
ORACLE_HOME = /ora_edwhubti/software
System name:    HP-UX
Node name:      eux981
Release:        B.11.11
Version:        U
Machine:        9000/800
Instance name: edwhubti
Redo thread mounted by this instance: 1
Oracle process number: 38
Unix process pid: 1866, image: oracle@eux981 (J005)

*** SESSION ID:(70.5071) 2009-05-25 13:00:30.042
*** 2009-05-25 13:00:30.042
ORA-12012: error on auto execute of job 14368632
ORA-04031: unable to allocate 27512 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","session param values")
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 189
ORA-06512: at line 1
*** SESSION ID:(70.5093) 2009-05-25 13:00:51.548
*** 2009-05-25 13:00:51.548
ORA-12012: error on auto execute of job 14368727
ORA-04031: unable to allocate 27512 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","session param values")
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 189
ORA-06512: at line 1
*** SESSION ID:(70.5103) 2009-05-25 13:00:53.198
*** 2009-05-25 13:00:53.198
ORA-12012: error on auto execute of job 14368686
ORA-04031: unable to allocate 27512 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","session param values")
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 189
ORA-06512: at line 1
*** SESSION ID:(70.5105) 2009-05-25 13:01:10.869
*** 2009-05-25 13:01:10.868
ORA-12012: error on auto execute of job 14368772
ORA-04031: unable to allocate 27512 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","session param values")
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 189
ORA-06512: at line 1

I looked into resizing the SGA, since I can't really use binds in this environment, but the statspack report does not suggest I need to:

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   37.70   38.02
    % SQL with executions>1:   75.72   75.21
  % Memory for SQL w/exec>1:   73.98   73.61


                         Get  Pct        Pin        Pct               Invali-
Namespace           Requests  Miss     Requests     Miss     Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY                     897    0.2            897    0.2          0        0
JAVA DATA                 64    0.0            188    0.0          0        0
SQL AREA               4,671    1.0         23,110    0.4          0        0
TABLE/PROCEDURE        9,066    0.8          8,333    0.4          0        0
TRIGGER                  262    0.0            262    0.0          0        0
          -------------------------------------------------------------



SQL> select * from v$sgastat;

POOL        NAME                            BYTES
----------- -------------------------- ----------
            fixed_sga                      745416
            buffer_cache               4294967296
            log_buffer                   10749952
shared pool errors                          88960
shared pool KGK heap                         7000
shared pool KQR L PO                      4998504
shared pool KQR L SO                       548864
shared pool KQR M PO                      3703488
shared pool KQR M SO                       570888
shared pool KQR S SO                         9280
shared pool KQR X PO                        92736
shared pool sql area                     76382584
shared pool KGLS heap                     9763440
shared pool PX subheap                     393736
shared pool parameters                     502168
shared pool free memory                 356622504
shared pool PL/SQL DIANA                  4553512
shared pool PL/SQL MPCODE                28423584
shared pool PL/SQL PPCODE                  101512
shared pool PL/SQL SOURCE                    5736
shared pool library cache                35705640
shared pool miscellaneous                44044840
shared pool MTTR advisory                  722480
shared pool PLS non-lib hp                   3368
shared pool joxlod: in ehe                 435280
shared pool joxlod: in phe                  24192
shared pool joxs heap init                   4240
shared pool partitioning d                 159096
shared pool sim memory hea                2186824
shared pool table definiti                  83840
shared pool trigger defini                1278592
shared pool trigger inform                   2680
shared pool trigger source                   1856
shared pool type object de                   7592
shared pool dictionary cache              4274432
shared pool db_block_hash_buckets         8946160
shared pool event statistics per sess     2552400
shared pool fixed allocation callback         552
large pool  PX msg pool                   1966080
large pool  free memory                  65142784
java pool   free memory                 211161088
java pool   memory in use                 6942720

42 rows selected.

So, to fix the issue, I took some measures: I changed the cursor_sharing parameter from EXACT to SIMILAR and pinned DBMS_SNAPSHOT and one more package into the shared pool; but this did not work.

Later I found that the SGA was huge and reduced it to smaller values (actually it was copied from LIVE, where the SGA was 6 GB, but in test we have only 8 GB of RAM, so I sized the SGA to 1 GB) - and this has probably resolved my issue.

Can you throw some insight into what happened? I assumed that the larger the SGA, the better... but why was a large size an issue in this case?

Tom Kyte
May 27, 2009 - 11:15 am UTC

pinning packages is such a waste of time in the year 2009. I don't know why people do that (hint: if you are running the code, the code is already in the shared pool - the package you pinned, was successfully loaded)

you say "the sga was huge" and "i made the sga smaller", but you don't tell me about the shared pool - which is the only relevant thing here. I'm not going to add up the values in your v$sgastat and I don't know if that is the "before" or "after" or "what". Tell us about ALL of your memory settings (shared pool, large pool, etc) - what they *where* when you hit a problem AND tell us what you do, what features you use (parallel query, shared server, etc etc etc)

A larger sga will
a) be better
b) be the same
c) be much worse


In fact, you can generalize that to:

a <any adjective> <any noun> will
a) be better
b) be the same
c) be much worse



If something was "always best" (always better), that would be the only way to do it. Since there are many ways to size and shape the SGA, there cannot be a "best way" or a way that is universally better than any other way.

Will flushing the shared pool fix our problem?

Joe, May 27, 2009 - 3:23 pm UTC

Tom,

We have a production table which has grown very large over the years (8+ million records), and we are finding that some updates through a legacy application are taking too long (all bind variables used).

We believe the CBO is using the wrong index b/c the unique index was very skewed. So we decided to re-order the columns of the unique index so that it is no longer skewed (the first column is now a date/time and is not skewed). We also did a "gather_stats" after the index was re-created.

The CBO is still not using the correct index, and the DBA thinks this is b/c the wrong plan is still cached. So whenever the "update" statement is executed, it is using the cached plan instead of determining that a better plan exists.

Would flushing the cache fix this problem by forcing the CBO to regenerate a new plan? Is there a better way to fix this?


Thanks,
Joe
Tom Kyte
May 27, 2009 - 4:12 pm UTC

if you gathered stats, and the dba did not say "do not invalidate cursors", then

in 9i and before the cursors would have been invalidated right away
in 10g and above they would have been invalidated by now - we flush them shortly after the gather finishes.

but, the dropping and recreating of the index would have DEFINITELY changed things - adding the new index, dropping the old index - these actions would 100% of the time invalidate the plan! It cannot be what your dba says; it's not possible.


Joe, May 27, 2009 - 5:54 pm UTC

Thanks for the response. I now doubt we will be flushing the shared_pool in the prod system :)

Sorry if this is a little off topic, but do you have any suggestions? If the "update" statement is using every column from the table in the "where" clause, why doesn't the CBO use the unique index?

I just found out the same problem is happening in our UA environment. When I drop the wrong index, the plan uses the unique index. As soon as I re-create the "wrong" index, it starts using that index instead of the unique index. I also just did another gather_table_stats, and it still won't use the unique index.


Thanks again,
Joe

Tom Kyte
May 28, 2009 - 7:04 am UTC

you don't give a concrete example to work with - so, give one - need tables, creates and enough information to call dbms_stats.set_xxxxx to reproduce.

ora 4031

JATIN, May 28, 2009 - 1:08 am UTC

Hi Tom,

Before fix, the problemetic settings were:

SQL> show sga

Total System Global Area 6453956552 bytes
Fixed Size 745416 bytes
Variable Size 2147483648 bytes
Database Buffers 4294967296 bytes
Redo Buffers 10760192 bytes

SQL> show parameter pool

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
buffer_pool_keep string
buffer_pool_recycle string
global_context_pool_size string
java_pool_size big integer 218103808
large_pool_size big integer 67108864
olap_page_pool_size integer 33554432
shared_pool_reserved_size big integer 26843545
shared_pool_size big integer 536870912

After fix (resizing SGA), the settings are:

SQL> show sga

Total System Global Area 1085239712 bytes
Fixed Size 737696 bytes
Variable Size 654311424 bytes
Database Buffers 419430400 bytes
Redo Buffers 10760192 bytes

SQL> show parameter pool

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
buffer_pool_keep string
buffer_pool_recycle string
global_context_pool_size string
java_pool_size big integer 218103808
large_pool_size big integer 67108864
olap_page_pool_size integer 33554432
shared_pool_reserved_size big integer 26843545
shared_pool_size big integer 536870912

Yes, in this warehouse, we are using parallel query.

1. Can you please let me know why reducing the shared pool has helped here?

2. When you say that objects are already in the shared pool and pinning won't help -- won't they move out and in, and cause shared pool fragmentation if they are large? It's 9.2.0.7 I am using.

Thanks a lot for your answers - I hardly find such discussions anywhere else. You Rock.

Rgds
Jatin
Tom Kyte
May 28, 2009 - 7:26 am UTC

so, the only thing you did was decrease the buffer cache - which wouldn't affect the shared pool (it is the same size) - so you haven't "fixed" anything. You just reset the shared pool back to empty, and if you do whatever you were doing before, it could happen again.


things are (and have been for many releases) paged into the shared pool in small chunks of a relatively uniform size - precisely to avoid the issue you describe. This method of managing shared pool memory postdates the 'keep' routine by a couple of releases - making 'keep' unnecessary, not something to do.


by setting similar - you are using bind variables now, you have reduced the number of unique sql's you'll have in the shared pool.

since you use dbms_snapshot on a regular basis, the idea that it would move in and out is unlikely (that is what a cache does - keeps frequently used things in the pool)

ora 4031

JATIN, May 28, 2009 - 1:14 am UTC

I posted a wrong entry:

The settings after the fix are:

SQL> show parameter pool

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
buffer_pool_keep                     string
buffer_pool_recycle                  string
global_context_pool_size             string
java_pool_size                       big integer                      117440512
large_pool_size                      big integer                      67108864
olap_page_pool_size                  integer                          33554432
shared_pool_reserved_size            big integer                      13421772
shared_pool_size                     big integer                      268435456

I resized shared pool to 200MB

Jatin, May 28, 2009 - 8:53 am UTC

You have missed my post; I resized the shared pool to 200MB.

Secondly, what is the generic advice for ORA-4031 errors in warehouse environments? Increase/decrease the shared pool, use binds (no), pin objects (no), increase shared_pool_reserved_size (no)...? I am running out of ideas... please drop some more hints, Tom.
Tom Kyte
May 28, 2009 - 1:51 pm UTC

the answer stands as before: you set cursor_sharing=similar -> binding -> fewer cursors in the shared pool -> less shared pool usage.

It is highly unusual for a true warehouse to have shared pool issues due to parsing.

tell us more - how many concurrent users, what is the average run length of your typical queries, and how many parses per second are you performing in this warehouse?
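One way to answer the parses-per-second question is to sample the parse statistics twice and take the difference over the interval; a sketch:

select name, value
  from v$sysstat
 where name in ( 'parse count (total)', 'parse count (hard)', 'execute count' );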

RE: Wrong index being used

Joe, May 28, 2009 - 12:12 pm UTC

Tom,

I am not sure how I can give you an example - whenever I try to create a smaller sample table, the CBO uses the correct index.

Here is the actual update statement that is being executed:
update "CHECKLISTDTL"
   set "COMPLETEDDATE" = :v001
 where "PROVKEY" = :v002
   and "SITE_ID" = :v003
   and "PROVNUM" = :v004
   and "PROVLOC" = :v005
   and "CLGROUP" = :v006
   and "INSTANCEDATE" = :v007
   and "CLITEM" = :v008

And here are the 2 indexes:
create unique index UI_CHECKLISTDTL on CHECKLISTDTL (INSTANCEDATE, SITE_ID, PROVNUM, PROVLOC, CLGROUP, CLITEM, PROVKEY)

create index NI_CHECKLISTDTL_ITEM on CHECKLISTDTL (SITE_ID, PROVNUM, PROVLOC, CLGROUP, CLITEM)

As you can see, every column in the where clause is part of the unique index, but the CBO always uses "NI_CHECKLISTDTL_ITEM" index instead.

Looking at the plan, it shows the Cost and Cardinality are the same for both indexes, but the non-unique index is taking over 30 seconds to return.

Is there some other statistics that I could show you?

Thanks,
Joe
Tom Kyte
May 28, 2009 - 2:15 pm UTC

... I am not sure how I can give you an example ...

by using dbms_stats.set_xxxxx functions to emulate your situation... make the cbo think the table is not small.


if I had to guess right now - implicit conversion on a column near the end of your unique key. For example:

ops$tkyte%ORA10GR2> create table t
  2  ( INSTANCEDATE int,
  3    SITE_ID int,
  4    PROVNUM int,
  5    PROVLOC int,
  6    CLGROUP int,
  7    CLITEM int,
  8    PROVKEY varchar2(20),
  9    completeddate date
 10  )
 11  /

Table created.

ops$tkyte%ORA10GR2> create unique index UI_CHECKLISTDTL on t (INSTANCEDATE, SITE_ID,
  2  PROVNUM, PROVLOC, CLGROUP, CLITEM, PROVKEY);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index NI_CHECKLISTDTL_ITEM on t (SITE_ID, PROVNUM, PROVLOC,
  2  CLGROUP, CLITEM);

Index created.

ops$tkyte%ORA10GR2> exec dbms_stats.set_table_stats( user, 'T', numrows => 1000000 );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable v002 number
ops$tkyte%ORA10GR2> variable v003 number
ops$tkyte%ORA10GR2> variable v004 number
ops$tkyte%ORA10GR2> variable v005 number
ops$tkyte%ORA10GR2> variable v006 number
ops$tkyte%ORA10GR2> variable v007 number
ops$tkyte%ORA10GR2> variable v008 number
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec :v002 := 2; :v003 := 3; :v004 := 4; :v005 := 5; :v006 := 6; :v007 := 7; :v008 := 8;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set serveroutput off
ops$tkyte%ORA10GR2> update "T" set "COMPLETEDDATE" = sysdate where "PROVKEY" = :v002 and "SITE_ID" = :v003 and "PROVNUM" = :v004 and
  2  "PROVLOC" = :v005 and "CLGROUP" = :v006 and "INSTANCEDATE" = :v007 and "CLITEM" = :v008
  3  /

0 rows updated.

ops$tkyte%ORA10GR2> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  65axh31j8wzta, child number 0
-------------------------------------
update "T" set "COMPLETEDDATE" = sysdate where "PROVKEY" = :v002 and "SITE_ID" = :v003 and
"PROVNUM" = :v004 and "PROVLOC" = :v005 and "CLGROUP" = :v006 and "INSTANCEDATE" = :v007 and
"CLITEM" = :v008

Plan hash value: 3468653475

-----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT             |                      |       |       |     9 (100)|          |
|   1 |  UPDATE                      | T                    |       |       |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID| T                    |     1 |    99 |     9   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | NI_CHECKLISTDTL_ITEM | 10000 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("INSTANCEDATE"=:V007 AND TO_NUMBER("PROVKEY")=:V002))
   3 - access("SITE_ID"=:V003 AND "PROVNUM"=:V004 AND "PROVLOC"=:V005 AND "CLGROUP"=:V006
              AND "CLITEM"=:V008)


24 rows selected.

ops$tkyte%ORA10GR2> drop index NI_CHECKLISTDTL_ITEM;

Index dropped.

ops$tkyte%ORA10GR2> update "T" set "COMPLETEDDATE" = sysdate where "PROVKEY" = :v002 and "SITE_ID" = :v003 and "PROVNUM" = :v004 and
  2  "PROVLOC" = :v005 and "CLGROUP" = :v006 and "INSTANCEDATE" = :v007 and "CLITEM" = :v008
  3  /

0 rows updated.

ops$tkyte%ORA10GR2> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  65axh31j8wzta, child number 0
-------------------------------------
update "T" set "COMPLETEDDATE" = sysdate where "PROVKEY" = :v002 and
"SITE_ID" = :v003 and "PROVNUM" = :v004 and "PROVLOC" = :v005 and "CLGROUP"
= :v006 and "INSTANCEDATE" = :v007 and "CLITEM" = :v008

Plan hash value: 3259813007

-------------------------------------------------------------------------------------
| Id  | Operation         | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT  |                 |       |       |     3 (100)|          |
|   1 |  UPDATE           | T               |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| UI_CHECKLISTDTL |     1 |    99 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("INSTANCEDATE"=:V007 AND "SITE_ID"=:V003 AND "PROVNUM"=:V004
              AND "PROVLOC"=:V005 AND "CLGROUP"=:V006 AND "CLITEM"=:V008)
       filter(TO_NUMBER("PROVKEY")=:V002)


23 rows selected.



see how I compare a NUMBER to a STRING (the last column in your unique index)? That means it could not use the entire indexed set of columns to identify the rows right off - it had to ACCESS and then FILTER. So, it used an index that it could use entirely.

getting rid of the other index makes it use the unique index.


So, look for IMPLICIT conversions caused by binding an inappropriate type (eg: comparing a timestamp to a date, a number to a string and so on)
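
Building on the example above, a sketch of the fix - bind PROVKEY as a string so no TO_NUMBER() gets wrapped around the indexed column:

variable v002 varchar2(20)
exec :v002 := '2'
-- re-run the update: with a character bind the predicate stays
-- "PROVKEY"=:V002 (no implicit conversion on the column), so all seven
-- columns of UI_CHECKLISTDTL can be used for ACCESS instead of
-- ACCESS followed by FILTER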

RE: wrong index

Joe, May 28, 2009 - 4:37 pm UTC

Tom,

Why does it still use the wrong index when I change the bind variables to literal values? Below are 2 plans - the first with the wrong index, and the second showing the unique index (forced by using an INDEX hint).

set serveroutput off
SQL> update "CHECKLISTDTL"
  2     set "COMPLETEDDATE" = to_date('01/01/2009', 'mm/dd/yyyy')
  3   where "PROVKEY" = 123
  4     and "SITE_ID" = 'ABC'
  5     and "PROVNUM" = 'ABC'
  6     and "PROVLOC" = 'ABC'
  7     and "CLGROUP" = 'ABC'
  8     and "INSTANCEDATE" = to_date('01/01/2009', 'mm/dd/yyyy')
  9     and "CLITEM" = 'ABC';

0 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------
SQL_ID  6hwm8zhdxju6m, child number 0
-------------------------------------
update "CHECKLISTDTL"    set "COMPLETEDDATE" = to_date('01/01/2009', 'mm/dd/yyyy')
where "PROVKEY" = 123    and "SITE_ID" = 'ABC'    and "PROVNUM" = 'ABC'
and "PROVLOC" = 'ABC'    and "CLGROUP" = 'ABC'
and "INSTANCEDATE" = to_date('01/01/2009', 'mm/dd/yyyy')    and "CLITEM" = 'ABC'

Plan hash value: 3160241299

-----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT             |                      |       |       |     4 (100)|          |
|   1 |  UPDATE                      | CHECKLISTDTL         |       |       |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID| CHECKLISTDTL         |     1 |    99 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | NI_CHECKLISTDTL_ITEM |     1 |       |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("PROVKEY"=123 AND "INSTANCEDATE"=TO_DATE('2009-01-01 00:00:00',
               'yyyy-mm-dd hh24:mi:ss')))
   3 - access("SITE_ID"='ABC' AND "PROVNUM"='ABC' AND "PROVLOC"='ABC'
               AND "CLGROUP"='ABC' AND "CLITEM"='ABC')

26 rows selected.

SQL> 
SQL> update /*+ INDEX (checklistdtl ui_checklistdtl) */  "CHECKLISTDTL"
  2     set "COMPLETEDDATE" = to_date('01/01/2009', 'mm/dd/yyyy')
  3   where "PROVKEY" = 123
  4     and "SITE_ID" = 'ABC'
  5     and "PROVNUM" = 'ABC'
  6     and "PROVLOC" = 'ABC'
  7     and "CLGROUP" = 'ABC'
  8     and "INSTANCEDATE" = to_date('01/01/2009', 'mm/dd/yyyy')
  9     and "CLITEM" = 'ABC';

0 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------
SQL_ID  dfvgdwcgpj8ux, child number 0
-------------------------------------
update /*+ INDEX (checklistdtl ui_checklistdtl) */  "CHECKLISTDTL"    set
"COMPLETEDDATE" = to_date('01/01/2009', 'mm/dd/yyyy')  where "PROVKEY" = 123
and "SITE_ID" = 'ABC'    and "PROVNUM" = 'ABC'    and "PROVLOC" = 'ABC'
and "CLGROUP" = 'ABC'    and "INSTANCEDATE" = to_date('01/01/2009', 'mm/dd/yyyy')
and "CLITEM" = 'ABC'

Plan hash value: 2499362926

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT             |                 |       |       |     4 (100)|          |
|   1 |  UPDATE                      | CHECKLISTDTL    |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| CHECKLISTDTL    |     1 |    99 |     4   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | UI_CHECKLISTDTL |     1 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("INSTANCEDATE"=TO_DATE('2009-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
               AND "SITE_ID"='ABC' AND "PROVNUM"='ABC' AND "PROVLOC"='ABC'
               AND "CLGROUP"='ABC' AND "CLITEM"='ABC' AND "PROVKEY"=123)

26 rows selected.
SQL> 


Thanks,
Joe

P.S. I started reading this site daily and it has been an incredible learning tool for me.
Tom Kyte
May 28, 2009 - 5:22 pm UTC

I don't know what "example" you are working with - but I see estimated row counts of "1"

so as far as the optimizer is concerned - with your example - either one is the same, they both return ONE row.

if you give me incomplete examples (unlike the ones I give you, which run from start to finish...) I am hard pressed to comment.

Flush shared pool

Lhyn, August 07, 2009 - 10:56 am UTC

Hi,
Could you please explain something I am unsure about, regarding flushing the shared pool with the command below?
==============================================
SQL> alter system flush shared_pool;

Then I got the result below.

Before FLUSH SHARED POOL:
        NAME         Allocated (M)  Free (K)    % Free
        ------------ -------------  ----------  ----------
A-->    free memory  141.492165     4501.9375   4.11
B-->    free memory  141.492165     7812.5      7.12
C-->    free memory  141.492165     19532       15.38

===================================================
After FLUSH SHARED POOL:
        NAME         Allocated (M)  Free (K)    % Free
        ------------ -------------  ----------  ----------
a-->    free memory  141.492165     70596.8672  46.73
b-->    free memory  141.492165     7812.5      7.12
c-->    free memory  141.492165     19532       15.38

After I flushed the shared_pool, the free memory values for b and c did not change. I would like to know why that happens.

Regards,
New DBA

Tom Kyte
August 07, 2009 - 2:30 pm UTC

I don't know what I'm looking at here. What are a, b, and c ???

Alternative of alter system flush shared_pool

Peter, September 04, 2009 - 11:40 am UTC

Dear Tom,

Is there some expiration/timeout mechanism to auto-magically free the shared_pool and enable logons again after hitting ORA-04031 for the shared_pool on Oracle 10.2.0.4, when the error is preventing further logons?

I know that the DBA can
- restart the DB
- flush the shared_pool

..but if nobody touches the instance, is it possible that the instance will free some space in the shared_pool by itself so logins will work again?

Kind regards,
Peter.
Tom Kyte
September 04, 2009 - 3:55 pm UTC

it should be such that when you hit the 4031 - the shared pool sort of cleans up - since so many sessions would hit cascading errors and make their stuff freeable.

in my experience, once the 4031 happens - it self corrects. what are you experiencing?

Alternative of Flush SGA

Peter, September 07, 2009 - 5:00 am UTC

Dear Tom,

Exactly this happened: 4 sessions were running dbms_stats.gather_table_stats on many tables when all of them got ORA-04031.

No logins were allowed due to
ORA-04031 .. "sga heap(1,0)","kglsim object batch"

Without DBA privileges I could only wait from Friday to Monday.. then the DBA made some space by flushing the shared pool and logins worked again.

So it seems to me that there was no auto-clean effect (pmon/smon/...), so the shared pool stayed flooded with sql statements, effectively blocking logins for 50 hours in my case.
I do not consider this a bug; however, it would be a "nice to have" feature to free the shared pool after an hour or so.

Kind regards,
Peter.

some help in clearing my doubt

VS, September 16, 2009 - 11:02 am UTC

Dear Tom,

good day to you. This may sound like a dumb question, but I just wanted to clear my doubt.

I observed that a query had some sql_id. After executing alter system flush buffer_cache and alter system flush shared_pool, I executed the query again and found that it had the same sql_id as earlier. I want to know how sql_id is derived - is it based on the query text?

Thanks a lot for your help and time.

Regards,
VS
Tom Kyte
September 16, 2009 - 5:09 pm UTC

sql_id = hash(sql_text)

it is a hashing function - given the same inputs, it always returns the same value


A constant sql string will always have the same sql_id (same inputs - sql string - result in same output)

Many sql statements that are different could share the same sql_id, sql_id is not "unique" for a string, it is a hashing function.
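
A quick sketch you can run to see this for yourself (any constant query text will do; the comment is just a marker):

SQL> alter system flush shared_pool;
SQL> select /* sqlid demo */ count(*) from dual;
SQL> select sql_id from v$sql where sql_text like 'select /* sqlid demo */%';

Flush again and repeat - the same sql_id comes back, because it is computed from the statement text rather than stored.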


Anonymous PL/SQL block filling up shared pool

Sujit, May 13, 2010 - 3:14 pm UTC

I have been working on a project for about a year. The project is pretty old - about 10-15 years - and started with Oracle 7 or before; it is now using 10g.
Here is how part of the main application architecture works: based on the user's input selection, the frontend prepares a PL/SQL block like the one below and executes it:

BEGIN proc1(val1,val2,..);proc2(val01);proc2(val0n); ... exception....END;

Please note val1, val2 are literal values, not bind variables.

Very often this PL/SQL block is very big, and the blocks are all different on each execution.

When I look in v$sql I see that the shared pool is full of those PL/SQL blocks. And a job has been put in place to flush the shared pool every hour. After talking to the person who has been managing this database for a long time, it turns out that without flushing the pool, there are lots of performance issues.

I assume the response to my above statement would be "bad architecture, we should change it". But changing this is a huge effort, and probably is not going to happen. Earlier the number of transactions was smaller, so performance was never an issue; nowadays there are more transactions, and though the hardware configuration has been increased a lot, the performance issue remains. It looks like this is one of the many reasons for the performance problem. I don't have a way to measure how performance would be if the architecture had been done properly, so it's hard to convince management.

My questions:
1) Keeping the system architecture as it is, is there anything we can do from the database configuration side to improve performance?
2) Is there any way to flush only the PL/SQL anonymous blocks from the shared pool?
3) Is there anything in 11g that would help us?
4) Would it be worth changing the size of the shared pool - increase or decrease - and checking how the system behaves?


Answer to above followup

Sujit, May 25, 2010 - 2:00 pm UTC

Tom, I am awaiting your response to the above followup questions. I'd appreciate your reply.
Tom Kyte
May 25, 2010 - 6:27 pm UTC

I don't see all of the questions - sometimes when I travel, I just skip them because I don't have enough time.

BEGIN proc1(val1,val2,..);proc2(val01);proc2(val0n); ... exception....END;
Please note val1, val2 are literal values, not bind variables.


so they are literals, like "begin proc1( 'hello', 'world', 42 ); proc2( ..."

And a job is been in
place to flush the share pool every hour.


oh marvy - ugh, that is a mess.


it comes out that without flushing the pool , its cause lots of
performance issues.


chicken and egg issue here. Their lack of bind variables causes a huge performance issue all by itself - much much much larger than any shared pool fragmentation. They should shrink the size of the shared pool to cause us to flush it internally ourselves more often. As it is - I'll guess "we made it really really big - but when it fills up, it takes a long time to empty by itself, so we flush it every hour, way before it fills up, to make that take less time". You know what would accomplish that as well? Making the pool smaller, so we flush a smaller thing more often -

but only while the developers fix the gaping bug in their incredibly bad application.


I assume the response to my above statement would be "bad architecture, we should
change it". But changing this is a huge effort, and probably is not going to happen.
Earlier the number of transactions was smaller, so performance was never an issue;
nowadays there are more transactions, and though the hardware configuration has been
increased a lot, the performance issue remains. It looks like this is one of the many
reasons for the performance problem. I don't have a way to measure how performance
would be if the architecture had been done properly, so it's hard to convince
management.


do you see as many contradictions in there as I do??

o changing is a huge effort (maybe it is, maybe it isn't, I don't think it is actually)

o performance was not an issue, but now it is (it always comes to that)

o changing is probably not going to happen - well well now, you just want a magic incantation to make things better? It doesn't work that way in real life! You want to observe change (in response time) - you'll have to make a change in your application.



1) Keeping the system architecture as it is, is there anything we can do from the
database configuration side to improve performance?
2) Is there any way to flush only the PL/SQL anonymous blocks from the shared pool?
3) Is there anything in 11g that would help us?
4) Would it be worth changing the size of the shared pool - increase or decrease -
and checking how the system behaves?


1) Not really, cursor_sharing (an attempt to 'fix' some hard parsing problems) doesn't really kick in for plsql.

2) no

3) better programmers?

4) probably smaller


But, the sooner you and your management face the facts that the sins of the past are catching up to you - that it will only get worse over time - the better. That way you can sit down and start to look at the code.


I firmly believe it will not be that big of a change to use binds - you should investigate that a bit.
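
For reference, a minimal sketch (hypothetical procedure and values) of what the bind-friendly version of such a block looks like - the text stays constant and only the bound values change:

BEGIN proc1(:v1, :v2, :v3); proc2(:v4); END;
-- the frontend prepares this one constant string and binds 'hello',
-- 'world', 42, ... into :v1..:v4 on each call, so the block is hard
-- parsed once and simply reused (soft parsed) thereafter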

number of entries for same PL/SQL block

Sujit, May 26, 2010 - 3:49 pm UTC

Tom,
Thank you very much for your reply. Some of the things you mentioned had already come to my mind, but hearing the same from you always adds more value.

Another thing I heard from the person who has been managing this database for a long time is "Not flushing the shared pool causes some unexpected behaviour, like out of memory, sessions getting dropped, etc., and the system didn't work", though I have not seen it or heard of it anywhere before. I'm just wondering if you have come across this kind of problem anywhere?

One more thing: when I query v$sql I see there are a number of entries for the same sql_id for PL/SQL blocks:
SQL> select sql_id, cnt
       from (select sql_id, sql_text, count(*) cnt
               from v$sql
              where command_type = 47
              group by sql_id, sql_text)
      where cnt > 5
      order by cnt desc;

SQL_ID               CNT
------------- ----------
58wrbm63xr0tf         24
a993yaqhnn0d1         14
fu3hww0587m6p         12
4qs6czd1m31yj         11
82qyfdmd0j7qx          8
cjggpcrkp4jm5          8
g0j5s8zkn9pgd          7
26rdg379gn81r          7
axkaudnbxuw5r          7

And this is even after flushing the shared pool every hour. Just to note, all the sql_ids listed above are PL/SQL blocks which use bind variables. (I read your article and am aware of the reason for this with SQL statements, but I don't think the same logic applies to PL/SQL.) I'm wondering if you can give me the reason for it and whether it has any impact or not?

Tom Kyte
May 27, 2010 - 7:20 am UTC

... Another thing I heard from the person who has been managing this database for
a long time is "Not flushing the shared pool causes some unexpected behaviour,
like out of memory, sessions getting dropped, etc., and the system didn't work",
though I have not seen it or heard of it anywhere before. I'm just wondering if
you have come across this kind of problem anywhere?
....

not in many years. We did things like add the large pool and such way back in the mid 1990's to try to alleviate the issue of "people flooding the shared pool with literal sql"

Having 24 copies of something is absolutely normal in many cases. Use v$sql_shared_cursor to see why they are there. 24 is a teeny tiny number.
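
A sketch of the kind of query to run (BIND_MISMATCH is one of the Y/N reason columns in that view; substitute one of the sql_ids from the list above):

select child_number, bind_mismatch
  from v$sql_shared_cursor
 where sql_id = '58wrbm63xr0tf';
-- a 'Y' in a column names the reason that child cursor could not
-- share an existing one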

duplicate in v$sql for PL/SQL

Sujit, May 27, 2010 - 10:52 am UTC

Thank you very much Tom , for your valuable input. I have been able to get the reson behind the multiple entries in v$sql for the PL/SQL blocks. Number of those multiple entries are comparatively large in numbers during the pick hours , do you think is it something which need to have attention?

I have a question though:
my PL/SQL blocks are like this:
BEGIN proc1(:a1,:a2,:a3); END;
which is generated automatically by the frontend application software (PowerBuilder) for calling any procedure.

The reason for the duplicate entries in v$sql is a bind mismatch, in all my scenarios. When I look at v$sql_bind_capture I see differences like the below:
CHILD_NUMBER NAME   POSITION DATATYPE_STRING
          27 :0            1 CHAR(10)
          27 :1            2 CHAR(11)
          27 :2            3 CHAR(1)
.........................
---------------------------------
          31 :0            1 CHAR(10)
          31 :1            2 CHAR(10)
          31 :2            3 CHAR(1)
.............................

The difference is the size of the bind variable. I think it comes from an implicit conversion based on the size of the input - I'm not sure where this implicit conversion is done, in the application software or in the database. Most of those procedures have on average 20-30 input parameters, so the number of combinations of input variable sizes is huge, and hence the number of entries in v$sql for the same sql_id is fairly large during peak hours. Does this need attention? If yes, is there anything from the database side we can do to overcome this situation?
Tom Kyte
May 27, 2010 - 11:05 am UTC

the application decides what IT will bind.


ops$tkyte%ORA11GR2> create or replace procedure p( x in varchar2 )
  2  as
  3  begin
  4          null;
  5  end;
  6  /

Procedure created.

ops$tkyte%ORA11GR2> alter system flush shared_pool;

System altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> variable a char(10);
ops$tkyte%ORA11GR2> begin /* look for me */ p(:a); end;
  2  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> variable a char(11);
ops$tkyte%ORA11GR2> begin /* look for me */ p(:a); end;
  2  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> variable a char(12);
ops$tkyte%ORA11GR2> begin /* look for me */ p(:a); end;
  2  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select sql_text from v$sql where sql_text like '% look for me %';

SQL_TEXT
------------------------------
begin /* look for me */ p(:a);
 end;

begin /* look for me */ p(:a);
 end;

begin /* look for me */ p(:a);
 end;

select sql_text from v$sql whe
re sql_text like '% look for m
e %'




sigh, I wish they would have used a varchar.... if you do the above with a declared as a varchar2 instead of a char (no one uses char, not if they know any better)....
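
To complete that thought, a sketch of the counter-test (my assumption here: varchar2 binds of similar sizes fall into the same length bucket, so the executions should share one child cursor instead of producing one per length):

variable b varchar2(10)
begin /* varchar demo */ p(:b); end;
/
variable b varchar2(12)
begin /* varchar demo */ p(:b); end;
/
-- count the child cursors: expect a single shared child this time
select sql_text, count(*)
  from v$sql
 where sql_text like 'begin /* varchar demo */%'
 group by sql_text;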


Flushing Shared Pool

Veso, May 28, 2010 - 8:58 am UTC

Hello there Tom,

I found your asktom very useful, with a lot of "hidden" information inside, so thank you for the good work.

On the topic: reading the reviews and replies so far, you strongly recommend not flushing the pool and using bind variables. On our system we have the problem of the shared_pool going crazy - it constantly grows, and the only option for us to keep the system alive is to flush the shared_pool.

The system is designed to use bind variables, so that side is covered. The flaw is that it constantly creates "temp" tables in non-temp tablespaces and does operations with those tables.

As I understand it, every operation that touches a different table creates an entry in the shared_pool.

After the operations are done, the system drops the tables.

We have cursor_sharing=SIMILAR, if that has any effect.
What do you suggest in this situation, other than changing the code, which belongs to an external developer and which management won't approve changing?

Thank you in advance for your response, and feel free to delete this review if you find it not relevant.

Best regards V.
Tom Kyte
May 28, 2010 - 9:06 am UTC

.. you strongly
recommend not flushing the pool and using the bind variables. ..

stronger than strongly - vehemently is a better adverb.


...On our system we
have the problem of shared_pool going crazy, ..

oh no, you cannot blame the shared pool, the thing that has gone crazy is your developers and their developed code.


.... The flaw is that it constantly creates "temp" tables in non-temp
tablespaces and does operations with those tables.
.....

hence defeating utterly and entirely the concept of shared sql...


think about this, if you

a) create a table
b) query table using query Q
c) drop table

can query Q ever be reused again? Of course not - the table it used no longer exists.


This is a big mistake, one that can only be fixed by ripping the application apart and doing it correctly.
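
The usual way out - my suggestion, not something stated above - is a GLOBAL TEMPORARY TABLE created once at install time, so the SQL against it stays constant and shareable (hypothetical table):

create global temporary table work_data
( id  number,
  val varchar2(30)
) on commit delete rows;
-- each session sees only its own rows and they vanish at commit; there
-- is no create/drop per run, so queries against WORK_DATA can be reused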

pin query plan

Ravi B, August 10, 2010 - 12:27 pm UTC

Tom,

One of our developers reported this issue. I am not sure if this could be a possibility.

We have a batch process where set of tables are populated to a high volume. Here is the description of the problem and workaround verbatim.

"...the content of the tables varies from empty to millions of rows during active collection. In order to use the best query plan for query those tables, the process collect stats on those tables every 2 hours. however, since those queries are run in a loop fashion, the query plan is pinned to the Oracle shared pool and not been cleared by just running stats. Thus cause oracle to use wrong query plan which turns a query from running for less than 1 min to 3-4 hours. We found a workaround to run a fake grant on the table to invalidate all query plans on the table, i.e (grant read on xxxx to public). it works for Oracle 10g
but it seems like it failed for 11g...."

I am new to this place and haven't seen their code yet. I know they are issuing these SQL statements from Java code in a loop (yikes!). I am fairly sure they are not binding. But I don't get the pinning part.

Your comments are greatly appreciated.

Thanks!

Tom Kyte
August 18, 2010 - 11:15 pm UTC

in 10g and above, dbms_stats doesn't immediately invalidate plans by default (see the no_invalidate parameter).

I would actually suggest trying a different tack. Gather stats on the table when it is "big", then lock them in. In many/most cases - that plan will work fine when it is small - and it will be the plan you want when it is big.


so, it isn't pinning, it is the fact that dbms_stats by design does not immediately invalidate things in the cache. Running a fake grant would be a bad idea. Look at either using big stats always - or read up on no_invalidate
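
A sketch of both options (hypothetical table name T; both calls are standard dbms_stats procedures):

-- (a) have the gather invalidate dependent cursors right away,
--     instead of the default deferred (auto) invalidation
exec dbms_stats.gather_table_stats( user, 'T', no_invalidate => false );

-- (b) gather when the table is "big", then lock those stats in
exec dbms_stats.gather_table_stats( user, 'T' );
exec dbms_stats.lock_table_stats( user, 'T' );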

pin query plan

Ravi B, August 10, 2010 - 12:29 pm UTC

forgot to mention, they also issue these commands :)

alter system flush shared_pool;
alter system flush buffer_cache;

It seems the workaround *really* works for 10g. Now that we are on 11g, they say it is no longer working.
Tom Kyte
August 18, 2010 - 11:18 pm UTC

now that just has to stop.

they are developers, please immediately revoke the ability to do something so very very wrong from them.

To take the random samples

Ganesh, October 26, 2010 - 6:03 am UTC

I need to get a random sample of data from the database. When I run my query it gets me recently updated records, which are all recently created ones. In that scenario, is it useful to flush the buffer? I can even schedule downtime to bounce the database if that makes any difference. I need a truly random sample.


Tom Kyte
October 26, 2010 - 8:00 pm UTC

Flushing the shared Pool

GK, December 09, 2010 - 11:31 pm UTC

Hi Tom,

I have a report query which runs on the production DB (since we need the latest data, we run it in production, not in the DWH) and which has almost all variables declared as bind variables. The query's cost is acceptable, yet it takes quite a lot of time to execute. But when the shared pool is flushed, it runs fast. I have informed the client that even though I have done it as a temporary measure, it is not recommended as a routine practice. How can I proceed in analysing the cause of this?
Tom Kyte
December 10, 2010 - 9:57 am UTC

compare the plans - it sounds like they are different. And this sounds like a bind peeking issue.

http://asktom.oracle.com/Misc/when-explanation-doesn-sound-quite.html

make sure to get the plans from v$sql_plan
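
A sketch of how to pull the actual plans (substitute the statement's sql_id; a null child number reports all children):

select * from table( dbms_xplan.display_cursor( 'your_sql_id_here', null ) );
-- capture this once while the query is slow and once after the flush;
-- a different plan hash value between the two runs points at bind peeking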

ORA-4031: unable to allocate 4112 bytes of shared memory.

Ramesh babu, June 27, 2011 - 1:39 pm UTC

I have a table RANGE partitioned on the COB_DATE column and
LIST subpartitioned on the SOURCESYSTEM column.
Initially the partitions were created for 4 source systems, for a span of 1 year (2010).
Apart from the 4 source systems, a DEFAULT subpartition was also created in the SUBPARTITION TEMPLATE.

So if I want to add a new SOURCESYSTEM I need to SPLIT the DEFAULT subpartition.

In order to perform this for the entire year of 2011 (365 COB_DATEs), I can either:

A) in a procedure, generate the ALTER ... SPLIT SUBPARTITION statements dynamically and run them using EXECUTE IMMEDIATE, or

B) prepare the ALTER ... SPLIT SUBPARTITION statements manually.

When running the procedure using the first option (A), I am getting ORA-4031: unable to allocate 4112 bytes of shared memory.

Even after increasing the shared pool I am getting the same error.

Now the questions are:

1) Is there an alternative solution for this?
2) Is shared memory allocated for the procedure separately as a fixed amount, say 200MB, which doesn't grow, causing the error to be thrown?
3) When we go with option (B), will the shared memory fluctuate, growing and shrinking as needed?
4) Is there any option, while performing these ALTER DDL statements, to tell Oracle not to run them through shared memory, so as to avoid this error entirely by performing the DDL in memory other than the shared pool?
Tom Kyte
June 27, 2011 - 1:49 pm UTC

is this reproducible? If so, please contact support - it shouldn't be happening for such a simple operation repeatedly. The DDL needs the shared pool to parse into, but that won't be a huge amount. The actual data transfer takes place on disk basically - not in memory.


If this was a one time event, then I would say your shared pool might just have been full at that split instant, you really did run out of that resource.

Re-Building Index of DEFAULT Subpartition

RAMESH, July 05, 2011 - 11:49 am UTC

I pushed the dynamic SQL statements into TABLE types and ran them from there. That solved the above issue.

The new issue that has come up is as follows.

1) If the DEFAULT subpartition does not contain any data, the process only works if I:

a) REBUILD the index of the DEFAULT subpartition,
b) insert the data into it,
c) SPLIT the DEFAULT subpartition, and
d) again REBUILD the index of the corresponding SOURCESYSTEM subpartition.

Otherwise I get a "NO MORE DATA TO READ FROM SOCKET" error.

2) If the DEFAULT subpartition already contains data, what is the procedure to SPLIT it for the corresponding SOURCESYSTEM?




Tom Kyte
July 05, 2011 - 4:10 pm UTC

1) please contact support for something like that.

2) what do you mean by splitting the default? If you have a list partition with a default, you cannot add any new list subpartitions - you have to empty the default list partition (effectively, not have one).
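
For what it's worth, a sketch of the split itself, with entirely hypothetical partition and value names - carving a new source system out of a DEFAULT list subpartition:

alter table sales
  split subpartition p2011_jan_default values ('NEWSYS')
  into ( subpartition p2011_jan_newsys,
         subpartition p2011_jan_default );
-- rows for 'NEWSYS' move to the new subpartition; everything else
-- stays in the remaining DEFAULT subpartition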

one execution

Manoel, August 03, 2011 - 2:41 pm UTC


Tom,

I have one heavy and expensive SQL:

Example:

Select c1
,c2
from (lots of tables and views here.

On the first run, it takes 6 seconds.

When I try to execute it again, it no longer runs...

But

-> if I flush the shared_pool (alter system flush shared_pool)
and execute again, it takes the same 6 seconds.

-> If I execute the same query again, changing something slightly, like

Select c1
,c2--
from (lots of tables and views here.)

or just:

Select c1
, c2
from (lots of tables and views here.)


6 Seconds again !!! But it runs only once !!!


- I have good and valid statistics.

- I tried changing cursor_sharing; that didn't work.

Do you have any idea why this is happening?

--------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

--/ Parameters

1492  cursor_sharing      2  EXACT       EXACT
 579  memory_max_target   6  3355443200  3200M
 578  memory_target       6  3355443200  3200M
 144  shared_pool_size    6  0           0
--------------------

Regards,
Manoel

Tom Kyte
August 04, 2011 - 7:56 am UTC

define "it no longer runs", that doesn't compute to me.


if you are saying "it hangs, never returns", use another session to see what it is waiting on - v$session_wait, v$session_event would be useful to look at.
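
A sketch of the check from a second session (the sid of the "hung" session comes from v$session):

select sid, event, state, seconds_in_wait
  from v$session_wait
 where sid = :sid_of_hung_session;
-- the EVENT column names what it is blocked on (an enqueue, a lock,
-- I/O, and so on); v$session_event shows the cumulative picture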

Flusing a single statement in 9i

A reader, November 25, 2013 - 10:52 pm UTC

Hi Tom,

Thanks for your inputs as always.

I have a 9.2.0.7 system. I am working on a POC where I need to remove just one statement/object/cursor from the shared pool instead of flushing everything.

How can I achieve this on a 9i-based system?