Shared Memory Problem (unable to allocate shared memory ...)
Question and Answer

Connor McDonald

Thanks for the question, Khalid.

Asked: April 20, 2001 - 5:37 pm UTC

Last updated: October 12, 2020 - 3:47 am UTC

Version: 8.1.5

Viewed 100K+ times!

You Asked

Tom:

My Development Environment:
I have an NT development box with 256MB on it. There are two Oracle databases on it, with at least 2 identical schema owners on each. Each schema has on average about 10 PL/SQL packages of about 1000 lines each. I have separated the two databases as D (Development) and Q (for our own internal testing). There is also a JRun server and a Netscape Enterprise Server running on the same box.

My SGA:
Total System Global Area 24899532 bytes
Fixed Size 65484 bytes
Variable Size 7983104 bytes
Database Buffers 16777216 bytes
Redo Buffers 73728 bytes

Some of the pfile Parameters:
db_block_size integer 8192
db_block_buffers integer 2048
shared_pool_size string 5000000
shared_pool_reserved_size string 250000

Problems Encountered:
Our developers are using servlet applications, and our testers are banging against the database pretty hard.
I have to restart the server at least once or twice every day due to shared memory errors such as the ones below. Our QC and Production environments will be dedicated Oracle servers running on Solaris boxes with much better resource allocations.

Any help on how to eliminate/mitigate this problem will be greatly appreciated.

Thanks,
Khalid


Sample Errors:
***********************************************
Error: SQLException java.sql.SQLException: ORA-04031: unable to allocate 4096 bytes of shared memory ("shared pool","GF","PL/SQL
MPCODE","BAMIMA: Bam Buffer") ORA-06508: PL/SQL: could not find program unit being called ORA-06512: at line 1 begin :1 := gfx.insrt_coach('Ron
Jennings',5,'04172001','','','',''); end;

Error: SQLException java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1 ORA-04031: unable to allocate 4216 bytes of shared memory
("shared pool","select con#,obj#,rcon#,enabl...","sga heap","library cache")
INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp, suggestion_type_fl,
name, email, business_unit_key) select 'test suggestion. RJ 04/19/01', max(suggestion_id)+1, sysdate, 'T', 'Ron Jennings' , 'rj@rwd.com', '5' from gf_suggestion
*******************************************************







and Tom said...

#1 -- you are NOT USING BIND VARIABLES, for example I clearly see:

INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp,
suggestion_type_fl,
name, email, business_unit_key) select 'test suggestion. RJ 04/19/01',
max(suggestion_id)+1, sysdate, 'T', 'Ron Jennings' , 'rj@rwd.com', '5' from
gf_suggestion


that MUST BE rewritten as:

INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp,
suggestion_type_fl,
name, email, business_unit_key) select :1, max(suggestion_id)+1, sysdate, :2, :3, :4, :5 from gf_suggestion

Or you will not go ANYWHERE with this application. Bind variables are SO MASSIVELY important -- I cannot in any way shape or form OVERSTATE their importance.

Same with the PLSQL call I see there:

begin :1 := gfx.insrt_coach('Ron
Jennings',5,'04172001','','','',''); end;

That MUST be coded as:

begin :1 := gfx.insrt_coach(:2,:3,:4,:5,:6,:7,:8); end;



If you do not fix this, your application is doomed to utter and total failure from day one.


#2 -- I see the query:


INSERT INTO gfx_suggestion (suggestion, suggestion_id, timestamp,
suggestion_type_fl,
name, email, business_unit_key) select 'test suggestion. RJ 04/19/01',
max(suggestion_id)+1, sysdate, 'T', 'Ron Jennings' , 'rj@rwd.com', '5' from
gf_suggestion


and shudder - that is frightening!!! What happens when two people insert at about the same time? --- answer -- both get the SAME suggestion_id. That is a horrible programming practice your "database" developers have (I quote "database" because I don't think they are database developers, I think they are java programmers trying to use a database -- these code snippets must just be the tiny tip of a really big iceberg).

They NEED to read about sequences. That insert should be inserting:

... SUGGESTION_SEQ.NEXTVAL, ....

A sequence is a highly scalable, non-blocking ID generator.



Java supports bind variables, your developers must start using prepared statements and bind inputs into it. If you want your system to ultimately scale beyond say about 3 or 4 users -- you will do this right now (fix the code). It is not something to think about, it is something you MUST do. A side effect of this - your shared pool problems will pretty much disappear. That is the root cause.
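Putting the two fixes together -- bind variables via a PreparedStatement, and a sequence instead of MAX(id)+1 -- a JDBC sketch might look like the following. This is illustrative only, not the poster's actual code: the table and column names come from the statements above, and SUGGESTION_SEQ is the hypothetical sequence name (created separately with CREATE SEQUENCE suggestion_seq).

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch only: assumes the gfx_suggestion table from the question and a
// hypothetical SUGGESTION_SEQ sequence.
public class SuggestionDao {

    // Every literal becomes a ? placeholder, and MAX(suggestion_id)+1
    // becomes a sequence call, so concurrent inserts cannot collide.
    public static final String INSERT_SQL =
        "INSERT INTO gfx_suggestion " +
        "(suggestion, suggestion_id, timestamp, suggestion_type_fl, " +
        "name, email, business_unit_key) " +
        "VALUES (?, suggestion_seq.NEXTVAL, SYSDATE, ?, ?, ?, ?)";

    public static void insertSuggestion(Connection conn, String text,
            String typeFl, String name, String email, String buKey)
            throws SQLException {
        // The statement text never changes, so the shared pool compiles it
        // once (one hard parse) and every later call is a cheap soft parse.
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            ps.setString(1, text);
            ps.setString(2, typeFl);
            ps.setString(3, name);
            ps.setString(4, email);
            ps.setString(5, buKey);
            ps.executeUpdate();
        }
    }
}
```

Because the SQL text is identical on every call, the shared pool sees one statement executed many times instead of thousands of unique, never-reused statements.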

If I was to write a book on how to build 'non scalable
applications in Oracle', this would be the first and last
chapter. This is a major cause of performance issues and a
major inhibitor of scalability in Oracle. The way the Oracle
shared pool (a very important shared memory data structure)
operates is predicated on developers using bind variables. If
you want to make Oracle run slowly, even grind to a total halt,
just refuse to use them.

For those that do not know, a bind variable is a placeholder in
a query. For example, to retrieve the record for employee 1234,
I can either query:

SELECT * FROM EMP WHERE EMPNO = 1234;

Or I can query:

SELECT * FROM EMP WHERE EMPNO = :empno;

And supply the value for :empno at query execution time. The
difference between the two is huge, dramatic even. In a typical
system, you would query up employee 1234 maybe once and then
never again. Later, you would query up employee 456, then 789
and so on. If you use literals (constants) in the query, each
and every query is a brand new query, never before seen by the
database. It will have to be parsed, qualified (names
resolved), security checked, optimized and so on. In short, it
will be compiled. Every unique statement you execute will have
to be compiled every time. This would be like shipping your
customers Java source code and, before calling a method in a
class, you would invoke the Java compiler, compile the class,
run the method and then throw away the byte code. The next time
you wanted to execute the same exact method, you would do the
same thing: compile it, run it and throw it away. Executing
SQL statements without bind variables is very much the same
thing as compiling a subroutine before each and every call. You
would never consider doing that in your application; you should
never consider doing that to your database either.

Not only will parsing a statement like that (also called a HARD
parse) consume many more resources and time than reusing an
already parsed query plan found in the shared pool (called a
SOFT parse), it will limit your scalability. We can see it will
obviously take longer, what is not obvious is that it will
reduce the number of users your system can support. This is due
in part to the increased resource consumption but mainly to the
latching mechanisms for the library cache where these plans are
stored after they are compiled. When you hard parse a query, we
will spend more time holding certain low level serialization
devices called latches. These latches protect the data
structures in the shared memory of Oracle from concurrent
modifications by two sessions (else Oracle would end up with
corrupt data structures) and from someone reading a data
structure while it is being modified. The longer and more often
we have to latch these data structures, the longer the queue to
get these latches will become. Similar to the MTS architecture
issue described above, we will start to monopolize scarce
resources. Your machine may appear to be underutilized at times,
and yet everyone in the database is running very slowly. This
will be because someone is holding one of these serialization
mechanisms and a line is forming. You are not able to run at
top speed.

The second query above on the other hand, the one with :empno,
is compiled once and stored in the shared pool (the library
cache). Everyone who submits the same exact query that
references the same object will use that compiled plan (the SOFT
parse). You will compile your subroutine once and use it over
and over again. This is very efficient and the way the database
intends you to do your work. Not only will you use less
resources (a SOFT parse is much less resource intensive), but
you will hold latches for less time and need them less
frequently. This increases your performance and greatly
increases your scalability.

Just to give you a tiny idea of how huge of a difference this
can make performance wise, you only need to run a very small
test:

tkyte@TKYTE816> alter system flush shared_pool;
System altered.

tkyte@TKYTE816> declare
2 type rc is ref cursor;
3 l_rc rc;
4 l_dummy all_objects.object_name%type;
5 l_start number default dbms_utility.get_time;
6 begin
7 for i in 1 .. 1000
8 loop
9 open l_rc for
10 'select object_name
11 from all_objects
12 where object_id = ' || i;
13 fetch l_rc into l_dummy;
14 close l_rc;
15 end loop;
16 dbms_output.put_line
17 ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
18 ' seconds...' );
19 end;
20 /
14.86 seconds...

PL/SQL procedure successfully completed.

tkyte@TKYTE816> declare
2 type rc is ref cursor;
3 l_rc rc;
4 l_dummy all_objects.object_name%type;
5 l_start number default dbms_utility.get_time;
6 begin
7 for i in 1 .. 1000
8 loop
9 open l_rc for
10 'select object_name
11 from all_objects
12 where object_id = :x'
13 using i;
14 fetch l_rc into l_dummy;
15 close l_rc;
16 end loop;
17 dbms_output.put_line
18 ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
19 ' seconds...' );
20 end;
21 /
1.27 seconds...

PL/SQL procedure successfully completed.

That is pretty dramatic. The fact is that not only does this
execute much faster (we spent more time PARSING our queries than
actually EXECUTING them!) it will let more users use your system
simultaneously.






Rating (200 ratings)


Comments

Sean Bu, April 21, 2001 - 12:10 pm UTC


K, April 21, 2001 - 3:36 pm UTC


Pauline, April 21, 2001 - 4:27 pm UTC


very explicit...

Marc, April 23, 2001 - 3:25 am UTC


A reader, April 23, 2001 - 1:38 pm UTC


Just excellent

Hakan Dombekci, April 24, 2001 - 6:50 am UTC


Helpful

David, June 07, 2001 - 1:49 pm UTC


Importance of using bind variable

Kalpesh, February 15, 2002 - 3:20 pm UTC

Tom,
This is excellent. Very useful for developers as well as for DBAs to find out the performance bottlenecks.
Simply Great!


A reader, February 27, 2002 - 4:31 am UTC


Thank you so much

Giga, April 19, 2002 - 3:47 pm UTC

Now I really know where our problems are.
I have the exact same problems, and looking at the
SQL code of our application(s) after reading your answer makes me shudder.


Funny I've been trying to tell some of our developers the same thing......

DBA, June 20, 2002 - 9:19 am UTC

We have been hitting this error on one of our systems, our software developers have suggested everything under the sun that could be the issue......

.......I looked into V$SQLAREA and couldn't believe what I saw. I was well impressed with your response if only I could have been so brutally honest!!!

Tom Kyte
June 20, 2002 - 11:57 am UTC

And to automagically see the problem statements see:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1163635055580

(and lower the 100 in the last query to say 10 or so over time, as you correct the problem)
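The linked script works by normalizing away literals so that statements differing only in their constants group together. A rough Java sketch of the same idea (the class and method names here are made up, and the regexes are deliberately simplistic):

```java
import java.util.regex.Pattern;

// Rough sketch of the "remove_constants" idea from the link above:
// replace string and numeric literals with fixed placeholders so that
// statements differing only in literals collapse to one canonical form.
public class SqlNormalizer {
    private static final Pattern STRING_LITERAL = Pattern.compile("'[^']*'");
    private static final Pattern NUMBER_LITERAL = Pattern.compile("\\b\\d+\\b");

    public static String normalize(String sql) {
        String s = STRING_LITERAL.matcher(sql).replaceAll(":s"); // 'abc' -> :s
        return NUMBER_LITERAL.matcher(s).replaceAll(":n");       // 1234  -> :n
    }
}
```

Grouping v$sqlarea.sql_text by normalize(...) and counting each group shows which "different" statements are really one statement issued over and over with literals -- the prime candidates for bind variables.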

dynamic queries will cause ORA-04031 ?

I Singh, August 21, 2002 - 1:10 pm UTC

Tom,
Am I understanding it correctly that if dynamic queries are executed on the database then it will eventually give me an ORA-04031? Is there a threshold number of objects that Oracle ages out of the shared pool (based on the LRU principle) before it determines that it cannot fit the new object in?

Thanks in advance.

Tom Kyte
August 21, 2002 - 1:29 pm UTC

No, you misunderstand.

If you use bind variables -- as suggested, there will never be a problem.

If you do not use bind variables and you flood the server with hundreds/thousands of unique queries you will

o run dog slow
o consume a ton of RAM (and maybe run out)
o not scale beyond a handful of users, if that

among other really bad side effects.


All SQL in Oracle is dynamic SQL under the covers. The question here is "did you do it right with bind variables or not". These guys (above) did it wrong.

do java PreparedStatements work like queries with bind variables

SC, October 09, 2002 - 3:51 pm UTC

If I have a query in a PreparedStatement that reads
String sql = "select a,b,c from table where d=? and e=?";
and create a PreparedStatement with it and subsequently fill the ? with the setXXX() method. Does it work like a query with bind variables or do I need to write the query in a yet different way?


Tom Kyte
October 09, 2002 - 5:18 pm UTC

that is how to do bind variables in JDBC, the ? are bind placeholders.


So yes, this is all you need to do to use binds in JDBC

A reader, May 29, 2003 - 5:10 pm UTC

I have a question related to this issue.

One thing I couldn’t understand is:

1. When Oracle needs to parse new SQL it requires memory in the shared pool. It will check for a free chunk.
2. If it doesn’t find free space it will free up occupied space using the LRU method.

Then why are we getting this ORA-04031 error? I have a database with a 250 MB shared pool. The database is running in dedicated server mode.

If Oracle couldn’t find a 4 KB contiguous block then it could free up unused memory.


Tom Kyte
May 29, 2003 - 7:04 pm UTC

because there weren't enough freeable chunks in the amount of space it decided to look through.

bind variables, bind variables, its all about bind variables.

Thanks for you comments

A reader, May 29, 2003 - 7:35 pm UTC

When is Oracle going to release these objects? After the end of each transaction?

I have a job which consists of a sequence of calls to 10 packages.

Begin
Call package 1;
...
Call package 10;
End;

In this scenario, is Oracle going to release locks and free up the memory after the completion of each package call?

How do I find out whether the problem is with not using “bind variables” or with an insufficient shared pool setting?




Tom Kyte
May 30, 2003 - 7:12 am UTC

what "objects"? what "locks"?

A reader, May 29, 2003 - 8:10 pm UTC

I wrote the following test case to simulate the problem (ORA-04031)

SQL> declare
  2  aSql varchar2(1000);
  3  mycount number;
  4  begin
  5      for i in 1..100000 loop
  6              aSql := 'select count(*) from t1 where id = ' || i;
  7              execute immediate aSql into mycount;
  8      end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:04:45.00

I thought it was going to fail with an ORA-04031 error, but it didn’t. I observed v$sql while the program was running; V$SQL kept flushing the select statements until i=80000. If this is the case, why is Oracle not able to flush the statements on my production db?
 

Tom Kyte
May 30, 2003 - 7:25 am UTC

because an execute immediate is

open
execute
close <<<====


on your production system, people are actually still using the things, see. So they are not freeable. It is because people are using them, and on production you have lots of people.

maybe something like:

create or replace procedure recursive( p_n in number default 1 )
as
    type rc is ref cursor;
    l_cursor rc;
begin
    -- the cursor is deliberately never closed: each level leaks one
    open l_cursor for 'select * from all_objects where object_id = ' || p_n;
    recursive( p_n+1 );
exception
    when others then
        dbms_output.put_line( 'error ' || p_n );
        dbms_output.put_line( sqlerrm );
        return;
end;
/
exec recursive


you might get some error other than 4031 (like "cannot find called unit" or "max cursors" or something else) as it is quite a disastrous thing to do

(warning -- don't run this on your real systems!)

ORA-04031: unable to allocate 64 bytes of shared memory ("large pool")

A reader, June 12, 2003 - 12:39 pm UTC

hi tom

I am getting the following error on our system:

-------
2003-06-12 04:13:33,120 [Job Dispatcher Thread] ERROR conn.ConnectionService
verifyRepositoryEx.324 - Invalid Connection Pool. ERROR = ORA-04031: unable
to allocate 64 bytes of shared memory ("large pool","unknown object","session heap","trigger condition node")"


What do the various components of this error message
mean (e.g. "trigger condition node")?

My doubt is whether this indicates a "bind variable problem". I was skeptical, as it says "large pool", which
I thought is "used by MTS for session memory, by
Parallel Execution for message buffers..." from your
book.
I would think that if it had mentioned "shared pool" then
it would most likely indicate a bind variable issue.


What do you think the problem could be?

Thanx!


Tom Kyte
June 12, 2003 - 1:21 pm UTC

this is likely happening because your database is being connected to via shared server (MTS) and we are unable to allocate the initial UGA portion (the large pool isn't big enough)

large pool is insufficiently sized for the number of users you desire to connect.

the database is not using MTS

A reader, June 12, 2003 - 1:45 pm UTC

I found this out by connecting via sqlplus and running

ps -ef | grep oracleora901

Every session creates a new process, which implies dedicated server mode.

(There must be an easier way to detect dedicated/shared mode)



Tom Kyte
June 12, 2003 - 3:49 pm UTC

in your application

select server from v$session where sid = ( select sid from v$mystat where rownum=1)

or -- well -- ask your DBA how it is setup. They are the ones that need to be looking at this issue in the first place with you.

thanx!

A reader, June 12, 2003 - 4:01 pm UTC


MG, June 30, 2003 - 3:25 am UTC

Hi Tom,

I have called some PL/SQL packages from my application. It worked fine for a long time.

Then recently, I transferred this database to another network.

Then after some time I got the following error:
ORA-04031

I am confused. If this is because of not properly using bind variables, then why did this not arise earlier (before transferring the database)?

I use the same init.ora file without changing the sizes of the memory allocations.

So could you please tell me, is there any other reason that can cause this? Or why does this happen?







Tom Kyte
June 30, 2003 - 7:05 am UTC

More users, different load -- I would guess, since we can only guess given the amount of information.

bind variables

Santhanam, June 30, 2003 - 7:26 am UTC

Excellent

Excellent Analysis

Lakshmi Jeedigunta, July 01, 2003 - 1:59 pm UTC


Excellent Explanations

Srini, July 10, 2003 - 5:23 am UTC


large pool and dedicated server mode

A reader, July 10, 2003 - 6:23 pm UTC

If we are in dedicated server mode, how can we get
"ORA-04031: unable to allocate 64 bytes of shared memory ("large pool"..."?
I thought the large pool was relevant only for MTS mode.
How does dedicated mode use this piece of memory?
(assuming we are not running parallel queries)

Thank you!!

Tom Kyte
July 11, 2003 - 7:31 am UTC

Tell me what you were doing -- was it logging on?

No, an app was running

A reader, July 11, 2003 - 11:30 am UTC

It occurs every now and then in our application.

This occurs in 9.0.1.0. The solution apparently in
our group is to move to 9.0.1.4. Actually I posted
this problem in the same thread (June 12). The error
message is:

"2003-06-12 04:13:33,120 [Job Dispatcher Thread] ERROR conn.ConnectionService verifyRepositoryEx.324 - Invalid Connection Pool. ERROR = ORA-04031: unable
to allocate 64 bytes of shared memory ("large pool","unknown object","session heap","trigger condition node")"

I am sure this is dedicated server mode - then how
come the large pool is getting used? Also, could you kindly
explain what the various components of the above error
mean (e.g. what does "trigger condition node" mean)?

Thanx!


Tom Kyte
July 11, 2003 - 12:26 pm UTC

the tags in there just represent where in the code it was, they are not meaningful to you and me, only to support.


you can use lsnrctl status or lsnrctl services to see if you're doing shared or dedicated (or just query SERVER from v$session)

thanx it is in dedicated mode!

A reader, July 11, 2003 - 12:36 pm UTC

I ran the select...

SQL> set echo on
SQL> select server from v$session where sid=(select sid from v$mystat where rownum=1);

DEDICATED                                                                       

SQL> spool off 

Tom Kyte
July 11, 2003 - 12:47 pm UTC

not so fast

that shows a single dedicated server session.

select server, count(*) from v$session group by server;

Sort Area size

George Frederick, July 11, 2003 - 12:59 pm UTC

I have been an Oracle DBA for many years. Tuning is a complex science (or art), indeed.

But if you asked me: "If you had to improve the performance of a database (not the application) by tuning only one thing, what would you do ?"

I would answer "Increase the sort_area_size (or the PGA_AGGREGATE_TARGET in 9i)". On the whole, it seems to me the most constraining factor of the databases I have touched, especially because of lots of implicit sorts carried out dynamically due to inefficient design. Some systems speed up thereafter.

I might expect a slam from you, but it's my experience, Tom.

Tom Kyte
July 11, 2003 - 1:04 pm UTC

well, in my next book I write:

Note: Most implementations simply leave SORT_AREA_SIZE and HASH_AREA_SIZE at the defaults, which is generally in the 64KB range. This is far too small in most cases. In general, a value of 512KB to 1MB should be considered. If you review a Statspack report and see many sorts to disk, reevaluate your sort area size.

As of Oracle9i, I personally consider these parameters obsolete when using dedicated server. You should be using WORKAREA_SIZE_POLICY = AUTO and PGA_AGGREGATE_TARGET settings instead, and forget about fine-tuning and tweaking these parameters.



So I concur -- however, increasing the sort area size will not do anything to fix the lack of bind variables ;)
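For reference, a minimal init.ora sketch of the 9i-style settings Tom mentions. The target value is purely illustrative; size it for your own system:

```
# 9i, dedicated server: let Oracle manage sort/hash work areas
workarea_size_policy = auto
pga_aggregate_target = 512M    # illustrative value only
```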


here you go

A reader, July 11, 2003 - 1:04 pm UTC

"not so fast that shows a single dedicated server session..."

whoa - now I am even more confused!!


SQL> set echo on
SQL> select server, count(*) from v$session group by server;

DEDICATED         15                                                            
NONE               8                                                            

SQL> spool off 

Tom Kyte
July 11, 2003 - 1:38 pm UTC

8 of those are shared servers that are currently inactive.....



thanx Tom!

A reader, July 11, 2003 - 1:42 pm UTC

"8 of those are shared servers that are currently inactive....."...
I take it that "NONE" implies shared server connection.

That is strange - this is my development environment,
a database that I installed (Enterprise Edition 9.0.1.0).
I did not do anything specific to have those
shared server connections (unless our application's
middle-tier connection pooling has anything to do
with it - I don't think so!).
I installed 9i and started it up normally...
Any ideas where these shared server connections
are coming from!!

Thank you!!!





Tom Kyte
July 11, 2003 - 2:57 pm UTC

mts/shared server is typically configured by default -- the database registers itself with the listener and broadcasts the fact that these connections are available

Excellent

Brian, July 11, 2003 - 5:30 pm UTC

I think I'm finally getting it.

Bind variables are an issue when writing dynamic SQL such as native dynamic SQL (EXECUTE IMMEDIATE statements) or when using DBMS_SQL. The predicate (WHERE clause) should always use bind variables, written (e.g.) as :variable_name in native dynamic SQL.

In SQL*Plus, I can optionally declare a bind variable using:

variable [variable_name] [type]

before my statements.

In dynamic sql, I have to follow up the open for statement such as:

open l_cursor for 'select . . . where x = :variable_name' using variable;

with 'using . . .' where the 'using variable' is one of the parameters of the function or procedure.

However, if I am just writing static sql such as:

create or replace procedure test(input number)
as
begin
for x in (select . . . where x = input)
loop
...
end loop;
end;

PL/SQL will automatically bind my variables.

I hope I have finally figured this out.

And this allows me to better scale my applications by preventing excessive parsing.

Right?

Tom Kyte
July 12, 2003 - 9:08 am UTC


exactly -- and static sql in plsql is the very best way to ensure that. it has

o least parses
o most binds
o tightest code path.

shared pool/ bind variables

DS, July 11, 2003 - 6:09 pm UTC

Tom, I'm a little confused about the original error message in this post and your response. I thought that if the application used no bind variables then the shared pool would get filled up with non-reusable queries, so performance would go down (understandable). So when the shared pool becomes full it will just age out the oldest query - so why were they getting that error? Was it because all available RAM on the machine was being used, or because so many queries were filling up the shared pool that it couldn't quickly get more in?

hope that makes sense

Thanks

Tom Kyte
July 12, 2003 - 9:15 am UTC

the bigger the shared pool, the harder it is to manage and the more fragmented it can become. The algorithms don't even attempt to compact the entire thing when searching for memory (sort of a serialization issue there). When not using binds, a smaller shared pool is actually nicer than a big fat one.

but, the answer is -- use binds.

thanx!

A reader, July 14, 2003 - 4:37 pm UTC

"mts/shared server is typically configured by default -- the database registers
itself with the listener and broadcasts the fact that these connections are
available "

Thanx but the Oracle installation guide says this:

"General Purpose configuration: If you select this option with an Enterprise Edition or Standard Edition installation, the Database Configuration Assistant creates a preconfigured, ready-to-use, multipurpose database with the following:

Default initialization parameters using server parameter file and Automatic Undo Management

Automatic installation and configuration of Oracle Options and interMedia

Advanced Replication capabilities

****Database configuration of dedicated server mode ***

Archiving mode of NOARCHIVELOG "

Since I chose General Purpose, I was surprised to see
that in addition to dedicated servers, I also had some
MTS servers configured. Is there any way I can get rid
of the MTS server connections?

thanx!

Tom Kyte
July 15, 2003 - 1:09 am UTC

unset the mts_ init.ora parameters

getting rid of mts..

A reader, July 14, 2003 - 5:42 pm UTC

Actually, after the machine got rebooted (due to some other reason) I ran the same query again - now it shows only
dedicated sessions.

14:41:25 test@ORA901> /

DEDICATED 18

14:41:26 test@ORA901> spool off

thanx!

Tom Kyte
July 15, 2003 - 1:13 am UTC

No, that just means no one connected via mts at the point in time you looked is all...


mts?

A reader, July 15, 2003 - 11:32 am UTC

"unset the mts_ init.ora parameters "

could not find any "mts" parameters (searched for
them in the initora901.ora)

For the time being let us just ignore the topic - if I
run into this again, I can let you know..

thanx for the all the help and insight!!


("shared pool","unknown object","sga heap(1,0)",

Saradha Bavanadam, July 15, 2003 - 4:24 pm UTC

Tom,

We migrated to an Oracle9i instance this weekend;
the database has been up and running since Sunday and the
application users are working on it. Today (Tuesday)
we received a shared memory problem and were unable to log on to
the database.

ORA-04031: unable to allocate 26200 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)",

I used the following query to find the SQLs that are not using bind variables. And I did find that many dynamic SQLs coming from Java are not using bind variables.

But my question is: these dynamic SQLs were there before, and
we did not get this problem before the 9i migration.

Has anything changed in 9i, or did any parameter get
changed, that we have to take care of during the migration
that is relevant to this problem?


SELECT substr(sql_text,1,30) "SQL",
count(*) ,
sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,30)
HAVING count(*) > 30
ORDER BY 2
/

Thanks
Saradha

Tom Kyte
July 15, 2003 - 4:33 pm UTC

do you still have your 8i instance on your test machines to check against?

no, we do not tend to change init.oras on you during the migration -- we might suggest some, but we do not change them ....

that sql isn't good for finding non-bind variable statements at all.


when you tested the applications -- before upgrading your production box -- what happened in that environment?

but why after the migration ?

Saradha Bavanandam, July 15, 2003 - 4:56 pm UTC

Tom,

I do have production support instances which are not
yet migrated to 9i and are still running 8i, but the parameter
sizes are different compared to the production machine.

I identified the SQLs that are not using BIND variables
using different queries and conveyed this to the corresponding people to take care of.

They are asking me: these dynamic SQLs did not create a problem before and have been working fine for the last year in the Oracle8i instance, so why, all of a sudden after the migration, is this problem coming up?

They did agree to change those dynamic SQLs, but they are still looking for the answer as to why it is happening now.

Thanks
Saradha

Tom Kyte
July 15, 2003 - 5:46 pm UTC

without a controlled reproducible environment to test with - all i can say is "i don't know", insufficient data to diagnose.

ORA-06508: PL/SQL: could not find program unit being called

Pushparaj Arulappan, August 18, 2003 - 4:31 pm UTC

Tom,

I am receiving the following error from my session whenever a particular dependent program is re-compiled.

ORA-06508: PL/SQL: could not find program unit being called.

I have a package (PACK1) which calls another package (PACK2). The two packages are dependent on many other stored programs and packages also.

Assume initially all the programs are compiled and valid.

I open a SQLPLUS session (session 1) and execute the PACK1 package and it executes successfully.

While this session (session 1) is still open, I open another SQLPLUS session (session 2) and re-compile PACK2.

Now I go back to session 1 and try to execute the PACK1 package again, and this time it raises this error.

ORA-06508: PL/SQL: could not find program unit being called.

But if I log out and reconnect session 1 then it is not a problem. I could not repeat this error with any other test programs very similar to this; it is happening only to this program. These are relatively large programs, about 2000 lines each, and these two call many other programs also.

The package PACK1 does call another package say
PACK3 and re-compiling PACK3 is not producing any error.

We have this situation in our environment where the WEB application server initially opens a minimum number of sessions and keeps it in the connection pool. So whenever we recompile this program, we will have to re-start the web server in order to have a fresh connection.

Please give us some clue.

Thanks
Pushparaj

Tom Kyte
August 18, 2003 - 8:37 pm UTC

pack1 maintains a state, doesn't it?

Is there any workaround?

Pushparaj Arulappan, August 19, 2003 - 10:45 am UTC

Tom,

Just for testing I did the following.
After I re-compiled package PACK2 in session 2,
I have executed DBMS_SESSION.RESET_PACKAGE in session 1
and then I executed PACK1 package from session 1 and
the problem did not occur.

I may not want to implement this workaround in production,
but why am I not able to reproduce this error with other
similar test cases? Also, this problem does not occur if I
recompile some other dependent program of the PACK1 package.

Thanks
Pushparaj


Tom Kyte
August 19, 2003 - 6:03 pm UTC

because the package HAS A STATE.

you recompile it - and we blow away this particular session state that YOUR session is dependent on.

using dbms_session.reset_package -- resets ALL PACKAGE states, as if you had just logged in.

this is not an error, it would be an error to silently wipe out your package variables and not tell you about it!

A reader, August 19, 2003 - 1:50 pm UTC

Very interesting scenario,

Tom could you please shed some light over this.

Tom Kyte
August 19, 2003 - 6:11 pm UTC

put a light over your monitor!

A reader, August 20, 2003 - 9:43 am UTC

But Tom,

Why is this not occurring in the case of PACK3 above? He says that when PACK3 is recompiled and he returns to the session in which he was using PACK1, this problem does not occur.

Tom Kyte
August 21, 2003 - 8:16 am UTC

because the compilation of pack3's body does not invalidate pack1 (that is the beauty of packages) and pack3 apparently does not have any state to maintain.

packages MAY have a state.
packages do not HAVE to have a state.

ora 4031

A reader, August 20, 2003 - 12:37 pm UTC

Tom, why am I supposed to get an ORA-4031? I thought the worst that would happen would be my statements aging out. How do they "crowd" the shared pool to the extent of creating an out-of-space situation? Wouldn't the LRU statements simply age out?

PS: Where is that script of yours that checks for candidates for binds ? I cannot find it.

Thanks

Tom Kyte
August 21, 2003 - 4:58 pm UTC

search this site for

remove_constants
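For readers who cannot find the script: the idea behind remove_constants is to collapse literals so that statements differing only in literal values group together when you aggregate v$sqlarea, exposing bind variable candidates. A rough sketch of that normalization in Java (my own illustration, not Tom's actual script):

```java
import java.util.regex.Pattern;

public class RemoveConstants {

    // Collapse quoted strings first, then bare numbers, so statements that
    // differ only in literal values normalize to the same text.
    private static final Pattern STRINGS = Pattern.compile("'[^']*'");
    private static final Pattern NUMBERS = Pattern.compile("\\b\\d+\\b");

    static String removeConstants(String sql) {
        String s = STRINGS.matcher(sql).replaceAll("'#'");
        return NUMBERS.matcher(s).replaceAll("@");
    }

    public static void main(String[] args) {
        // These two statements would occupy two shared pool entries, but
        // normalize to one "family" -- a candidate for a bind variable.
        System.out.println(removeConstants("select * from coil_status where coil_nbr = '55'"));
        System.out.println(removeConstants("select * from coil_status where coil_nbr = '56'"));
    }
}
```

Group by the normalized text and count: any family with a high count is SQL that should be using binds.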




Shared Memory Problem (unable to allocate shared memory ...)

Cong Tang, August 21, 2003 - 5:01 am UTC

I experienced the same problem with Oracle8i 8.1.5. It was shared pool latch contention due to literal/unshared SQL. Modifying existing applications to use bind variables is expensive, especially with complex applications. Flushing the shared pool every hour and pinning packages can be a valid workaround. Upgrading to Oracle9i and setting cursor_sharing = SIMILAR will help.

Tom Kyte
August 21, 2003 - 6:08 pm UTC

what is expensive is doing it wrong in the first place, no?

think about the massive amounts of hardware you could have saved had you a single developer who understood or had experience with oracle?

think about the massive amounts of time you could have saved....



answer

Bargh Kara, August 21, 2003 - 8:18 am UTC

dear tom, you seem to have forgotten to answer the reader asking about "ora 4031" above. Could you answer it please?

I AM STILL TRYING TO UNDERSTAND THIS BIND VARIABLES

Paul, August 21, 2003 - 2:53 pm UTC

Tom, can you tell me how to bind variables in VB? Say I have the following select statement:

select * from coil_status where
coil_nbr ='" & input_coil_nbr & "'

is that the way you bind a statement? can you tell me how to bind statements during updates too?
thanks

Tom Kyte
August 21, 2003 - 7:16 pm UTC

that is not a bind variable. that results in a unique query for each execution.

you have queries like


select * from coil_status where coil_nbr = '55';
select * from coil_status where coil_nbr = '56';
select * from coil_status where coil_nbr = '57';

and so on. bad that you quoted a number, worse that you didn't bind.

the only VB i have is here:

</code> http://asktom.oracle.com/~tkyte/ResultSets/index.html <code>

can someone who programs vb post a small snippet using binds? it keeps coming up over and over

when I tried to compile vb on my linux desktops it keeps giving me an error "sorry, only open systems code allowed, try again"



Using Binds in VC

Pinto, August 22, 2003 - 4:09 am UTC

Sorry, I don't know VB, but I have seen a lot of questions about how it's done in VC.

common code
HSTMT stmtHandle;
SQLRETURN retcode;
SQLINTEGER k = 0;  /* sample input value for the function example below */
CDatabase *pDB = new CDatabase;
pDB->OpenEx(...);
char* SQLStmt = new char[2000];
SQLTCHAR szTaskID[10] = "Hello";
SQLINTEGER iTaskTimeStamp = k;
SQLINTEGER ivalSizeChar = SQL_NTS;
SQLINTEGER ivalSizeInt = 0;
int nTaskIDLength = strlen((const char*)szTaskID);
SQLAllocStmt(pDB->m_hdbc,&stmtHandle);
/*****************
CODE FOR SQLs
*****************/

sprintf(SQLStmt,"select c1 from t1 where c1 = ?");
retcode = SQLBindParameter(stmtHandle, 1,SQL_PARAM_INPUT,SQL_C_TCHAR,SQL_CHAR,nTaskIDLength,0,szTaskID,nTaskIDLength,&ivalSizeChar);

retcode = SQLExecDirect(stmtHandle,(unsigned char*)SQLStmt,SQL_NTS);

/**************************
Code for Stored procedures
***************************/
strcpy(SQLStmt,"call ins_proc(?)");

retcode = SQLBindParameter(stmtHandle,1,SQL_PARAM_INPUT,SQL_C_TCHAR,SQL_CHAR,nTaskIDLength,0,szTaskID,nTaskIDLength,&ivalSizeChar);
retcode = SQLExecDirect(stmtHandle,(unsigned char*)SQLStmt,SQL_NTS);

/**************************
Code for functions
**************************/

strcpy(SQLStmt,"begin ?:=ins_fun(?); end;");

retcode = SQLBindParameter(stmtHandle,1,SQL_PARAM_OUTPUT,SQL_C_SLONG,SQL_INTEGER,0,0,&iTaskTimeStamp,0,&ivalSizeInt);
retcode = SQLBindParameter(stmtHandle,2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,0, 0, &k, 0,&ivalSizeInt);
retcode = SQLExecDirect(stmtHandle,(unsigned char*)SQLStmt,SQL_NTS);

Auto generation

Ismail, August 22, 2003 - 4:53 am UTC

When auto-generating numbers, if I use a sequence, I am facing the following problem.

- If the update ultimately fails (for one reason or another), the data will be rolled back, but the sequence increment does not get rolled back. Hence number skipping happens.

- As you said, the max + 1 logic also does not work properly.

Please suggest the best method.



Tom Kyte
August 22, 2003 - 8:49 am UTC



use the SEQUENCE and ignore the fact that they are non-sequential. so what?




unable to determine the problem

Anurag, August 22, 2003 - 5:09 am UTC

Tom, I have a testing database (Oracle 8i) on a PIII 500, Windows NT, 128 MB RAM. After a normal startup the system becomes extremely slow even when no activity is performed. I calculated the hit ratio of the library cache; it was 90.67. I increased shared_pool_size by 1 MB and then bounced the database. The database showed great improvement, with the hit ratio considerably increasing from 97.89 to 99.99. Later, after an hour with no activity performed, the hit ratio was again 92.34. I am not performing any activity, yet I still see the ratios change. What is the mystery, and how do I get rid of it?

thanks in adv.

Tom Kyte
August 22, 2003 - 8:58 am UTC

let's see, running a database on a machine with just about enough memory to run the operating system.

thinking that cache hit ratios are the important thing...

adding a measly 1mb to the shared pool...

not really describing what is happening in the system, what the system needs to do, how many users it needs to support, how it is currently using resources....


128m of ram is less ram than you need to run windoze and word at the same time - let alone windows and a database.

cache hit ratios are not meaningful in themselves (see hotsos.com and get "are you still using cache hit ratios")

If you want your cache hit ratio to go waaaayyy up - i can fix that easily.  Here see:

ops$tkyte@ORA920> SELECT round( (1 -(phy.value / (cur.value + con.value)))*100,2) "CACHE HIT RATIO"
  2    FROM v$sysstat cur, v$sysstat con, v$sysstat phy
  3   WHERE cur.name = 'db block gets'
  4     AND con.name = 'consistent gets'
  5     AND phy.name = 'physical reads'
  6  /

CACHE HIT RATIO
---------------
          90.03

ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace procedure set_cache_hit_ratio_to( p_ratio in number )
  2  as
  3      l_cache_hit number;
  4  begin
  5      loop
  6          SELECT round( (1 -(phy.value / (cur.value + con.value)))*100,2) "CACHE HIT RATIO"
  7            into l_cache_hit
  8            FROM v$sysstat cur, v$sysstat con, v$sysstat phy
  9           WHERE cur.name = 'db block gets'
 10             AND con.name = 'consistent gets'
 11             AND phy.name = 'physical reads';
 12
 13          exit when ( l_cache_hit >= p_ratio );
 14
 15          for i in 1 .. 100
 16          loop
 17              for x in ( select d1.dummy from dual d1, dual, dual )
 18              loop
 19                  null;
 20              end loop;
 21          end loop;
 22      end loop;
 23  end;
 24  /

Procedure created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> exec set_cache_hit_ratio_to( 92 );

PL/SQL procedure successfully completed.

ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> SELECT round( (1 -(phy.value / (cur.value + con.value)))*100,2) "CACHE HIT RATIO"
  2    FROM v$sysstat cur, v$sysstat con, v$sysstat phy
  3   WHERE cur.name = 'db block gets'
  4     AND con.name = 'consistent gets'
  5     AND phy.name = 'physical reads'
  6  /

CACHE HIT RATIO
---------------
             92

ops$tkyte@ORA920>

<b>whoo-hoo.  we could get close to 100% if we like.</b>

bummer that my CPU was pegged to do it, but my cache hit - perfection!


you probably have insufficient ram to run excel and word at the same time with good performance, let alone a database. 

if your SGA is over say 50-60meg, you are paging like wild.  look and see if your disk drive light isn't blinking like crazy. 

How they've to pass parameter

Ashiq Shamsudeen, August 22, 2003 - 9:00 am UTC

Hi Tom,

Can you show me a small example of how to pass a parameter to a stored procedure from the front end (say the front end is Java or ASP) -- code which uses bind variables and code which does not?
I don't have much knowledge of Java, so if the front-end (Java) guys code it wrongly (in the sense of not using bind variables), I can ask them to change the code to use bind variables.
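A minimal sketch of the difference in Java (JDBC); the table and column names (emp, empno, ename) and the connection handling are placeholders, not anything from a real schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashSet;
import java.util.Set;

public class BindExample {

    // NOT a bind variable: the value is glued into the SQL text, so every
    // distinct id produces a distinct statement the server must hard parse.
    static String withoutBind(int id) {
        return "select ename from emp where empno = " + id;
    }

    // A bind variable: one SQL text for all ids; the value travels separately.
    static String withBind() {
        return "select ename from emp where empno = ?";
    }

    // How the bound form is executed with JDBC (not run here -- it needs a
    // live connection; emp/empno are placeholder names).
    static String lookup(Connection conn, int id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(withBind())) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    public static void main(String[] args) {
        Set<String> literal = new HashSet<>();
        Set<String> bound = new HashSet<>();
        for (int i = 1; i <= 1000; i++) {
            literal.add(withoutBind(i));
            bound.add(withBind());
        }
        // 1000 distinct statements versus 1 shared statement
        System.out.println(literal.size() + " vs " + bound.size()); // prints 1000 vs 1
    }
}
```

The telltale sign of the non-bound style in any front-end code is string concatenation of a value into the SQL text; the bound style always carries a `?` placeholder and a separate set call.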



Auto gen - sequence

Ismail, August 22, 2003 - 10:22 am UTC

The "auto-generated number" should be a sequence, because for financial transactions the numbers MUST be in sequence; otherwise auditors will question the accuracy, and there will not be any control for the users to verify the number of transactions (assuming no skipping happens).

Hence sequential order is very much required.

Tom Kyte
August 22, 2003 - 1:51 pm UTC

and in the paper based days, what did you do eh?

when you spilt coffee on the forms and had to throw away some?

why, you voided them.



if you want 100% sequential numbers with no gaps nowhere -- you will serialize. plain, simple, to the point. you will scale not at all.


serialization, one at a time -- and even then, I'll betcha you get gaps.


to maintain the number of transactions is as simple as "count(*)"

if you like, just issue "update t set transid = rownum" every night. where transid is some magic column you keep for "auditors" to make them happy. It would be as relevant and meaningful and accurate as serialization all day long!!!

RE: can someone who programs vb post a small snippet using binds?

Mark A. Williams, August 26, 2003 - 12:26 pm UTC

Tom,

I am not a VB programmer, but here is a small snippet I managed to cobble together. See embedded comments. Also, wrapping of the text posted may be an issue.

Thanks,

Mark

==========

Module VBOraBind
Sub Main()
'// This is a very simple example that illustrates how to use a bind variable in
'// a Visual Basic .NET program with Oracle. I am not a VB programmer, so I offer
'// no guarantee that this is the best way to use a bind variable in VB, only that
'// it works on my test system.
'// As you can see it has no error handling code or other features that would be
'// included in real production code.

'// In my opinion, a well-architected Oracle application would be using packages
'// in the server, and the client would simply be making calls to those packages,
'// and not submitting SQL statements as we do in this example.

'// Created with Visual Studio .NET (but the concepts are the same in Visual Studio 6)
'// Requires a reference to the ADODB library
'// version 2.7 was used in this example

'// Uses the 'HR' sample schema (employees table) supplied with the Oracle9i database

'// Declare and instantiate a connection object
Dim cnOracle As ADODB.Connection
cnOracle = New ADODB.Connection()

'// Declare and instantiate a command object
Dim cmdOracle As ADODB.Command
cmdOracle = New ADODB.Command()

'// Declare and instantiate a parameter object
Dim prmOracle As ADODB.Parameter
prmOracle = New ADODB.Parameter()

'// Declare and instantiate a recordset object
Dim rsOracle As ADODB.Recordset
rsOracle = New ADODB.Recordset()

'// Open a DSN-less connection
'// You must adjust values for your environment
cnOracle.Open("Driver=Oracle in OraHome92;server=cds9i;uid=hr;pwd=demo")

'// This is the SQL statement for the example
'// The '?' is the placeholder for the bind variable
cmdOracle.CommandText = "select first_name || ' ' || last_name from employees where employee_id = ?"
cmdOracle.CommandType = ADODB.CommandTypeEnum.adCmdText

'// Create the parameter object, assign it a value, and append it to the
'// parameters collection of the command object
'// This is the bind variable that will be passed to Oracle
prmOracle = cmdOracle.CreateParameter("employee_id", ADODB.DataTypeEnum.adInteger, ADODB.ParameterDirectionEnum.adParamInput)
prmOracle.Value = 200
cmdOracle.Parameters.Append(prmOracle)

'// Assign the connection object to which this command object belongs
cmdOracle.ActiveConnection = cnOracle

'// Execute the command and assign the recordset
rsOracle = cmdOracle.Execute

'// Simply print out the value from the recordset
While Not (rsOracle.EOF)
Debug.WriteLine(rsOracle.Fields(0).Value)
rsOracle.MoveNext()
End While

'// Explicitly close objects
rsOracle.Close()
cnOracle.Close()

'// Release objects
prmOracle = Nothing
rsOracle = Nothing
cmdOracle = Nothing
cnOracle = Nothing
End Sub
End Module

Tom Kyte
August 26, 2003 - 1:39 pm UTC

thanks, that was perfect.

Ora-04031 on Dedicated Server

Vivek Sharma, August 27, 2003 - 8:08 am UTC

Dear Tom,

I really appreciate the solutions you provide for each and every Oracle-related problem; the solution for ORA-04031 was excellent. But I have a question. I have a dedicated server configuration with a shared pool size of 324 MB; shared_pool_reserved_size defaults to 5% (not set in init.ora), about 17 MB. The database version is 8.1.6.3. I have checked the statspack report and the soft parse ratio is well above 98%. Then why am I facing ORA-04031? The error which I receive is:

Tue Aug 26 02:41:20 2003
Errors in file /depdb/oracle/DEPDB/dump/bdump/depdb_snp0_1775.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 512 bytes of shared memory ("shared pool","STANDARDSYS","PL/SQL DIANA","PAR.C:parapt:Page")

We have not kept any PL/SQL in the shared pool reserved area, but it seems that Oracle by default keeps some there. I have checked in 2-3 databases and found around 25 objects in the reserved area.

What do you recommend. The application is using Bind Variables. What should I do ?

A) Increase the shared pool size by around 50MB (We have already increased it from 250 to 320 MB when we last faced this error).

or

B) Keep some objects in reserved area

or

C) Reduce the size of the shared pool reserved area to 0.

What are your recommendations? Please help me resolve this issue, as it is a production 24x7 database, and when we face this error we need to bounce the database or else users are not able to connect to the DB.

Regards



Tom Kyte
August 27, 2003 - 5:48 pm UTC

first -- not sure what you are saying about the reserved area. basically that is an area reserved for large allocations -- 512 bytes won't go there.

how about option d) get onto supported versions

you can have a soft parse ratio of 98% and still be hard parsing like mad. that means 2 out of every 100 parses is "hard".

have you tried flushing the shared pool versus killing the server?

Shared pool allocation

Krish Ullur, August 27, 2003 - 9:50 am UTC

I read (somewhere) that shared pool memory is allocated in chunks of contiguous 4K bytes.
But an 'exec dbms_shared_pool.sizes(0);' shows objects with 1(K) as well. Could you please throw light on this unit of memory allocation in shared pool?

Thx,

KU


Tom Kyte
August 27, 2003 - 5:49 pm UTC

smaller than 4k -- not just 4k, smaller than 4k

Is 4k the minimum allocation in shared pool?

Reader, August 27, 2003 - 8:08 pm UTC

Tom, How does oracle allocate memory for "small" objects? What does it mean by 4k chunks? Thanks.

Tom Kyte
August 28, 2003 - 7:42 am UTC

4k is not the minimum.

it is a size that plsql tries to chunk code into. so rather than allocating 125k of contiguous space, it would try to allocate ceil(125/4) = 32 separate 4k chunks. this helps prevent fragmentation of the shared pool, where a large object would not be able to find sufficient contiguous space. it is easier to find fifty 4k chunks than it is to find a single 200k contiguous one.
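That chunking arithmetic can be sketched as follows (my own illustration of the ceiling division; the 4k chunk size is from the answer above, not a number this code discovers):

```java
public class ChunkMath {

    // Number of 4 KB chunks needed to hold a code unit of the given size
    // (ceiling division: any partial chunk still costs a whole chunk).
    static long chunks(long bytes) {
        final long CHUNK = 4 * 1024;
        return (bytes + CHUNK - 1) / CHUNK;
    }

    public static void main(String[] args) {
        // A 125 KB package becomes 32 chunks of 4 KB each, so the shared
        // pool only has to find 4 KB of contiguous free space at a time.
        System.out.println(chunks(125 * 1024)); // prints 32
    }
}
```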

Can this happen when bind variable is not used?

Mike Costa, September 25, 2003 - 2:25 pm UTC

We had this happen twice on our Oracle DB.
The Oracle DB simply becomes completely unresponsive.
We cannot even log on as sysdba. We sent Oracle support
the traces and were told nothing special was found.

Could this be the result of a lack of bind variables? We do
have queries that don't use bind variables, and they will be fixed
soon. But I wonder if that could be the possible cause of
the crash.

Tom Kyte
September 25, 2003 - 11:32 pm UTC

no and support should not (in my experience would not) have stopped there.

this doesn't sound like a "crash", you had a hang, they should have been able to help you diagnose this.

ORA-04031: unable to allocate 4096 bytes of shared memory

Kom, February 02, 2004 - 6:19 pm UTC

Hi Tom,
We sometimes get this error ORA-04031. We have a mixed set of SQL ( Dynamic, Bind and Stored procedures ).

I am confused with the following terminologies.
Shared pool, bind variables, and dynamic SQL.
You always say that to avoid shared pool problems one should use bind variables.
1) Will using bind variables completely remove the shared pool problem?
2) What if I am running 100,000+ unique SQL statements which cannot be bound (ad hoc)?
3) Does Oracle take a different route in handling bound SQL compared to dynamic SQL?
4) My understanding is that if I use dynamic SQL the shared pool will crash in 2 days, but if I use bind variables the shared memory will eventually crash after 100 days, considering that my application has an ad-hoc query facility. How do I avoid this completely?
Regards,
Kom

Tom Kyte
February 03, 2004 - 7:40 am UTC

1) as long as the dynamic sql is generated in the same fashion (eg: same case, same bind variable namings, etc).

2) make your shared pool small to allow for aging to take place in a reasonable fashion. do not make your shared pool HUGE to accommodate this.

3) nope. they are the same.

4) umm, no -- that would not be true. HTML/DB makes EXTENSIVE use of dynamic sql --- asktom is built on HTML/DB -- the site runs for months between restarts. Many months.





Shared pool follow up

kom, February 03, 2004 - 10:31 am UTC

you said :
>>> 2) make your shared pool small to allow for aging to take place in a reasonable
fashion. do not make your shared pool HUGE to accommodate this

1) How do I arrive at a shared pool size? Is there any calculation to determine the shared pool size?
2) How do I make sure aging is taking place? Is there a parameter or setting which needs to be configured?

Thanks in advance,

Tom Kyte
February 03, 2004 - 1:45 pm UTC

start at 50meg and use statspack to watch the utilization

How to use bind variable when query is stored in Database?

Shrikant Gavas., February 09, 2004 - 10:20 am UTC

Hi Tom,
Thanks Alot for your precious time.

We are preparing the WHERE clause for a query at run time (in a different session) and storing it in a table. We then execute this SQL using the DBMS_SQL package, and since bind variables are not used in the query, we are facing shared pool problems: similar queries are being parsed again and again.
Basically we are finding a COUNT from a fixed table using the stored WHERE_CLAUSE.

How to use bind variables in such cases ? Or is there any alternative for this. Please Please HELP.

Regards

Shrikant Gavas

Tom Kyte
February 09, 2004 - 10:39 am UTC

alter session set cursor_sharing=force;

A reader, February 23, 2004 - 12:58 pm UTC

Tom,

In the 9i concepts guide, chapter 7:
<doc>
SGA Starting Address
The SHARED_MEMORY_ADDRESS and HI_SHARED_MEMORY_ADDRESS parameters specify the SGA's starting address at runtime. These parameters are rarely used. For 64-bit platforms, HI_SHARED_MEMORY_ADDRESS specifies the high order 32 bits of the 64-bit address.

</doc>

I understand it states it is rarely used, but when and why?

Thanks.

Tom Kyte
February 23, 2004 - 4:46 pm UTC

This controls the attach address of the SGA on certain platforms only. This can be useful if you get 'shmat()' type errors like ORA-7307 with an Invalid Argument error.

Bind variables rock!

Duke Ganote, March 21, 2004 - 5:02 pm UTC

Tom-- I think I'm "getting it" on bind variables (it was never quite clear to me since I was embedding most of my SQL in PL/SQL for ETL, which did most of the work for me). I took your earlier 2 examples with dynamic SQL and included a 3rd static SQL rendition.

I also included counts from V$SQL, per your question on the V$SQL* views at </code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:385254862843, <code>

The output is in the next 3 lines, and the code follows.
Thank you!

OUTPUT RESULTS:
Dynamic SQL, no bind vars took 53.6 seconds...394 SQL statements in shared pool
Dynamic SQL, w/ bind vars took 1.37 seconds...33 SQL statements in shared pool
_Static SQL, _autobinding took 1.29 seconds...32 SQL statements in shared pool

CODE:

GANOTEDP\dganote> ed
Wrote file afiedt.buf

1 declare
2 type rc is ref cursor;
3 l_rc rc;
4 l_dummy all_objects.object_name%type;
5 l_start number default dbms_utility.get_time;
6 begin
7 execute immediate 'alter system flush shared_pool';
8 -- FIRST ITERATION: NO BIND VARIABLE
9 for i in 1 .. 1000 loop
10 open l_rc for
11 'select object_name
12 from all_objects
13 where object_id = ' || i; -- not a bind variable
14 fetch l_rc into l_dummy;
15 close l_rc;
16 end loop;
17 for r in ( select count(1) cnt from v$sqlarea ) loop
18 dbms_output.put_line
19 ( 'Dynamic SQL, no bind vars took '||round( (dbms_utility.get_time-l_start)/100, 2 ) ||
20 ' seconds...'||r.cnt||' SQL statements in shared pool' );
21 end loop;
22 execute immediate 'alter system flush shared_pool';
23 -- SECOND ITERATION: BIND VARIABLE
24 l_start := dbms_utility.get_time;
25 for i in 1 .. 1000
26 loop
27 open l_rc for
28 'select object_name
29 from all_objects
30 where object_id = :x' -- bind variable
31 using i;
32 fetch l_rc into l_dummy;
33 close l_rc;
34 end loop;
35 for r in ( select count(1) cnt from v$sqlarea ) loop
36 dbms_output.put_line
37 ( 'Dynamic SQL, w/ bind vars took '||round( (dbms_utility.get_time-l_start)/100, 2 ) ||
38 ' seconds...'||r.cnt||' SQL statements in shared pool');
39 end loop;
40 -- THIRD ITERATION: AUTOBIND
41 execute immediate 'alter system flush shared_pool';
42 l_start := dbms_utility.get_time;
43 for i in 1 .. 1000
44 loop
45 for rec in ( select object_name
46 from all_objects
47 where object_id = i ) loop
48 l_dummy := rec.object_name;
49 end loop;
50 end loop;
51 for r in ( select count(1) cnt from v$sqlarea ) loop
52 dbms_output.put_line
53 ( '_Static SQL, _autobinding took '||round( (dbms_utility.get_time-l_start)/100, 2 ) ||
54 ' seconds...'||r.cnt||' SQL statements in shared pool' );
55 end loop;
56* end;
GANOTEDP\dganote> /
Dynamic SQL, no bind vars took 53.6 seconds...394 SQL statements in shared pool
Dynamic SQL, w/ bind vars took 1.37 seconds...33 SQL statements in shared pool
_Static SQL, _autobinding took 1.29 seconds...32 SQL statements in shared pool

PL/SQL procedure successfully completed.

Elapsed: 00:01:01.06

memory bug

A reader, March 27, 2004 - 7:25 am UTC

Sorry, it looks to me that I was on the wrong topic the first time.
Here is a link to my first question:
</code> http://asktom.oracle.com/pls/ask/f?p=4950:61:2240880001785671395::::P61_ID:1288301763279 <code>

and text:
******************************
--------------------------------------------------------------------------------
Cursor bug March 26, 2004
Reviewer: Fan from Germany

Hi Tom,

Recently our database has crashed having problem with memory (Oracle v9.2).
Oracle support gave us an answer telling that we have hit oracle bug when a
query (cursor) in DB procedure accessing partitioned table use function new_time
and cursor_sharing is enabled => raise ORA-600.
But to me this look quite strange. To describe:
cursor_sharing is set to exact, query itself does not use function new_time
(procedure does) and system crashes only when in query where clause we use one
particular column (in that case query have a cost of 5000).
If we execute the same query under the same conditions but use another column
(in this case query have cost of 500 and return the same result set) system does
not crash.
In both cases query (actually cursor) use bind variables.
I have checked shared pool in second case and query has been parsed once and
executed 120 times. In first case is the same but only difference is
cost...query takes longer to finish.
This is confusing to me. If the bug were consistent, the system would crash in both
cases.
Does the query execution time play some role in this bug, and could you please
give me some more information about this Oracle bug if you have any?
I do not know whether my explanation was clear, but if you need more
detailed info, I can post it.


Followup:
so, your database isn't crashing, a session is getting an ora-600 with
cursor_sharing...


but you say you are not using that -- so obviously -- support has either

a) misdiagnosed the issue
b) failed to explain that the bug can happen due to other reasons.


without having access to the actual bug number and tar related information --
i'd be hard pressed to really "say more"

*******************************************

to continue:

Actually, the database is effectively crashing, since no one is able to log on to the database after running this procedure.
Here is original text that we got from Oracle support:

*****************************************
The stack trace information shows that you are hitting the Bug 2416262


Bug Details:
===========
ORA-600 [qbaopn1] / memory corruption can occur when using the
NEW_TIME function with CURSOR_SHARING enabled when the query involves
a partitioned table.

Fixed-Releases: 9205 A000
*****************************************

I do not know the TAR number, since all communication with Oracle support goes through our system support, and they also have no idea what is happening on our system.

I do not expect you to give me a 100% correct answer, but any hint on which direction I should dig to find the cause will be useful. As I said, I am simply not sure that this is the cause of our problem.
Questions which come out are: for which values of cursor_sharing does this happen?
Does the query have to have, in its where clause, the column on which the table is partitioned in order for this to happen, or can it be any column from the partitioned table?
Does the new_time function have to be explicitly used in the query or not? and so on....
Any info that you have will be valuable to me regarding this bug. I do not need a straight answer, only information which will help me analyze and understand the problem.

Thanks in advance.


Tom Kyte
March 27, 2004 - 10:58 am UTC

well, this is what your "systems support personnel" were hired to do isn't it?  

IF you are in fact hitting this, there are patches for 9204, 9203 as well.  You will really have to work this *via support*.  

The symptoms in the bug are

cursor_sharing=force is set (force, probably similar as well)

Here is a test case to demonstrate.  

ops$tkyte@ORA9IR2> drop table junk3;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table junk3(col date)
  2  partition by range (col)
  3  ( partition d1 values less than (to_date('20020101','YYYYMMDD')),
  4    partition d2 values less than (maxvalue) )
  5  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set cursor_sharing=exact;
 
Session altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select new_time( to_date('01011970','ddmmyyyy')
  2  + 1/24/60/60 * 1023297820 - 800, 'GMT', 'EDT' )
  3  from dual;
 
NEW_TIME(
---------
27-MAR-00
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from junk3
  2  where col = new_time( to_date('01011970','ddmmyyyy')
  3  + 1/24/60/60 * 1023297820 - 800, 'GMT', 'EDT' );
 
no rows selected
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set cursor_sharing=force;
 
Session altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select new_time( to_date('01011970','ddmmyyyy')
  2  + 1/24/60/60 * 1023297820 - 800, 'GMT', 'EDT' )
  3  from dual t1;
 
NEW_TIME(
---------
27-MAR-00
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from junk3 t2
  2  where col = new_time( to_date('01011970','ddmmyyyy')
  3  + 1/24/60/60 * 1023297820 - 800, 'GMT', 'EDT' );
select * from junk3 t2
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel


You would see something like:


ORA-00600: internal error code, arguments: [qbaopn1], [185], [], [], [], [], [], []
Current SQL statement for this session:
select * from junk3 t2
where col = new_time( to_date(:"SYS_B_0",:"SYS_B_1")
+ :"SYS_B_2"/:"SYS_B_3"/:"SYS_B_4"/:"SYS_B_5" * :"SYS_B_6" - :"SYS_B_7", :"SYS_B_8", :"SYS_B_9" )


in the trace file for that session on the server.  If you do not -- not the same issue.

workaround is to not use cursor_sharing=force.

 

Unable to allocate while logging on

Randy, March 30, 2004 - 4:24 pm UTC

Above, someone was getting "ORA-04031" and you said:
tell me what you were doing -- was it logging on.

He wasn't, but my code does. I get:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4200 bytes of shared memory ("shared pool","unknown object","sga heap","state objects")

The code uses almost all PreparedStatements, and I'm cleaning up the rest, which are used infrequently. But it creates/uses/closes connections constantly instead of caching them or pooling them. Could this be contributing to the problem?

Thanks,
Randy

Tom Kyte
March 30, 2004 - 6:34 pm UTC

they were large pool, not shared pool

so, does your application that uses prepared statements use bind variables?
are you MTS (shared server) or dedicated?
how big is your shared pool?
how many connections do you use?

our configuration

Randy, March 30, 2004 - 11:02 pm UTC

Yes, the PreparedStatements use bind variables -- '?' instead of hard-coded values.
Dedicated server.
The shared pool size is the default, I think -- like 50M for the shared pool and 2M for the shared pool reserved size.
We use about 12 connections; often fewer than that are open.
Thanks- Randy

Tom Kyte
March 31, 2004 - 8:28 am UTC

50m could well be "too small".

the default is 8m on 32bit, 64m on 64bit -- check it out.

Already doing this--how do we deal with this one :)

selly, April 06, 2004 - 9:42 pm UTC

Tom,

We're getting a similar error--but we're already using PreparedStatements (bind variables). We've got ~2000 short-lived tables created dynamically with unique names for every one of our ETL load processes (imagine 2-3x a day--6000 or so queries of the form below a day). These tables contain intermediary data formats. We run a query ("SELECT * WHERE ROWNUM <=1" or something like this) from each of the tables automatically a number of times in our scheduler process.

Each of these queries is stored in V$SQLAREA; however, because each of them is very small (i.e., tiny explain plan, tiny compilation, etc.), they end up fragmenting the shared area. Therefore, our monitor is unable to pick up the fact that we've run out of allocatable memory -- we have MBs of allocatable memory, but it is so fragmented that there is no sizable contiguous block usable for a query.

How would you suggest modifying the monitor to prevent running out of shared memory in this case? Can we monitor the largest contiguous block? Should we autoflush regularly (what is the performance cost here?)

Tom Kyte
April 07, 2004 - 9:03 am UTC

hows about fixing the program with the bug? you know, the one that is creating about 2000 tiny tables. DDL is a horribly expensive operation -- probably takes as long to create and drop 2000 tables as it does to load the data itself

Doesn't run that slowly

selly, April 07, 2004 - 8:30 pm UTC

Creating 2000 tables only takes longer to create than to load if you're loading < 10 GB of data.

PS We're loading > 100 GB of data.

Cheers,
Selly

Tom Kyte
April 08, 2004 - 9:58 am UTC

still, I'd be looking at the logic (and >100 gig of data shouldn't take very long to load). look at the logic, creating/dropping 2000 tables just isn't "a very good idea".




rongo

selly, April 09, 2004 - 11:20 am UTC

Tom,

Well, if loading 100 GB of data won't take so long, neither does creating 1000's of tables. :)

Nonetheless, suggestion taken. We revisited the logic and this is still the right decision (of course, unless Oracle really can't handle this). However, a number of the developers on this project have done similar things and been able to get around this.

Telling me to change the design of the application isn't quite what I'm looking for from the DB guru... Plz try again.

--Selly


Tom Kyte
April 09, 2004 - 1:23 pm UTC

yes it does -- creating 1000's of tables is unnecessary and heinous overhead, not to mention what you'll be doing to your data dictionary over time.


the main way to fix things is to fix the architecture.

Give a broken, sagging bridge to a civil engineering guru and see what they say. It won't be "patch it, band-aid it", it'll be "rip that monstrosity down and do it right".

Not even being tongue in cheek, you asked -- you got answered. I cannot *fathom* (nor have I ever heard) of the need to create and drop 2000+ tables for a simple load.


I start by fixing the problem, not trying to make it limp along. (that is comp-sci 101, not even database related at this point)

Please try again

selly, April 14, 2004 - 1:16 pm UTC

Thanks for the passive-aggressive response.
Insults aside, I'm looking for help here and you're not providing it. I'll try once again.

To quote you:
I cannot *fathom* (nor have I ever heard) of the need to create and drop 2000+ tables for a simple load.

Response:
Not a simple load. ETL has teams of 10's of people both at Oracle and at Microsoft. If it were a simple thing, those people wouldn't have jobs (to clarify--they do have jobs and it's still a hard problem).
Furthermore "nor have I ever heard"--well, you have now. Please help.

To quote you:
I start by fixing the problem, not trying to make it limp along. (that is comp-sci 101, not even database related at this point)

Response:
Again, thanks for the passive-aggressive response. This is a forum for advice, not for insults. I'm sure you get paid either way, but this is why Oracle has a horrible reputation in the software world.
The problem is real, and there are a number of realistic options: flushing per app load, flushing per drop, etc.

WRT "it isn't even database related" at this point, MySQL, PostGre both work fine with the same app and their default table-types, so it is definitively at least a *database-specific* issue.

There are always corner-cases and partners of Oracle will frequently push the limits of a system such as this.

So, once again, I am asking for help, not insults. If you don't have any to offer, just don't respond--you only make yourself and Oracle look bad.

Tom Kyte
April 14, 2004 - 2:12 pm UTC

just to point out something:

Telling me to change the design of the application isn't quite what I'm looking
for from the DB guru... Plz try again.


Talk about passive aggressive? The design of the application is causing a problem. "please oh please don't throw me in the briar patch".


You are correct it is a database specific issue -- database independence is, will be, has been something that is not truly achievable for real apps unless you do it like SAP and Peoplesoft (eg: build your OWN database outside the database and just use the database to store bits and not much else).


those databases on the other hand, while not subject to this particular issue -- don't have a shared pool (hence they are not subject to this particular issue) and hence have issues in other unrelated areas (scalability for example).

What you've done in Oracle is just a really bad idea *in Oracle*. Like having a transaction spend more than a millisecond holding a lock is in sqlserver (but not so bad in Oracle).


Even in Postgres, et al., the design isn't near what it should be, could be.



there is only one solution for you: flush after the load, or use a really really really small shared pool (50 megs or less) and let it do it itself.

Thanks Tom

Selly, April 14, 2004 - 2:52 pm UTC

Well Tom,

After putting up with your demeaning insults for a couple days, I finally got an answer to half of my initial question:

How would you suggest modifying the monitor to prevent running out of shared memory in this case? Can we monitor the largest contiguous block? Should we autoflush regularly (what is the performance cost here?)

Your answer:
there is only one solution for you: flush after the load, or use a really really really small shared pool (50 megs or less) and let it do it itself.

Could you perchance trouble yourself to answer the other half? Can we modify the monitor to track contiguous space? What are the performance implications?

Your friend,
Selly

Tom Kyte
April 14, 2004 - 3:28 pm UTC

demeaning insults? where? -- you have something that is not designed for the database it is running in. I will continue to say that over and over (that's not what makes it true -- me saying it -- what makes it true is, well, its truthfulness).

I don't see demeaning here. If others think this has been demeaning, please let me know (and where) and I'll rethink it.

I would not monitor, just flush -- you know you are doing the worst thing possible, just flush -- period. flush, flush often, flush frequently during your load.

It'll kill performance, but -- what other choice do you have? (besides doing it in a fashion that doesn't require 2000+ tables to be dropped and created a couple of times a day).


The only solutions are

o architect the application differently

o to flush and flush often

o to decrease the size of the shared pool, you aren't using it after all -- why bother filling up and managing a huge structure whose purpose you are defeating by design.

No need to go into gyrations over monitoring to figure out exactly how often to flush -- just do it often during the load.
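(The flush itself is a one-liner; run it between batches during the load. Note that it invalidates cached cursors, so everything hard parses again afterwards -- that is the performance cost being discussed:)

```sql
-- requires the ALTER SYSTEM privilege
alter system flush shared_pool;
```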


Check yourself

Jeff, April 14, 2004 - 3:06 pm UTC

Selly,

I really think you've gotten the wrong idea here. I've read the whole post and you seem to be pretty defensive. I also think it's ridiculous to slam Tom when he has tried to give you an answer. Just because you didn't like the response, doesn't mean that he was being unhelpful. Tom does a great service to many, many people. Sorry that you feel like he is OBLIGATED to try and handle the problems that YOU are having with YOUR system.

You've been able to get advice (*free* I might add) from one of most knowledgeable Oracle people on the planet...

- 2 cents

RE: Oracle's reputation

Mark A. Williams, April 14, 2004 - 4:55 pm UTC

I, like Jeff, have read this post (several times over actually) and cancelled a review earlier today because I did not want to seem out of line.

I found this to be particularly amusing: "...this is why Oracle has a horrible reputation in the software world." Why? Because you need to understand it to properly utilize it to its fullest?

What I find horrible is that developers (not all developers mind you) do "stupid" stuff to the database and then want to know why Oracle is "broken". I have not yet found the '_application_does_stupid_stuff_so_oracle_must_adapt=true' init parameter.

Oracle is what it is. Properly written applications work astoundingly well. Like Tom tried to point out, what works on other systems may or may not work on Oracle, and is, in large part (I feel), irrelevant.

I am not sure what additional information or answers you are wanting. Tom has given his advice. Advice which you apparently do not like. Post on the comp.databases.oracle.server newsgroup for other opinions if you want.

- Mark

Dear readers

selly, April 14, 2004 - 7:05 pm UTC

Thanks for the feedback. I (obviously) disagree. I appreciate the fact I got a response (wrong at first, but eh, whatever)

I won't go into too much depth about reputations. Figure that out for yourself. I didn't steal $90MM from the California Board of Education.

WRT whether Oracle runs fast or not, for damn sure it does. That's why I'm using it. The fact that their memory monitor doesn't detect when the shared memory space is fragmented but not full is frustrating. That was my initial question. It was answered with "Fix the application" multiple times.

That isn't the answer. If I went into the depth of the application, you, as well, would realize that this is the correct application design (and very similar to the way that MS SQL's internal ETL application and many Oracle ETL applications work), however because they're running outside of the database, they can't use temporary tables (long story).

Alas, I won't defend myself further. Tom, thanks for the answer. Cheers.

Tom Kyte
April 15, 2004 - 8:15 am UTC

"MS SQL's internal ETL application" ouch, take the db with the LEAST in common with Oracle architecturally...

"Oracle ETL applications work" (not ones done properly -- ones that run slow, true, but not good ones)



hows about fixing the program with the bug? was about right.

Martin Burbridge, April 14, 2004 - 8:43 pm UTC

ETL is pretty common at all levels of complexity, I have never seen or heard of any application design that needed to create and drop thousands of tables to do it. Probably because, as you have already discovered they won't work on Oracle.

Selly, additional info...

Mark A. Williams, April 14, 2004 - 10:27 pm UTC

Selly,

I believe I misread (or misunderstood) your comment about Oracle's reputation. I also believe your statement about Oracle stealing $90M is a bit unfair. For starters, Oracle was not the only company involved. A company called Logicon was responsible for brokering the deal to which I believe you are referring. As reported in the Sacramento Bee, an audit done by the state determined (among other things) that "...the department took no responsibility for evaluating the proposal, which "seems inconsistent with its statutory responsibility."" and "The audit shows the three agencies involved dropped the ball." This occurred at the end of a fiscal year (I believe) so I am sure there was some pressure to "close the deal" from Oracle. However, I would have to disagree with your choice of words.

In any case, I doubt much productive would come from a continuing debate here in 'AskTom', so I suspect we will have to "agree to disagree" about this (and other topics I suspect).

Thanks,

Mark

“eh, whatever”

Gabriel, April 14, 2004 - 11:48 pm UTC

I, unlike Jeff, did not find Selly at all “defensive” … quite the contrary.

<quote>PS We're loading > 100 GB of data.</quote>

PS? 100Gb? … you don’t say!

<quote>rongo … Plz try again.</quote>

“Rongo”? … I beg your pardon?

<quote>I'm sure you get paid either way, but this is why Oracle has a horrible reputation in the software world.</quote>

Guess you got a reputation for yourself now!
Soooo … you are from the “software world” (probably the real one) … you must be feeling a bit lonely out there?

<quote>I appreciate the fact I got a response (wrong at first, but eh, whatever)</quote>

Indeed … “whatever”? … I can see the appreciation pouring out.

<quote>We revisited the logic and this is still the right decision (of course, unless Oracle really can't handle this).</quote>

There is another “unless” … did it ever cross your mind that maybe, just maybe, the wrong people had done the visiting and the revisiting and made the wrong design decision? Oh yes, I see this every day … people in position of authority making wrong decisions and subsequently doing the public self-flogging!

<quote>If I went into the depth of the application, you, as well, would realize that this is the correct application design (and very similar to the way that MS SQL's internal ETL application and many Oracle ETL applications work) … </quote>

Please don’t spare us … take us into the “depth” of your application.

“many Oracle ETL applications”? Let us see …
1. OADW? … Nope, no “short-lived tables created dynamically with unique names for every … ETL load process” there.
2. Oracle Data Mart Builder (the one from Sagent) … Nope again
3. Oracle Warehouse Builder … Nope again
Right now, there aren’t “many” Oracle ETL applications … there is just one: OWB.
As for “MS SQL's internal ETL application” … which exactly is that? … I’m not privy to MS’s internal blueprints.

<quote> … however because they're running outside of the database, they can't use temporary tables (long story)</quote>

Any application with a connection to a database can use a temporary table … any application not connected to a database cannot use a temporary table (dahh!) … what other “long story” is there?

<quote>ETL has teams of 10's of people both at Oracle and at Microsoft.</quote>
Right you say.

I hope you didn’t feel insulted … but if you did, I do apologize … and to quote you one more time … “eh, whatever”.
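(For reference, the temporary-table alternative being debated looks like this -- a sketch with a made-up table name and columns. The table is created once, ahead of time; each session sees only its own rows, and no DDL happens at load time:)

```sql
-- created ONCE by the installer/DBA, not per load
create global temporary table etl_stage
(
  id      number,
  payload varchar2(4000)
)
on commit preserve rows;  -- or ON COMMIT DELETE ROWS, per the load's needs

-- at load time each process simply inserts/selects; no create/drop
insert into etl_stage (id, payload)
select object_id, object_name from user_objects;
```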


What iff one is using ALL bind variables

Kanwar, May 05, 2004 - 9:50 pm UTC

Hi:
I am running Oracle 8.1.7 and I have started getting this error as soon as a particular job executes a PL/SQL procedure. Upon investigating the procedure with the developers, I noticed that we are using bind variables everywhere.
Any idea why this error could be cropping up? By the way, I increased my shared_pool_size by around 30% last week and the error had subsided.


Tom Kyte
May 06, 2004 - 7:26 am UTC

30% of what. (percentages are used to hide information :)

Could well be that you in fact ran out of shared pool memory, that it in fact was too small.

could be caused by many things, using shared server (MTS) without having set a large pool for UGA memory for example.

Clarifications to my previous review

Kanwar Plaha, May 19, 2004 - 2:18 am UTC

Hi Tom:
Thanks for the reply. I had increased the shared_pool_size by 30%. It is now close to 200MB (raised up from 90MB). After posting the query to you last time, I did some "Research" and decided to mark the tables in queries that were taking up most of the buffer_gets to the RECYCLE buffer pool. That seemed to quieten the db for a week or so. Now, the error has returned.
Along with the developers, I have run through the entire code and there are bind variables everywhere.
Please indicate some other areas that I can tune to get rid of the (now dreaded) ORA-04031 ...
Thanks ...
Btw, I also added buffer_pool_recycle in init.ora and set it to 1000 - which in my case would make the pool to be around 8M.

Tom Kyte
May 19, 2004 - 9:52 am UTC

shared pool != buffer cache.

buffer cache = nothing to do with ora4031.
buffer cache is not relevant here.


are you using shared or dedicated server?

have you run MY test to see if you are binding or not?

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1163635055580 <code>



Using bind variables or not

Kanwar Plaha, May 19, 2004 - 11:38 pm UTC

Yes Tom, everywhere in the code, we are using bind variables except a small anonymous PL/SQL block below:

filter_condition := 'declare tmp char(1) := ''N''; '
||'begin '
||' if ('||filter_condition||') then '
||' tmp := ''Y''; '
||' end if; '
||' :par1 := tmp; '
||'end; ';

and this shows up in the your test query for bind variables.

The application that is causing this error has audit tables which are updated each time a table is insert-ed or update-d. These audit tables get huge -- some over 1 GB. I have been adding these tables to the recycle buffer pool and have found the error vanish.

By design, the application creates a complete schema with data tables and audit tables dynamically. So, as of now, I am changing the storage clause of "rogue" tables and banishing them to the recycle pool. I have talked to the developers to add this clause in their 'create table ...' statements so each time a new audit table is created, it goes to the recycle pool. Do you think I am on the right track or is it a faux solution to ORA-04031?

By the way, you do a great job of answering each and every query!!

Here is the output from your test query
SQL_TEXT_WO_CONSTANTS
---------------------------------------------------------
DECLARE TMP CHAR(@) := '#'; BEGIN IF ('#' = '#' AND '#' = @ ) THEN TMP := '#'; END IF; :PAR@ := TMP; END;
COUNT(*)
--------
221
DECLARE TMP CHAR(@) := '#'; BEGIN IF ('#' = '#' AND '#' = @ AND '#' = @ ) THEN TMP := '#'; END IF; :PAR@ := TMP; END;
660
DECLARE TMP CHAR(@) := '#'; BEGIN IF ('#' = @ AND '#' = '#' ) THEN TMP := '#'; END IF; :PAR@ := TMP; END;
695
DECLARE TMP CHAR(@) := '#'; BEGIN IF ('#' = '#' AND '#' = @ AND '#' = @) THEN TMP := '#'; END IF; :PAR@ := TMP; END;
10430

--
Kanwar

Tom Kyte
May 20, 2004 - 10:33 am UTC

it is coincidental...

there is no reason those cannot be bound -- should NOT be bound in fact.... when they are the same.

looks like since they are all "x=y" and "a=b" and "m=n" type of stuff -- seems the "code should be doing that", there would be zero reason to be building a dynamic block to parse and compile each time - they have all of the data they need to compare -- they should just be comparing it.

you will definitely 100% want to change this routine to not even use SQL or PL/SQL at all! It should just be procedural code in the first place.
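(In other words, rather than concatenating the values into an anonymous block and parsing it, compare them directly in code -- a sketch with hypothetical variable names:)

```sql
declare
  -- the caller already has these values in hand
  l_status   varchar2(10) := 'OPEN';
  l_priority number       := 1;
  l_result   char(1)      := 'N';
begin
  -- no dynamic block, no parse, no shared pool entry per value combination
  if l_status = 'OPEN' and l_priority = 1
  then
    l_result := 'Y';
  end if;
end;
/
```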

A reader, May 21, 2004 - 12:04 pm UTC

I have an application written in Ascential ETL tool. This job reads around 350M records joins to couple of small tables and using a transformer, it writes to 3 different tables. The condition for writing to 3 different tables is determined in the transformer stage.

After running almost 90% of the job, I got the following error

Record 12777880: Rejected - Error on table RPT_FACT_RETAIL_BLEND.
ORA-04031: unable to allocate 2072 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","enqueue resources")


Can you help me out in this?

Thanks

Tom Kyte
May 21, 2004 - 1:58 pm UTC

betcha that is dog slow. sounds like it should just be a single sql statement. slow by slow (oops, meant row by row) processing is really slow.


so how big is your shared pool and ascential is generating sql with BIND VARIABLES right?

A reader, May 21, 2004 - 2:07 pm UTC

It might be a single SQL statement. I saw the log file that Ascential generated; it has written its own control file and, similar to SQL*Loader, it might be loading the data that way.

The scenario is, there are 3 oracle stages, 2 join stages and one transformer stages. After this there are 3 more oracle stages each representing a table that gets inserted.

Oracle stage1 has 350 million records, Oracle stage2 around 300,000 joins to Oracle stage3 with around 25,000 and the output of this joins to Oracle Stage1. The result of these joins are passed through a transformer. The transformer determines which record needs to go to what table.

The whole process ran for 4hrs 30 minutes before blowing the Shared pool.

The size of the shared pool is 117772160.

I don't know whether Ascential is using bind variables inside its own SQL statements. However, Ascential loads using direct load.

Tom Kyte
May 21, 2004 - 2:26 pm UTC

95m could be too small in your situation. what else is going on in the database during this time?

you can use:

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1163635055580 <code>

to see if they are binding or not.
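
(For reference, the test behind that link is roughly the following -- reproduced here as a sketch; check the linked page for the definitive version. It replaces character literals with '#' and numbers with '@' so that statements differing only in literals collapse together:)

```sql
create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function remove_constants( p_query in varchar2 )
return varchar2
as
    l_query     long;
    l_char      varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr( p_query, i, 1 );
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes )
        then
            l_query := l_query || l_char;
        end if;
    end loop;
    -- replace digits with @, then collapse runs of @ and runs of spaces
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8
    loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper( l_query );
end;
/

update t1 set sql_text_wo_constants = remove_constants( sql_text );

select sql_text_wo_constants, count(*)
  from t1
 group by sql_text_wo_constants
having count(*) > 100
 order by 2;
```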

A reader, May 21, 2004 - 2:36 pm UTC

I increased the shared pool by another 50M. The application is currently running and looks good.

As per your suggestion, I created a table t1 and the function "remove_constants", updated the table t1, and executed the query on t1. I got 0 records from the output.
The query is
select sql_text_wo_constants, count(*)
from t1
group by sql_text_wo_constants
having count(*) > 100
order by 2


What is the danger mark based on the above SQL statement? What inference can be drawn from its output?

Thanks a lot.


Tom Kyte
May 21, 2004 - 4:27 pm UTC

if you got zero records, your etl is probably making good use of bind variables (thats good), if you got output - that would be something to investigate.

shared memory Error

Praful Dhonge, June 02, 2004 - 6:27 am UTC

I am writing a complex query which has a number of joins involving at least 7 tables, using the TOAD utility. While executing this query I got the error described below. The error does not necessarily occur every time, but when it does I cannot execute any more queries, so the final solution is restarting the database. The complete description of the error is as follows:

ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4048 bytes of shared memory("shared pool","TRIGGER$","sga heap","state objects")
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4048 bytes of shared memory("shared pool","unknown object","sga heap","state objects")

Tom Kyte
June 02, 2004 - 8:46 am UTC

sort of means either

a) read above, you are not using binds in your applications and that must be corrected

or

b) your shared pool is in fact *too small*


given the amount of detail here, that's the most one could say.

Large pool problem

vinodh, July 09, 2004 - 3:05 am UTC

Tom,

I am facing a problem with the large pool. I would like to know how to flush the large pool alone, e.g. the way "alter system flush shared_pool;" flushes the shared pool.

Right now I am solving the problem by shutting down the database and restarting it. I would like to show the way I do it.

Below is what I see when I am getting the error for the large pool.
 
SQL> select pool,name,bytes from v$sgastat where name like '%free%';
 
POOL        NAME                            BYTES
----------- -------------------------- ----------
shared pool free memory                  27045760
shared pool message pool freequeue         771984
large pool  free memory                  40954752
java pool   free memory                 268435456
 
I am flushing the shared pool
 
SQL> alter system flush shared_pool;
 
System altered.
 
Checking the amount of free memory available no change in large pool
 
SQL> select pool,name,bytes from v$sgastat where name like '%free%';
 
POOL        NAME                            BYTES
----------- -------------------------- ----------
shared pool free memory                 329579400
shared pool message pool freequeue         771984
large pool  free memory                  40881656
java pool   free memory                 268435456
 
 
 
To know the total size of the SGA
select name,value from v$parameter where name like '%pool%';
 

SQL> column name format a30
SQL> column value format a25
SQL> /
 
NAME                           VALUE
------------------------------ -------------
shared_pool_size               352321536
shared_pool_reserved_size      17616076
large_pool_size                201326592
java_pool_size                 268435456
buffer_pool_keep
buffer_pool_recycle
global_context_pool_size
olap_page_pool_size            33554432
 
8 rows selected.
 
So i shutdown the database
 
sql> shutdown immediate
 
Now checking the free memory in the large pool, there is a change.
SQL> select pool,name,bytes from v$sgastat where name like '%free%';
 
POOL        NAME                                BYTES
----------- ------------------------------ ----------
shared pool free memory                     340051632
shared pool message pool freequeue             771984
large pool  free memory                     195081664
java pool   free memory                     268435456

Please can you advise me how to proceed in this case?

 

 

Tom Kyte
July 09, 2004 - 7:40 am UTC

you cannot -- nor would it even BEGIN to make a tiny bit of sense!!!!!!

do you know what the large pool is used for? how it is used? what it does?

if you do, you would realize that the only way to flush it would be "shutdown"


the large pool is used to hold UGA memory (session memory) for user sessions in shared server mode. if you flushed it -- it would be like giving all of your sessions a lobotomy -- their session would be utterly "wiped out"

the large pool is used for parallel query execution messages -- for the duration of the message only - these come and go very quickly.

the large pool is used for rman backup buffers -- again, short lived, come and go memory allocation.


So, don't even think about flushing the large pool.


You say 'i am having a problem with the large pool' but you haven't described this problem *at all*. if the problem is "unable to allocate memory" - it is quite probable that you have UNDER SIZED the large pool for your system.


If you describe your system and describe your problem, perhaps we can offer advice -- but your current approach doesn't compute.

cannot use bind variables over a db link?

Mike, July 20, 2004 - 11:09 am UTC


BEGIN :I_VAR := APP_PM_IFACE.PMBP_COST_PULLED@S0PROD(@); COMMIT; END;
487

Tom, I have found the above sql appeared 487 times in the shared pool, each differing only in the literal passed to S0PROD( ). S0PROD is the database link to the remote database S0PROD.
I have shown the developer the code not using a bind variable. He claimed that because it is going to the remote database, a bind variable cannot be used.
Is his claim correct? If not, is there a workaround so that a bind variable can be used?
thanks,
thanks,

Mike

Tom Kyte
July 20, 2004 - 8:22 pm UTC

he is dead wrong.


begin :i_var := APP_PM_IFACE.PMBP_COST_PULLED@S0PROD(:another_var); commit; end;

will work just dandy.
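(If the block is being built dynamically on the PL/SQL side, the same thing can be done with EXECUTE IMMEDIATE and a USING clause -- a sketch; the parameter name and type are assumptions:)

```sql
declare
  i_var     number;
  l_cost_id number := 42;  -- hypothetical input that was being concatenated in
begin
  execute immediate
    'begin :result := app_pm_iface.pmbp_cost_pulled@s0prod( :input ); end;'
    using out i_var, in l_cost_id;
  commit;
end;
/
```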

how to find the source code?

Mike, July 21, 2004 - 9:47 am UTC

Tom,

SELECT PROJWBS.WBS_ID, PROJWBS.PARENT_WBS_ID, PROJWBS.WBS_SHORT_NAME, PROJWBS.OBS_ID, PROJWBS.PROJ_N
FROM PROJWBS, PROJECT WHERE PROJECT.PROJ_ID=PROJWBS.PROJ_ID AND PROJWBS.PROJ_ID = @

I found the above statement has not been using the bind variables. And I tried to find out where it comes from.

sql> select name, owner from dba_source where text like
2 '%FROM PROJWBS, PROJECT%';

no rows selected

sql> select name, owner from dba_source where text like
2 lower('%FROM PROJWBS, PROJECT%');

no rows selected

sql> select name, owner from dba_source where text like
2 '% PROJWBS, PROJECT%';

no rows selected

sql> select name, owner from dba_source where text like
2 lower('% PROJWBS, PROJECT%');

no rows selected

are there other methods?


Tom Kyte
July 21, 2004 - 10:57 am UTC

it probably is not coming from PLSQL -- else it would use a bind variable very naturally.

In plsql it is IMPOSSIBLE to code with static SQL and not use bind variables.

You would have to be using dynamic sql (sql in a string) in PLSQL in order to not bind properly. And if the sql was stored in a string -- it might be constructed bit by bit, piece by piece -- meaning there is no "string in the source code" to search for.

But in all likelihood, this is coming from a program (not plsql) that is connected to and accessing the database.

Mike, July 21, 2004 - 11:51 am UTC

"
You would have to be using dynamic sql (sql in a string) in PLSQL in order to
not bind properly. And if the sql was stored in a string -- it might be
constructed bit by bit, piece by piece -- meaning there is no "string in the
source code" to search for.

But in all likelihood, this is coming from a program (not plsql) that is
connected to and accessing the database.
"
Sorry for my ignorance. Can it be fixed so that bind variables are used, if dynamic sql has to be used?


Tom Kyte
July 21, 2004 - 11:54 am UTC

absolutely -- you can bind very easily in dynamic sql.
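(A sketch of the difference, using the PROJWBS query from above -- the value is a placeholder:)

```sql
declare
  l_cnt number;
  l_id  number := 100;  -- hypothetical project id
begin
  -- BAD: literal concatenated in -- a new cursor is hard parsed per value
  -- execute immediate 'select count(*) from projwbs where proj_id = ' || l_id into l_cnt;

  -- GOOD: placeholder plus USING -- one shared cursor for every value
  execute immediate
    'select count(*) from projwbs where proj_id = :id'
    into l_cnt
    using l_id;
end;
/
```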

Mike, July 24, 2004 - 11:30 am UTC

I have identified the SQL statements which are not using bind variables. They come from a 3rd party's APIs. The following is the response from the manager of the 3rd party:
"
...

I doubt the programmers are going to put bind variables in because we run on more than Oracle - at least not imediately.

...
"

I am not sure what they run on other than Oracle. I will ask.

But to my understanding, Microsoft's SQL Server and Java should support bind variables. What else do you think cannot use bind variables?

Tom Kyte
July 24, 2004 - 1:00 pm UTC

and every database out there supports and would prefer bind variables.

there are no commercial RDBMS's that do not and would not prefer bind variables to be used.

database independence means they run slowly on one database and really poorly on all of the rest.

you might be able to use cursor_sharing=force -- if this 3rd party api logs in as a "single user" especially -- setup a logon trigger to enable it for just that user account.
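(A sketch of such a logon trigger -- the account name APP_USER is made up; substitute the schema the 3rd party app actually connects as:)

```sql
create or replace trigger app_user_logon
after logon on app_user.schema   -- fires only for that one account
begin
  execute immediate 'alter session set cursor_sharing=force';
end;
/
```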

unable to allocate shared memory ..

Anthony, July 24, 2004 - 4:41 pm UTC

Hi Tom,

Sometimes I encounter this error when importing or creating a package that is wrapped, so I have to check out / get the unwrapped package and create it instead. Any clues why I encounter this?

Regards,
NOTNA

Tom Kyte
July 24, 2004 - 9:25 pm UTC

because at the time you ran it -- you really did run out of shared memory and when you got around to getting the "source" version, you did not run out of shared memory?

we are talking about operations that are many minutes apart right?

if you tried the wrapped code right before the "checked out code" -- it probably would have worked then as well.

could be the shared pool in this case is really "too small"?

What could be the reason...

kamal Kishore, July 24, 2004 - 7:12 pm UTC

Hi Tom,
When checked for applications not using bind variables using your script (in the link above), there were no hits.

Do you know what could be the reason for the following error:

ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4104 bytes of shared memory ("shared pool","unknown object","kgl simulator ","kglsim object batch")

Thanks,


Tom Kyte
July 24, 2004 - 9:41 pm UTC

I did not get a hit on that in the support db, nor have I personally seen it....

sorry. (could it be that you really have a too-small shared pool? did you take a look at how things were sized?)

A bit more info...

kamal kishore, July 26, 2004 - 10:12 am UTC

Hi Tom,
Thanks for your response.
The shared pool is currently sized at 150M. Do you think it is too low?
Actually, the application needs to call a packaged procedure that takes a bit of time to execute, so the programmers decided to put this procedure in the DBMS_JOB queue for execution, based on the client request.

This procedure is causing the above error. This procedure calls another procedure dynamically and this dynamic procedure performs a bunch of inserts.
Upon further investigation, it was found that if these insert statements were commented out, the error went away.

It may be incidental that it happened, but do you see any reasons for this behaviour?
Thanks,

Tom Kyte
July 26, 2004 - 11:57 am UTC

what version of the database and how many jobs are you running (how many queued)

on the book page 437-439

A reader, July 27, 2004 - 4:02 pm UTC

On your book pages 437-439, you illustrated the concurrency issue with no bind variables used, for an example of 5 sessions. They all did the same DML. Will other sessions besides these 5 experience the slowness also?

5 sessions doing:
Insert into T values (#,#,)
Because of the latch mechanism, one hard parse has to wait for another hard parse to complete.

At the same time, there is another session trying to hard parse:
Select * from T where col=@;
Does that session have to wait for the hard parse of “Insert into T values (#,#,)” to complete?


Tom Kyte
July 27, 2004 - 4:11 pm UTC

they are all doing *different* DMLs -- sure, against the same table, but each DML is unique (that is the problem)

Yes, if you were inserting into t1 and I into t2 and they into t3 and we were all hard parsing our statements -- the same issue arises.

more info...

Kamal Kishore, July 28, 2004 - 6:09 am UTC

Hi Tom,
It ran on 9.2.0.1 and has since been upgraded to 9.2.0.3.
When simulating up to 100 requests from the users, we get this error. If we simulate only 15 to 20 requests, it appears to work correctly.
Could there be contention among all these queued sessions trying to insert into the same header table that might cause this error? Is there a way to avoid this?
Thanks,


Tom Kyte
July 28, 2004 - 8:27 am UTC

do you use bind variables?

A reader, August 17, 2004 - 4:15 pm UTC

We use a third-party application, and the application has been breaking from time to time with the 4031 error. I found that this is because no bind variables are being used in the application. I have shown the issue to management and the third party, and urged the third party to fix the application ASAP. The third party has been resisting the fix, with the excuse that it would need a lot of time to rewrite and test the code. They offer a temporary solution: increase the shared pool to 700M from the current 200M setting. I believe they have no intention of fixing the bind variable issue, but management wants to accept their offer. I am having great difficulty convincing management to press the third party to fix the root cause.

Any advice will be appreciated.


Tom Kyte
August 17, 2004 - 4:37 pm UTC

so, they've been here, done that -- you are not the first person to call this company.

pushing out to 700 meg is just pushing the wall back a couple of feet, giving you more room to build up speed as you....

ram smack into it.


I'd rather go *smaller*, much smaller.

and/or attempt cursor_sharing=force (if this 3rd party app also uses a single common logon name, a trigger on logon on that schema to enable it would be best, not setting it in the init.ora is preferred)
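The logon-trigger approach Tom suggests can be sketched roughly as below; the schema name APP_OWNER is a placeholder for the third-party app's common logon, not from the original thread:

```sql
-- Sketch only: APP_OWNER stands in for the 3rd party app's logon schema.
-- Enables cursor_sharing=force just for that application's sessions,
-- leaving the instance-wide (init.ora) setting untouched.
create or replace trigger app_owner_logon
after logon on database
begin
  if user = 'APP_OWNER' then
    execute immediate 'alter session set cursor_sharing=force';
  end if;
end;
/
```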

increasing sharedpool size

A reader, August 20, 2004 - 2:09 pm UTC

followed the last comment of no bind variables used.
The management has accepted the third party advice to increae the sharedpool from 200M to 300M. I have set to that value. You have advised that would conter act to the result we expect, because the sharedspool will become more in terms of the fragamentation. I would like to present to management what are the negitive impacting after the sharedpool increase as comparing the before. I have the statspack run every 15 minutes for a few months for the production. The statspack reports are avaiable for the before and the after. Please advise what I should look for in the reports to convince the management the increase of the sharedpool has been causing performance problem in the production instance.

THANK YOU SO MUCH

Tom Kyte
August 21, 2004 - 11:05 am UTC

you'll find out soon enough when you hit the wall again. it is like groundhog day -- the movie. only you'll hit the wall a little harder this time since you gave yourself more running room.

shared pool fragmentation

Raju, August 23, 2004 - 6:35 am UTC

Tom,

i read from 8i performance tuning student guide that

1."Loading large objects is the primary source of fragmentation. Users’ response time is
affected because of the large number of small objects that need to be aged out from the
shared pool to make room."

2.
"Find the anonymous PL/SQL blocks and convert
them into small anonymous PL/SQL blocks that
call packaged functions."

1. I don't understand how loading large objects causes fragmentation.


2. In what way does this avoid fragmentation?

please clarify

Tom Kyte
August 23, 2004 - 8:26 am UTC

I'm not familiar with that piece of literature, but it looks like it is quoting out-of-date material. plsql is pieced into the shared pool in bite-sized chunks of about 4k. Large objects being a problem was a version 7.x thingy.

A reader, August 24, 2004 - 8:25 am UTC

Is CURSOR_SHARING a cost parameter?

Tom Kyte
August 24, 2004 - 8:56 am UTC

no, not really

it just makes all literals go "poof" from your query, turns them into bind variables. will this affect plans? most likely, yes.
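As a quick check (a sketch; the emp table and literal values are made up), the rewritten statements are visible in v$sql with system-generated bind names:

```sql
-- After running these with cursor_sharing=force:
--   select * from emp where empno = 7900;
--   select * from emp where empno = 7902;
-- v$sql shows one shared statement of the form
--   select * from emp where empno = :"SYS_B_0"
select sql_text, executions
  from v$sql
 where sql_text like 'select * from emp where empno =%';
```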

Bind enabled ERP

Raaghid, September 10, 2004 - 11:13 am UTC

Oracle 8.1.5 / Server with 3 GB RAM

Two ERPs are running. We identified that they were simply disabling binding, so we rectified that. Performance has now improved immensely.

Now my question is: can the DB buffer size be reduced by 30%? Now that binding is enabled, memory is managed nicely (previously performance was at its worst, since each SQL was hard parsed).




Tom Kyte
September 10, 2004 - 11:15 am UTC

no one could answer that given the supplied information.

in general, unless binding results in different plans, that access less blocks -- the buffer cache (db block buffers) would be 100% unaffected by using binds.

it's shared pool size

Raaghid, September 11, 2004 - 1:03 am UTC

Sorry. I wrote db buffer when I meant shared pool size (the shared pool being where the parsed SQLs live).

Tom Kyte
September 11, 2004 - 8:19 am UTC

Actually, given that my response to a system that hard parses too much would be to downsize the shared pool, my reaction would be to think about increasing the shared pool if it was not able to hold the working set after correcting the bind bug in the code.

But given that most people do the opposite and over inflate the shared pool in an attempt to make everything fit, you might be oversized.

So, the answer would be: use statspack or whatever monitoring tool you use to review your shared pool utilization over time, and react according to that (eg: if you are constantly using 90+% of it, maybe it is the right size already. If you are using most of it constantly and still have lots of reloads, you haven't fixed the problem. If you are using 40% of it, it is too big).
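One rough way (a sketch, not a substitute for statspack) to eyeball shared pool utilization between snapshots is to watch free memory in v$sgastat:

```sql
-- Free memory in the shared pool right now; persistently large free
-- memory suggests the pool is oversized, while persistently tiny free
-- memory plus reloads suggests it is undersized or fragmented by
-- literal SQL.
select pool, name, round(bytes / 1024 / 1024, 1) free_mb
  from v$sgastat
 where pool = 'shared pool'
   and name = 'free memory';
```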

BIND VARIABLES

Turanyi, September 20, 2004 - 11:19 am UTC

Your answer about the relationship between BIND VARIABLES and the SHARED_POOL gives me a big opportunity to improve our database.

I think that Forms Builder uses BIND variables, so a query in a database block is OK. The select in the POST-QUERY trigger is OK too,
because it uses
SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM
but what about
SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM and active='Y'
How do I use bind variables in this case?
Thank you.


Tom Kyte
September 20, 2004 - 11:33 am UTC

SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM and active='Y'


that is fine -- unless you have hundreds of

SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM and active='N'
SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM and active='Z'
SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM and active='xda'
SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM and active='afdadfda'
SELECT ... INTO .. FROM ... WHERE column=:BLOCK.ITEM and active='y'
.....

out there. Your goal: make sure that the things that change in the query are BOUND.

Things that remain static for each call do not need to be (should not be) bound; they are constants.

???

Turanyi, September 21, 2004 - 4:21 am UTC

Hello,
I don't know what you mean by BOUND. Please give an example.
How do I replace a hardcoded column condition with BIND variables?

Tom Kyte
September 21, 2004 - 7:45 am UTC

"BOUND" means "TO BIND"


how to bind depends on your language. In general, a 3gl will

o prepare a statement with placeholders
o use "set" calls to set the bind variable values
o execute the statement


if you read a couple of articles on this site or have either of my books -- i spend half my life talking/showing this tiny feature that makes such a huge difference.
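In PL/SQL, for example, those prepare/set/execute steps collapse into EXECUTE IMMEDIATE ... USING; a minimal sketch (emp is just a demo table):

```sql
-- The statement text contains placeholders (:b1, :b2); the values are
-- supplied at execute time, so one shared cursor serves every call
-- regardless of the values.
declare
  l_sal   number := 1000;
  l_empno number := 7900;
begin
  execute immediate
    'update emp set sal = :b1 where empno = :b2'
    using l_sal, l_empno;
end;
/
```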

???

Turanyi, September 22, 2004 - 10:43 am UTC

I use Oracle Forms Builder (PL/SQL). How do I use BIND variables there?

I think that my problem with the shared_pool has no relation to bind variables.

Our server is Oracle9i release 9.0.1.1.1 running on Win XP.
Important parameters are there
File init.ora
###########################################
# Pools
###########################################
java_pool_size=1048576
large_pool_size=1048576
shared_pool_size=50000000

parameters

  1* select substr(NAME,1,30) NAME, substr(VALUE,1,20) VALUE from v$parameter where name like '%shared%'
SQL> /

NAME                           VALUE
------------------------------ --------------------
shared_pool_size               62914560
shared_pool_reserved_size      2516582
shared_memory_address          0
hi_shared_memory_address       0
shared_servers                 1
max_shared_servers             20
shared_server_sessions         165

  1* select substr(NAME,1,30) NAME, substr(VALUE,1,20) VALUE from v$parameter where name like '%large%'
SQL> /

NAME                           VALUE
------------------------------ --------------------
large_pool_size                1048576


freemem before run report
SQL>  select * from v$sgastat where name='free memory';

POOL        NAME                           BYTES
----------- -------------------------- ---------
shared pool free memory                 48196944
large pool  free memory                4,295E+09
java pool   free memory                    12288

when I run report I'll get message about unable to allocate shared memory ('','','','').

and freemem after report
SQL> select * from v$sgastat where name='free memory';

POOL        NAME                           BYTES
----------- -------------------------- ---------
shared pool free memory                 45226592
large pool  free memory                4,293E+09
java pool   free memory                    12288

the rowcount of sqlarea increased by about 50 (the first time).

I think there is a lot of space in the shared pool, but the report doesn't think so.
I don't understand the difference between the large pool values in init.ora (v$parameter) and v$sgastat.
I define large_pool_size=1048576 (1MB) and v$sgastat indicates 4,293E+09 = 4,293,000,000 bytes (about 4GB)

Tom Kyte
September 22, 2004 - 10:57 am UTC

unless you start doing dynamic sql -- forms is awesome at binding.


not sure what your "question" here is, especially since you've left out relevant things like the "ora-xxxxx" numbers.

???

Turanyi, September 23, 2004 - 2:41 am UTC

1: "awesome" -- you mean good?
2: I think the ORA-XXXX is clearly about allocating shared pool memory. It is ORA-4031.

Thanks a lot.

Tom Kyte
September 24, 2004 - 7:52 am UTC

1) that is what websters says, yes. "awesome" means excellent.

2) that would be a number you left out, yes.. how can something not said "be clear"???!??



search this site for

remove_constants


run that v$sql script to see if your developers are or are not using bind variables. if they are not -- make them fix it.
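A crude stand-in for the remove_constants idea (a sketch only, not Tom's actual script) is to strip the digits out of v$sql text and count how many statements collapse into the same "shape":

```sql
-- Statements differing only in numeric literals collapse to one row;
-- a high copy count means those literals should have been binds.
-- (Quoted string literals would need similar treatment for a full check.)
select sql_shape, count(*) copies
  from (select translate(sql_text, '0123456789', '##########') sql_shape
          from v$sql)
 group by sql_shape
having count(*) > 10
 order by copies desc;
```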

shared server

reader, October 19, 2004 - 4:54 pm UTC

Can resource manager be used for shared server environment? If so, anything special I need to do? Thanks.

Tom Kyte
October 19, 2004 - 8:54 pm UTC

it works user by user, as normal.

Job queue

A reader, October 26, 2004 - 10:16 pm UTC

Oracle 9.2.0.4

My shared pool size is 100M. All sessions connect using dedicated server. Large pool is left at the default. Most applications use bind variables after learning the hard way!

Questions:
1. Would the Oracle server record a message in the alert log for a session that encounters an ORA-4031?

2. Recently, I am seeing that the only session that gets an ORA-4031 (shared pool) is a job queue process that runs a job every 5 minutes. The job opens a cursor for a query across a dblink and updates some local tables. It repeats every 5 minutes.

Why would this job queue process be the only one that gets the ORA-4031? Would the fact that it is an always-running, daemon-type process have anything to do with it?

3. What is the use of v$shared_pool_advice? What does it help with? If I am reading it right, it says that no matter whether I halve or double my shared_pool_size, my cache hit ratio will stay (more or less) the same! Does this v$ view change if my apps start to use/not use bind variables?

Thanks

Tom Kyte
October 27, 2004 - 7:06 am UTC

1) very few errors are logged there -- I cannot remember if 4031 is one, but I don't think so. It would be logged for a "job queue" job though, since any error from a job queue job is put there.

2) is it really always running or is the job queue running a new copy every 5 minutes. what is the full error stack of the 4031

3) if you cut your shared pool way back, it would tell you to increase it. if your working set was larger than fit (lots of reloads), it would tell you to increase it. that it is constant would indicate it is well sized.
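For question 3, the advisory can be read directly; a sketch (column aliases are mine):

```sql
-- Each row estimates the effect of resizing the shared pool by "factor";
-- if estd_lc_time_saved barely changes across the factors, the current
-- size is adequate -- which is what the poster observed.
select shared_pool_size_for_estimate size_mb,
       shared_pool_size_factor       factor,
       estd_lc_time_saved            secs_saved
  from v$shared_pool_advice
 order by shared_pool_size_factor;
```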

A reader, October 27, 2004 - 9:43 am UTC

"2) is it really always running or is the job queue running a new copy every 5 minutes. what is the full error stack of the 4031"

It is a dbms_job scheduled to run every 5 minutes. Each run takes less than a minute.

The full error stack is

ORA-12012: error on auto execute of job 421
ORA-04031: unable to allocate 39824 bytes of shared memory ("shared pool","SELECT distinct record_id
f...","sql area","froncold: ddfnet1Describe")
ORA-06512: at "XXX.XXX", line 436
ORA-06512: at "XXX.XXX", line 574
ORA-06512: at "XXX.XXXX", line 603
ORA-06512: at line 1

No one else has complained about ORA-4031, which makes me suspect a memory leak or some other bug in 9.2.0.4 related to frequently running jobs like these. Could that be it?


A reader, October 31, 2004 - 2:44 pm UTC

Hi Tom,
We are using the Siebel CRM application against an Oracle 8.1.7.4 database. We used to get the ora-4031 error almost every month, and after the error we would bounce the database. We had 150MB of shared pool. Last week we got this error again; after that we changed the shared pool to 300MB.
Question:
Is there any way to know that we will get this error in the future? (a sql query or something that would let us know the error is imminent, or that we don't have contiguous space in the shared pool)

Siebel does not always use bind variables, and they don't even recommend setting cursor_sharing to force, so we don't have any choice.
I would appreciate any help.

Thanks

Tom Kyte
October 31, 2004 - 3:25 pm UTC

Now you'll get it harder later. Siebel is reported to have a bind variable issue or two or more.

Going smaller (seems counter intuitive) might be the right way to go. It is easier to manage and clean out a smaller pool than a larger one. You may well have moved the wall back a couple of feet -- giving you the ability to build up a bigger head of steam before you hit it.

A reader, October 31, 2004 - 3:45 pm UTC

Hi Tom,
Thanks for the response...
Can you explain how shared_pool memory is used?
I've got both of your books but can't figure out ora-4031.
Doesn't Oracle flush memory from the shared_pool when it needs to?
Is there any relationship between ora-4031 and the open_cursors parameter (Siebel suggests 1500)?

I am desperate to solve this problem.. Please help

I've got these scripts from Steve Adams (from his web site).
Are they valid for 8.1.7.4?

-- Purpose: to get an overview of chunks in the shared pool
-- For: 8.0 and higher
select
    ksmchcom contents,
    count(*) chunks,
    sum(decode(ksmchcls, 'recr', ksmchsiz)) recreatable,
    sum(decode(ksmchcls, 'freeabl', ksmchsiz)) freeable,
    sum(ksmchsiz) total
from
    sys.x_$ksmsp
where
    inst_id = userenv('Instance') and
    ksmchcls not like 'R%'
group by
    ksmchcom
/

-- Purpose: to check how much spare free memory remains in the perm chunk
-- For: 8.0 and higher
select
    avg(v.value) shared_pool_size,
    greatest(avg(s.ksmsslen) - sum(p.ksmchsiz), 0) spare_free,
    to_char(
        100 * greatest(avg(s.ksmsslen) - sum(p.ksmchsiz), 0) / avg(v.value),
        '99999'
    ) || '%' wastage
from
    sys.x_$ksmss s,
    sys.x_$ksmsp p,
    sys.v_$parameter v
where
    s.inst_id = userenv('Instance') and
    p.inst_id = userenv('Instance') and
    p.ksmchcom = 'free memory' and
    s.ksmssnam = 'free memory' and
    v.name = 'shared_pool_size'
/

-- Purpose: to check the shared pool lru stats
-- For: 8.0 and higher
select
    kghlurcr,
    kghlutrn,
    kghlufsh,
    kghluops,
    kghlunfu,
    kghlunfs
from
    sys.x_$kghlu
where
    inst_id = userenv('Instance')
/

Thanks

Tom Kyte
November 01, 2004 - 3:11 am UTC

you'll want to peek at <Note:62143.1>

the shared pool is a heap -- just like a database table or tablespace (especially a dictionary managed one - that would be the best analogy).

It is managed in small chunks -- generally about 4k in size.

When it gets chock full of stuff, we attempt to clean out what we can -- but if someone is using it, we cannot.

the goal of this freeing is to try and free a chunk big enough for what we need to do. Just like a tablespace might have 500meg free -- but is not able to allocate a table with an initial extent of 1m (even after coalescing the tablespace, combining adjacent free space), a massively fragmented shared pool (caused by tons of literal sql over days -- leaving plenty of free, but not contiguous, chunks of space over time) may not be able to allocate a relatively small chunk of memory.


as long as you install the views or change x_/v_ to just x/v, they work (steve's queries)

A reader, November 01, 2004 - 11:22 am UTC

Hi Tom,
I've gone through the note and now have a good understanding of how the shared pool works. Thanks a lot.

In your previous reply you mentioned "Going smaller (seems counter intuitive) might be the right way to go". Can you elaborate on this statement?

Thanks

Tom Kyte
November 02, 2004 - 5:45 am UTC

if you have lots of literal, unbound sql -- rather than increasing the size of the shared pool (making management even harder) consider shrinking the shared pool so that it is smaller - easier to manage and lets things get aged out much faster.

shared pool

Dave, November 02, 2004 - 3:46 pm UTC

If a statement has been parsed, but the resulting execution failed (i.e. an insert statement tried to insert null into a not null column), is this sql still in the shared pool because it was successfully parsed, or is it out because it didn't work fully?

Thanks

Tom Kyte
November 03, 2004 - 6:32 am UTC

it is there, it is parsed and ready to go.



A reader, November 03, 2004 - 9:51 am UTC

Hi Tom,

Can too high a setting of OPEN_CURSORS (we have 1500) cause the ORA-04031 error?

Thanks

Tom Kyte
November 03, 2004 - 10:33 am UTC

depends on other settings (cursor space for time maybe) - but doubtful. We can flush out most everything if need be -- the recreatable bits. And if the application is not using binds -- it won't have 1500 cursors open (such applications tend to have about 1 cursor open at a time, very sqlserver'ish). So, this would be 1500 well-bound queries -- meaning you should be able to hold your working set in the shared pool.

How to use string parameters in Sql IN clause with bind variables

Krish, November 06, 2004 - 7:38 am UTC

Hi, Tom thanks for your previous follow-ups

1. I have a simple function as below,
parameter is numeric single empid and returns numeric.

create or replace function rtnSalary(p_empno in number) 
return number
as
    v_temp number ;
begin
    select sum(sal) into v_temp from emp where empno in (p_empno) ;
    return v_temp ;
end ;

How do I modify it to accept multiple empids and return the sum of sal
from the same function?
Something like...

select sum(sal) from emp where empno in (7900, 7901, 7902) ;

I tried creating the input parameter as varchar and
passing it like '7900, 7901, 7902' but obviously
it didn't work
 
create or replace function rtnSalary(p_empno in varchar2)
return number
as
    v_temp number ;
begin
    select sum(sal) into v_temp from emp where empno in (p_empno) ;

    return v_temp ;
end ;

SQL> select rtnSalary_ID('7900, 7901') from dual ;
select rtnSalary_ID('7900, 7901') from dual
       *
ERROR at line 1:
ORA-01722: invalid number
ORA-06512: at "SCOTT.RTNSALARY_ID", line 6

2. I want to write same function which returns sum of salary
   and input is single name or multiple names and please
   show how to call it 
   
   I tried something like below but trapped in number of quotes to use ...
   
   select rtnSalary_By_Name(''MIKE', 'JOHN'') from dual

The important thing is that I want to make use of bind variables in the function.
A similar demonstration from you would be very helpful.

Tom Kyte
November 06, 2004 - 10:59 am UTC

oh, don't do that -- that function is just making your sql really really really slow.

search for

varying in list

on this site for how to do the inlist, but don't select that function from dual, just run the query!
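The "varying in list" technique referred to above can be sketched as follows; the collection type numtab and helper function str2tbl are user-written illustrations, not built-ins:

```sql
-- A collection type plus a parser function lets the whole list be ONE bind.
create or replace type numtab as table of number;
/
create or replace function str2tbl(p_str in varchar2) return numtab
as
    l_tab numtab := numtab();
    l_str long   := p_str || ',';
    l_pos number;
begin
    loop
        l_pos := instr(l_str, ',');
        exit when nvl(l_pos, 0) = 0;
        l_tab.extend;
        l_tab(l_tab.count) := to_number(trim(substr(l_str, 1, l_pos - 1)));
        l_str := substr(l_str, l_pos + 1);
    end loop;
    return l_tab;
end;
/
-- Then the entire list is passed as a single bind variable:
--   select sum(sal)
--     from emp
--    where empno in (select column_value from table(str2tbl(:list)));
```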

A reader, December 31, 2004 - 5:07 am UTC

Tom,
Will cursor_sharing=force convert the literals to bind variables?

Tom Kyte
December 31, 2004 - 11:06 am UTC

bind variable, yes.

it helps with hard parsing, which is only the first step of a 3-step "bad programmer recovery program". You are still way over-parsing, and the only good parse is NO parse.

chong, January 03, 2005 - 2:05 am UTC

Hi Tom,
I have a global setting of cursor_sharing=force.
I have the 2 statements below:
1) select * from table1 where column1 = 'V1'
2) select * from table1 where column1 = 'V2'
Does that mean that when oracle executes statement 2 it will not reparse the statement again?
Will oracle reuse the same execution plan as statement 1?

Tom Kyte
January 03, 2005 - 8:29 am UTC

no, it means that the second execution will do a SOFT parse which is better than a hard parse -- but not as good as "no parse whatsoever"

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2588723819082

every statement you submit to Oracle must be parsed -- either HARD (never saw it before) or SOFT (we saw it, we got it, here it is)

A soft parse is better than a HARD parse but billions of times worse than no parse at all.
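For completeness, the "no parse" end of the spectrum: static SQL inside a stored procedure is parsed once and its cursor is cached by the PL/SQL engine, so repeated calls re-execute without re-parsing (emp is just a demo table):

```sql
create or replace procedure raise_sal(p_empno in number, p_amt in number)
as
begin
    -- static SQL: automatically bound, parsed on first use; the open
    -- cursor is cached so subsequent calls skip the parse entirely
    update emp set sal = sal + p_amt where empno = p_empno;
end;
/
```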

chong, January 04, 2005 - 3:08 am UTC

Hi Tom,
I have seen some Oracle notes state that bind variables might not be a smart choice, because oracle doesn't have enough information to decide the best execution path. Do bind variables cause histograms to be ignored? And if I set cursor_sharing=force at the system level, will the optimizer ignore all column histogram statistics?

thanks

Tom Kyte
January 04, 2005 - 8:30 am UTC

the answer (stated billions of times here on this site) is....


if you are going to parse and execute statements many times - you NEED to use binds.

if you are running a data warehouse whereby each and every query run is going to be unique anyway -- don't use binds.

oltp -> must bind.
report run once in a blue moon -> don't have to bind.
data warehouse with ad hoc queries -> probably don't want to bind.
report run a couple of times a day -> bind it.

x$ksmsp (Headdump level 2)

Vivek Sharma, January 12, 2005 - 5:12 am UTC

Hi Tom,

Metalink Note Id.146599.1 shows a query which is also used to generate the information as a part of HEADDUMP level 2. The query is as mentioned below :

select '0 (<140)' BUCKET, KSMCHCLS, KSMCHIDX, 10*trunc(KSMCHSIZ/10) "From",
count(*) "Count" , max(KSMCHSIZ) "Biggest",
trunc(avg(KSMCHSIZ)) "AvgSize", trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ<140
and KSMCHCLS='free'
group by KSMCHCLS, KSMCHIDX, 10*trunc(KSMCHSIZ/10)
UNION ALL
select '1 (140-267)' BUCKET, KSMCHCLS, KSMCHIDX,20*trunc(KSMCHSIZ/20) ,
count(*) , max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize", trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ between 140 and 267
and KSMCHCLS='free'
group by KSMCHCLS, KSMCHIDX, 20*trunc(KSMCHSIZ/20)
UNION ALL
select '2 (268-523)' BUCKET, KSMCHCLS, KSMCHIDX, 50*trunc(KSMCHSIZ/50) ,
count(*) , max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize", trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ between 268 and 523
and KSMCHCLS='free'
group by KSMCHCLS, KSMCHIDX, 50*trunc(KSMCHSIZ/50)
UNION ALL
select '3-5 (524-4107)' BUCKET, KSMCHCLS, KSMCHIDX, 500*trunc(KSMCHSIZ/500) ,
count(*) , max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize", trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ between 524 and 4107
and KSMCHCLS='free'
group by KSMCHCLS, KSMCHIDX, 500*trunc(KSMCHSIZ/500)
UNION ALL
select '6+ (4108+)' BUCKET, KSMCHCLS, KSMCHIDX, 1000*trunc(KSMCHSIZ/1000) ,
count(*) , max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize", trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ >= 4108
and KSMCHCLS='free'
group by KSMCHCLS, KSMCHIDX, 1000*trunc(KSMCHSIZ/1000);

The Note does not explain the query. I would like to know the significance of the values used to divide KSMCHSIZ. I mean:

in the first query 10*trunc(ksmchsiz/10) is used, which is unioned to 20*trunc(ksmchsiz/20), which is further unioned to 50*trunc(ksmchsiz/50), and so on...

1. How are these values derived?
2. What are the components of shared SQL statements that can be removed from the shared pool, if required?
3. What are the chunks (objects) that are normally needed for the duration of a session or call in the shared pool?
4. What are the permanent memory chunks that are persistent in nature?

Your reply will help me understand the contents and working of Shared Pool.

Thanks and Regards
Vivek




Tom Kyte
January 12, 2005 - 9:27 am UTC

sorry -- I don't go there. You shouldn't have to either. Maybe support and development do to diagnose a problem, but I'm not going there.

Other Questions related to Shared Pool

Vivek Sharma, January 12, 2005 - 1:24 pm UTC

Hi Tom,

Thanks for your suggestion. But can you please answer my queries number 2, 3 and 4 above? They are all related to the shared pool. The queries are:

2. What are the components of shared SQL statements that can be removed from the shared pool, if required?
3. What are the chunks (objects) that are normally needed for the duration of a session or call in the shared pool?
4. What are the permanent memory chunks that are persistent in nature?

Waiting for your reply.

Thanks
Vivek


Tom Kyte
January 12, 2005 - 1:59 pm UTC

2) anything marked recreatable or not needed anymore. do I have a list of such elements? nope.

3) don't have such a list.

4) stuff the server needs from startup to shutdown.

like I said "i don't go here". This is way below where we need to be.

Oracle Consultant

Dawar, February 17, 2005 - 1:48 pm UTC

Tom,

My Oracle database version is 10.1.0.2.0.
I have installed Oracle Database last year and it was working fine.

Since yesterday morning it has been behaving strangely.
We are not able to get to our dbconsole page

or to connect to sqlplus from the command line.
Here is what happened.

oracle@time:~> sqlplus

SQL*Plus: Release 10.1.0.2.0 - Production on Thu Feb 17 10:24:57 2005

Copyright (c) 1982, 2004, Oracle.  All rights reserved.

Enter user-name: username
Enter password:
ERROR:
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4096 bytes of shared memory ("shared
pool","select obj#,type#,ctime,mtim...","Typecheck heap","kgghteInit")
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4096 bytes of shared memory ("shared
pool","select /*+ rule */ bucket_cn...","Typecheck heap","kgghteInit")


Enter user-name: sys as sysdba
Enter password:
Connected.
SQL> desc dba_objects
ERROR:
ORA-01034: ORACLE not available

SQL> startup
ORA-01081: cannot start already-running ORACLE - shut it down first

Please note:
** This issue has come up on another box as well.
** Three weeks back I set up the network cards for full duplex
mode with NO auto-negotiation and a fixed 100Mb speed on both boxes. After that, everything worked fine.

Regards,
Dawar 

Tom Kyte
February 17, 2005 - 2:29 pm UTC

startup force will get over this, but you changed something else (a NIC change would not do this). deployed a new application, something.

Multiple variables in dynamic query

whizkid, April 01, 2005 - 11:38 am UTC

This has been most informative. Now I realise why the dynamic queries are killing the database. CPU used to shoot to 100% when a particular dynamic query was fired, and it did not use bind variables. I shudder to think what it must have done to the shared pool! I have taken a list of all the dynamic queries and am trying to implement bind variables in them one by one. This has become my topmost priority. I'm stuck with a query where I'm lost on how to bind. Your help will go a long way in achieving my target!

lSmt := 'SELECT DISTINCT b.Clm_Ref Clm_Ref,
b.Claim_id Claim_Id,
c.Policy_Ref Policy_Ref,
a.Clm_Status Clm_Status,
(a.Trans_Amt) Trans_Amt,
a.Sf_Total_Type Sf_Total_Type,
f.Bank_Branch_Code BANK_BRANCH_CODE,
a.Trans_type Trans_Type,
a.Trans_Date Trans_Date,
d.Ip_Type Ip_Type,
f.Payee_Ip_No Payee_Ip_No,
d.Part_Id Part_Id,
a.Sf_No Sf_No,
a.Rsv_Amt Rsv_Amt,
OCP.Contract_Id Contract_Id,
d.Claimant Claimant,
a.Trans_No Trans_No,
a.Int_Ref Int_Ref,
a.IP_NO Ip_No,
g.Product_4Digit_Code Product_4Digit_Code,
NVL(a.Tax_Amount,0) Tax_Amount,
a.INT_REF Unique_Code
FROM CLM_TRANS a,
BJAZ_MOT_CLM_TRANS_EXTN f,
CLM_INTERESTED_PARTIES d,
CLM_BASES b,
BJAZ_POLICY_BASES_EXTN g,
CLM_POL_BASES c,
OCP_POLICY_BASES ocp
WHERE a.Trans_Type IN (''20'',''30'')
AND a.Clm_Status IN (''FA'')
AND b.Clm_Status <> ''CLOSED''
AND a.Claim_Id = b.Claim_Id
AND OCP.Contract_Id = c.Contract_Id
AND c.Contract_Id = g.Contract_Id
AND a.claim_Id = c.Claim_Id
AND a.claim_Id = d.Claim_Id
AND a.claim_Id = f.Claim_Id
AND a.Trans_No = f.Trans_No
AND a.Sf_No = f.Sf_No
AND d.Ip_No = f.Payee_IP_No
AND ocp.Top_Indicator = ''Y'' ' ;

IF p_Product_Code IS NOT NULL THEN
    lSmt := lSmt || ' AND g.product_4digit_code =' || TO_CHAR(p_Product_Code);
END IF;
IF p_Loc_Code <> '1000' THEN
    lSmt := lSmt || ' And b.Loc_Code = ' || TO_CHAR(p_Loc_Code);
END IF;
IF lLimit IS NOT NULL THEN
    lSmt := lSmt || ' And a.Trans_Base_Amt <= ' || lLimit;
END IF;
IF p_view_option <> '2' THEN
    lSmt := lSmt || ' And RowNum <= 10 ';
END IF;

IF p_order_by IS NULL THEN
    lOrderBySmt := 'b.Claim_id';
END IF;
IF p_order_by = '1' THEN
    lOrderBySmt := 'b.Claim_id';
ELSIF p_order_by = '2' THEN
    lOrderBySmt := 'b.Clm_Ref';
ELSIF p_order_by = '3' THEN
    lOrderBySmt := 'c.Policy_Ref';
ELSIF p_order_by = '4' THEN
    lOrderBySmt := 'a.Sf_Total_Type';
ELSIF p_order_by = '5' THEN
    lOrderBySmt := 'a.Trans_Date';
ELSIF p_order_by = '6' THEN
    lOrderBySmt := 'd.Ip_Type';
END IF;
lSmt := lSmt || ' Order By ' || lOrderBySmt;
IF p_asc_dsc = 'D' THEN
    lSmt := lSmt || ' desc ';
ELSIF p_asc_dsc = 'A' THEN
    lSmt := lSmt || ' asc ';
END IF;

IF LENGTH(lsmt) > 0 THEN
    OPEN c_rfa FOR lsmt;
    LOOP
        FETCH c_rfa INTO lrfa_trans_rec(idx);
        EXIT WHEN c_rfa%NOTFOUND;
        idx := idx + 1;
    END LOOP;
    CLOSE c_rfa;

In this query, we have multiple IF conditions. So where and how exactly do I bind? For example, in the first concatenation, lSmt := lSmt || ' AND g.product_4digit_code =' ||TO_CHAR(p_Product_Code);, if I rewrite it as
lSmt := lSmt || ' AND g.product_4digit_code = :x', where do I put "USING TO_CHAR(p_Product_Code)"? While opening the cursor? But I can't put all the "USING" values for the where conditions while opening the cursor, because the set of predicates is dynamic. Hope I'm not confusing you.
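One common pattern for this situation (a sketch added for illustration, not part of the original answer; the context name BIND_CTX and package bind_pkg are invented) is to reference an application context in the concatenated text, so the SQL text stays constant while the value is supplied at run time:

```sql
-- The context can only be set through the package it is bound to.
create or replace context bind_ctx using bind_pkg;

create or replace package bind_pkg as
    procedure set_val(p_name in varchar2, p_val in varchar2);
end;
/
create or replace package body bind_pkg as
    procedure set_val(p_name in varchar2, p_val in varchar2) is
    begin
        dbms_session.set_context('BIND_CTX', p_name, p_val);
    end;
end;
/

-- Then, in place of concatenating the literal:
--   IF p_Product_Code IS NOT NULL THEN
--     bind_pkg.set_val('PRODUCT_CODE', TO_CHAR(p_Product_Code));
--     lSmt := lSmt || ' AND g.product_4digit_code = ' ||
--             'TO_NUMBER(sys_context(''BIND_CTX'',''PRODUCT_CODE''))';
--   END IF;
-- sys_context behaves like a bind: the SQL text is identical for every value.
```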

Tom Kyte
April 01, 2005 - 12:23 pm UTC

Really Good one !

A reader, April 26, 2005 - 11:33 am UTC


ORA-04031:unable to allocate 4096 bytes of shared memory"

aru, July 03, 2005 - 10:14 pm UTC

Hi Tom,
Is the "ORA-04031: unable to allocate 4096 bytes of shared memory" error ever related to shared_pool_reserved_size
as well?
Please help.
Regards,
ARU.


Tom Kyte
July 04, 2005 - 10:14 am UTC

if 4096 is smaller than your _shared_pool_reserved_min_alloc, then it would not be allocated from the reserved pool and could fail even if the reserved pool had free memory.

There is no "direct relation" - the 4031 comes about when there is insufficient memory in the pool(s) you are allowed to use (or, in 9iR2 and up, with multi-cpu machines and a large sga, the subpool).
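The reserved area can be checked directly (a sketch); failures recorded there often accompany ORA-4031 on larger allocations:

```sql
-- request_failures counts allocations that could not be satisfied even
-- after flushing; last_failure_size shows how big the failed request was.
select free_space, requests, request_misses,
       request_failures, last_failure_size
  from v$shared_pool_reserved;
```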

ORA-04031

Mahesh Kumar, August 04, 2005 - 3:52 am UTC

Hi Tom

I am getting this error

Errors in file /opt/emageon/oradb/online/dbtrace/bdump/darc_j001_9315.trc:
ORA-12012: error on auto execute of job 209
ORA-04031: unable to allocate 4096 bytes of shared memory ("shared pool","STANDARD","PL/SQL MPCODE","BAMIMA: Bam Buffer")
Wed Aug 3 12:34:43 2005

less /opt/emageon/oradb/online/dbtrace/bdump/darc_j001_9315.trc

Errors in ORA-12012: error on auto execute of job 209
ORA-04031: unable to allocate 4096 bytes of shared memory ("shared pool","STANDARD","PL/SQL MPCODE","BAMIMA: Bam Buffer")
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at "UV_DICOM.TRG_SERIES_BU", line 7
ORA-04088: error during execution of trigger 'UV_DICOM.TRG_SERIES_BU'
ORA-06512: at "UV_DICOM.DO_INSTANCE_COUNTS", line 14
ORA-06512: at line 1
*** SESSION ID:(155.4165) 2005-08-03 12:34:42.773
*** 2005-08-03 12:34:42.773



Database Version - 9.2.0.1.0

System Global Area

Total System Global Area 387518764 bytes
Fixed Size 450860 bytes
Variable Size 285212672 bytes
Database Buffers 100663296 bytes
Redo Buffers 1191936 bytes

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing string EXACT
shared_pool_reserved_size big integer 10905190
shared_pool_size big integer 218103808
shared_servers integer 0


This job runs successfully on another database with the same configuration. What should I do? Please show me the way.



Tom Kyte
August 04, 2005 - 9:26 am UTC

does your application (the applications running in this database) make use of bind variables?

Just because you don't run out of a shared resource (memory) on machine 1 doesn't mean you won't on machine 2.

That would be like saying "my file system never fills up on machine 1, so why does it fill up on machine 2"

Sorry for you Selly

Aru, August 16, 2005 - 6:30 pm UTC

Selly,
I have come into this thread a bit late, but I am lamenting the fact that halfway through the thread I have had to read really egotistic, inept, totally time-wasting stuff from you. But rest assured, you're not in a small minority in the development world with this kind of mentality. People like you don't want to study the database you're working on or even get to know the architecture concerned. You just want to go ahead and design, and expect people like TOM to play janitor later, cleaning up your act.

As for Tom, you don't have to tell us what he is like. I can confidently say there are hundreds of DBAs out there who have benefited greatly from his knowledge and are better DBAs today because of him.
Regards,
ARU.

error - 4031

Vivek, October 04, 2005 - 6:43 am UTC

Tom,

I got Oracle error 4031 on one of our servers running Oracle 9.2.0.5 on AIX. The error says ("shared pool","unknown object","sga heap(1,0)","session param values") and the solution seems to be to increase the shared pool, as Oracle is not able to acquire enough memory. Before increasing the memory, I want to know how I can verify that the problem really lies with the shared pool size. Also, what do the last three error strings suggest? Can any information be gathered from these strings to investigate? I have 16M of shared pool.

Thanks
Vivek

Tom Kyte
October 04, 2005 - 4:18 pm UTC

16m is sort of small - but it depends on the amount of distinct SQL and PLSQL you have to run.

do you bind?

shared pool flush not working

Roberto, October 06, 2005 - 12:23 pm UTC

Tom,
Thank you very much for being such a good resource for the Oracle community.
We had an ORA-04031 incident yesterday which ultimately resulted in a system reboot, but before getting to that point we tried flushing the shared pool, which did not work. Why would ALTER SYSTEM FLUSH SHARED_POOL not work? We kept getting the error after we did that, in fact so often that we opted for the reboot.

Thanks much.

Tom Kyte
October 06, 2005 - 12:35 pm UTC

it works, it just means "it did not find enough flushable stuff to fix the issue".


rebooting the system was sort of drastic, I would have started with a database restart if forced to.

Alter system flush shared_pool

Roberto, October 06, 2005 - 1:04 pm UTC

Tom,
The box was not rebooted, just the whole PeopleSoft infrastructure, including Oracle, of course. There was not much going on in the system when we did the flush, so I can't imagine why the flush did not work. An internet posting I found talked about having to flush three times in order for the flush command to work, which I found kind of weird. I also found the following posting on Metalink. Had you ever heard of such things?

----------------
From: Oracle, Helen Schoone 13-Jan-00 16:31
Subject: Re : SGA Fragmentation - daily restart of the instance recommended ?

Hi. I conferred with my colleagues on this. The note is valid, but like everything, there are exceptions. Experience at other sites (such as is given) suggests that flushing the shared pool may be an effective alternative if you can do it early (before the shared pool becomes too fragmented and begins to thrash). Early means when a select from V$SGASTAT still shows about 10% free memory left. Decreasing the size of the shared_pool can also help. Turning off features that introduce overhead such as session_cached_cursors usually helps if closed cursors are rarely going to be shared anyway. Packages should be pinned so that they are not flushed.

However, be aware that there are exceptions to this rule as well. Unfortunately, it is something that you will need to try and then monitor the effectiveness.

I hope this helps.

Regards,
Helen Schoone
Oracle Server EE Analyst


Tom Kyte
October 06, 2005 - 1:41 pm UTC

if you have a bad hard parse problem (lots of literal sql) "Decreasing the size of the shared_pool can also help." is valid, yes.

How the practice of dump memory to a disk works?

A reader, November 08, 2005 - 3:38 pm UTC

Hi Tom, suppose you are having strange memory problems and you suspect a bug, for example.
How does the practice of dumping memory to disk work (copying the contents of memory to a file in order to analyze it)?
I read on Metalink:
"If your kernel panics, most OS-es will (or can) try to dump the contents of memory to the first swap device -- this kernel dump *might* give you a fighting chance of diagnosing the problem"
I had read in the past that some DBAs found a bug by making a dump of memory and analyzing it.

I'm really lost on this topic. Could you please give a hint about how useful this is, and if possible how it can be done on Windows?

Do they use a utility? How can you analyze 2 gigabytes of memory and find that it was a bug? Using a view I could understand, but analyzing a file dumped from memory, I have no idea.

Thank you

Tom Kyte
November 08, 2005 - 10:34 pm UTC

it would not be useful for me or you.

It would be useful to a support analyst for the product in question. They would help you do it. That is, this sort of diagnostic stuff is useful for support organizations to look at, not so much us.

A reader, November 09, 2005 - 8:01 am UTC

Thanks Tom

RMAN, RAC and ORA-04031

Rob H, February 08, 2006 - 12:42 pm UTC

Tom, 

We have a database receiving consistent ORA-04031s (once a month).  Flushing the shared pool does not resolve the issue.  Only a restart will correct the problem.

ORA-04031: unable to allocate 840 bytes of shared memory ("shared pool","select ts#,file#,block#,nvl(...","sql area","frodef : qcpitnm")


It is a RAC database used as an RMAN catalog.  Both instances are 9.2.0.5.  Here are our param's:

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
hi_shared_memory_address             integer                           0
max_shared_servers                   integer                           20
shared_memory_address                integer                           0
shared_pool_reserved_size            big integer                       9227468
shared_pool_size                     big integer                       184549376
...
large_pool_size                      big integer                       67108864
...
cursor_sharing                       string                            EXACT
...
parallel_automatic_tuning            boolean                           FALSE

As you can see, we have a very large shared pool (I think).  We also have a large pool set up to help RMAN (sorry, new to RMAN).  Both instances of the RAC have the exact same configuration.  Also, as you can see, there appears to be a lot of room in the reserved area and no misses (yet; both instances have similar stats):

SQL> exec ismdba.print_table('select * from V$SHARED_POOL_RESERVED');
FREE_SPACE                    : 7321696
AVG_FREE_SIZE                 : 79583.6521739130434782608695652173913043
FREE_COUNT                    : 23
MAX_FREE_SIZE                 : 671600
USED_SPACE                    : 1409104
AVG_USED_SIZE                 : 15316.3478260869565217391304347826086957
USED_COUNT                    : 69
MAX_USED_SIZE                 : 232600
REQUESTS                      : 543904
REQUEST_MISSES                : 0
LAST_MISS_SIZE                : 0
MAX_MISS_SIZE                 : 0
REQUEST_FAILURES              : 0
LAST_FAILURE_SIZE             : 0
ABORTED_REQUEST_THRESHOLD     : 197568495524
ABORTED_REQUESTS              : 0
LAST_ABORTED_SIZE             : 0
-----------------

V$SHARED_POOL_ADVICE doesn't seem to indicate a requirement for a larger shared pool:

SHARED_POOL_SIZE_FOR_ESTIMATE ESTD_LC_SIZE ESTD_LC_MEMORY_OBJECTS ESTD_LC_TIME_SAVED_FACTOR ESTD_LC_MEMORY_OBJECT_HITS
----------------------------- ------------ ---------------------- ------------------------- --------------------------
                          112           95                  11393                     .9956                    5167412
                          144          126                  14649                     .9978                    5190384
                          176          157                  18274                         1                    5219988
                          208          188                  22447                    1.0018                    5242404
                          240          221                  26864                     1.005                    5272469
                          272          252                  31964                    1.0087                    5304787
                          304          283                  35438                    1.0111                    5321793
                          336          314                  38210                     1.012                    5329972
                          368          361                  43532                    1.0121                    5332386

Here is a copy of top of SGASTAT:

POOL                 NAME                                          BYTES
-------------------- ---------------------------------------- ----------  
shared pool          sql area                                   57787320
shared pool          library cache                              43353536
shared pool          miscellaneous                              23581224
shared pool          KGLS heap                                  18657280
shared pool          free memory                                16286552
              
You can see SQL is taking up the most space.   When I query v$sqlarea  I see tons of :

SQL_TEXT                                 SHARABLE_MEM PERSISTENT_MEM PARSE_CALLS EXECUTIONS
--------------------------------------- ------------- -------------- ----------- ----------
select * from "RMANUSERID1"."DF"                21275           2144           1          0
select * from "RMANUSERID1"."DB"                17331           1352           1          0
select * from "RMANUSERID1"."BS"                22363           2320           1          0
select * from "RMANUSERID1"."BP"                23171           2496           1          0
select * from "RMANUSERID1"."AL"                28163           2672           1          0

We have tons of single selects from different 2-char-named tables for all the RMAN catalogs we have (we create a different schema for each database we backup).  I cannot see a way to use bind variables to help (maybe you can).

Any help? 

Some additional info from trc file

Rob H, February 08, 2006 - 12:54 pm UTC

I looked into the trace file from the times of the incidents and found this. It appears the lib cache gets VERY large.
===========================
Current Instatiation Object
===========================
-------------------------------------
INSTANTIATION OBJECT: object=1037cd6d0
type="cursor"[2] lock=3a55b1070 handle=3a98bd208 body=0 level=0
flags=FST[60] executions=0
cursor name: select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#,o.flags from obj$ o where o.o
bj#=:1
child pin: 0, child lock: 3ab831870, parent lock: 3a55b1070
xscflg: 4008, parent handle: 3a98bd208, xscfl2: 0
----- Call Stack Trace -----
.....
----- End of Call Stack Trace -----
===============================
Memory Utilization of Subpool 1
===============================
Allocation Name Size
_________________________ __________
"free memory " 36072472
"miscellaneous " 26454800
"trigger source " 0
"ges resources " 6339848
"ges enqueues " 10384024
"KQR S SO " 5632
"type object de " 0
"PL/SQL DIANA " 0
"KSXR pending messages que" 853952
"PLS non-lib hp " 2088
"KGLS heap " 90008
"PL/SQL MPCODE " 0
"KQR M PO " 208080
"sim memory hea " 140688
"errors " 0
"gcs shadows " 1021840
"KQR L PO " 240648
"message pool freequeue " 771984
"parameters " 0
"KGK heap " 7000
"partitioning d " 0
"fixed allocation callback" 1768
"dictionary cache " 3229952
"joxs heap init " 4240
"Checkpoint queue " 6159360
"1M buffer " 2098176
"trigger inform " 0
"library cache " 120024320
"ksfv subheap " 9576
"sql area " 11376
"ges big msg buffers " 691688
"table definiti " 0
"gcs resources " 1538088
"PX subheap " 4680
"event statistics per sess" 675360
"trigger defini " 0
"KQR M SO " 28160
"KSXR receive buffers " 1034000

LIBRARY CACHE STATISTICS:
namespace gets hit ratio pins hit ratio reloads invalids
-------------- --------- --------- --------- --------- ---------- ----------
CRSR 8826620 0.977 44786933 0.987 140062 152
TABL/PRCD/TYPE 7755527 0.977 16998743 0.962 154570 0
BODY/TYBD 68976 0.915 70003 0.868 1591 0
TRGR 65 0.938 65 0.938 0 0
INDX 175 0.177 144 0.000 0 0
CLST 125881 0.993 120702 0.986 27 0
OBJE 0 0.000 0 0.000 0 0
PIPE 0 0.000 0 0.000 0 0
LOB 0 0.000 0 0.000 0 0



Tom Kyte
February 09, 2006 - 4:44 am UTC

I would always recommend you work through something like this with support. I simply cannot read in great detail all of these in the review section. (it prevents me totally from ever taking new questions)

My apologies

Rob H, February 09, 2006 - 10:59 am UTC

Sorry, I guess I've often found that during the process of creating or posting the question, the answer usually becomes apparent.

Plus, many times you point out the obvious solution very quickly (and I feel silly)

Good One !

Amol Tambolkar, March 01, 2006 - 3:45 am UTC

Good one! And it helped me crack my problem.

ORA-4031 while compiling a procedure

PCT, July 11, 2006 - 11:23 pm UTC

Tom,

I don't understand why just compiling a procedure would generate ORA-4031? I am not actually running it.

I am getting following:

SQL>@proc1.sql
CREATE OR REPLACE PROCEDURE PROC1
*
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of
TEST.MY_PROC
ORA-04031: Unable to allocate 8216 bytes of shared memory ("shared pool","unknown object","PL/SQL DIANA","PKHSHCR: hshtbl")

Does this have to do with not using bind variables? Our shared_pool_size is around 200MB. Maybe I don't know how "bind variables" concept is used while compiling procedures.

 

Tom Kyte
July 12, 2006 - 3:42 pm UTC

you ran out of memory to compile??? You might have a 200mb shared pool, but right at that point, there was insufficient free and freeable memory to complete your operation.

bind variables doesn't come into play with the session DOING the compile, but could definitely be something all of the other sessions are doing wrong.

How to generage 4031 error in testing database?

jianhui, July 13, 2006 - 7:26 pm UTC

Hi Tom,
Is there an easy way to generate a 4031 error in a testing database? I tried setting a tiny shared pool and running some dynamic SQL; I could see some flushing but was not able to generate enough fragmentation to get a 4031. Is there an easy way? The purpose is to see a 4031 event heap dump.

Regards,

Tom Kyte
July 13, 2006 - 8:34 pm UTC

likely have to hit it from many concurrent sessions - I've used sqlplus in the past to do that (or the job queues)

eg: create a procedure that generates lots of literal sql


declare
    l_sid number;
    l_sql varchar2(200);
    c     sys_refcursor;
begin
    select userenv('sessionid') into l_sid from dual;

    for i in 1 .. 10000
    loop
        -- every statement text is unique, so each open is a hard parse
        l_sql := 'select * from dual d' || l_sid || ' where ' || i || ' = ' || i;
        open c for l_sql;
        close c;
    end loop;
end;



then using sqlplus in the background - run hundreds of sqlplus sessions, or using the job queues (set job queue processes high), schedule hundreds of jobs and commit (then they start running)
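
The job-queue variant can be sketched like this, assuming the literal-SQL loop above has been wrapped in a procedure (here called burn_shared_pool, a made-up name):

```sql
DECLARE
    l_job BINARY_INTEGER;
BEGIN
    FOR i IN 1 .. 200
    LOOP
        -- each job runs the literal-SQL generator concurrently
        dbms_job.submit( l_job, 'burn_shared_pool;' );
    END LOOP;
    COMMIT;   -- the jobs only begin running after the commit, as noted above
END;
/
```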


Sean, October 10, 2006 - 10:25 pm UTC

We have had many 04031 errors lately. When the error occurred, I found the shared pool only 78% full (using v$sgastat). Why do we get error 04031 when the shared pool is not fully utilized?

We are on 9.2.0 with parallel_automatic_tuning set to false (not using the large pool).

Tom Kyte
October 11, 2006 - 8:04 am UTC

because you look at the shared pool after some processes got the 4031, they gave up and went away, freeing up memory.


You were not looking at the shared pool when the problem happened, you looked after the problem solved itself by having some people "leave"

oh: got binds?

Sean, October 11, 2006 - 10:06 am UTC

Tom,
Thanks for the reply. I wanted to find out if we were not using binds. I used your remove_constants function and could not find SQL that was not using binds.
Should I increase shared_pool_size further? It was increased from 400000000 to 700000000. We just upgraded the application yesterday.

Sean

Tom Kyte
October 11, 2006 - 3:57 pm UTC

there is no way anyone could answer this, none. think about it.

700,000,000 700mb is fairly "huge" already.

check out your soft parse %, use statspack and see what it is.
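
The soft parse % can also be read straight out of V$SYSSTAT without waiting for a Statspack snapshot (a sketch; values are cumulative since instance startup):

```sql
SELECT ROUND( 100 * (1 - hard.value / total.value), 2 ) AS soft_parse_pct
  FROM (SELECT value FROM v$sysstat WHERE name = 'parse count (hard)')  hard,
       (SELECT value FROM v$sysstat WHERE name = 'parse count (total)') total;
```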

flush shared pool to resolve fragmentation

A reader, October 13, 2006 - 10:33 pm UTC

Our db shared pool size is 400M. The lead DBA thinks it is fragmented, so he has set up a job to flush the shared pool every 2 hours.

Can flushing the shared pool resolve the fragmentation?

How can we know the pool is fragmented?

TIA

Tom Kyte
October 14, 2006 - 8:04 am UTC

why does he think it is "fragmented"? On what basis did they make this determination?

please don't ask me - because apparently your DBA has it figured. I'm willing to learn on this one, please have them tell us how they did this.

A reader, October 14, 2006 - 5:20 pm UTC

He thinks the ORA-4031 shared pool error is due to fragmentation in the pool, so he scheduled a job to flush the shared pool every 2 hours. But the ORA-4031 is still occurring....

Tom Kyte
October 14, 2006 - 7:38 pm UTC

so, would their approach then be considered "valid" :)

are you using BINDS????

Very useful

Nick, December 18, 2006 - 10:45 am UTC

Tom,
Thanks for a great article.
We have customers who get the ORA-04031 error also: ie

ORA-04031: unable to allocate 70648 bytes of shared memory ("shared pool","SXAPI_AMVI","PL/SQL MPCODE","BAMIMA: Bam Buffer") ORA-06508: PL/SQL: could not find program unit being called

And I initially assumed that the SQL in our application was a likely candidate. I have checked the v$sql view for the most executed statements, the ones that consume the most CPU / memory / parses etc., and all the statements I'm seeing look good. The SQL is all parameterised nicely. I have run through a fair bit of source code and seen nothing but nicely written SQL.

I am now at a bit of a loss, so I tried an ALTER SYSTEM FLUSH SHARED_POOL when the ORA-04031 occurred, assuming this would clear down the pool completely and allow memory in the pool to be allocated on request. I could then monitor more closely to see how quickly the error re-occurred and what was running at the time. This doesn't seem to happen, though, as the ORA-04031 still occurs immediately after the flush.

I currently have the following init values. Any suggestions would be greatly appreciated. At the moment I'm wondering if the pool has developed some kind of lock-out situation which is causing problems freeing memory:

LIVE SQL > select * from v$sga;

NAME VALUE
-------------------- ----------
Fixed Size 731976
Variable Size 117440512
Database Buffers 25165824
Redo Buffers 319488

4 rows selected.

LIVE SQL > select * from v$sgastat;

POOL NAME BYTES
----------- -------------------------- ----------
fixed_sga 731976
buffer_cache 25165824
log_buffer 295936
shared pool errors 28488
shared pool KGK heap 7000
shared pool KQR L PO 1667856
shared pool KQR M PO 983664
shared pool KQR S SO 13056
shared pool sessions 540800
shared pool sql area 15062064
shared pool 1M buffer 2098176

POOL NAME BYTES
----------- -------------------------- ----------
shared pool KGLS heap 4792456
shared pool parameters 7696
shared pool free memory 3266136
shared pool transaction 363616
shared pool PL/SQL DIANA 1313664
shared pool FileOpenBlock 1191104
shared pool PL/SQL MPCODE 486112
shared pool library cache 16081152
shared pool miscellaneous 17044544
shared pool MTTR advisory 11128
shared pool PLS non-lib hp 2872

POOL NAME BYTES
----------- -------------------------- ----------
shared pool joxs heap init 1192
shared pool sim memory hea 38352
shared pool table definiti 4216
shared pool temporary tabl 56416
shared pool trigger defini 18456
shared pool trigger inform 1256
shared pool Checkpoint queue 513280
shared pool dictionary cache 3229952
shared pool enqueue resources 1040816
shared pool ksm_file2sga region 370496
shared pool KSXR receive buffers 1034000

POOL NAME BYTES
----------- -------------------------- ----------
shared pool FileIdentificatonBlock 349824
shared pool message pool freequeue 771984
shared pool KSXR pending messages que 853952
shared pool event statistics per sess 2251200
shared pool fixed allocation callback 496
large pool free memory 8388608
java pool free memory 33554432

40 rows selected.

Tom Kyte
December 18, 2006 - 11:12 am UTC

are you sure you don't have a bind issue - you looked at "high load sql with lots of executions", look for things that ARE NOT executed more than once.
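
A rough way to find those execute-once statements is to group V$SQL on a text prefix; statements that differ only in literals collapse into one bucket (the prefix length of 60 is arbitrary):

```sql
SELECT SUBSTR(sql_text, 1, 60) AS stmt_prefix,
       COUNT(*)                AS copies
  FROM v$sql
 WHERE executions <= 1
 GROUP BY SUBSTR(sql_text, 1, 60)
HAVING COUNT(*) > 10
 ORDER BY copies DESC;
```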

to Nick from uk

SZB, December 18, 2006 - 12:08 pm UTC

What version of Oracle is it?
You have less than 100M of shared pool; it seems small.
The db buffer cache is also too small, though that has nothing to do with the shared pool.

Tom Kyte
December 18, 2006 - 2:54 pm UTC

100mb is pretty good for most applications - big even.

SZB, December 18, 2006 - 7:44 pm UTC

It depends on the Oracle version; for example, the 9.2.0.7 patchset readme asks for at least 150M.

Tom Kyte
December 18, 2006 - 8:47 pm UTC

for the upgrade, but that is not "every day".

If 150m was "normal".... they might not even think to note it...

9i using the subpool

SZB, December 19, 2006 - 8:53 pm UTC

Starting with 9iR2, Oracle uses subpools. The number of subpools depends on the server's CPUs. Say you assigned 150M for the shared pool; then you may only have 30M for each subpool (5 subpools) if _kghdsidx_count is not set to 1, and you will most likely get the error. You can force a single subpool with _kghdsidx_count=1 and shared_pool_size=150m; then that one subpool gets the whole 150M.

Tom Kyte
December 20, 2006 - 8:04 am UTC

it depends on the number of cpus AND the size of the shared pool

you need lots of cpus and a really big shared pool before that happens.

9i and subpools

Roderick, December 19, 2006 - 10:38 pm UTC

The number of subpools considered is something like min(trunc(cpu_count / 4), 7). Plus Oracle will make sure that each subpool is at least 128MB in size. So the shared pool size has to be at least 256MB on a machine with at least 8 CPUs before it is divided into 2 subpools.
v$latch_children might show you have significant activity across multiple shared pool child latches (one per subpool).
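
That child-latch activity can be eyeballed with a query like this (a sketch); busy gets spread across several children suggest multiple subpools in use:

```sql
SELECT child#, gets, misses
  FROM v$latch_children
 WHERE name = 'shared pool'
 ORDER BY child#;
```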

SZB, December 20, 2006 - 8:43 am UTC

If you do not want the feature, set _kghdsidx_count=1 to defeat it.

The subpools help with latch contention, but breaking up the shared pool makes fragmentation more likely.


Tom Kyte
December 20, 2006 - 8:48 am UTC

do not set _ parameters without a really good reason and the suggestion of support.


Kamran Siddique, January 04, 2007 - 7:13 pm UTC

Hi,

Thanks for this great discussion.

We are facing the same problem. We have multiple Oracle databases in QA, pre-production, and production environments. Over the last 2 months, 4 of our Oracle 10.2.0 databases in QA and pre-production have hit ORA-4031.

After many days of discussion with Oracle support, it is now recommended to set _kghdsidx_count to 1.

We are really reluctant to change this parameter value to 1, because then we would have to change our production environment the same way, as we keep all of our databases on the same configuration.

Would you please tell me whether there is any negative effect of setting this value to 1 on a 3 terabyte, highly OLTP, 24/7 database?

Thanks .


Tom Kyte
January 05, 2007 - 9:48 am UTC

it will lead to decreased concurrency during parsing - if you parse like mad, you might not be able to parse at the same rate.

Kamran Siddique, January 05, 2007 - 3:02 pm UTC

But during this error, everything becomes unavailable. Even new connections get refused by the database, and ultimately we have to restart it.

1- Do you think we need to trade off decreased concurrency against availability? (I say availability because when we have to restart the database, its availability is in question.)

2- We would like to configure our own "subpools" inside the shared pool instead of letting the Oracle RDBMS engine decide their number and size. I know there's an indirect way to set these subpools; according to support:

"I think what the note is saying is that you should do the calculation 'backwards'.
In other words, decide on the maximum size you can have your shared pool and then see how many subpools of 500M will fit into it."

Apart from this, is there any parameter available in Oracle for setting the number of subpools inside the shared pool? If not, then why not revert this feature back to the Oracle 8i behaviour, when there was only one pool inside the shared pool?

I would appreciate your comments on the above mentioned points.

Best Regards.


Kamran Siddique
Production DBA
AleaSoftwareInc.



Tom Kyte
January 06, 2007 - 8:46 am UTC

you asked me what the side effect of setting that to one is, I replied. They are just "facts"

you do not configure your own "sub pool", Oracle does that.


Good Article

Jon E, January 05, 2007 - 3:41 pm UTC

I'm facing developers who are opposed to using bind variables, and are insisting that I turn 'CURSOR_SHARING' to 'FORCE' as a 'solution' to the latching issues we've been having ...

I've been reading on metalink, and seeing that there are a few possible problems with this, but I was wondering what your view on this solution would be Tom.

Tom Kyte
January 06, 2007 - 8:55 am UTC

I have a presentation I give, it is my favorite one currently - it is two hours long.

It is entitled "All about Binds", actually, I call it:

It's All About Binds


In there, I have a section "should I use cursor_sharing = force or similar".

That text is in a normal, small font. The next slide is simply one word, using the largest font that permits that word to fit, that word is:

                              NO



cursor sharing = force/similar

- negatively affects well written applications
- introduces bind variable peeking issues that someone might well have been trying to avoid
- does not AVOID SQL INJECTION ISSUES - if your developers do not bind, they have sql injection issues. Not might have, rather - they HAVE sql injection (huge security problem) issues.


So, don't even try to convince them for performance, scalability and memory usage reasons - just go to their management and say "hey, your coders are putting us at great risk security wise by not writing proper code - you should look into fixing that before we get hacked"

cursor sharing CANNOT be used to solve a sql injection bug.
https://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html
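
The contrast is easiest to see side by side. A minimal sketch using the classic EMP table (table and variable names are illustrative only):

```sql
DECLARE
    p_name VARCHAR2(30) := 'KING';   -- imagine this arrives from user input
    l_sal  NUMBER;
BEGIN
    -- concatenated: a new cursor per value, and user input can rewrite the SQL
    EXECUTE IMMEDIATE
        'select sal from emp where ename = ''' || p_name || '''' INTO l_sal;

    -- bound: one shared cursor, and the input can never alter the statement text
    EXECUTE IMMEDIATE
        'select sal from emp where ename = :n' INTO l_sal USING p_name;
END;
/
```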

to Kamran Siddique

SZB, January 05, 2007 - 9:51 pm UTC


We had the same problem after upgrading to 9iR2.

After working with Oracle support, we understood the feature:

_kghdsidx_count is the parameter that sets the number of subpools.

Setting it to 1 falls back to the 8i behaviour of a single shared pool.

Setting it to 2 gives two subpools, and so on.

Depending on your server's memory and whether the Oracle software is 32- or 64-bit, and based on your current shared pool size setting, you could configure the multiple pools as follows: with 32-bit software the max SGA is 4G; say your current shared pool setting is 300M and you want each subpool to be 2*300M; then you could configure 4 subpools for the database.

HTH


TO SZB

Kamran Siddique, January 08, 2007 - 12:28 am UTC

Thanks for the details. We will definitely follow your approach on our QA and pre-prod DBs.

However, we would not be able to apply this solution on the production database, as it requires an outage.



Thanks and Regards.

Kamran Siddique

ORA-01000: maximum open cursors exceeded

Christopher, March 21, 2007 - 3:35 am UTC

Tom,
We're getting the following error in our application:

java.sql.SQLException: ORA-01000: maximum open cursors exceeded

Please help me to resolve this.
Tom Kyte
March 21, 2007 - 6:54 am UTC

either

a) find the cursor leak you have (query v$open_cursor)
b) increase your maximum open cursors if it is too low for what you do.
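
For (a), a sketch of the v$open_cursor check; a session whose count keeps climbing toward the OPEN_CURSORS limit is the likely leaker:

```sql
SELECT sid, COUNT(*) AS cursors_open
  FROM v$open_cursor
 GROUP BY sid
 ORDER BY cursors_open DESC;
```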

got error from Production server in Oracle 10g R1

K P Ratnaker, March 30, 2007 - 7:53 am UTC

Hi tom,

I got an error from my Production server. The error is:

ORA-07445: exception encountered: core dump [string] [string] [string] [string] [string] [string]
Cause: An OS exception occurred which should result in the creation of a core file. This is an internal error.
Action: Contact your customer support representative.

But I didn't find any solution here. Can you give me a solution?

Ratnaker
Tom Kyte
March 30, 2007 - 4:14 pm UTC

see the action above...

instance is crashed

K P Ratnaker, April 03, 2007 - 8:21 am UTC

Errors in file d:\nervecen\trace\bdump\nervecentrex_m000_3836.trc:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x345D704] [] [] [] []

Tue Apr 03 17:28:19 2007
Errors in file d:\nervecen\trace\udump\nervecentrex_ora_2640.trc:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x3470353] [] [] [] []

Tue Apr 03 17:28:20 2007
Errors in file d:\nervecen\trace\udump\nervecentrex_ora_2640.trc:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x3470353] [] [] [] []
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x3470353] [] [] [] []

Tue Apr 03 17:28:22 2007
Errors in file d:\nervecen\trace\udump\nervecentrex_ora_2640.trc:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x3470353] [] [] [] []
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x3470353] [] [] [] []
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x3470353] [] [] [] []

Tue Apr 03 17:28:26 2007
Errors in file d:\nervecen\trace\bdump\nervecentrex_pmon_1072.trc:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [0x3470353] [] [] [] []

Tue Apr 03 17:28:44 2007
ARC1: terminating instance due to error 472
Tue Apr 03 17:28:45 2007
Errors in file d:\nervecen\trace\bdump\nervecentrex_ckpt_3072.trc:
ORA-00472: PMON process terminated with error

Tue Apr 03 17:28:45 2007
Errors in file d:\nervecen\trace\bdump\nervecentrex_rvwr_3624.trc:
ORA-00472: PMON process terminated with error

Tue Apr 03 17:28:46 2007
Errors in file d:\nervecen\trace\bdump\nervecentrex_lgwr_2240.trc:
ORA-00472: PMON process terminated with error

Tue Apr 03 17:28:47 2007
Errors in file d:\nervecen\trace\bdump\nervecentrex_mman_744.trc:
ORA-00472: PMON process terminated with error

Tue Apr 03 17:28:47 2007
Errors in file d:\nervecen\trace\bdump\nervecentrex_dbw0_308.trc:
ORA-00472: PMON process terminated with error

Tue Apr 03 17:28:47 2007
Errors in file d:\nervecen\trace\bdump\nervecentrex_reco_2324.trc:
ORA-00472: PMON process terminated with error

Instance terminated by ARC1, pid = 3648
Dump file d:\nervecen\trace\bdump\alert_nervecentrex.log

My instance crashed within 24 hours on the production server. Please give me a solution. How can I rectify this?

Ratnaker
Tom Kyte
April 04, 2007 - 9:44 am UTC

http://metalink.oracle.com/

please utilize support for something like this.

Alexander, May 03, 2007 - 2:24 pm UTC

Hi, a thought just occurred to me. If shared sql is aged out of the library cache based on LRU, how is it possible for ORA-4031's to happen because of lack of bind variables? Is bad sql aged out differently?

Tom Kyte
May 04, 2007 - 12:40 pm UTC

fragmentation, and the fact that the larger a shared pool is, the longer and longer it takes to flush out. You chew it up with thousands of small literal sql statements and then try to barf it all out - and after that binge and purge, there isn't sufficient contiguous memory. But you waited a while to find that out.
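One way to see whether literal SQL is chewing up the pool the way Tom describes is to group statements that are identical up to a prefix. This is a diagnostic sketch, not part of the original answer; the 60-character prefix length and the count threshold are arbitrary choices.

```sql
-- Sketch: statements that differ only in their literals will share a prefix.
-- Many copies of the same prefix = a candidate for bind variables.
select substr(sql_text, 1, 60) sql_prefix,
       count(*)                copies,
       sum(sharable_mem)       total_bytes
  from v$sql
 group by substr(sql_text, 1, 60)
having count(*) > 10
 order by copies desc;
```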

Large Size Package

Kishore, May 22, 2007 - 5:53 am UTC

Hello Sir,

I have a package which is huge in size (50,000 lines). If I copy the text of the package to notepad, the size comes to 1.19 MB. It is huge as it is a migrated one. Recently, because of a new requirement, we had to add some code to this package (approx 10 lines, a call to a new package).
Now when we run the program it fails with "end of file on communication channel". I have the following basic queries.

1. Is there a way we can avoid such a situation? This huge package is being rewritten; however, is there a way we could run it temporarily?
2. When we call a procedure of package B in package A, does the code come in INLINE or does it remain separate with its own address space, etc.?

Thanks & Regards
Kishore
Tom Kyte
May 22, 2007 - 9:33 am UTC

"as it is a migrated one"???? no idea what that means or why that is an excuse for unwieldy code.

please utilize support for ora 600, 7445, 3113.

the code does not run in its own address space - but it isn't inlined either (the two options you present are not the only two options for executing code)

Kishore, May 22, 2007 - 10:59 am UTC

Thanks a lot for your response, however I couldn't get the line

"the code does not run in its own address space - but it isn't inlined either (the two options you present are not the only two options for executing code)"

As per my understanding, if you have packages as shown below:

PACKAGE A
{
procedure a
{
B.b();
}
}

PACKAGE B
{
procedure b
{
C.c();
}
}

PACKAGE C
{
procedure c
{
does something
}
}


When I call A.a(), package A gets loaded in memory and occupies addresses XXXXXX to XXXXHH. When I make a call to B.b(), it (package B) gets loaded into memory at XXXXHY to XXXXIV. All the variables defined within B.b() go on the stack, whereas any global variables defined in package B go on the heap. Along with these variables, the return address of the next line of execution in A.a() is also stored.

Now B.b() calls C.c() (loaded at XXXXJK to XXXXKL), and all the local variables along with the return address of B.b() go on the stack and all global variables on the heap. When I come out of C.c(), all the local variables of C.c() are gone. I come back to B.b(), and from there when I come back to A.a(), all the local variables of B.b() are gone. The global variables are still there on the heap.

Thus I meant that each program has its own address space in memory. I am assuming that INLINE code would share the same address space as the calling code. If the code were inline here, all the code of B.b() and C.c() would be within A.a(), available in memory.

Could you please let me know if my understanding is wrong.

Also could you please let me know the other ways of getting the code executed?

Your response is much appreciated!!

Regards
Kishore
Tom Kyte
May 22, 2007 - 7:00 pm UTC

they are in the same flat address space, they are not in different address spaces

if they were in different address spaces, both could be in XXXXXX to XXXXHH, in their "virtual" address space.

global variables and local variables are a matter of program scoping, they have nothing really to do with address spaces (other than a true separate address space would complicate global variables - but that is a problem for the language to deal with)


the problem here is in how you are uniquely defining address space (redefining it really).

vinod, June 20, 2007 - 2:05 am UTC

Hi Tom,

In 9i V$SQLAREA, SQL_TEXT with the same bind variables is appearing more than once (CURSOR_SHARING is SIMILAR). Why is this happening? The issue is that these queries are consuming approx 250M of SHARABLE_MEM for every execution.

Thanks & Regards,
Vinod SR (Oracle DBA )


Tom Kyte
June 20, 2007 - 10:07 am UTC

by design - cursor_sharing=similar is DESIGNED to allow for many child cursors (force = one or very few child cursors; similar might not reduce ANY of the child cursors)

similar's design goal is to do this!

similar only truly binds that which is "bind safe". if the optimizer says "you know, different bind values would or could lead to different plans" - then similar will not actually bind it.

force on the other hand just says "bind em all, everything, without regard to anything"

consider this example where I first have insufficient statistics for the optimizer to think "plans could or would change" versus later I give it enough statistics to think that:

ops$tkyte%ORA10GR2> create table t
  2  as
  3  select 99 id1, a.*
  4    from all_objects a
  5  /

Table created.

ops$tkyte%ORA10GR2> update t set id1=rownum where rownum=1;

1 row updated.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index t_idx1 on t(id1);

Index created.

ops$tkyte%ORA10GR2> begin
  2     dbms_stats.gather_table_stats
  3     ( user, 'T', method_opt=>'for all indexed columns size REPEAT' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> alter system flush shared_pool;

System altered.

ops$tkyte%ORA10GR2> select sql_text, count(*)
  2    from v$sql
  3   where sql_text like 'select /* look for me */ * from t where id1 = %'
  4   group by sql_text
  5  /

no rows selected

ops$tkyte%ORA10GR2> alter session set cursor_sharing=similar;

Session altered.

ops$tkyte%ORA10GR2> declare
  2      c sys_refcursor;
  3  begin
  4      for i in 1 .. 100
  5      loop
  6          open c for 'select /* look for me */ * from t where id1 = '||i;
  7          close c;
  8      end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select sql_text, count(*)
  2    from v$sql
  3   where sql_text like 'select /* look for me */ * from t where id1 = %'
  4   group by sql_text
  5  /

SQL_TEXT                                             COUNT(*)
-------------------------------------------------- ----------
select /* look for me */ * from t where id1 = :"SY          1
S_B_0"


ops$tkyte%ORA10GR2> alter system flush shared_pool;

System altered.

ops$tkyte%ORA10GR2> begin
  2     dbms_stats.gather_table_stats
  3     ( user, 'T', method_opt=>'for all indexed columns size 254' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> declare
  2      c sys_refcursor;
  3  begin
  4      for i in 1 .. 100
  5      loop
  6          open c for 'select /* look for me */ * from t where id1 = '||i;
  7          close c;
  8      end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select sql_text, count(*)
  2    from v$sql
  3   where sql_text like 'select /* look for me */ * from t where id1 = %'
  4   group by sql_text
  5  /

SQL_TEXT                                             COUNT(*)
-------------------------------------------------- ----------
select /* look for me */ * from t where id1 = :"SY        100
S_B_0"



vinod, June 20, 2007 - 10:48 pm UTC

Hi Tom,

Many thanks for the reply; you have explained this in a very simple way. You are really great.

Thanks & Regards,
Vinod SR (Oracle DBA)

I want to ask a query about oracle 9i architecture

naveed sulaiman, June 25, 2007 - 3:30 pm UTC

hello,
I am a student of the Oracle 9i master programme at an Oracle University partner in Pakistan. I have a problem understanding the Oracle architecture. Please tell me briefly about the Oracle 9i architecture and also which file performs what function.

Tom Kyte
July 02, 2007 - 9:04 am UTC

Attn naveed

A reader, July 02, 2007 - 4:46 am UTC

Attn Naveed,

For architecture and basic queries you may need to look for some students' section. This is a place where HIGH PROFILE people discuss things.

Thanks For Your Understanding.


Tom Kyte
July 03, 2007 - 9:41 am UTC

interesting, I was not aware of that "rule"

in fact, I'm not in favor of that "rule", so consider it stricken from the bylaws.


Helping Developers

Ron, July 19, 2007 - 3:55 pm UTC

Tom,

I've got a developer who wants to rewrite some of his code to make better use of the shared pool. We've been getting some 4031's in the past, so we (DBAs) want to do everything that we can to help him. In the absence of 4031 errors, how can we help measure his *before* and *after* code to see if he's going in the right direction? It's Oracle 9 on Solaris 10.

Will your Test Harness be a good help? If so, what specifically should we look for in the Harness output?

Thanks.
Tom Kyte
July 19, 2007 - 7:21 pm UTC

have them write everything using static sql in plsql - it is IMPOSSIBLE to not be shared pool friendly that way!!
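A minimal sketch of what "static sql in plsql" means here. The table COACHES and the procedure name are made up for illustration; the point is that PL/SQL turns every PL/SQL variable referenced in static SQL into a bind variable automatically.

```sql
create or replace procedure get_coach( p_id in number, p_name out varchar2 )
as
begin
   -- static SQL: p_id becomes a bind variable without the programmer
   -- doing anything, so every execution shares ONE cursor in the library cache
   select ename
     into p_name
     from coaches
    where id = p_id;
end;
/
-- contrast with the shared-pool-hostile dynamic equivalent:
--   execute immediate 'select ename from coaches where id = ' || p_id ...
-- which generates a brand new statement for every distinct p_id value.
```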

Whatever

Ron, July 19, 2007 - 9:37 pm UTC

Sorry to bother you.
Tom Kyte
July 20, 2007 - 8:05 am UTC

you didn't, you just seemed to pass up the best advice as if it were worthless.

so, whatever.

SGA fragmentation

AZURA, September 11, 2007 - 6:04 pm UTC

We are investigating a (9i) ORA-604 / ORA-4031 (4K, unknown object, heap 2.0); we are seeing the perm portion of the SGA increasing over time.
1. One DBA suspects it is because we shut down the app server and restart it daily (350 connection-pooled sessions re-establish every day), and he thinks the perm portion of the SGA for session parameter values does not clean up properly. He concludes this from Metalink article Note:396940.1.

2. The Statspack report just before the error shows a huge version count on some SQL statements. One SQL occupied more than 50MB of shared memory.

I am just trying to understand what information resides in the perm portion of the SGA, any relationship with sessions, and how the perm portion leads to memory fragmentation and ends up with the ORA-4031 error. Also, Metalink article Note:146599.1 explains with the following SQL: "if perm is increasing, then it is a system memory leak". Is it an OS memory leak?

SQL> SELECT KSMCHCLS CLASS, COUNT(KSMCHCLS) NUM, SUM(KSMCHSIZ) SIZ,
     To_char(((SUM(KSMCHSIZ)/COUNT(KSMCHCLS)/1024)),'999,999.00')||'k' "AVG SIZE"
     FROM X$KSMSP GROUP BY KSMCHCLS;

CLASS       NUM        SIZ AVG SIZE
-------- ------ ---------- ------------
R-free       12    8059200 655.86k     <= Reserved List
R-freea      24        960 .04k        <= Reserved List
free        331  151736448 447.67k     <= Free Memory
freeabl    4768    7514504 1.54k       <= Memory for user / system processing
perm          2   30765848 15,022.39k  <= Memory allocated to the system
recr       3577    3248864 .89k        <= Memory for user / system processing

a) if free memory (SIZ) is low (less than 5MB or so), you may need to increase shared_pool_size and shared_pool_reserved_size.
b) if perm continually grows, it is possible you are seeing a system memory leak.
c) if freeabl and recr are always huge, this indicates you have lots of cursor information stored that is not releasing.
d) if free is huge but you are still getting 4031 errors, you can correlate that with the reloads and invalidations causing fragmentation.

 

Tom Kyte
September 15, 2007 - 4:09 pm UTC

rather than think that, have you confirmed it? You can see if perm memory is growing.


I suggest you utilize support to diagnose this.

Oracle not binding or nasty 10g "bug" ?

Anthony Molinaro, October 13, 2007 - 8:28 am UTC

Tom,

While poking around the oracle site, I noticed an ORA-4031 error on this page:

https://irecruitment.oracle.com/OA_HTML/IrcVisitor.jsp

the error shown:

<PRE>Unable to create anonymous session. 
Your session is no longer valid. Oracle error 4031: java.sql.SQLException:
 ORA-04031: unable to allocate 100 bytes of shared memory ("shared pool","
select s.inst_id,s.addr,s.in...","sql area","qerfxs :
 kqfalo") ORA-06512: at "APPS.FND_TRACE", 
line 421 ORA-06512: at "APPS.FND_AOLJ_UTIL", 
line 395 ORA-06512: at "APPS.FND_AOLJ_UTIL", line 236 ORA-06512: 
at line 1 has been detected in createSession(int, String)(
userid=6','04685C8A62C3A9BBE040018AE6A1484115371239033583190488160275095128'). 
</PRE>

Just curious, is the site run on 10g? I ask because after 5 years of not seeing a 4031 error on any of our systems (dev, prod, or reporting) we recently hit it.
After opening a tar, the analyst reported that in 10g (10.2.*, to be specific) you must have a minimum of a 600m shared pool (regardless of how many users you have) otherwise you can hit any of these bugs:

Bug.5852503 (32) MMON SPINNING AND CAUSING ORA-4031 ERRORS
Bug.5311311 (33) ORA-04031 ERROR AND PRODUCTION DB CRASHED ORA-4031 
Bug.6368115 (10) 2 NODES RAC HANG BECAUSE OF CI ENQ HANG6476556 
Bug.6047393 (33) INSTANCE HANGS DUE TO LATCH WAITS


This requirement regarding the shared pool size isn't documented anywhere. 

Think you can get this in the doc somewhere? I made the same request with the sev 1 support analyst but didn't get a response.

thanks,
  Anthony

Tom Kyte
October 14, 2007 - 2:29 pm UTC

I have a much smaller shared pool on 10g than 600mb - way way smaller. That is not true.

minimum recommended value

Anthony, October 14, 2007 - 10:16 pm UTC

Tom,

I thought that sounded high too, particularly since our shared pool was already 400m.
According to the analyst, 500m is what has been determined the "minimum recommended value". The analyst suspects we hit:

Bug 5852503 - MMON SPINNING AND CAUSING ORA-4031 ERRORS

Some snippets from the tar (The "fix", according to Ken (the analyst)):


"Resolved by increasing shared_pool_size to minimum recommended values. I think now that it's set to 500M, it meets the minimum recommended setting. We usually recommend at LEAST 500M per subpool and there is only one subpool in use"



I asked the analyst exactly where in the doc it says that the shared pool must be 500m or more on 10.2. I also asked for some more info on this bug regarding mmon and the shared pool size and his response was:



"The bug that I mentioned earlier was closed as "not a bug" since increasing the shared pool size to the minimum recommended level resolved their issue. It used to be 128M per subpool in earlier versions, but as time went on and new features were implemented, we saw that it just wasn't enough. I don't believe it's specifically documented anywhere that 500M is the minimum,
it's just information gathered over time based on behavior observed as new versions are released."


I thought that sounded a bit "off" so followed up with him about it, and this "minimum recommended value" came from the fix for some other customer who ran into Bug.5852503:



"Within this bug they recommended setting the shared
pool size to 600MB and made a comment at the end stating, "issue is resolved by setting minimum values for the shared""


so, on one hand it looks like perhaps he meant 600mb shared pool is the minimum if you're on 10.2 and you hit this mmon bug? On the other hand, he admits that he wasn't sure we even hit that bug:


"So the real issue here is that the shared pool was inadaquate and needed to be increased. I never made a definite match to the issue mentioned in that bug or to MMON specifically. The reality is, it could also be related to one of the other bugs or may not
be related to any bug. we just didn't have enough information to determine that. If the issue reproduces after increasing the shared pool, then we would need to collect additional diagnostics to determine if there really is an issue"


From that it seems like he's saying, "hey, I'm not 100% what the problem is, but at the very least set your shared pool to the min recommended value and we'll go from there".

The tar is closed, but I'm going to reopen it. I'd really like to have the analyst clarify whether 500m is the minimum in 10.2 (period) or only for those on 10.2 that hit 5852503.

If you have additional info on this, I'd love to hear it, otherwise I will update after I've heard back from the analyst.
Tom Kyte
October 15, 2007 - 11:32 am UTC

well, please don't tell my database (asktom) that it is way below the minimum by about 50%.

The bug pointed to was about automatic memory management and a problem that was occurring when a resize was happening. The workaround suggested prevented resizing from having to happen. It was not a recommendation for a minimum shared pool size.

A reader, October 15, 2007 - 1:56 pm UTC

Tom,

good to know.

we don't use automatic memory management (except for pga), nor do we ever do online resizes (the analyst knew this).

I'll include your info to him as well. Will report back asap.

thanks.

unique statement

David, October 23, 2007 - 12:21 pm UTC

I understand that reusing PreparedStatements is good. However, is it still worth using a PreparedStatement if executed once ? I'm reluctant to change my program to use PreparedStatements everywhere. Is there any advantage of using bind variables on my queries that are not in loops?

I've run the following tests in JBDC :
for (int i = 0; i < 1000; i++) {
    PreparedStatement stm = conn.prepareStatement("SELECT * FROM t WHERE id=?");
    stm.setLong(1, i);
    ResultSet rs = stm.executeQuery();
    rs.next();
    stm.close();
}

for (int i = 0; i < 1000; i++) {
    Statement stm = conn.createStatement();
    ResultSet rs = stm.executeQuery("SELECT * FROM t WHERE id=" + i);
    rs.next();
    stm.close();
}

And both loops take the exact same time to execute.
Obviously, the database doesn't recognize (soft parse?) the prepared statement I execute.


Tom Kyte
October 23, 2007 - 1:26 pm UTC

YES THERE IS
JUST BIND
YES THERE IS
JUST BIND
YES THERE IS
JUST BIND
YES THERE IS
JUST BIND
YES THERE IS
JUST BIND
YES THERE IS
JUST BIND


I cannot stress this ENOUGH

YES THERE IS
JUST BIND
YES THERE IS
JUST BIND

please read this thread again and again.

and again

and again.


YES THERE IS
JUST BIND
YES THERE IS
JUST BIND

David, October 24, 2007 - 6:44 am UTC

Ok, there is. no kidding. But which ? Maybe I need one more 'YES THERE IS' in capitals to understand...
Maybe I've not explained myself right.
Using bind variables for every single statement makes my application slower and more difficult to write/read, so I can't see why THERE IS.
Sorry

this :
Statement stm = conn.createStatement();
ResultSet rs = stm.executeQuery("select * from t where id="+id);
rs.next();
rs = stm.executeQuery("select * from u where id="+id);
rs.next();
rs = stm.executeQuery("select * from v where id="+id);
rs.next();
stm.close();

performs faster, is simpler, and uses fewer cursors than this:
PreparedStatement stm = conn.prepareStatement("select * from t where id=?");
stm.setLong(1, id);
ResultSet rs = stm.executeQuery();
rs.next();
stm.close();
stm = conn.prepareStatement("select * from u where id=?");
stm.setLong(1, id);
rs = stm.executeQuery();
rs.next();
stm.close();
stm = conn.prepareStatement("select * from v where id=?");
stm.setLong(1, id);
rs = stm.executeQuery();
rs.next();
stm.close();

If the problem is that it uses more memory, then I'd rather allow more, and keep using non-bind variables (?)

Thanks !
Tom Kyte
October 24, 2007 - 9:18 am UTC

Using bind variables for every single statement makes my application slower and more difficult to write/read,

HUH???? HOW SO?

how does it make it slower?
how does it make it harder to read.

you have the simplest example in the world - please google sql injection and after you add all of the code to inspect the inputs to ensure they are safe to bind - come on back and tell us your code is easier to write.


ugh, i just give up, some days - it is like banging a head on the wall. same message over and over and over and over and over..


frustrating.

Look - hard parsing = massive latching. I've demonstrated that here on this site, on this page until I'm blue in the proverbial face.


Your statement based code in your simple single user test does not perform 'better', it is slower and uses more database resources (go ahead, flush your shared pool - run your tests, tell us what you see then, please - just do that)

You'll find you spend more time PARSING SQL THAN EXECUTING IT.

You should parse your prepared statement ONCE, then you just bind/execute.

Here, this is from Effective Oracle by Design - it is long, but it is somewhat comprehensive.

http://asktom.oracle.com/pls/asktom/z?p_url=ASKTOM%2Edownload_file%3Fp_file%3D835681700490262461&p_cat=binding.pdf&p_company=822925097021874
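One way to quantify the "more time parsing than executing" claim for your own session is to snapshot the parse statistics before and after each test loop and diff them. This is a sketch, not from the book excerpt; V$MYSTAT requires SELECT privilege on the V$ views.

```sql
-- Run this before and after each of the two JDBC loops; the literal-SQL
-- loop will show ~1000 hard parses, the bound loop almost none.
select n.name, s.value
  from v$mystat   s,
       v$statname n
 where s.statistic# = n.statistic#
   and n.name in ( 'parse count (total)',
                   'parse count (hard)',
                   'parse time cpu' );
```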


What if ??

karthick, October 24, 2007 - 10:06 am UTC

David, what if someone names their son [Robert'; DROP TABLE STUDENT;]?

http://xkcd.com/327/

check it out :-)
Tom Kyte
October 24, 2007 - 3:25 pm UTC

discussed in the excerpt from Effective Oracle by Design I posted as well...

What if?? with JDBC

Stew Ashton, October 24, 2007 - 4:42 pm UTC


JDBC will reject statements with ";" in them, so the ";drop table" example will not work (at least with Oracle drivers...)

However, the most basic SQL injection attacks will work with JDBC; I found a huge security hole just two weeks ago, this problem is for real.

Tom mentioned "all of the code to inspect the inputs to ensure they are safe to bind". This is the code that removes single quotes and such from the input strings. Most web applications do this in the browser using Javascript, and anyone with a Firefox browser can bypass the Javascript! Just use Tools->Dom inspector and add a submit button that bypasses all that code !

Use bind variables and say farewell to such problems. In Java, connection pooling and prepared statement caching are there for the asking (provided by JDBC or the J2EE server) so there is no downside, no possible excuse.
Tom Kyte
October 25, 2007 - 5:42 pm UTC

jdbc will not - people do things like "dynamically constructed plsql blocks"

and in sqlserver - ';' is part of their language.

any SQL injection attack that works "on oracle" works in jdbc, jdbc is just a little API.

javascript is not really code, CLIENT code - real code - does this.

unique statement

David, October 25, 2007 - 5:09 am UTC

If you don't like to answer my stupid questions, I'm sorry for your head.

"You should parse your prepared statement ONCE, then you just bind/execute."
Stop banging your head on the wall and begin reading my first sentence. (I'm frustrated too)
I'm not talking about repetitive statements. I'm not in a loop.
I'm talking about a unique statement.

"run your tests, tell us what you see then, please"
Ok. I've done it again. I see the PreparedStatement is slower. :'(
I must be missing something. But I'm not able to see what, even with your answers.
Maybe it's just the memory management on the client side (more java objects).

About the simplicity, karthick, it was funny. Maybe you're right. I'm used to escaping literals though. So I guess it's a question of taste. I don't like to count my '?'.

English is not my native language, but I don't despair of us understanding each other. Sorry for being so stupid.

Have a good day.
Tom Kyte
October 25, 2007 - 6:25 pm UTC

did you read the paper i posted? It is chock full of java code - I specifically picked JAVA code

and you need to get out of single user mode

and you need to flush your shared pool (if you run your test a second time, all of the statements are cached in the shared pool - which never happens in real life since everyone does UNIQUE sql)....

but - really - did you read the paper I posted? Did you take a peek at that.


not always. great !

David, October 25, 2007 - 5:56 am UTC

Ok. I've found all my answers in the Chapter "There Are Exceptions to Every Rule".
That's exactly what I wanted to confirm.
So I'm relieved : I don't have to bind where I haven't been doing so.
Thanks heaps !!
Tom Kyte
October 25, 2007 - 6:26 pm UTC

ok

now you scare me

because you are obviously not building a data warehouse application.

You are doing transactional things and that - that is where you HAVE to use binds.

If you read that - and you got the idea you don't need binds - we are having an utter and complete failure to communicate on the most basic levels.

soft parse in different oracle drivers

Amir Riaz, October 25, 2007 - 6:41 am UTC

Hi Tom

the above discussion about bind variables is really nice. I remember in one of your reviews on this site, while elaborating the difference between a soft parse and a softer parse, you said that in java we have to code like this

if(stmt == null)
{
//prepareStatement
}

in plsql and pro*c we don't have to do this because, after closing the cursor, plsql retains the cursor and the next open-cursor statement simply gets the cursor from the PGA without going to the library cache.

my question is: if I change (in java) the oracle driver from thin to oci, will it behave the same way, provided I am changing this driver on the middle tier, i.e. at the application server level?

I think with the oci driver I don't have to use the if condition, and my prepareStatement call will get the cursor I opened previously and then closed, as in plsql.

regards
Amir Riaz
Tom Kyte
October 25, 2007 - 6:27 pm UTC

there is prepared statement caching in jdbc, yes.

http://www.oracle.com/technology/products/oracle9i/daily/jun24.html

To Amir re: different oracle drivers

A reader, October 25, 2007 - 7:58 am UTC


Hello Amir,

You are referring to what is called "statement caching", which is available in both thin and OCI drivers.

http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/overvw.htm#BABJFBIF

Please note that if the java code is running inside an application server, the programmer may not have direct access to the JDBC driver. Instead, connection pooling and statement caching must be configured in the administration console of the application server.

Example of statement caching with JDBC thin driver:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:249532300346363309#256587200346758405

David, October 26, 2007 - 5:40 am UTC

I did read the paper. And I find it greatly useful.

"If you read that - and you got the idea you don't need binds - we are having an utter and complete failure to communicate on the most basic levels."

Do we ?
But when you write :
"In fact, only '%A%' needs to be bound. No matter how many times that query is executed
by that application or how many times you run that application, the predicate where job
= 'CLERK' is constant. Only the predicate against ENAME is variable, so only it needs
to be bound.
"
I think I understand that, though. I don't think I don't need to bind variables. I just misunderstood when you wrote things like "It is not something to think about, it is something you MUST do" or "JUST BIND", because I thought it meant I had to bind on all the constant predicates I have. Then I read "But I have seen binding taken to an extreme, when developers seek and destroy all occurrences of literals in queries wherever they exist". And so, I hear you : I will not.

The paper is much complete and covers all my questions about bind variables.

Btw, I've rerun my testcase many times more accurately, flushing the shared pool each time. I got the same execution times. Sometimes binding goes faster, sometimes slower. But the differences are always very tiny. I'm probably not doing it correctly.
Tom Kyte
October 29, 2007 - 11:02 am UTC

java is the culprit, you are timing java as much as anything else. and java is very unpredictable that way.

use tkprof, time the time you spend IN THE DATABASE

Also, make sure you understand what I mean when I talk about latching - how that inhibits scalability and check out the very important multi-user test.

issues of scale do not make themselves known in single user mode.

Just turning off bold (if I can)

Stew Ashton, October 27, 2007 - 4:35 am UTC


I hate the way the pound sign followed by B is still interpreted as bold face, especially since Oracle documentation sometimes puts that into its URLs :(

The APEX folks said they would at least turn off formatting at the end of each review...

Is share pool size indepnedent of database size?

A reader, December 09, 2007 - 6:20 am UTC

Hi,
Recently someone told me that the shared pool size is independent of the database size. I think that might be technically true, but is it advisable to give a 150 MB shared pool to each of the databases, having sizes 700 GB and 90 GB? Will it not get wasted in the smaller database?

Please clarify.


Tom Kyte
December 10, 2007 - 10:58 am UTC

the shared pool is not dependent on the size of the database.

It is dependent on the size of your plsql code, the amount of sql you execute, the number of schema objects that must be cached in order to parse efficiently.


If your 90gb database executes TONS of plsql and thousands of SQL statements (different sql statements) and your 700gb database just does some ad-hoc SQL in a warehouse environment then your 90gb database needs a "big" shared pool and your 700gb database needs a "small" one.



Here, you would be "wasting" space in the large database if you sized them the same.

Think about what is in the shared pool - it has nothing to do with the amount of data you have - everything to do with how you use that instance, how many objects you have and how big they are.
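To see what a given instance's shared pool is actually holding (code, SQL, dictionary cache and so on, regardless of data volume), you can look at its component breakdown. A sketch; V$SGASTAT requires DBA-level access, and component names vary by version.

```sql
-- The big consumers of the shared pool, largest first.
-- 'sql area' and 'library cache' grow with the SQL/PLSQL workload,
-- not with the number of gigabytes of data stored.
select name, bytes
  from v$sgastat
 where pool = 'shared pool'
 order by bytes desc;
```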

Alexander, January 02, 2008 - 4:21 pm UTC

Hi Tom,

Is it documented anywhere that we must use bind variables? Or that Oracle expects us to use them?

There's a ton of information about how to use them, and that they improve performance. I'm looking for something that does the subject more justice, like "if you don't use these you are going to destroy your shared pool".

We have lots of vendors who do not use bind variables. I am hoping I can find something easy for a manager to understand so I can basically say, "see look, the instructions say not to do that".

I have ORA-4031's in mind. They appear random and only happen once in a great while.

Have you done test cases specific to ORA-4031's where you can easily demonstrate, "look, no binds, ORA-4031's; binds, no ORA-4031's"?
Tom Kyte
January 02, 2008 - 4:38 pm UTC

have you read this site, hosted on oracle.com? I've said it till I'm blue in the face - and then some.

have your vendors go to metalink (they must have support), have them search...

Your vendors are totally clued into this, or, they haven't used Oracle.


http://docs.oracle.com/docs/cd/B19306_01/server.102/b14211/memory.htm#i34548

...
Efficient use of the shared pool in high-concurrency OLTP systems significantly reduces the probability of parse-related application scalability issues.
....
Soft parses provide a significant reduction in the use of resources such as the shared pool and library cache latches.
....


that entire section is about "reduce parsing, just reduce parsing, you gotta reduce parsing, you should reduce parsing"

and you cannot reduce parsing if you do not bind.



Have your vendor google sql injection and ask them to prove to you they are secure - they are not.
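The difference is easy to demonstrate (a minimal sketch in SQL*Plus; the emp/empno names are just illustrative):

```sql
-- Without binds: every distinct literal is a brand-new statement to
-- the database, each one hard parsed and cached separately in the
-- shared pool.
select ename from emp where empno = 7369;
select ename from emp where empno = 7499;  -- new text, another hard parse

-- With a bind: one statement text, hard parsed once, reused thereafter.
variable n number
exec :n := 7369
select ename from emp where empno = :n;
exec :n := 7499
select ename from emp where empno = :n;    -- soft parse, cursor reused
```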

Alexander, May 19, 2008 - 10:04 am UTC

Hi Tom,

Another ORA-04031 question for you. What does it mean if I'm seeing this even though cursor_sharing is set to similar? Does this simply mean we need more memory? With the default setting I know exactly what to look for: no binds. Thanks.
Tom Kyte
May 19, 2008 - 6:09 pm UTC

cursor_sharing=similar might not fix a hard parse problem at all. What number of child versions do you see out there?

Shared pool impact of dynamic SQL

Gary, May 20, 2008 - 7:42 am UTC

Tom

Thanks for a great resource.

Our application currently uses several thousand queries, all with bind variables and all statically coded within the application. Some enquiry programs will have dozens of hard-coded statements in them; one of these will be chosen as the 'best' cursor to fetch the data based on the query inputs given by the user. Still, though - a finite number of queries, all bound.

We are considering moving to a situation where we have query programs where the 'where' and 'order by' are generated by dynamic SQL, again though with the inputs bound in, not literals.

I understand that this will have a big impact on the shared pool usage - going from deterministic to a possibly infinite number of queries. An average would be that a single query program would have up to 80-90 possible things that might go in the where clause, and up to 40-50 things in an order by. For each of about 200 query programs.

What is your opinion on the two approaches with regard to shared pool memory usage? We're only at prototype stage at the moment so nothing to get any actual evidence from.

Thanks
Tom Kyte
May 20, 2008 - 11:36 am UTC

well, forget about shared pool concerns for a moment.

What are your *fixed, absolutely must have, requirements*

Are they for

a) a finite number of queries
b) an infinite number of queries


that'll be what determines this, isn't it? I mean, you have to do what you have to do - if you need to support a larger number of predicates/ordering options - you have to do that (it is a binary thing - either you do or you do not have to support it)

if you do, you sort of have to - you'll want to build the queries predictably and, understanding human behavior, recognize that users won't really generate an infinite number of unique SQLs - there'll be some steady state they ultimately hit.

Alexander, May 20, 2008 - 9:23 am UTC

Not sure how to check that. Is it the child number value in v$sql?
Tom Kyte
May 20, 2008 - 11:41 am UTC

you can use version_count in v$sqlarea

select version_count, count(*)
from v$sqlarea
group by version_count
order by version_count
/

show that output

Alexander, May 20, 2008 - 12:11 pm UTC

  1  select version_count, count(*)
  2  from v$sqlarea
  3  group by version_count
  4* order by version_count
SQL> /

VERSION_COUNT   COUNT(*)
------------- ----------
            1        389
            2         16
            6          1


I should mention that this is a very old system (9.2.0.4), and I'm not sure if there is actually a hard parse problem. No one knows why this database was created with similar on, but the application support team is reporting ORA-4031's every once in a while (and it's not crashing everything, for some reason, thank god).

I don't know if you can tell if binds are being used when cursor_sharing = similar.
Tom Kyte
May 20, 2008 - 3:31 pm UTC

you have hardly any sql loaded up there, what is your shared pool sized to?

you have very low version counts, that is good - you are "sharing sql"

and you have hardly any sql loaded.

Alexander, May 20, 2008 - 4:03 pm UTC

Correct, it's tiny.

NAME VALUE
----------------------------------- --------
shared_pool_size 67108864

I'm not really clear on how Oracle ages stuff out, though. In theory, if sql is being shared correctly, shouldn't things age out without a problem even if the shared pool is small?

Doesn't look like it, though. Is it reasonable to think that in this case things are being flushed too quickly, because the pool is too small to hold the shared sql under the current user activity?
Tom Kyte
May 20, 2008 - 4:15 pm UTC

if it is too small and we cannot age anything out because people are using it, then you will get an error.

it is not because something is flushed too quickly, but rather we need everything that is already in there and someone is trying to add one more thing to it (and there isn't anymore room)
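You can watch this happen by checking the free memory figure as load builds (a sketch):

```sql
-- How much of the shared pool is currently on the free lists.  A low
-- number here, with everything else pinned in use, is the situation
-- that produces ORA-4031.
select pool, name, round(bytes/1024/1024, 1) as mb
from   v$sgastat
where  pool = 'shared pool'
and    name = 'free memory';
```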

Shared memory allocation

saravanan, June 08, 2008 - 11:09 pm UTC

Tom,
Thank you very much for your time.
In a single-node 64-bit Windows 10gR2 configuration, the values of
java_pool_size, large_pool_size and shared_pool_size are set to 0 and the sga_target (ASMM) value is set greater than 0.
I am getting the ORA-04031 error frequently.
I'm hoping for your help on tuning the shared pool when ASMM is enabled.

sga_target = 167772160
sga_max_size  = 167772160
java_pool_size= 0
large_pool_size = 0
shared_pool_size = 0
SQL> SHOW SGA;

Total System Global Area  167772160 bytes
Fixed Size                  2053360 bytes
Variable Size              96471824 bytes
Database Buffers           50331648 bytes
Redo Buffers               18915328 bytes



Tom Kyte
June 09, 2008 - 11:51 am UTC

do you have a bind variable issue going on here.... (that is, a LACK of bind variables)

Shared memory allocation

Saravanan, June 16, 2008 - 9:55 pm UTC

Tom,
Since the DB does not exist in production, I hope the cause is not bind variables.
Tom Kyte
June 17, 2008 - 9:44 am UTC

why 'hope', why not check and see?

ORA-04031 error

A reader, July 05, 2008 - 6:53 am UTC

Hi Sir,
I am installing oracle 8.1.5 on a windows 2003 server R2 SP2
machine with 4GB of RAM.
I have chosen to install the preconfigured starter DB. When DBCA is creating
the DB it throws the following error message...
ORA-04031 unable to allocate 102724472 bytes of shared memory
("shared pool","unknown object","sga heap","db_block_buffers")

Then an Ignore or Abort prompt appears.

The server has 2 HDDs in RAID1. Though the physical memory is 4GB, in
system properties it shows 3.2 GB.

Please give me any solution.

Memory Notification: Library Cache Object loaded into SGA

kodnruz, April 20, 2009 - 7:31 am UTC

I'm getting alarms in my alert_sid.log like this:
"Memory Notification: Library Cache Object loaded into SGA".
The number of '.trc' files is increasing very fast.
What is happening to my database?
My RDBMS release is 10.2.0.1.
Tom Kyte
April 21, 2009 - 1:32 pm UTC

please utilize support for something like this.

see support note:

Article-ID: Note 330239.1

Unable to Install - 8.1.5 - Shared pool

Nazmul Hoque, April 28, 2009 - 1:03 am UTC

Hi Tom,

We are unable to install oracle - getting a message at database install time - Ora-04031: unable to allocate 78835800 bytes of shared memory ("shared pool","unknown object","sga heap","db_block_buffers"). The operating system is Windows 2000 with SP4 and oracle is 8.1.5 Enterprise Edition,
RAM 2GB.

Please advise how we can overcome this problem.

thanks
Hoque

limit of in list value

A reader, August 10, 2010 - 1:57 am UTC

Hello Sir,

Is there any limit on the IN operator? One of the queries in my product uses bind variables and it is giving me "ORA-04031: unable to allocate 68 bytes of shared memory". When I reduce the number of values it works fine. E.g. the query looks like

select * from emp where emp_no in (:1, :2, :3.....)

If the number of bind variables (values) is more than 500 then it gives the error. I know this is poorly written code, but I want to know: is there any limit on the IN operator?

We are using Oracle 9i on Windows.



Tom Kyte
August 10, 2010 - 7:49 am UTC

there is but you would get an error about exceeding the limit of in list items (something like 1000 if I recall).

If you are hitting a 4031 - I would guess:

o you are using the RBO (rule based)
o your shared pool is in fact too small to hold the HUGE RBO plan that was generated.

A reader, August 10, 2010 - 11:57 pm UTC

Thank you sir for the quick response.

When I run the engine from our product I get this error. I asked the dba to find the actual statement for which the error occurred (we do not have access to the source code of the product) and we got a statement in which more than 500 bind variables are used in total. The exact error message in the alert file is
ORA-00604: error occurred at recursive sql level 1
ora-04031: unable to allocate xxx bytes of shared memory ("shared pool", "select job, nvl2(last_date, ...","sql area", "ctxdef : kkslod")
When I ran the same sql statement in SQL*Plus I got end-of-file on communication channel. I tried reducing the number of values in the IN clause and it worked; that's why I asked whether there is any limit for the IN clause, and why I feel the IN clause is the reason for the ora-04031 error. Is there any way to increase this limit? I know one way is to store those values in a temp table and use them, but changing the product is not possible at this time.

Thank you again.

Tom Kyte
August 18, 2010 - 11:44 pm UTC

as I said, there is a limit (something like 1000) and there is no way to increase that.

It is not the cause of the 4031 (lack of bind variables is probably the cause of that - on other statements or on this statement)

If you get an ora-3113 from sqlplus, please open a support request - that would be a problem that needs to be corrected.

A reader, August 11, 2010 - 12:00 am UTC

Thank you sir for the quick response.

No, we are not using rule based. We have auto SGA and it is set to around 1.7 GB.



Great

Eric.Tai, August 18, 2010 - 12:47 am UTC

It is useful for me, thanks Tom.

Deja vu

L, December 14, 2010 - 11:51 am UTC

Hi Tom,

The DB version is a 9.2.0.8, on a 32-bit Windows OS, which per se is not a problem.

Every now and then we get ora-04031. The people in charge have a hardware man's point of view: if anything shows a problem, we just throw more resources/hardware at it. I have tried many times in the past to get across the idea that the software vendor's code is poor (on top of that, they use Hibernate, which to me is just another layer obscuring the DB from the developers) and that it causes the DB to underperform. I have raised a ticket with them to address the issue, but was told it would be solved in later versions…

However, in the current economic climate acquiring more tin is not as easy as it used to be, so I was hoping you could give me a hand with how to sell this idea, because when I mention bind variables I see everyone's eyes glazing over…

Please find below a snapshot of how things currently are:
select version_count, count(*)
from v$sqlarea
group by version_count
order by version_count desc

"VERSION_COUNT" "COUNT(*)"
1025 2
1024 13
1018 1
1016 1
974 1
905 2
898 1
897 1
877 1
871 1
841 1
840 1
839 1
838 1
833 2
778 1
776 1
723 1
642 1
623 1
557 1
553 2
550 1
548 1
497 1
487 9
471 1
456 1
443 1
442 1
386 1
290 1
246 1
202 1
200 1
196 1
182 1
152 1
150 1
134 1
132 1
130 1
127 1
121 1
91 1
76 1
68 1
51 1
48 1
46 3
43 1
38 1
30 2
27 2
26 2
25 1
24 1
23 4
22 1
21 1
20 2
18 1
17 2
15 5
13 1
12 5
11 5
10 1
9 9
8 8
7 4
6 10
5 6
4 18
3 53
2 145
1 987

Thanks
Tom Kyte
December 15, 2010 - 12:26 am UTC

do you have any idea what is causing all of the child cursors? Anything useful in v$sql_shared_cursor for one of them with a high version count? We'd need to understand what is making the cursors non-shareable before advising on anything.
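One way to go about that (a sketch; on 10g and later you can key on sql_id as below, while on a 9.2 system like this one v$sql_shared_cursor is joined on ADDRESS instead):

```sql
-- Find statements with many child cursors...
select sql_id, version_count
from   v$sqlarea
where  version_count >= 500;

-- ...then check why the children of one of them could not be shared.
-- Nearly every column of v$sql_shared_cursor is a Y/N flag naming a
-- mismatch reason (BIND_MISMATCH, OPTIMIZER_MODE_MISMATCH, and so on);
-- look for the columns that come back 'Y'.
select *
from   v$sql_shared_cursor
where  sql_id = '&sql_id';
```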

Shared memory error

Ari, December 23, 2010 - 6:42 am UTC

Hi Tom,

While importing data into a DB I am getting the below error message:

IMP-00003: ORACLE error 4031 encountered
ORA-04031: unable to allocate 4080 bytes of shared memory ("shared pool","SELECT COUNT(BAD) FROM SYS.I...","sga heap(1,0)","library cache")


I had increased my SGA_MAX_SIZE to 1GB and shared_pool_size to 400M. But still I couldn't avoid the error. Can you please help me? Do I have to drop the user in order to try again?

Still Confused

A reader, March 02, 2011 - 12:44 pm UTC

So yes, bind variables are important to save on 'parsing' and so the SGA doesn't grow. The bigger question is why doesn't Oracle control the growth of the SGA itself. You state that a smaller SGA is 'better'. Well, why can't I say, hey Oracle, keep my SGA at 4GB at the most? It doesn't appear to work that way. In my development environment I often see the SGA growing to 8GB and the buffers shrinking to less than 1GB. How can this be good?

What is more expensive, the hard parse of a SQL statement, or the reading of every block in the database from disk?

Also, if I choose to use a bind variable, how does Oracle know the cardinality of the bind variable filter? That is, a date range filter of 1 month might drive a query plan that is quite different than a date range filter of 10 years, especially if there are other filters in the where clause where a different index might be used.
Tom Kyte
March 02, 2011 - 1:08 pm UTC

So yes, bind variables are important to save on 'parsing' and so the SGA
doesn't grow.


specifically on the HARD parsing, just using binds by itself will not affect overall parse rates - just HARD parse rates.

Well, why can't I
say, hey Oracle, keep my SGA at 4GB at the most


Indeed - why can't you? You can if you want to.


I often see the SGA growing to 8GB and
the buffers shrinking to less than 1GB.


the buffers are in the SGA, the buffers are PART OF the SGA. I think what you mean to be saying when you say SGA is "shared pool"

What is more
expensive, the hard parse of a SQL statement, or the reading of every block in
the database from disk?


You should measure the expense of hard parsing some day - it could be consuming 90% OR MORE of your execution time. It can be more expensive than reading from disk.

However, what you have is one of those extreme cases whereby "automatic" might not be what you want - due to a misbehaving application.

It takes exactly one application
Issuing one sql statement
over and over with different literals
often enough

to basically bring a server to its knees.


And you can certainly set the size of it - one of two ways

a) do not use automatic memory management. For a degenerate application such as you are running - one that just abuses the shared pool to death, one that is misbehaving (and insecure - google up sql injection) - automatic things might not work so well.

b) use automatic memory management but set lower bounds for each of the pools. You could set the db cache size to 4gb - and the sga target to 8gb. The buffer cache would be at least 4gb and could grow if the stars align and the database feels like growing it - but the shared pool would not grow down into the buffer cache of at least 4gb.


for the last bit, see

http://asktom.oracle.com/pls/asktom/asktom.search?p_string=adaptive+cursor+sharing
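Option (b) might look like this in practice (a sketch using the sizes from the example; it assumes sga_max_size already allows an 8GB SGA):

```sql
-- sga_target turns on automatic shared memory management; an explicit
-- db_cache_size then acts as a floor - the buffer cache stays at least
-- 4GB and the shared pool cannot grow down into it.
alter system set sga_target    = 8G scope=spfile;
alter system set db_cache_size = 4G scope=spfile;
-- bounce the instance for the spfile settings to take effect
```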

ORA-04031 --> occurred when upgrading database (10.1.0.4)

Glenn, July 25, 2011 - 4:49 am UTC

Hi Tom,

I'm a complete newbie with oracle. I was upgrading a database a while ago to 10.1.0.5 from 10.1.0.4. The first database I upgraded in the test servers was a success, but the subject error came up when upgrading the next database using the same patch. I tried altering the shared_pool_size and java_pool_size parameters to 150M and the errors still came through.

These were the steps I did:

SQL> STARTUP UPGRADE
SQL> SPOOL patch.log
SQL> @?/rdbms/admin/catpatch.sql
SQL> SPOOL OFF


your assistance will be deeply appreciated! THANKS!

Tom Kyte
July 27, 2011 - 8:04 pm UTC

did you follow every step by step instruction in the patch set notes?

I guess I would suggest "set it larger to upgrade".

this is really a support issue, not an asktom sort of thing.

unable to allocate 3936 bytes of shared memory

Gus, August 25, 2011 - 5:54 am UTC

Hi Tom,

Database version:Oracle Database 10g Enterprise Edition Release 10.2.0.4.0

At first the database's sga_target was 1040M. We thought it would be enough, because this database is just a dev environment: we create many schemas on it to develop code for different sub-projects, and there is almost no data in the tables; we just write the code. But during one deployment we got this error:
ORA-04031: unable to allocate 3936 bytes of shared memory ("shared pool","select * from Session_privs...","sga heap(1,0)","kglsim object batch")

The error was not caused by the query "select * from Session_privs" itself; it was caused by there being no space left in shared memory, so a new query couldn't execute because it needed new space in the shared pool. But I don't understand why.

At first I executed "alter system flush shared_pool" and the database was fine again. Then we deployed the code again and got the error again, so we had to increase the size of the shared pool: I set sga_target and sga_max_size to 2048M, restarted the database, did the deploy again, and it worked.

I went to $ORACLE_BASE/admin/bdump/, got the trace file, and found a listing of the memory:

Memory Utilization of Subpool 1
================================
Allocation Name Size
_________________________ __________
"free memory " 373235404
"miscellaneous " 0
"Undo Meta Data " 72
"dpslut_kfdsg " 256
"File Space Usage " 11336
.....

I found that some subpool allocations were much bigger in the trace file than they are now (after the deploy):

CCursor:20984164(trace file)
8905356(now)

PCursor:28060900(trace file)
3739296(now)

qm_init_uga:qmsg:227225376(trace file)
45192(now)

library cache:42801736(trace file)
14977428(now)

Heap0: KGL:22167212(trace file)
7223624(now)

I noticed that (qm_init_uga:qmsg) shows the biggest difference, but I don't know what qm_init_uga is.

And now I query the size of the shared pool:
SQL> select current_size from v$sga_dynamic_components where component= 'shared pool';

CURRENT_SIZE
------------
268435456

So here are the questions:
1. What consumed the space in the SGA, so that the shared pool couldn't get new space? Was it the deploy, or memory fragmentation?
2. In Oracle there is a reuse rule in the shared pool, right?

Tom Kyte
August 30, 2011 - 4:14 pm UTC

1) it is probably your "many schemas" because it is 'test' - so you have way more sql in the shared pool in this database. It is also probably due to lack of bind variables - an assumption on my part, because I see it *all of the time*. Your developers are not using binds and they are flooding the shared pool with literal sql, and this is compounded by the fact that you have tons of schemas.

2) sure, if possible, but if you don't bind and you flood the shared pool - it may well NOT BE possible.

ORA 04031 - Large Pool Error

Ramon, October 28, 2011 - 7:44 pm UTC

Tom,

Database Version: 10.2.0.5.0
OS: Red Hat Enterprise

Shared Servers ? : Yes

Problem:
   ORA-04031: unable to allocate 104 bytes of shared memory ("large pool","unknown object","session heap","trigger condition node")  

Database parameters:
   NAME                                 VALUE
------------------------------------ ------------------------------
shared_pool_size                     1800M
large_pool_size                      128M
java_pool_size                       160M
sga_max_size                         12G
pga_aggregate_target                 2G
db_cache_size                        8G




1) I think I will probably solve the issue by increasing large_pool_size, but is that the real and only problem here?

What I really want to know and am trying to understand is: will providing more memory solve my problem, or will it just hide another problem?

2) Here: http://www.dbasupport.com/forums/showthread.php?t=34741
They blamed an oversized SORT_AREA_SIZE parameter; is that possible?


Thank you for your time.

Best Regards,
Ramon




Tom Kyte
October 29, 2011 - 11:21 am UTC

in shared server, the UGA (user global area - session memory for each session, includes cursor areas, some sort areas, plsql variables, etc) is allocated in the large pool.

1) it would depend. How many concurrent sessions do you have and how much session memory does each need?

2) not likely if you are using automatic pga memory management as that takes place using dynamic work areas in the PGA - and when automatic pga memory management is on - even with shared server given your version - does not use sort_area_* parameters.
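For question 1, the session memory itself can be measured (a sketch; the statistic names are the standard ones from v$statname):

```sql
-- Per-session UGA usage - in shared server this memory comes out of
-- the large pool, so the sum across concurrent sessions drives the
-- large pool size you need.
select s.sid,
       n.name,
       round(st.value/1024, 1) as kb
from   v$sesstat  st
join   v$statname n on n.statistic# = st.statistic#
join   v$session  s on s.sid = st.sid
where  n.name in ('session uga memory', 'session uga memory max')
order  by st.value desc;
```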

Mystery of ORA-4031

matt, July 25, 2012 - 9:15 am UTC

hi Tom,
first of all thank you very much for your time and the way you explains,provides answers to us. much appreciated the effort. i went through the entire thread regarding this ORA-4031 and i still having some issues understanding it.

we had encountered this error recently during our prod deployment and i flushed the shared pool immediately to allow the deployment get going, it went fine. meanwhile i was doing some research on metalink about this. found few things read it and got more confuse now. oracle says it allocates the chunk of contiguous memory in sizes of 4k in order to satisfy the request for allocation. so say my package or stored procedure size is 50K then it's 50/4= between 12 and 13 so it allocates 13 4k chunks of memory to satisfy this allocation request after walking through free list hash buckets to look for right size bucket, correct? from shared pool or shared pool reserved for larger allocation like this?
my thing is when oracle uses reserved pool for allocation what's the size it think is larger? larger than 4k or something else.

so anything smaller than 4k, it assigns one chunk from normal shared pool (subheap) which should be enough to satisfy the request, it would definitely creates fragmentation in my shared pool if the size is 1k or 2k and if it has to allocates many more like this. correct?

oracle also says that before throwing ORA-4031 it goes through several iteration to look for adequate space to allocate, first it goes through shared pool free list buckets if it misses there then it searches on free list bucket for reserved pool if it misses there then it tries LRU and see if it can ages out any memory structures, it does this so many times not sure the exact number of how many times, and then if it does not find the chunk throws the error. should LRU not suppose to age out something here as the algorithm says least recently used. or it is because that time all the data structures that LRU list having were being used by the other server processes.

i know my application uses bind variables but we still see this errors. my shared pool size is 500MB total. out of this 25 mb is kept for reserved pool when i query both pool to see how much free space i have, i get 3MB free on reserved pool and when i flush shared pool it gives me 166MB free before it says 5MB or something free in shared pool.
how would i go in this situation to resolve this?

thanks
matt
Tom Kyte
July 30, 2012 - 9:35 am UTC

version and platform?

A reader, July 30, 2012 - 2:51 pm UTC

Oracle version 9.2.0.8 running on solaris 10 SUNW, SPARC-Enterprise server
Tom Kyte
July 30, 2012 - 4:06 pm UTC

how many cpus do you have.

A reader, July 31, 2012 - 7:54 am UTC

16 CPUs total, at 2.8GHz

ORA-4031

A reader, January 07, 2013 - 6:29 pm UTC

Tom,
When ORA-4031 is encountered, the most common advice is "fix badly written code". My questions are:
a) What am I looking for in the code?
b) Is there a way to monitor something which can point me to the bad code?
c) Is flushing shared pool a good idea to work around 4031?

Thanks...


Tom Kyte
January 14, 2013 - 11:36 am UTC

a) lack of use of bind variables in an oltp application, which leads to the shared pool being FLOODED with unique, non-shareable sql.

b) look for string concatenation used to build SQL, which leads not only to this problem (no binds) but also introduces a massively huge security risk called sql injection into your code.

c) it is like using an atom bomb to crack a walnut. The walnut certainly gets cracked but at what cost? No, this is not a viable approach.
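The pattern from (a) and (b) side by side, as a minimal PL/SQL sketch (table and variable names are hypothetical):

```sql
declare
  l_sal   number;
  p_empno number := 7369;
begin
  -- BAD: concatenated literal - a unique statement per value, a hard
  -- parse every time, and an open door to sql injection.
  execute immediate
    'select sal from emp where empno = ' || p_empno into l_sal;

  -- GOOD: bind variable - one shared statement, soft parsed on reuse,
  -- and immune to injection through this value.
  execute immediate
    'select sal from emp where empno = :empno' into l_sal using p_empno;
end;
/
```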

Very useful information for Java Programmers too

vriti, September 04, 2013 - 4:32 am UTC

I had the same issue and I was also not using bind variables. Thanks for such a nice explanation :)

Raised this issue during executing du in asmcmd

Mojtaba Dastouri, October 11, 2020 - 7:05 am UTC

I tried to run the du command on the backup disk group of our Data Guard standby and hit the same error:
ORA-04031: unable to allocate 4000 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","kglsim heap")

Would you please let me know how to resolve this issue on Data Guard?

Thanks in advance for your quick and complete answer.
Connor McDonald
October 12, 2020 - 3:47 am UTC

Check out MOS note 1450745.1
