
Tom Kyte

Thanks for the question, Marcel.

Asked: July 01, 2002 - 5:18 pm UTC

Last updated: May 12, 2011 - 7:56 am UTC

Version: 9.2.0.1

Viewed 10K+ times

You Asked

Hi Tom

I would like to take advantage of the new pga_aggregate_target parameter
that is available in 9.2 to allow dynamic, automatic management of the sessions' PGA memory allocation.
My question is: how does the currently available PGA memory affect the execution plan of newly parsed statements? And if it does: do the execution plans of cached statements change when the same amount of memory is no longer available?
Concrete scenario:
Session 1 connects, parses and executes a join over three tables.
Since session one is the only session, it gets all pga memory allocated, and the optimizer decides to do a hash join.
Then n other sessions connect, all of which execute the same join at the same time.
Now each of these sessions has only 1/n of the pga-memory available, therefore generating more hash partitions, which would make a nested loop the better access path.
What will the optimizer do in such a scenario?

Thanks for any thoughtful comments on this.


and Tom said...

I need to change my response to this one.... After researching a bit (as prompted by the review below), I misrepresented the setting initially.


The optimizer looks at the setting of PGA_AGGREGATE_TARGET when using automatic memory management. This value, along with the other init.ora settings that affect the optimizer, is taken into consideration. External events that are currently happening do not affect the optimizer (for all it knows, those events will not be happening when it comes time to actually run the query). You do have the ability to gather statistics on the 'system' itself to have the actual system use impact the generated query plans -- but then you must pick some representative times at which to sample the system.
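For example, a minimal sketch of sampling the system at a representative time (gathering modes as documented for dbms_stats; the window you choose is up to you):

exec dbms_stats.gather_system_stats( gathering_mode => 'START' );
-- ... let a representative workload run for a while ...
exec dbms_stats.gather_system_stats( gathering_mode => 'STOP' );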






Comments

Chris, July 02, 2002 - 3:09 am UTC

Hi Tom,

I, like Marcel, was under the impression that the pga_aggregate_target was a global setting for the instance, not per process. The 9iR2 concepts manual tends to confirm a global setting.

http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96524/c08memor.htm#20461

Could you please confirm whether this is a global setting for the entire instance or a per-session limit.

Thanks in advance.

pga_aggregate_target not shared for all processes?

Marcel Duerr, July 02, 2002 - 3:41 am UTC

Hmm,
Is this really the case?
I understand the manuals differently.
For example, the database reference manual, page 1-127,
states: "pga_aggregate_target specifies the aggregate memory available to all server processes.." and further down "When setting this parameter you should examine the total amount of memory of your system that is available to Oracle and subtract the SGA. You can assign the remaining memory to PGA_AGGREGATE_TARGET."
This implies that all processes need to share pga_aggregate_target, since there is no more memory left.
Or should I read it as "You can assign the remaining memory divided by the expected number of processes to PGA_AGGREGATE_TARGET"??


pga_aggregate_target

Marcel Durr, July 02, 2002 - 10:53 am UTC

Ok, I understand that the actual number of sessions will not change any cached execution plans.
If I understand you correctly, you mean that execution plans get changed whenever I compute new system stats using dbms_stats.gather_system_stats. Would this lead to a huge amount of reparsing, since ALL cached statements would have to be discarded from the cache?
Also, if I got it correctly, gather_system_stats rates the CPU power against the I/O power of the system (roughly speaking). What I am missing here is the amount of available memory.
I guess I just don't understand how the optimizer takes the memory into consideration. Is it the amount of PGA memory available at parse time, or a fixed fraction of pga_aggregate_target? And how does the available memory influence execution plans (if at all)?
Thanks for putting some light on this.

Tom Kyte
July 02, 2002 - 2:34 pm UTC

Only *newly generated* plans will be affected by new system stats (eg: you would have to flush the shared pool to get all subsequent parses to look at the new system stats)

It is the pga_aggregate_target that it looks at -- not the amount available at parse time (which is not representative of the system over time).
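A sketch of forcing that re-parse (heavy-handed on a busy system, since every cached statement will subsequently hard parse):

alter system flush shared_pool;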

sum(PGA_ALLOC_MEM) and sum(PGA_MAX_MEM)

A reader, October 23, 2002 - 6:09 am UTC

Hi

If I have pga_aggregate_target set to 20MB, how come I can see sum(PGA_MAX_MEM) from v$process at around 30MB or even more? On the other hand, sum(PGA_ALLOC_MEM) stays very close to 20MB.....

Tom Kyte
October 23, 2002 - 7:23 am UTC

PGA_MAX_MEM is the max a process has had allocated, which is not the same as "what I currently have allocated".

it is the most they ever had, not necessarily what they currently have. So, you could have 30 processes -- each having had 1m at some point (not all at the same time, bear in mind). Now, each has 0.5 meg currently.
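A sketch of seeing the figures side by side (column names as in v$process):

select sum(pga_used_mem) /1024/1024 used_mb,
       sum(pga_alloc_mem)/1024/1024 alloc_mb,
       sum(pga_max_mem)  /1024/1024 max_mb
  from v$process;
-- max_mb can legitimately exceed the target: it is a sum of historical
-- high-water marks, not of current allocations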

Can't oracle sense the statistics change?

Tony, May 29, 2003 - 3:11 am UTC

What I understand from your answer is that old queries and their plans become incorrect once new statistics are collected. So, we need to flush the shared pool after every run of statistics collection so that new plans will be generated for the queries. This is equivalent to restarting the database, isn't it? Can't Oracle sense the change in statistics and generate new plans automatically?
DBAs don't restart the db as they don't want to lose the queries and their plans in the shared pool. Tom, correct me if I'm wrong.


Tom Kyte
May 29, 2003 - 8:15 am UTC

No, Oracle will automagically invalidate the plans for you. You analyze, we take care of the rest.


who said to flush the shared pool?

pga_aggregate_target

Reader, January 29, 2004 - 11:51 am UTC

(1) If I set 1G for this parameter, is it like 1G of memory is reserved for the pga, or will oracle get it from the OS when needed, up to the limit of 1G? In other words, is pga_aggregate_target considered as a HWM? Thanks.

(2) What if the server does not have 1G of memory for pga_aggregate_target - will it give any error? Thanks.

Tom Kyte
January 29, 2004 - 1:30 pm UTC

1) it is dynamic and not preallocated.

2) only if a request to the OS for memory fails would you get an ora-4030
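A sketch of watching the allocation float under the target (row names as in v$pgastat):

select name, round(value/1024/1024) mb
  from v$pgastat
 where name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');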

pga_aggregate_target

Venkat, April 08, 2004 - 5:53 am UTC

Dear Tom,

As regards the default value for pga_aggregate_target,

MetaLink Note:223299.1 says:

Please note that if you create a new 9i Database, the default value for this parameter will be 25 MB, which will not be sufficient for most Production Databases.

MetaLink Note:147806.1 says:

The value can be set in bytes, kilobytes (K), megabytes (M) or gigabytes (G). The default value is 0.

Which is correct?
Also, please let me know which are the tunable and untunable parts of the PGA.

Thanks,
Venkat



Tom Kyte
April 08, 2004 - 10:09 am UTC

maybe it should say "if you create a new 9i database using DBCA, DBCA will set the default value for this parameter to 25mb"

PGA_AGGREGATE_TARGET and direct path reads/writes

Arun Mathur, June 22, 2004 - 2:34 pm UTC

Tom,

The other day an operator reported a slow response time in one of the database reports. A level 12 trace showed a significant amount of direct path read/write wait events. So, it made me focus on the PGA_AGGREGATE_TARGET parameter. This morning, he ran the same report using the same parameters to the stored procedures, and told me it ran smoothly. I did another level 12 trace, and this time it showed no direct path reads or writes. Could this mean that the size of the work area allocated yesterday was smaller than the input data size, and that this morning there was enough to meet the input size? The database server has 4G of memory, and according to Metalink Note 223730.1, I set the PGA_AGGREGATE_TARGET value using the DSS formula:

For DSS systems
PGA_AGGREGATE_TARGET = (<Total Physical Memory > * 80%) * 50%

, and this value (1.5G) has not changed since April. The solution/troubleshooting spots are probably staring me right in the face, but I'm having problems looking back!

Take care,
Arun


Tom Kyte
June 22, 2004 - 9:22 pm UTC

absolutely -- that is what the pga_aggregate_target is designed to do.

You tell it how much RAM to use and it will adjust the amount used by each session based on the number of concurrent sessions.

the alternative is to have a fixed sort/hash area size -- meaning whether you have 1 session using it or 1,000 sessions using it, each will get "X" -- so you have to size for the peak (or expect to swap like mad).

the pga aggregate target is designed to allocate all available memory "nicely" -- sort of like adjusting the sort/hash area size up and down during the day as the number of sessions goes up and down.
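A sketch of watching that adjustment happen -- the 'global memory bound' row of v$pgastat is the per-workarea limit, and it shrinks as concurrent demand grows (row name as in 9i/10g):

select name, round(value/1024) kb
  from v$pgastat
 where name = 'global memory bound';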

Ora-04030 error

A reader, November 30, 2004 - 3:31 am UTC

Hi Tom,
If I have pga_aggregate_target > what is defined in
ulimit -a (for oracle user)
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) 118920
coredump(blocks) 2097151
nofiles(descriptors) unlimited
Is there a chance of getting the ora-04030 error when the pga memory required by all sessions exceeds 118920K - even if the system still has physical memory - because the process (oracle) memory maximum limit is reached? If this is true, then (provided the physical memory is available), should one set the oracle process memory maximum limit to at least pga_aggregate_target?
Thanks

Tom Kyte
November 30, 2004 - 7:41 am UTC

the pga_aggregate_target is a target amount of ram across all processes.

ulimit is a process by process limit.

so, if your pga_aggregate_target is 1gig, that is 1gig summed up over many processes, it does not mean an individual process is going to get 1gig (far from it)




Clarification

A reader, November 30, 2004 - 2:37 pm UTC

Hi Tom,
So, if a unix user (ulimit) has a 100M memory limitation, the pga_aggregate_target is > 100M, and there is only one unix user on the system using oracle: if a process needs more than 100MB of PGA, it will fail (ora-04030) because the process memory requirement cannot exceed the 100M limitation.
Is this correct?

Tom Kyte
November 30, 2004 - 8:19 pm UTC

with a pga_aggregate_target of 100m, it is extremely doubtful you would come near to approaching 100m of pga in a single process.


but yes, obviously, if the pga of a process exceeds the ulimit size, the OS will stop giving it memory.

but the pga aggregate target is set up for many sessions; it would not really be reasonable to expect a single process to gobble it all up. Oracle holds way back to allow for at least 20-ish users.

Ora-04030 - pga_aggregate_target or session sort_area_size and hash_area_size problem?

Prashant, December 08, 2004 - 9:37 pm UTC

Hi Tom,

If I have a PGA aggregate target of 2G,
and I do the following for some of the sessions:
ALTER SESSION SET WORKAREA_SIZE_POLICY=MANUAL;

ALTER SESSION SET SORT_AREA_SIZE = 209715200;

ALTER SESSION SET HASH_AREA_SIZE = 419430400;

And I plan to have at least 16 jobs, each of them having the changed session attributes, running concurrently. Would this be the reason why I am getting an Ora-04030 error, or should I look into the code?

Thanks and Regards
Prashant


Tom Kyte
December 09, 2004 - 1:05 pm UTC

if you have 16 jobs, each with 400m of hash area, that would be 6400 meg of ram - and they could be sorting as well

yes, it would be something that could cause an ora-4030. Or you might have a ulimit in place that prevents a process from allocating that much ram as well.

Prashant, December 09, 2004 - 3:27 pm UTC

Hi Tom,

You mentioned "it would be something that could cause an ora-4030" -- in this case, is it the only reason why I get this error, or do I have to peek at something else? The ulimit is set to unlimited.


Thanks
Prashant

Tom Kyte
December 09, 2004 - 3:33 pm UTC

certain OS's will only let you allocate so much, you could be running out of memory, you could be 32bit, and so on.

did you actually, perhaps, run out of ram?

Need more info.....

Prashant, December 09, 2004 - 10:47 pm UTC

Hi Tom,

Where can I read more into this...
<<certain OS's will only let you allocate so much, you could be running out of
memory, you could be 32bit, and so on.
>>

I don't think we ran out of RAM, because I was monitoring the load and there was still plenty of free memory.

Thanks
Prashant

Tom Kyte
December 10, 2004 - 1:02 am UTC

32bit...

OS limiting you below "unlimited" regardless (eg: AIX does that as I recall)....

did you ask your OS specialists?

ulimit,

A reader, July 19, 2005 - 2:40 pm UTC

What is ulimit? Is it applicable to Windows, Linux and Unix? How do I find the values in each OS?
What is the rule (if any) for setting the PGA or SGA for our databases based on the ulimit value?

I just tried the ulimit command on a UNIX box; it said "unlimited". What does that mean?

All these questions are helpful for me in targeting the 4030 error.


thanks,

Tom Kyte
July 19, 2005 - 5:36 pm UTC

ulimit is a unix resource management thing.

the ulimit command is usually the way to find the ulimit settings.


that meant all were set to unlimited - there are no limits. "man ulimit" would be useful to learn more.




pga memory management

Samuel, August 23, 2005 - 6:47 pm UTC

From http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/memory.htm#49321 :

<quote>When the size of the work area is smaller than optimal, the response time increases, because an extra pass is performed over part of the input data. This is known as the one-pass size of the work area. Under the one-pass threshold, when the size of a work area is far too small compared to the input data size, multiple passes over the input data are needed.</quote>

does one pass mean the sort is pushed to disk once to complete the sort? does multi pass mean it has to go to disk multiple times? could you please clarify? Thanks.

Tom Kyte
August 24, 2005 - 9:02 am UTC

http://www.nocoug.org/download/2003-08/how_cbo_works.ppt

lots of good stuff, including pictures that discuss this topic.
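To put numbers next to the pictures, a sketch querying the cumulative work-area statistics (statistic names as in 9i/10g v$sysstat):

select name, value
  from v$sysstat
 where name like 'workarea executions%';
-- 'optimal'   = done entirely in memory
-- 'onepass'   = spilled to temp once
-- 'multipass' = spilled to temp more than once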

Alberto Dell'Era, August 24, 2005 - 11:50 am UTC

<quote>
http://www.nocoug.org/download/2003-08/how_cbo_works.ppt
</quote>

This paper by Jonathan Lewis is *excellent* - the best presentation I've ever read; lots of light bulbs switching on!
Thank you for showing it to us, and thanks to Jonathan for writing it, of course!

Tom Kyte
August 24, 2005 - 6:12 pm UTC

(he is smarter than the average bear, as Yogi Bear was fond of saying..)

increased pga_aggregate_target and started getting ORA-04030

A reader, October 11, 2005 - 5:50 pm UTC

Hi

Working in 9.2.0.6 Windows 2003, 3GB RAM

Yesterday I increased a database's parameters (it was way undersized): db cache to 512m, shared pool from 128m to 256m, and pga_aggregate_target from 24mb to 512mb. I then added a statspack job every 30 minutes.

This Java (Servlets/JSP) application has been running fine for the last few months; however, after these modifications the database started getting ORA-04030 in the web application, and when I try to log on using SQL*Plus I get ORA-04031.

How can I get ORA-04030 if I still have 1G of free memory on the server (only 2GB was used)?



Tom Kyte
October 12, 2005 - 6:50 am UTC

it is *windows*, windows is 32bit in general - unless you are using a special version of it - or you enable "a little more memory" to be used by /awe switches at boot time and such.

windows apps address about 2gig of ram in general. you just pushed those limits since this is a single process - all of the pga's are in the process, along with the sga.

PGA and sort together

dost, November 21, 2005 - 2:23 pm UTC

SQL> SHOW PARAMETER PGA

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
pga_aggregate_target                 unknown 3221225472
SQL> SHOW PARAMETER SORT

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
_sort_elimination_cost_ratio         integer 5
nls_sort                             string  binary
sort_area_retained_size              integer 1048576
sort_area_size                       integer 65536
SQL> SHOW PARAMETER WORK

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
workarea_size_policy                 string  AUTO
SQL> 

Tom,
my DBA has set the above settings. Please advise on them - do you think we need to comment out the sort parameters?

Tom Kyte
November 21, 2005 - 3:13 pm UTC

I'd ask them why they were compelled to set an undocumented init.ora parameter.

in 10g - the pga_aggregate_target is used (since your workarea_size_policy is auto) in all connection types.

in 9i, the pga_aggregate_target would be used for dedicated server connections and the sort_area_size for shared server connections.


as long as you want to use up to about 3.2 gig of memory for workareas, I see nothing to comment on. only you know if you want to use that.

direct path write

dost, November 22, 2005 - 3:30 am UTC

Tom,

My god, it was a horrible day...
Why? :(
I generated stats:
execute dbms_stats.gather_schema_stats(ownname=>'FPA', cascade=>true, estimate_percent=>dbms_stats.auto_sample_size, degree=>10, method_opt=>'for all indexed columns size 1');
which caused bad performance
and direct path write wait events.

What is that?



Tom Kyte
November 22, 2005 - 8:33 am UTC

you did this in your test system right? no harm done to your poor unsuspecting end users as you made a huge change from using the rule based optimizer to using the cost based optimizer, right?


you do realize this is a fairly huge change, don't you? sort of like dropping in a new version of your application. something that needs to be tested so you can find the right set of statistics for your data - and to fix any queries that did not work so well.

ORA-4030

Guido Smet, January 11, 2006 - 9:25 am UTC

It gave me a little better understanding of some of the workings going on. If I am already at the max Windows allows, what else can be done? Still looking for that answer.

Tom Kyte
January 12, 2006 - 10:35 am UTC

reallocate your memory (shrink your SGA)

use magic switches to permit you to use a little more than 2gig (/awe and others - metalink has some notes - sort of a windows OS question though)

use the 64bit edition of windows.

what's possible memory used by Oracle processes

jianhui, January 24, 2006 - 12:33 am UTC

Hello Tom,
I have around 40GB of SGA on a 64GB memory machine; the maximum PGA is 5GB from v$sysstat. But we still see 60GB of swap disk usage constantly, and free physical memory is less than 2GB. I am just wondering which oracle processes are still using more virtual memory, causing the high swap usage. The machine is running only a single oracle instance. Can you think of anything in the oracle processes that could cause virtual memory usage other than the SGA & PGA?

Tom Kyte
January 24, 2006 - 8:07 am UTC

you will see "swap" mapped to processes normally, this is to be expected.

You have the text of the processes in memory, you have the file system cache - in memory, you have the stacks of the oracle processes - in memory, you have the operating system - in memory.

They are not "using" swap; they have their backing store allocated (pre-allocated) from swap, but they are likely not using it.




swap and oracle memory

jianhui, January 24, 2006 - 11:42 am UTC

Thanks Tom, I checked some related Solaris documentation - yep, it says the same thing about swap usage. Thank you for your expertise.

So I need to have swap of at least 1x the physical memory, right? (some Linux docs suggest 1.5-2x of physical memory; not sure whether Solaris is similar or not)

Tom Kyte
January 24, 2006 - 8:48 pm UTC

swap is typically 1 to 2x real memory...

PGA_AGGREGATE_TARGET % USED FOR CREATE INDEX

Reader, March 23, 2006 - 3:29 pm UTC

Hi tom,
my question is: how much memory (from pga_aggregate_target) is used to order data during a create index?
I have configured 1.5G for PGA_AGGREGATE_TARGET, but when creating indexes oracle uses the temporary tablespace for segments greater than 200m.

Thanks

Tom Kyte
March 23, 2006 - 4:14 pm UTC

the amount of available memory given to you via the pga_aggregate_target varies by release...

but it would be the same amount as is used for a sort of any kind.
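One way to see this for yourself -- a sketch that, run from another session while the create index sort is active, shows the memory and temp space the workarea is actually consuming (columns as in v$sql_workarea_active):

select sid, operation_type,
       round(actual_mem_used/1024/1024) mem_mb,
       number_passes,
       round(tempseg_size/1024/1024) temp_mb
  from v$sql_workarea_active;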




Pga Advisory

Lena, March 24, 2006 - 9:15 am UTC

Hi Mr.Kyte

I got a statspack report with a 60 min. snapshot interval, with the following result.
I can't understand what oracle is trying to advise.
All the values look the same; it's always 100% in the case of the PGA
(or a time saved factor of 1 in the case of the shared pool).
I don't know if it matters, but the load on the database was light
at the time the report was generated.

Thank you.



PGA Memory Advisory for DB: xxxx
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
   where Estd PGA Overalloc Count is 0

                                       Estd Extra     Estd PGA   Estd PGA
PGA Target    Size           W/A MB   W/A MB Read/       Cache  Overalloc
  Est (MB)   Factr        Processed  Written to Disk     Hit %      Count
---------- ------- ---------------- ---------------- --------- ----------
        25     0.1             35.7              0.0     100.0          1
        50     0.3             35.7              0.0     100.0          0
       100     0.5             35.7              0.0     100.0          0
       150     0.8             35.7              0.0     100.0          0
       200     1.0             35.7              0.0     100.0          0
       240     1.2             35.7              0.0     100.0          0
       280     1.4             35.7              0.0     100.0          0
       320     1.6             35.7              0.0     100.0          0
       360     1.8             35.7              0.0     100.0          0
       400     2.0             35.7              0.0     100.0          0
       600     3.0             35.7              0.0     100.0          0
       800     4.0             35.7              0.0     100.0          0
     1,200     6.0             35.7              0.0     100.0          0
     1,600     8.0             35.7              0.0     100.0          0
          -------------------------------------------------------------

Shared Pool Advisory for DB: xxxx
-> Note there is often a 1:Many correlation between a single logical object
   in the Library Cache, and the physical number of memory objects associated
   with it. Therefore comparing the number of Lib Cache objects (e.g. in
   v$librarycache), with the number of Lib Cache Memory Objects is invalid

                                                          Estd
Shared Pool    SP       Estd       Estd      Estd Lib  LC Time
   Size for  Size  Lib Cache  Lib Cache    Cache Time    Saved  Estd Lib Cache
  Estim (M) Factr   Size (M)    Mem Obj     Saved (s)    Factr    Mem Obj Hits
----------- ----- ---------- ------------ ------------ -------- ---------------
         64    .6         18        3,312           91      1.0          15,858
         80    .7         18        3,312           91      1.0          15,858
         96    .9         18        3,312           91      1.0          15,858
        112   1.0         18        3,312           91      1.0          15,858
        128   1.1         18        3,312           91      1.0          15,858
        144   1.3         18        3,312           91      1.0          15,858
        160   1.4         18        3,312           91      1.0          15,858
        176   1.6         18        3,312           91      1.0          15,858
        192   1.7         18        3,312           91      1.0          15,858
        208   1.9         18        3,312           91      1.0          15,858
        224   2.0         18        3,312           91      1.0          15,858
          -------------------------------------------------------------


Tom Kyte
March 24, 2006 - 10:01 am UTC

your system was idle, not doing any significant work.

You need to do this when the system is experiencing some significant load.

Else, no advice about how to size things up or down is possible.

PGA_AGGREGATE_TARGET

Prem, May 18, 2006 - 4:49 am UTC

Hello Tom,

I have a few doubts about pga_aggregate_target:

1) If I increase the pga_aggregate_target value, will it affect the SGA, or memory outside the SGA?

2) If I increase the pga_aggregate_target value further, will there be any impact on RAM?

Thanks In advance
Rgds
Prem

Tom Kyte
May 19, 2006 - 9:08 am UTC

1) it is not related to the SGA, it is process memory.

2) of course, you would open up the opportunity to use more RAM in each process for sorting/hashing.

Why do you say....

Srinivas Narashimalu, July 25, 2006 - 5:02 pm UTC

Tom,

This is related to an answer you gave to one of the questions asked above. I don't understand why you mention moving from RBO to CBO here. Can you please explain why you say that?



I generated stats:
execute dbms_stats.gather_schema_stats(ownname=>'FPA', cascade=>true, estimate_percent=>dbms_stats.auto_sample_size, degree=>10, method_opt=>'for all indexed columns size 1');
which caused bad performance
and direct path write wait events.

What is that?


Followup:
you did this in your test system right? no harm done to your poor unsuspecting end users as you made a huge change from using the rule based optimizer to using the cost based optimizer, right?

you do realize this is a fairly huge change, don't you? sort of like dropping in a new version of your application. something that needs to be tested so you can find the right set of statistics for your data - and to fix any queries that did not work so well.



Tom Kyte
July 25, 2006 - 5:25 pm UTC

I did not follow these sentences:

... I don't understand why you say moving from RBO to CBO here. Can you please explain why you say that? ...

and there is entirely insufficient data supplied to even begin to look at that last question.

sorry for the insufficient data

A reader, July 26, 2006 - 9:31 am UTC

Tom,

My question is with respect to one of the questions in this same thread, answered by you. The question was asked on 22-Nov-2005 by the reviewer "dost".

I don't understand why you say "you are changing from RBO to CBO, that's why it resulted in poor performance", as he had only gathered statistics.

Thanks,


Tom Kyte
July 26, 2006 - 11:32 am UTC

I said that because that is what happened. They gathered statistics; under CHOOSE (the default optimizer mode in 9i and before) that would invoke the CBO.

They made a MAJOR change
Without testing.

The only thing that I can be assured of when someone does that is.....

You will have a very bad day.

when you make a fundamental change like that - one of three things will happen:

things will go faster
things will go slower
things will not change at all apparently.


Some of their queries would go faster
Some would go slower
Some would not change


You will notice the slower queries immediately. You will tend not to notice at that time that some queries are running faster - the slow ones tend to catch your attention.

Thanks!

Srinivas Narashimalu, July 26, 2006 - 12:22 pm UTC

Thanks Tom!

RBO & PGA_AGGREGATE_TARGET

A reader, August 31, 2006 - 9:36 am UTC

Hi Tom,

Can one use PGA_AGGREGATE_TARGET with RBO?

If your answer is "yes", will the parameter behave differently under the CBO than under the RBO?

If answer is "no", can you please explain why?

As per my understanding, the PGA_AGGREGATE_TARGET parameter is not directly related to the CBO or RBO; the parameter controls memory allocation for Oracle processes. Is that right?

we are on 9.2.0.5 (Still using RBO!)


Thanks


Tom Kyte
August 31, 2006 - 9:55 am UTC

yes -- the RBO never looked at sort_area_size to figure out a plan, so it is entirely unaffected by the pga_aggregate_target setting

which should be a clue as to why the RBO should not be used :)

the CBO looks at the various settings and does different things based on them.

dbms_stats error

jimm, September 15, 2006 - 2:30 pm UTC

SQL> execute sys.dbms_stats.gather_table_stats(ownname=> 'SRTDATA',tabname=>'SRTCHECKLISTLOG',method_opt=>'for all indexed columns size 1',cascade=> true,degree=>3);
BEGIN sys.dbms_stats.gather_table_stats(ownname=> 'SRTDATA',tabname=>'SRTCHECKLISTLOG',method_opt=>'for all indexed columns size 1',cascade=> true,degree=>3); END;

*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001
ORA-04030: out of process memory when trying to allocate 8192 bytes (sort
subheap,sort key)
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10084
ORA-06512: at line 1 

Tom,
Please help me to resolve this issue.

show parameter work
workarea_size_policy                 string       AUTO

show parameter pga
pga_aggregate_target                 unknown      524288000

show parameter sort
nls_sort                             string
sort_area_retained_size              integer      0
sort_area_size                       integer      65536
(note: this sort_area_size parameter is commented out in the init.ora, but it must take this value by default)

show parameter sga
lock_sga                             boolean      FALSE
pre_page_sga                         boolean      FALSE
sga_max_size                         unknown      1656089828

show sga
Total System Global Area 1656089828 bytes
Fixed Size                   457956 bytes
Variable Size             603979776 bytes
Database Buffers         1048576000 bytes
Redo Buffers                3076096 bytes


What's the size of the temporary tablespace?
2 GB

Before running dbms_stats:

SQL> select serial#, spid, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process
        where spid = 8196;  <-- obtained spid from previous cmd.

   SERIAL# SPID   PGA_USED_MEM  PGA_ALLOC_MEM  PGA_MAX_MEM
        59 8196         249352         333556       464628

SQL> select * from v$temp_space_header;

TABLESPACE_NAME  FILE_ID  BYTES_USED  BLOCKS_USED  BYTES_FREE  BLOCKS_FREE  RELATIVE_FNO
TEMP                   1   482344960        58880  1614807040       197120             1

During the run of:

SQL> execute sys.dbms_stats.gather_table_stats(ownname=> 'SRTDATA',tabname=>'SRTCHECKLISTLOG',method_opt=>'for all indexed columns size 1',cascade=> true,degree=>3);
BEGIN sys.dbms_stats.gather_table_stats(ownname=> 'SRTDATA',tabname=>'SRTCHECKLISTLOG',method_opt=>'for all indexed columns size 1',cascade=> true,degree=>3); END;
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001
ORA-04030: out of process memory when trying to allocate 8192 bytes (sort
subheap,sort key)
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10084
ORA-06512: at line 1

During this cmd (before it errored out):

SQL> select serial#, spid, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process
        where spid = 8196;

   SERIAL# SPID   PGA_USED_MEM  PGA_ALLOC_MEM  PGA_MAX_MEM
        59 8196         655476         748828       748828

SQL> select * from v$temp_space_header;

TABLESPACE_NAME  FILE_ID  BYTES_USED  BLOCKS_USED  BYTES_FREE  BLOCKS_FREE  RELATIVE_FNO
TEMP                   1   482344960        58880  1614807040       197120             1

After this cmd had finished (errored out):

SQL> select serial#, spid, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process
        where spid = 8196;

   SERIAL# SPID   PGA_USED_MEM  PGA_ALLOC_MEM  PGA_MAX_MEM
        59 8196         525088         683292       748828

SQL> select * from v$temp_space_header;

TABLESPACE_NAME  FILE_ID  BYTES_USED  BLOCKS_USED  BYTES_FREE  BLOCKS_FREE  RELATIVE_FNO
TEMP                   1   482344960        58880  1614807040       197120             1

Tom Kyte
September 16, 2006 - 2:25 pm UTC

step a) don't use parallel right now, that'll let you gather
step b) work with support to figure out if this is an OS restriction (ulimit) or something else.

probably a ulimit problem on unix
or you are using windows and you really did run out of ram with 32bit addressing.

ORA-04030: out of process memory

Sheshadri, October 09, 2006 - 4:44 am UTC

Hi Tom,

We are getting
"ORA-04030: out of process memory when trying to allocate 61464 bytes (QERHJ hash-joi,kllcqas:kllsltba)
ORA-06512: at "SFRTTCS.PROCESS_NEW_APPLICATION", line 476
ORA-06512: at line 1"
in our database while performing some simple tests.

Our database is running on AIX-Based Systems (64-bit) and
the database version is 10.1.0.4.0.

Our RAM size is 6GB.

Ulimit settings are as follows-

$ ulimit -a
core file size (blocks, -c) 1048575
data seg size (kbytes, -d) 131072
file size (blocks, -f) unlimited
max memory size (kbytes, -m) 32768
open files (-n) 2000
pipe size (512 bytes, -p) 64
stack size (kbytes, -s) 32768
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited

Following are some of oracle parameters settings.

sga_max_size = 516M
sga_target = 0
statistics_level = TYPICAL
pga_aggregate_target = 24M
workarea_size_policy = AUTO
shared_servers=1
Processes = 500

The error always comes from the PROCESS_NEW_APPLICATION package.

We are performing some tests on the 4 tables on which the above package is built; these 4 tables have just 30K records each.

We tried setting SGA_TARGET to 4G and PGA_AGGREGATE_TARGET to 1G (without changing ulimits), but we still got the same error.

Do you think we have to change the ulimit, or is there a possibility of a bug?

Thanks & Regards
Sheshadri

Tom Kyte
October 09, 2006 - 8:39 am UTC

that pga aggregate target, while NOT the cause of the 4030, is way small, do you not think? you do know that it is a setting for ALL SESSIONS, and in 10g it would be used for shared servers as well as dedicated servers.

It is either a ulimit issue (my guess would go with that - you are showing SOME ulimit settings, but we don't know if they apply to the server processes or not) or a product issue.

Number of records and such - not meaningful. if your plsql code has a bug in it - you could easily exhaust all real memory. Just filling up a plsql table, for example, consumes ram (from the sga in shared server, the pga in dedicated - but we don't really know if this is shared or dedicated)

ORA-04030: out of process memory

A reader, October 09, 2006 - 9:35 am UTC

Thanks for the response, Tom.

Our database is configured for a shared server environment.
We initially thought PGA_AGGREGATE_TARGET & the SGA size were small, and then we increased PAT to 1G and SGA_TARGET to 4G.

But we are still getting the same error.

Regarding the PL/SQL, the test runs successfully for around 700 cycles but fails after that. So I guess it may not be an issue with the PL/SQL.

Regarding the ulimit,
$ulimit -Ha is giving
max memory size (kbytes, -m) 32768

$ulimit -Sa is giving
max memory size (kbytes, -m) unlimited

I am really sorry, Tom, I don't have more info on this.

Can we try again by setting SGA_TARGET to 4G and PGA_AGGREGATE_TARGET to 1G?

Or are there any other parameters particular to a shared server environment that we need to look at?

Tom Kyte
October 09, 2006 - 10:08 am UTC

...
Our database is configured for shared server environment.....

but it is also configured for dedicated server :) So, until you query SERVER in v$session to confirm, we don't really know (the client can pick which one they want). A database ALWAYS does dedicated server; it may optionally also do shared server.

and shared server for a long running process is *not smart*, not at all - you should definitely NOT use shared server for anything that takes more than a very short period of time (I might even say "longer than a second")

if the test runs for 700 "cycles" (is that iterations? loops? records? not sure what a "cycle" is) and then fails - the problem could very well be in the plsql - you run for a while, then run out of memory. You filled it up, and it takes 700 cycles to fill it.


you are NOT running out of SGA memory - this is process memory (malloc(), sbrk(), mmap() type stuff), NOT shared memory. Increasing the SGA would likely increase the risk.

The pga aggregate target controls how much memory oracle would LIKE to use, it does not put a hard limit on anything.
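A sketch of that confirmation query (the SERVER column reports DEDICATED, SHARED or NONE):

select sid, username, server
  from v$session
 where username is not null;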



ORA-04030: out of process memory

A reader, October 09, 2006 - 12:48 pm UTC

Hi,
V$session shows the server as dedicated.

I am executing a package (NEW_APPLICATION.EVALUATE_RULES) 2000 times in a loop. While executing, it executes successfully around 650 times and then gives this error.
*******************************************************
*
ERROR at line 1:
ORA-20762: Error in Rule 249 dynamic SQL
Package: NEW_APPLICATION
Procedure: EVALUATE_RULES
ORA-04030: out of process memory when trying to allocate 127000 bytes (QERHJ
hash-joi,kllcqas:kllsltba)
ORA-06512: at "SFRTTCS.PROCESS_NEW_APPLICATION", line 476
ORA-06512: at "SFRTTCS.TESTEVAL", line 8
ORA-06512: at "SFRTTCS.TESTLOOP", line 5
ORA-06512: at line 1
*******************************************************

I have changed ulimit to unlimited.

PGA_AGGREGATE_TARGET = 100M
SGA_MAX_SIZE=516M
LARGE_POOL_SIZE=50M
SHARED_POOL_SIZE=180M

Do you recommend any other changes? Please let me know if you need any other info.

Thanks & Regards
Sheshadri


Tom Kyte
October 09, 2006 - 1:46 pm UTC

...
V$session shows as the server as dedicated....

see :)

I rather suspect you might have a bug in your plsql - very well could be. Look at it: are you filling up, say, a plsql table or something (eg: calls to dbms_output, your own table)... Are you leaking cursors? (that will leak workareas as well, as workarea memory is not releasable until the cursor closes...)

ORA-04030: out of process memory

A reader, October 10, 2006 - 5:43 am UTC

Hi,

The PL/SQL is written by developers. They are using a type (which is used as an OUT parameter in the procedure). There are no explicit cursors defined in the procedure, and there are no DBMS_OUTPUT calls.

In the session, if I get this error once, it does not allow me to execute the procedure again. I have to come out of the session and execute it in another one.

I am not able to execute this procedure more than 600 times in a single session.

Each time I run the procedure, the PGA, Max PGA, UGA & Max UGA memory keep increasing, and at some point it fails. Is this the right behaviour for the memory, or should it release the memory every time?

Is there a way I can check for a leak in the workareas, or check the cursors?

Thanks & Regards
Sheshadri



Tom Kyte
October 10, 2006 - 8:16 am UTC

sounds like a memory leak, please utilize support for this one - if you can pare the code way way way WAY down to reproduce, I'll be glad to take a look.
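In the meantime, a sketch for watching the suspect session's memory grow between iterations (:sid is a placeholder for the session you are testing; statistic names as in v$statname):

select n.name, round(st.value/1024/1024) mb
  from v$sesstat st, v$statname n
 where st.statistic# = n.statistic#
   and st.sid = :sid
   and n.name in ('session pga memory', 'session pga memory max',
                  'session uga memory', 'session uga memory max');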

ORA-04030: out of process memory

A reader, October 10, 2006 - 8:06 am UTC

Hi Tom,

We tried running the same procedure in a different database installed on a different system; there we were able to run it 5000 times, the pga, pga(max), uga and uga(max) growth rate was almost half that of our development database, and it was much faster.
I am really confused about whether this is a PL/SQL code problem, a database configuration issue, or a product bug.
Please let me know if you want me to check anything else.

Thanks & Regards
Sheshadri


Tom Kyte
October 10, 2006 - 11:55 am UTC

please utilize support as stated, sounds like a possible memory leak.

ORA-04030: out of process memory

A reader, October 12, 2006 - 6:11 am UTC

Thanks for your time and suggestions, Tom.
I think there was an issue with the stored procedure.
Yesterday we got the new release, and now we are not getting any error. I don't know what changes they made... I am comparing both versions of the code now.

I wanted to know a few things about PGA -

1. Currently my PGA_AGGREGATE_TARGET is 25M, and the session is in dedicated mode. But when I was testing my procedure by running it in a loop, my session's PGA memory was 333M and PGA Memory Max was 335M, UGA 329M and UGA Max 330M. Can you explain this?

2. I tried to clear up some memory using dbms_session.free_unused_user_memory when my PAT was 25M and the session's PGA Mem was 333M, Max PGA 335M, but even after this my PGA Mem, Max PGA, UGA and Max UGA showed the same figures.

Thanks & Regards
Sheshadri

Thanks again.
Sheshadri

Tom Kyte
October 12, 2006 - 8:25 am UTC

1) there is tunable and untunable pga memory; we can control only your sort/hash/bitmap workareas. We cannot and will not stop you from filling a plsql table up with lots of stuff (consuming UGA memory, which is in the PGA when using dedicated server).

if you have access to my book Expert Oracle Database Architecture, see the chapter on memory management - I cover this in depth.

2) that would mean.......

you didn't have anything to free, you were using it.
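Going back to 1), a sketch of separating the two pieces (row names as in v$pgastat) -- the 'auto workareas' rows are the tunable part; the remainder of 'total PGA inuse' is the untunable memory your code is holding:

select name, round(value/1024/1024) mb
  from v$pgastat
 where name in ('total PGA inuse',
                'total PGA used for auto workareas',
                'maximum PGA used for auto workareas');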

ORA-04030: out of process memory

JDam, November 26, 2006 - 1:52 pm UTC

Tom,
If I have a stored procedure that inserts, deletes and updates rows from one database to another database, using execute immediate for the DML, could this end in ORA-04030? Both databases are 9.2.0.6.

Tom Kyte
November 26, 2006 - 2:26 pm UTC

anything you do could result in an out of memory situation

if and when you run out of memory, you run out of memory.

could it be that you

a) did something to allocate huge chunks of PGA
b) and then did an execute immediate that just put you over the edge

sure - does that mean the execute immediate is the cause?
No, not really, it was everything beforehand

could the statement being executed be the cause?
Unlikely, but not impossible.

SAP recommendation on PGA_AGGREGATE_TARGET

Suraj Sharma, March 28, 2007 - 8:05 am UTC

Tom,

According to SAP recommendations, the PGA_AGGREGATE_TARGET and _PGA_MAX_SIZE parameters should be set as follows:

PGA_AGGREGATE_TARGET
40% of the memory (standalone DB)
20% of the memory (otherwise)
maximum 600 MB (Windows)

_PGA_MAX_SIZE
400 MB when PGA_AGGREGATE_TARGET > 4 GB
600 MB when PGA_AGGREGATE_TARGET > 8 GB
800 MB when PGA_AGGREGATE_TARGET > 12 GB

I found some information at https://jonathanlewis.wordpress.com/2010/08/26/dmca

As per this link, all the information below relates to 10g:
The PGA region in Oracle Database 10g is very important because it governs the speed of sorting operations and SQL hash joins. You may want to dynamically change the pga_aggregate_target parameter when any one of the following conditions is true:

* Whenever the value of the v$sysstat statistic "estimated PGA memory for one-pass" exceeds pga_aggregate_target, you want to increase pga_aggregate_target.

* Whenever the value of the v$sysstat statistic "workarea executions - multipass" is greater than 1 percent, the database may benefit from additional RAM memory.

* It is possible to over-allocate PGA memory, and you may consider reducing the value of pga_aggregate_target whenever the value of the v$sysstat row "workarea executions - optimal" consistently measures 100 percent.

Is there something similar in 9i or something different?

Thanks,
Tom Kyte
March 28, 2007 - 12:12 pm UTC

you must query SAP for what they need to have set in order to make you supportable via SAP. You are using a 3rd party application there.



Suraj Sharma, March 29, 2007 - 12:59 am UTC

Tom,

I am sorry for not making my question quite understandable. I wanted to ask: according to the link above ( https://jonathanlewis.wordpress.com/2010/08/26/dmca ), we may want to dynamically change the pga_aggregate_target parameter when any one of the following conditions is true:

* Whenever the value of the v$sysstat statistic "estimated PGA memory for one-pass" exceeds pga_aggregate_target, you want to increase pga_aggregate_target.

* Whenever the value of the v$sysstat statistic "workarea executions - multipass" is greater than 1 percent, the database may benefit from additional RAM.

* It is possible to over-allocate PGA memory, and you may consider reducing the value of pga_aggregate_target whenever the value of the v$sysstat row "workarea executions - optimal" consistently measures 100 percent.

These recommendations are for Oracle 10g. My question is: do these recommendations also apply to Oracle 9i?

I am sorry again for making my question a bit complicated rather than simple and straightforward.

Tom Kyte
March 30, 2007 - 11:56 am UTC

use the pga aggregate target advisor in statspack OR use AWR/ADDM in 10g to have the database tell you what your optimal setting would be based on your workload history.


increase pga_aggregate_target??

Ricardinho, June 24, 2008 - 4:20 pm UTC

hi again Tom

Oracle 10g is installed on a windows 2003 server which has 4GB of physical ram.
I sometimes get:
ORA-04030: out of process memory when trying to allocate 8512 bytes (pga heap,kgh stack.)
ORA-04030: out of process memory when trying to allocate 16396 bytes (QERHJ hash-joi,QERHJ Bit vector)
ORA-12012: error on auto execute of job 1

Do I need to increase pga_aggregate_target in this scenario?

Here is my configuration:
sga_max_size big integer--> 2G
sga_target big integer ---> 2G
pga_aggregate_target big integer--> 1G



Tom Kyte
June 24, 2008 - 5:42 pm UTC

if you did, you would get the ora-4030 FASTER

4030 means "the OS refused to give us more"; if you make the pga aggregate target larger, you are telling us "use more OS memory"

if you are 32bit, it is unlikely you are getting anywhere near 3gb of memory to play with. how are you configured OS-wise?

ORA-4030 pga allocated exceeds PGA_AGGREGATE_TARGET

carlos, June 24, 2008 - 10:08 pm UTC

There was one procedure using XML functions that didn't close the XMLDocs it created; it was looping and creating an xmldoc for each row in the loop.

After some minutes executing the procedure, the PGA allocated reached about 5 GB, even with PGA_AGGREGATE_TARGET set to 800 MB.

So here is my question: is the PGA_AGGREGATE_TARGET value a limit, or just a hint so oracle knows how much to use?
I ask because the procedure runs until it consumes all memory (RAM and swap) and raises the ora-4030.


thanks for your time.

temp space

A reader, June 25, 2008 - 3:32 pm UTC

Tom,

I have very big tables: one has 60 million records, and the other is an inline view which will generate 50 million records; I join these two tables and do some grouping on top. Every day I have to process 12 months of data; the final table generates very few records, around 2 million. When I process one month of data it does not use the temp tablespace. When my job is running (only my job - no other job) and I process each of the 12 months individually, it does not use temp space at all; but when I process 12 months at a time it uses the temp tablespace, and that is expected because of the high volume of data. My question is: which is the correct method? I know that instead of looping (12 times), one single INSERT is more powerful, but here, while processing a single month, the processing happens within memory only and I have to loop only 12 times (12 inserts). Running the 12 inserts takes 41 minutes, while the single insert took 48 minutes. This job is going to run only at night, and only this job. Today I am processing only 12 months, but in the future I may process 60 months, in which case it will execute 60 inserts. So can you tell me which is the better way to process?
Tom Kyte
June 25, 2008 - 3:50 pm UTC

perhaps, if you are a batch job, you should use manual memory management and up the amount available to yourself. You can alter your SESSION to use manual memory management and bump your hash/sort areas up.

I'd prefer to do that before writing procedural code.
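A sketch of what that session-level switch could look like (the 512m values are illustrative only -- size them for your machine):

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 536870912;   -- 512m per sort workarea
alter session set hash_area_size = 536870912;   -- 512m per hash workarea
-- ... run the big insert ...
alter session set workarea_size_policy = auto;  -- back to sharing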

temp space

A reader, June 25, 2008 - 4:07 pm UTC

I am using auto PGA, and I have oracle 10g - will oracle not take care of this automatically?
Tom Kyte
June 25, 2008 - 5:05 pm UTC

automatic pga memory management is all about "sharing"

you don't want to share, you are the only game in town


automatic pga memory management will restrict you from using all of the pga aggregate target - in fact, you'll get maybe 5% of it. it wants to be able to use that target for ALL USERS, but if you are the only user, sharing isn't in the game plan. Manual lets you say "use this, just do it"

temp space

A reader, June 25, 2008 - 6:54 pm UTC

Thanks Tom for the prompt reply.

When the job is running at night it is the only job, but during the day the application can be accessed by multiple users, so at that time I want to share memory. So is it possible that during the night data load I make it manual, alter the session to give it the full memory, and once the load is done change it back to AUTO so everyone can share?

Second, one observation: when I process data in the loop, say starting Jan 07 to Dec 07, every month the process time increases (no, the source data volume is not more than the previous month). I am not worried about that, but today I tried to load data for 18 months (starting from Jan 07 to Jun 08), and when it started processing Mar 08 it used around 10GB of temp space (maybe the data volume is high), BUT when I run the individual process only for Mar 08 it does not use temp space at all. Is it something like after each insert the memory is not freed, and because of that it tries to use temp space? I.e., when I process data in the loop: Jan 07 requires total memory X, which is less than the available memory Y; Feb needs X+1, still less than Y; Dec 07 needs X+12, still less than Y; but when it starts Apr 08 it needs X+15, which is greater than Y, so it uses temp space. BUT if I run only Apr it requires only X. So does that mean that when I run in the loop, every iteration adds some extra effort (memory)? Is my understanding correct?

Tom Kyte
June 26, 2008 - 10:26 am UTC

you can, for your SESSION ONLY, use manual.

you can set this with alter session, it'll only affect you. A database can do both manual and automatic memory management at the same time.


The more you process at a time, the more memory you would need for sorting and such, yes.

pga_aggregate_target instance startup

Suraj Sharma, June 29, 2009 - 3:18 am UTC

Always-helpful Tom,

I have a question here though. Suppose I set sga_max_size=2GB and pga_aggregate_target=1GB, and my OS has a total of 16 GB of memory. During instance startup, will this database occupy a total of 3GB of memory from the OS (2GB for SGA and 1GB for PGA)?

One of my colleagues told me that Oracle takes the memory as and when it requires it in the case of PGA, but I believe that it should acquire it right at instance startup.

I know the fact that it can grow beyond the pga_aggregate_target parameter.

Thanks,
Suraj Sharma





Tom Kyte
July 06, 2009 - 6:06 pm UTC

pga is process memory, the p stands for process.

Now, since when you start up, all of the processes that will be part of your instance over its life are NOT STARTED (eg: the dedicated server you need five months from now is NOT started when you start the instance) - how could the memory be pre-allocated for it?



pga memory is allocated AS IT IS NEEDED over time, it is released when it is no longer needed.

Your instance would take 2gb of "shared memory" and the background processes would allocate some PGA memory - but not the 1gb of pga memory.

Your colleague is correct.

Alexander, July 27, 2009 - 3:34 pm UTC

Tom,

We have an Oracle application for our HR system. It uses a lot of pl/sql, and therefore a lot of pga memory. Since the pga target is meaningless when it comes to pl/sql programs, is there any way to stop Oracle from using up all the memory on the server and bringing it to its knees? We had a customer get impatient with a job they had kicked off, so they kicked it off again, and it used up about 68 gig of memory.
Tom Kyte
July 27, 2009 - 8:39 pm UTC

... it uses a lot of pl/sql, and
therefore a lot of pga memory. ...

the two things do not lead to each other in general; please don't say that - just because you use plsql does not mean you will use a lot of pga.

If you have code that is using 68gb of memory - you have a serious issue, that is bad code - not "we are using plsql therefore 68gb of ram is unavoidable". What are you doing in your code that causes this? do your developers read out entire tables (thousands, 10's of thousands or even more rows big) into plsql table types??


You could use shared server, the UGA would be in the large pool of the SGA - it would stop the "we used all machine memory" error, but you would then be plagued with ora-4031 unable to allocate x bytes of large pool memory errors instead.

I would say you have a problem in the code - the code is inefficient - and the code needs to be looked at.

This is not a plsql problem, I can do this (use all memory) in C, C++, Java, Visual Basic - whatever.

Alexander, July 28, 2009 - 10:05 am UTC

Is there anything else that can cause your pga memory to skyrocket past your pga target, other than pl/sql programs? It was my understanding that operations like ad hoc queries will try to stay within the pga target, but for pl/sql programs Oracle will not stop the process in the middle and will try to get more memory from the OS if it can.

This is an Oracle application, developed by Oracle.
Tom Kyte
July 28, 2009 - 8:51 pm UTC

Your understanding is correct

However, my statement is "if your developers wrote plsql code that requires 68gb of RAM, then your developers are doing something really bad in the code like loading up an array of thousands of rows - meaning, if they stopped using plsql and started using java - they would STILL need 68gb of memory - meaning - you need to educate the developers on how to write efficient code in any language"


PLSQL will not consume 68gb of ram unless you are doing something really silly in the code, like loading a million row table into a variable.

What oracle application process name written by oracle is this - throw us a bone here, give us some information.
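For what it's worth, the classic shape of such code is easy to spot -- a sketch of the anti-pattern (my_big_table is hypothetical):

declare
  type t_tab is table of my_big_table%rowtype index by pls_integer;
  l_rows t_tab;
begin
  -- anti-pattern: pulls the ENTIRE table into session (PGA/UGA) memory at once
  select * bulk collect into l_rows from my_big_table;
end;
/

bulk collect with a LIMIT clause, processing a few hundred rows per fetch, keeps memory flat regardless of table size.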

Alexander, July 29, 2009 - 10:29 am UTC

I heard you loud and clear, which is why I said "this is Oracle's code...", in other words, your developers not ours.

I'm not sure what the application would be known as to Oracle, just what we refer to it as. It's an HR system; I think it's called "oracle apps".

The package or proc that sucked it all up is n_get_tx_data_pub_w.get_adj if that helps at all....
Tom Kyte
August 03, 2009 - 2:11 pm UTC

... n_get_tx_data_pub_w.get_adj ...

that'll be your code, as in, your code - not even Oracle Applications code.

Not our naming standard at all.

suggest you read the code out of the dictionary and hope your developers tagged it with their identity.

Your company wrote that code. Or, your company paid a contractor to write that code.

Alexander, August 03, 2009 - 3:26 pm UTC

Is Oracle's stuff in the "APPS" schema?
Tom Kyte
August 04, 2009 - 1:58 pm UTC

Your stuff can be there too, if that is what you mean.

Just because it is in the SYS schema for example, does not mean "it is Oracle's"


Plop that API into google, tell me how many hits you get back. If it were ours, you would see people talking about it, see documentation on it, etc. Drop it into metalink, tell me how many hits you get.

Read the code out - if it is not

a) wrapped
b) unwrapped but with a big standard "this is oracle code" header

it isn't ours.

Estd PGA Cache Hit % in AWR report

Raja, May 12, 2011 - 4:56 am UTC

Hi Tom,

It would be great to get some help on how the value for "Estd PGA Cache Hit %" is calculated in the "PGA Memory Advisory" section of an AWR report.

I am trying to get the exact formula for this. It would be great to have some pointers.

Thanks

Tom Kyte
May 12, 2011 - 7:56 am UTC

You will not get an exact formula, and it wouldn't really be useful to you if you had it.

The estimation is based on us keeping track of what we did in the past (your workload), how often we had to sort to disk, spill to temp - and how often we think we'd be able to SKIP that spilling to disk - to do things in memory - if we had the additional memory.

It is not a "formula" so much as an estimation based on what we experienced during the time the AWR report covers.
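The closest you can get from the outside is the advice view the report is built from -- a sketch (columns as in v$pga_target_advice):

select round(pga_target_for_estimate/1024/1024) target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;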
