
Question and Answer

Tom Kyte

Thanks for the question, joon.

Asked: March 23, 2001 - 5:29 pm UTC

Last updated: September 09, 2013 - 9:06 am UTC

Version: 8.1.6.2

Viewed 50K+ times!

You Asked

The server has a memory problem. It has 1 GB of memory and 500 MB of swap space.
Usually about 300 users are connected.

os : HP-UX B.11.00 A 9000/800 32 bits

Here are the parameters for sort and hash:

sort_area_size 500k
sort_area_retained_size 500k
hash_area_size=0
Hash joins are enabled.

Users connect via dedicated server.

So I checked the v$sesstat view using the following SQL:

select s.osuser osuser,s.serial# serial,se.sid,n.name,
max(se.value) maxmem
from v$sesstat se,
v$statname n
,v$session s
where n.statistic# = se.statistic#
and n.name in ('session pga memory','session pga memory max',
'session uga memory','session uga memory max')
and s.sid=se.sid
group by n.name,se.sid,s.osuser,s.serial#
order by 2
;

Some users' PGA looks like this all day; the PGA memory didn't shrink.
username name size
dougammc session pga memory 22318856
mmcdouga session pga memory max 22318856
mmcdouga session uga memory 408312
mmcdouga session uga memory max 1046160

Some users' PGA shrank.
username name size
imacfarl session pga memory 1760960
imacfarl session pga memory max 10328424
imacfarl session uga memory 473920
imacfarl session uga memory max 1133288

1. Which init parameter decides session pga memory?

2. How can I find out whether or not a user ran the following SQL?

alter session set sql_area_size=10000000;
alter session set sql_area_retained_size=1000000;

3. Can I assume the user "dougammc" ran the following, because so much PGA memory is allocated to this user?

alter session set sql_area_size or
alter_session_retained_size


thanks.






and Tom said...

The PGA holds a lot more than sort areas. The PL/SQL data segment is there. There are hundreds, even thousands, of things that could be there. Consider this:

ops$tkyte@DEV816> @mystat pga
old 4: and lower(a.name) like '%' || lower('&1')||'%'
new 4: and lower(a.name) like '%' || lower('pga')||'%'

NAME VALUE
------------------------------ ----------
session pga memory 144688
session pga memory max 144688

ops$tkyte@DEV816> declare
2 type myarray is table of varchar2(255) index by binary_integer;
3 x myarray;
4 begin
5 for i in 1 .. 1000 loop x(i) := rpad('x',255,'x'); end loop;
6 end;
7 /

PL/SQL procedure successfully completed.

ops$tkyte@DEV816> @mystat pga
old 4: and lower(a.name) like '%' || lower('&1')||'%'
new 4: and lower(a.name) like '%' || lower('pga')||'%'

NAME VALUE
------------------------------ ----------
session pga memory 479044
session pga memory max 479044

Now, I certainly haven't done a sort. There are thousands of things that will affect the size of the PGA...

So, can I shrink that?


ops$tkyte@DEV816> exec dbms_session.FREE_UNUSED_USER_MEMORY

PL/SQL procedure successfully completed.

ops$tkyte@DEV816> @mystat pga
old 4: and lower(a.name) like '%' || lower('&1')||'%'
new 4: and lower(a.name) like '%' || lower('pga')||'%'

NAME VALUE
------------------------------ ----------
session pga memory 158400
session pga memory max 479044

ops$tkyte@DEV816>

Yes -- BUT unless you are using MTS it is an artificial shrink!! You haven't given any memory back to the OS.

In dedicated server mode the PGA is allocated in the process heap. This heap grows as needed. Your PGA is allocated using C malloc's and such in this heap. The heap really can never shrink: if you shrink the PGA, all you have done is free up some memory in the middle of the process heap -- but there are always some bytes of memory at the far end of the heap that will prevent it from shrinking. So the amount of memory allocated to the process remains unchanged. You might think -- yes, but I've given back so much memory to the process to reuse now. No, you haven't done that either, because the PGA is itself a heap with free and used chunks in it. All you've done in this case is make us run slower, because we'll have to rebuild that PGA heap again real soon.

The size of the PGA is totally dependent on what that process does. You can try putting a free_unused_user_memory call in there, but unless you are using MTS it won't make a difference. Under MTS, the PGA is allocated out of the LARGE_POOL (if you have it configured) in the SGA and is part of a bigger heap. There, that free memory call would have some effect.

300 users, 1 gig of ram, you might be a candidate for MTS. If you decide to test that path -- make sure to allocate a large enough LARGE_POOL to avoid trashing your shared pool (and remember that the pga's will be in the sga so make the sga really big, 800 meg or more on a 1gig system would not be unreasonable with MTS).
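If you were to test the MTS route on that 8.1.6 system, the relevant init.ora settings would look something like the sketch below. The dispatcher count, server counts and large pool size are purely illustrative placeholders, not recommendations -- they have to come out of your own testing:

# illustrative values only -- size these from your own tests
mts_dispatchers = "(protocol=tcp)(dispatchers=3)"
mts_servers     = 10       # shared servers started at instance startup
mts_max_servers = 20       # upper bound Oracle may grow to
large_pool_size = 600M     # session (UGA) memory comes from here under MTS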


Comments

What is process heap?

A reader, January 19, 2002 - 3:11 pm UTC

As subject

these technical words are really confusing me... sigh

thank you

Tom Kyte
January 19, 2002 - 5:57 pm UTC

A process heap is the memory used by a process. Memory is managed in a "heap" (see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:531274103649
and look for "what does heap table mean" for a quickie definition of a heap).

Process memory is the memory in use by an OS process. A process heap is just the heap of memory it is using.

Database is very slow after few minutes

SIVABABU, February 21, 2002 - 5:03 am UTC

Hello TOM,
How was your holiday? We have a very big problem using Oracle 9i on SuSE Linux 7.2. The database is very slow now, but it was working very fast with Oracle 8i and SuSE Linux 7.0. Are there any special parameters to consider in Oracle 9i?
I will give you all the parameters now. If I set a wrong size, please correct me.

About Suse settings:
SuSE Linux 7.2 (i386) - Kernel 2.4.7-4GB.
SHMMAX=4147483648
SHMMNI=4096
SEMMSL=350
SEMMNS=32000
SEMOPM=200
SEMMNI=228
and Our Initsid.Ora file has..
db_block_size=8192
db_cache_size=67108864
java_pool_size=117440512
large_pool_size=151048576
shared_pool_size=520440512
processes=150
sort_area_size=1524288
timed_statistics=FALSE
We installed with dedicated server and running Our ERP software.
If I shut down the database and start it again, it is really fast, but after a few minutes it goes slow again. We tried ANALYZE and ALTER SYSTEM FLUSH SHARED_POOL etc., but that was not helpful for us.
Can you please explain why it is so slow and what I have to tune for Oracle 9i?

If it is not possible to solve it, we will go back down to Oracle 8i and SuSE Linux 7.0.

We have been working on this problem for the last week, but have no solution.

with regards,
Sivababu

Confused

H, July 24, 2002 - 12:02 pm UTC

Tom, you said :
"If you decide to test that path -- make sure to allocate a large enough LARGE_POOL to avoid trashing your shared pool (and remember that the pga's will be in the sga so make the sga really big, 800 meg or more on a 1gig system would not be unreasonable with MTS)."

And you also said (in question titled PGA/UGA MEMORY)
"The PGA is never allocated out of Oracle's SGA – it is always allocated locally by the process or thread."

Which is true?


Tom Kyte
July 24, 2002 - 10:47 pm UTC

In dedicated server mode, the UGA is allocated in the PGA (in the process, not the SGA)

In MTS (aka shared server) mode, the UGA is allocated in the SGA (NOT in the process, in the SGA). If you have a large pool, the UGA will be allocated in there, if not -- it'll be in the shared pool.


The PGA is *never* in the SGA. The UGA is sometimes in the PGA, sometimes in the SGA.

So, all of the above -- is true ;)
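As a quick way to see which of the above applies to a given connection, the SERVER column of v$session shows how each session came in. A minimal sketch (assuming SELECT access to v$session); under shared server this column typically reads NONE while the session is idle and SHARED while a shared server is actively working on its behalf:

select sid, username, server
  from v$session
 where username is not null;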

What about hash area?

Alex Daher, November 14, 2002 - 11:55 am UTC

Tom,

What about the hash_area_size? Where is it allocated from?

Regards,
Alex

Tom Kyte
November 14, 2002 - 7:38 pm UTC

pga

And...

Alex Daher, November 19, 2002 - 9:56 pm UTC

Another easy one...

Once the hash area is allocated, it's never released to the OS, only
back for reuse within the PGA, right? Just like sort_area_size?

Cheers!
Alex

Tom Kyte
November 19, 2002 - 10:16 pm UTC

it is like the sort area size -- however to say it is "not released" is an overstatement if you ask me.

Please clarify.

Yogeeraj, November 19, 2002 - 11:23 pm UTC

hello,

You said:
<quote>
300 users, 1 gig of ram, you might be a candidate for MTS.
</quote>

Please explain how we would decide whether we have to go the MTS way or the DEDICATED server one. Here, it seems like you base yourself on the number of users and available RAM...

In Chapter 2, you wrote (more about OLTP, transactions and their duration):
<quote>
This is rule number one for MTS - make sure your transactions are short in duration. They can be frequent, but they should be short (as characterized by OLTP systems). If they are not, you will get what appears to be a total system slowdown due to shared resources being monopolized by a few processes. In extreme cases, if all of the shared servers are busy, the system will appear to hang.

So, MTS is highly appropriate for an OLTP system characterized by short, frequent transactions. In an OLTP system, transactions are executed in milliseconds - nothing ever takes more than a fraction of a second. MTS, on the other hand, is not appropriate for a data warehouse. Here, you might execute a query that takes one, two, five, or more minutes. Under MTS, this would be deadly. If you have a system that is 90 percent OLTP and 10 percent 'not quite OLTP', then you can mix and match dedicated servers and MTS on the same instance. In this fashion, you can reduce the number of processes on the machine dramatically for the OLTP users, and make it so that the 'not quite OLTP' users do not monopolize their shared servers.
</quote>

Would you also recommend a "mix and match of dedicated servers and MTS on the same instance" in systems - "75% OLTP and 25% 'not quite OLTP'" with a very large user community (300 - 400 users, 4 GB RAM)?

Thank you in advance for the clarifications and guidance. I am currently reviewing my system, which has had a considerable increase in the number of users recently. (Maybe your next long-awaited book on performance tuning would help me a lot ;) )

Regards
Yogeeraj

Tom Kyte
November 21, 2002 - 11:46 am UTC

Here was my thought process.

1-3meg of ram per user. Say 3m. That is 900meg of ram. That is almost the entire machine. Bummer. Not much left for an SGA.

So, what can we do? If we enable MTS and set shared servers to 10, we'll have at most 10 concurrent users (hence the need for SHORT transactions) but we'll also have ONLY 10 pga's (sort areas and other stuff). We'll still have 300 UGA's (session specific memory) and they'll ALL be in the large_pool in the SGA now, not dynamically allocated from the OS. We *may* be able to fit them all in there now given that we have 290 LESS pga's to deal with -- maybe.

If you have 4gig of ram, you might not be using MTS for the same reason. You need 1 to 2 gig of ram for the dedicated servers maybe (large, I rounded up) and you have 1gig left over for an SGA and 1 gig for everything else.

I would definitely mix MTS and dedicated server processes -- if you have long running batch jobs, they almost certainly NEED dedicated servers.
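One common way to do that mixing is at the connect-string level: leave the normal users on shared server and give the batch jobs a TNS alias that forces a dedicated server. The alias, host and service names below are hypothetical -- just a sketch of the (SERVER = DEDICATED) clause:

# tnsnames.ora -- hypothetical entry used only by long-running batch jobs
BATCH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (SERVER = DEDICATED)
    )
  )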


Some clarification

Vijay Sehgal, November 20, 2002 - 8:54 am UTC

Could you throw some light on the deciding factors for Dedicated/Multithreaded server setups?

Tom Kyte
November 21, 2002 - 12:33 pm UTC

see above... memory, number of processes on the server, duration of transactions...

non-standard block size buffers

Reader, March 12, 2003 - 3:10 am UTC

Tom,
When do we need to set the non-standard block size buffers?

DB_2K_CACHE_SIZE, DB_4K_CACHE_SIZE, DB_8K_CACHE_SIZE, DB_16K_CACHE_SIZE, DB_32K_CACHE_SIZE... What should I set as my non-standard block size when I have a block size of 8K? Is it always necessary to set the non-standard block size buffers, or what does it depend on?

Please throw some info my way.

Thanks


Tom Kyte
March 12, 2003 - 7:51 am UTC

only if you actually use them.

if you use a single block size, don't set them
if you want to use more than one block size, set them

reader

reader, March 12, 2003 - 8:25 am UTC

Tom, I need some more info from you...

Under what circumstances do we need to set a different block size? I hope the block size is defined at the tablespace level - am I right here? Please explain with a scenario where we may need a different block size - if you can spare some time on this!

And if I use a different block size, how should the setting of DB_nK_CACHE_SIZE be calculated?

Thanks for your time and effort...

Tom Kyte
March 12, 2003 - 8:41 am UTC

you would use it primarily to transport a tablespace from a small blocksize database (OLTP) to a big blocksize database (DW) to perform an ETL (extract/transform/load)

You would not use it day to day, generally. I've yet to see a site using more than one block size day to day.




Thanks for your prompt reply - as always !

reader, March 12, 2003 - 9:44 am UTC


"1-3meg of ram per user"

MEHMOOD, May 26, 2004 - 8:58 am UTC

Dear Tom:

As you said that

============================
"1-3meg of ram per user"
============================

how can we calculate this requirement??

Tom Kyte
May 26, 2004 - 11:38 am UTC

I benchmark since it is *very* much application specific.

do lots of simple small sql? smaller
do lots of big queries with lots of sorts? bigger
do lots of plsql with no state? smaller
do lots of plsql with huge in memory collections? bigger
and so on...

A reader, May 26, 2004 - 12:17 pm UTC

Tom,

In a RAC OLTP environment, where indexes have a higher propensity for contention, do you think it's a good idea to have a different block size for that tablespace?

Thanks.

Tom Kyte
May 26, 2004 - 3:57 pm UTC

no.

partitioning to spread the workload out -- sure.

maybe, just maybe a reverse key index here and there -- sure.

maybe using random in key generation (like I do on asktom, for a different reason) -- sure.

Default memory per session

VA, February 19, 2005 - 4:36 pm UTC

In dedicated server mode, does every connected session take up some amount of memory in the server's SGA/PGA regardless of any work they do? In other words, if I have 100 inactive sessions, are they using up memory on the server?

Thanks

Tom Kyte
February 19, 2005 - 5:37 pm UTC

in dedicated server mode, each process has *some* pga yes (some OS memory, every process from "ls" to "oracle" does).

they use resources of the server in the SGA as well (well, ls won't but any oracle process will)

if you have 100 inactive sessions, yes, you have memory being used.

How much?

A reader, February 19, 2005 - 10:45 pm UTC

"if you have 100 inactive sessions, yes, you have memory being used"

1. Is this statement true regardless of dedicated or shared server?

2. If so, for each, how can I quantify how much memory is being used?

Thanks



Tom Kyte
February 20, 2005 - 9:42 am UTC

1) yes, they will have some SGA resources in the form of UGA memory allocated and a session state.

2) v$sesstat can be used to see the pga memory (in dedicated server mode the uga is inside the pga - this is what you want), and it can be used to see uga memory (in shared server mode this is what you want; it is the portion of the large pool - you had best be using a large pool in this mode - that they are using).
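A minimal sketch of such a query (assuming access to the v$ views): it lists the current 'session pga memory' and 'session uga memory' per connected session, which you could then sum or restrict to STATUS = 'INACTIVE':

select s.sid, s.username, s.status, n.name, st.value bytes
  from v$session s, v$sesstat st, v$statname n
 where st.sid = s.sid
   and st.statistic# = n.statistic#
   and n.name in ('session pga memory', 'session uga memory')
 order by s.sid, n.name;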

Any formulas for PGA?

Orlando Reyes, March 10, 2005 - 1:45 pm UTC

Hi Tom,

You said: "The PGA holds a lot more than sort areas. The PL/SQL data segment is there. There are hundreds, thousands of things that could be there."

Now, Donald K. Burleson, I'm sure you've heard of him, says that the PGA is equal to OS overhead (1MB for Unix, 2MB for Windows) + sort_area_size + hash_area_size.

I ran some tests in Unix, (dedicated server) and the numbers coming from v$session, v$sesstat, v$statname are very close to this formula.

So my question is, would the OS overhead account for the many other things in the PGA? Is this formula close to reality? What are your comments on this?

Thanks a lot as usual.

Orlando



Tom Kyte
March 10, 2005 - 7:32 pm UTC

(if you quote someone -- you should provide a reference - I'm responding to you and your claim that they are "close to this formula")


I'm sorry -- but there is no way on earth you could come close to that except by accident.

the pga is equal to the amount of memory you have allocated.  period.  nothing more, nothing less.

did you know that sort area size is allocated *as needed* - if you set it to 512m, it won't allocate it all at startup?

did you know that you can have more than one sort area going at a time?


the pga is most definitely NOT "(1MB for Unix, 2MB for Windows) + sort_area_size + hash_area_size", definitely NOT.



ops$tkyte@ORA9IR2> show parameter area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
bitmap_merge_area_size               integer     1048576
create_bitmap_area_size              integer     8388608
hash_area_size                       integer     1073741824
sort_area_size                       integer     536870912
workarea_size_policy                 string      MANUAL
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @mystat pga
ops$tkyte@ORA9IR2> set echo off

NAME                                VALUE
------------------------------ ----------
session pga memory                 229020
session pga memory max             294556

one might be concerned that if it were sort area size/hash area size, I'd be using 1.5 gig of ram, but I'm not


ops$tkyte@ORA9IR2> begin
  2  for i in 1 .. 10000 loop my_pkg.l_data(i) := 'x'; end loop;
  3  end;
  4  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> @mystat pga
ops$tkyte@ORA9IR2> set echo off

NAME                                VALUE
------------------------------ ----------
session pga memory                4030108
session pga memory max            4095644

but that plsql table is taking a bit of ram...

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly statistics
ops$tkyte@ORA9IR2> select * from all_objects order by 1,2,3,4,5,6,7,8,9,10,11,12;

28018 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     114922  consistent gets
          0  physical reads
          0  redo size
    1957903  bytes sent via SQL*Net to client
      21036  bytes received via SQL*Net from client
       1869  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      28018  rows processed

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @mystat pga
ops$tkyte@ORA9IR2> set echo off

NAME                                VALUE
------------------------------ ----------
session pga memory                4104812
session pga memory max            7578220




and it appears my sort used 3m -- out of 512 it was allowed to -- but gave it back (in 9i)

 

Estimating PGA better

Orlando Reyes, March 11, 2005 - 1:01 pm UTC

Okay Tom,

Here is the reference for the quote: http://www.dbazine.com/burleson9.shtml

I also did some other testing and the results confirmed your answer. The reason I thought I was getting close results was because I was dealing with averages, and there was not much activity on my connections. This is a web application that uses a connection pool which keeps many connections open and inactive most of the time.

Here is what I did. I connected as:
SQL*Plus: Release 10.1.0.3.0 - Production on Fri Mar 11 10:50:06 2005

Copyright (c) 1982, 2004, Oracle. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

SYSTEM@PINFR01> show parameter area_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
bitmap_merge_area_size integer 1048576
create_bitmap_area_size integer 8388608
hash_area_size integer 2097152
sort_area_size integer 1048576
workarea_size_policy string AUTO

Then ran the query:
-----------------------------------------------------------------------------------------
select substr(s.username,1,10) usern,s.serial# serial,se.sid,substr(n.name,1,25),
max(se.value) maxmem
from v$sesstat se,
v$statname n
,v$session s
where n.statistic# = se.statistic#
and n.name in ('session pga memory','session pga memory max',
'session uga memory','session uga memory max')
and s.sid=se.sid
and username = 'SYSTEM'
group by n.name,se.sid,s.username,s.serial#
order by 2;


USERN SERIAL SID SUBSTR(N.NAME,1,25) MAXMEM
---------- ---------- ---------- ------------------------- ----------
SYSTEM 2820 25 session pga memory 1126240
SYSTEM 2820 25 session pga memory max 6702912
SYSTEM 2820 25 session uga memory 536320
SYSTEM 2820 25 session uga memory max 6423040

Then I ran

declare
type myarray is table of varchar2(255) index by binary_integer;
x myarray;
begin
for i in 1 .. 1000 loop x(i) := rpad('x',255,'x'); end loop;
end;
/

for 1000, 10000, 100000, and 1000000 and reran the previous query in between; the
'session pga memory' and 'session pga memory max' changed as follows:

1000 =========================================================================
SYSTEM 2820 25 session pga memory 1060704
SYSTEM 2820 25 session pga memory max 6702912

10000 ========================================================================
SYSTEM 2820 25 session pga memory 3910360
SYSTEM 2820 25 session pga memory max 6702912

100000 =======================================================================
SYSTEM 2820 25 session pga memory 32489280
SYSTEM 2820 25 session pga memory max 32489280

1000000 ======================================================================
SYSTEM 2820 25 session pga memory 318262008
SYSTEM 2820 25 session pga memory max 318262008

So I can see how the PGA changes depending on the actual task being executed.

The reason I asked the question is that I have read several books by D. Burleson and
I think he has a lot of Oracle knowledge, but when I saw his formula I thought of validating it a bit more.

The question is, when trying to determine the amount of PGA used for all users, could we take an average of PGAs using v$sesstat, v$statname and v$session and generate an estimate? Would that be close enough?

Also, as you can see (at the top), this DB is using "workarea_size_policy = AUTO". This being the case, do we really need to worry about the 'PGA * HWM', or is just tuning the PGA_AGGREGATE_TARGET parameter more than enough?

All I am trying to do is give the Unix team a more educated number for the amount of RAM needed for this DB; we know the SGA, so we want to estimate the PGA better.

Thanks as usual.

Orlando





Tom Kyte
March 12, 2005 - 8:18 am UTC

In order to know the pga you need, you need to use the pga you are going to use

That is, unless you want to guess, you need to benchmark your application to see what you do in real life.

The person that fills a 5,000,000 entry plsql table for "faster lookups" will use tons more pga than the person that uses a simple faster join.

The system that uses pga_aggregate_target can estimate the memory used for sorting and hashing very well (that target number) and then must only add in what they use for other stuff (plsql variables and the amount of pga used by cursor areas and such).


So, you benchmark your application (eg: run it against representative data, using the real application, exercising it as much as you can). Then you observe what it used.


Unless the books you read actually prove to you the things they say, you might question them. I can say anything I want - it means nothing unless I show you how to see it yourself (and remember things change, any book that doesn't show you how to measure and see what the author is telling you - well, that book is not so useful for you must now sit down and pretty much point by point ask yourself "how can I prove this")
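While such a benchmark is running, one way (a sketch, 9i and later with automatic PGA memory management) to watch the aggregate consumption is v$pgastat -- sample it before, during and after the run:

select name, value
  from v$pgastat
 where name in ('total PGA inuse',
                'total PGA allocated',
                'maximum PGA allocated');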


Proof, Proof and Proof right?

Orlando Reyes, March 13, 2005 - 12:29 pm UTC

Hi Tom,

I get your point, and that is why I like your books a lot, because most of the stuff you talk about is basically proven there.

In any case, sometimes books or papers are not 100% accurate and that is why we have to do our homework.

For what I’ve seen, I think I’ll go with Oracle’s PGA automatic option and try to benchmark my aggregate target parameter as well as I can. I think it is the best option out there for most DB’s.

Thanks a lot and have a great one.

Orlando


Tom Kyte
March 13, 2005 - 6:34 pm UTC

how about verification, verification, verification.


Process wise PGA usages

Rajesh Bhatia, March 29, 2005 - 5:56 pm UTC

Hi,

Is the statement below correct for Oracle 9i?

Please comment.

=================================
Oracle does not allow any single process to take more than 5% of pga_aggregate_target... once you cross 5% it throws the sort to the temp segment... so if you have 6g as pga_aggregate_target then any process can get only a max of 300m
=================================

Rajesh

Tom Kyte
March 29, 2005 - 8:18 pm UTC

not technically.

Oracle does not allow a single workarea to be larger than about 5% of the pga_aggregate_target.  A single sql statement might have need of multiple workareas.  A process might have multiple statements with multiple workareas.

consider how the workareas accumulate as I open cursors:

ops$tkyte@ORA9IR2> show parameter work
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy                 string      AUTO
ops$tkyte@ORA9IR2> show parameter pga
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target                 big integer 26214400
ops$tkyte@ORA9IR2> select 26214400*0.05/1024/1024 from dual;
 
26214400*0.05/1024/1024
-----------------------
                   1.25
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable a refcursor
ops$tkyte@ORA9IR2> variable b refcursor
ops$tkyte@ORA9IR2> variable c refcursor
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @mystat "session pga memory max"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                VALUE
------------------------------ ----------
session pga memory max             286044
 
ops$tkyte@ORA9IR2> declare
  2          l_rec all_objects%rowtype;
  3  begin
  4          open :a for select * from all_objects order by 1,2,3,4,5,6,7,8,9,10,11,12,13;
  5          fetch :a into l_rec;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> @mystat2 "session pga memory max"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V DIFF
------------------------------ ---------- ----------------
session pga memory max            2064028        1,777,984
 
ops$tkyte@ORA9IR2> declare
  2          l_rec all_objects%rowtype;
  3  begin
  4          open :b for select * from all_objects order by 1,2,3,4,5,6,7,8,9,10,11,12,13;
  5          fetch :b into l_rec;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> @mystat2 "session pga memory max"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V DIFF
------------------------------ ---------- ----------------
session pga memory max            2653852        2,367,808
 
ops$tkyte@ORA9IR2> declare
  2          l_rec all_objects%rowtype;
  3  begin
  4          open :c for select * from all_objects order by 1,2,3,4,5,6,7,8,9,10,11,12,13;
  5          fetch :c into l_rec;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> @mystat2 "session pga memory max"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V DIFF
------------------------------ ---------- ----------------
session pga memory max            3178140        2,892,096
 

 

v$process.PGA_FREEABLE_MEM

Sami, November 02, 2005 - 12:14 pm UTC

Dear Tom,

Why is PGA_FREEABLE_MEM not released even after executing dbms_session.FREE_UNUSED_USER_MEMORY?
 
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL>  select spid,PGA_USED_MEM,PGA_ALLOC_MEM,PGA_FREEABLE_MEM,PGA_MAX_MEM from v$process p,v$session s where s.paddr=p.addr and s.username=user;

SPID         PGA_USED_MEM PGA_ALLOC_MEM PGA_FREEABLE_MEM PGA_MAX_MEM
------------ ------------ ------------- ---------------- -----------
4500               851693       1676965           524288     6019369

SQL> show user
USER is "HR"
SQL>  exec dbms_session.FREE_UNUSED_USER_MEMORY;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select spid,PGA_USED_MEM,PGA_ALLOC_MEM,PGA_FREEABLE_MEM,PGA_MAX_MEM from v$process p,v$session s where s.paddr=p.addr and s.username=user;

SPID         PGA_USED_MEM PGA_ALLOC_MEM PGA_FREEABLE_MEM PGA_MAX_MEM
------------ ------------ ------------- ---------------- -----------
4500               847529       1676965           524288     6019369

SQL>

 

Tom Kyte
November 03, 2005 - 6:25 am UTC

there could be many reasons - first, the memory is freeable - not *currently unused*. second, memory is allocated in a heap - if you allocate 50 bytes, allocate 100 bytes, allocate 50 bytes and then "free" the 100 bytes - you cannot really "free" the 100 bytes - there is 50 bytes of allocated storage after it.

PGA_ALLOC_MEM is larger than pga_aggregate_target

A reader, August 16, 2006 - 9:52 am UTC

Hi

I have pga_aggregate_target set to 16MB; however, when I check PGA_ALLOC_MEM from v$process it shows this:

SUM(PGA_ALLOC_MEM)
------------------
146918389

140MB almost! For 329 sessions including background processes.

V$PGASTAT shows this

NAME VALUE
---------------------------------------- ----------
aggregate PGA target parameter 16777216
aggregate PGA auto target 4194304
global memory bound 838656
total PGA inuse 156880896
total PGA allocated 210289664
maximum PGA allocated 210486272
total freeable PGA memory 1900544
PGA memory freed back to OS 20971520
total PGA used for auto workareas 0
maximum PGA used for auto workareas 868352
total PGA used for manual workareas 0
maximum PGA used for manual workareas 0
over allocation count 948
bytes processed 141249536
extra bytes read/written 97136640
cache hit percentage 59.25

I don't understand how total PGA inuse and total PGA allocated can be larger than pga_aggregate_target!

Can you throw some light on this?

Tom Kyte
August 16, 2006 - 10:31 am UTC

do you realize that the aggregate target is for the ENTIRE SYSTEM!

you are saying "16 mb to be shared by 329 sessions"


I have a feeling you'd be much happier with a MUCH MUCH higher setting.

and a statspack taken during your "normal operating time" for 15-30 minutes would give you a nice report telling you what your likely optimum size is.

(and the target is a target, we can and will exceed the target)
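Taking that statspack report is just a pair of snapshots around a representative window -- a sketch, assuming the PERFSTAT schema is already installed:

connect perfstat/...
exec statspack.snap
-- wait 15-30 minutes of normal load, then take the closing snapshot
exec statspack.snap
@?/rdbms/admin/spreport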

<quote src=Expert Oracle Database Architecture>
Using PGA_AGGREGATE_TARGET to Control Memory Allocation

Earlier, I wrote that “in theory” we can use the PGA_AGGREGATE_TARGET to control the overall amount of PGA memory used by the instance. We saw in the last example that this is not a hard limit, however. The instance will attempt to stay within the bounds of the PGA_AGGREGATE_TARGET, but if it cannot, it will not stop processing; rather, it will just be forced to exceed that threshold. 
Another reason this limit is “in theory” is because the workareas, while a large contributor to PGA memory, are not the only contributor to PGA memory. Many things contribute to your PGA memory allocation, and only the workareas are under the control of the database instance. If you create and execute a PL/SQL block of code that fills in a large array with data in dedicated server mode where the UGA is in the PGA, Oracle cannot do anything but allow you to do it. 
Consider the following quick example. We’ll create a package that can hold some persistent (global) data in the server:

ops$tkyte@ORA10G> create or replace package demo_pkg
  2  as
  3          type array is table of char(2000) index by binary_integer;
  4          g_data array;
  5  end;
  6  /
Package created.

Now we’ll measure the amount of memory our session is currently using in the PGA/UGA (I was using dedicated server in this example, so the UGA is a subset of the PGA memory):

ops$tkyte@ORA10G> select a.name, to_char(b.value, '999,999,999') value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and a.name like '%ga memory%';
 
NAME                           VALUE
------------------------------ ------------
session uga memory                1,212,872
session uga memory max            1,212,872
session pga memory                1,677,900
session pga memory max            1,677,900

So, initially we are using about 1.5MB of PGA memory in our session (as a result of compiling a PL/SQL package, running this query, etc.). Now, we’ll run our query against BIG_TABLE again using the same 256MB PGA_AGGREGATE_TARGET (this was done in an otherwise idle instance; we are the only session requiring memory right now):

ops$tkyte@ORA10GR1> set autotrace traceonly statistics;
ops$tkyte@ORA10GR1> select * from big_table order by 1,2,3,4;
50000 rows selected.
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        721  consistent gets
          0  physical reads
          0  redo size
    2644246  bytes sent via SQL*Net to client
      37171  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      50000  rows processed
ops$tkyte@ORA10GR1> set autotrace off

As you can see, the sort was done entirely in memory, and in fact if we peek at our session’s PGA/UGA usage, we can see how much we used:

ops$tkyte@ORA10GR1> select a.name, to_char(b.value, '999,999,999') value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and a.name like '%ga memory%';
 
NAME                           VALUE
------------------------------ ------------
session uga memory                1,212,872
session uga memory max            7,418,680
session pga memory                1,612,364
session pga memory max            7,838,284

The same 7.5MB of RAM we observed earlier. Now, we will proceed to fill up that CHAR array we have in the package (a CHAR datatype is blank-padded so each of these array elements is exactly 2,000 characters in length):

ops$tkyte@ORA10G> begin
  2          for i in 1 .. 100000
  3          loop
  4                  demo_pkg.g_data(i) := 'x';
  5          end loop;
  6  end;
  7  /
PL/SQL procedure successfully completed.

Upon measuring our session’s current PGA utilization after that, we find something similar to the following:

ops$tkyte@ORA10GR1> select a.name, to_char(b.value, '999,999,999') value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and a.name like '%ga memory%';
 
NAME                           VALUE
------------------------------ ------------
session uga memory              312,952,440
session uga memory max          312,952,440
session pga memory              313,694,796
session pga memory max          313,694,796

Now, that is memory allocated in the PGA that the database itself cannot control. We already exceeded the PGA_AGGREGATE_TARGET and there is quite simply nothing the database can do about it—it would have to fail our request if it did anything, and it will do that only when the OS reports back that there is no more memory to give. If we wanted, we could allocate more space in that array and place more data in it, and the database would just have to do it for us.
However, the database is aware of what we have done. It does not ignore the memory it cannot control; rather, it recognizes that the memory is being used and backs off the size of memory allocated for workareas accordingly. So if we rerun the same sort query we see that this time we sorted to disk—the database did not give us the 7MB or so of RAM needed to do this in memory since we had already exceeded the PGA_AGGREGATE_TARGET:

ops$tkyte@ORA10GR1> set autotrace traceonly statistics;
ops$tkyte@ORA10GR1> select * from big_table order by 1,2,3,4;
50000 rows selected.
 
Statistics
----------------------------------------------------------
          6  recursive calls
          2  db block gets
        721  consistent gets
        728  physical reads
          0  redo size
    2644246  bytes sent via SQL*Net to client
      37171  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          1  sorts (disk)
      50000  rows processed
ops$tkyte@ORA10GR1> set autotrace off

So, because some PGA memory is outside of Oracle’s control, it is easy for us to exceed the PGA_AGGREGATE_TARGET simply by allocating lots of really large data structures in our PL/SQL code. I am not recommending you do that by any means—I’m just pointing out that the PGA_AGGREGATE_TARGET is more of a request than a hard limit. 
</quote> 

where is UGA located when using automatic PGA

A reader, August 16, 2006 - 12:55 pm UTC

Hi

When we use automatic PGA, where is the UGA located - in the PGA pool or in the server process?

Tom Kyte
August 16, 2006 - 3:51 pm UTC

PGA is *always* in the process (P stands for process/private).

If you are using shared server, the UGA is in the large pool (shared pool if no large pool configured) in the SGA

If you are using dedicated server, the UGA is in the PGA.

Very good

Zafar Iqbal, September 11, 2006 - 5:40 am UTC

1)
Can one session have more than one PGA?

1.1)
If there is more than one PGA, in the case of shared server, where do the session memory and UGA reside?

2)
If somebody revokes/grants privileges on a particular table, will the shared SQL area (using that table) be invalidated, or will it be invalidated only for that user in session memory?

Tom Kyte
September 11, 2006 - 9:59 am UTC

1) well, it can migrate from pga to pga but it'll have access to one pga at a time (shared server migrates process to process)

1.1) did not make sense.

2) the cursor in the shared pool will be invalidated and the next parse of that sql statement will result in a hard parse.

Nice and clear answer but still doubt

Zafar Iqbal, September 12, 2006 - 1:36 am UTC

Quote --"well, it can migrate from pga to pga but it'll have access to one pga at a time (shared server migrates process to process)"

1)
Is a session directly attached to a PGA, or attached through a process? One of these could be true, or both:
a) SESSION-->PROCESS-->PGA (only the process has its own PGA)
b) SESSION-->PGA (the session has its own PGA)-->PROCESS-->PGA (the process has its own PGA)

If a) is true, then in the case of one process that has many sessions, where does the session memory (UGA) of all those sessions reside?

2)
What does "shared server migrates process to process" mean?
Could you elaborate, please?

Tom Kyte
September 12, 2006 - 8:24 am UTC

1) a process has a pga. processes own pga's. pga stands for "process" or "program" global area. it is memory allocated by a process (or thread).


UGA memory resides in one of two areas:

a) in the pga when using dedicated server, then a session "owns" a process from start to finish. The session will "own" that pga, the session does NOT migrate from process to process.

b) in the sga when using shared server, since a session doesn't own a process, the session migrates from process to process (and hence pga to pga)

2) it means you are using a pool of processes, like a connection pool. See the concepts guide (or any of the books I've written) for pictures and the like. You use many processes - you migrate from process to process, your session does not OWN a single process. You are sharing a pool of them.
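One way to watch that migration (a sketch, assuming SELECT access on v$session, v$process and v$mystat) is to ask repeatedly which OS process currently owns your session; over a shared server connection the SPID can change from call to call, while over a dedicated connection it stays constant:

select s.sid, s.server, p.spid
  from v$session s, v$process p
 where s.paddr = p.addr
   and s.sid = (select sid from v$mystat where rownum = 1);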

Very very informative

Zafar Iqbal, September 13, 2006 - 2:59 am UTC

<QUOTE> "in the pga when using dedicated server, then a session "owns" a process from
start to finish. The session will "own" that pga, the session does NOT migrate
from process to process."

Correct me please

1) In dedicated server, one session will have only one process throughout the life span of the session. It can never be attached to another process.

2) In dedicated server, I was thinking that after using one process it would throw the process away and acquire a new process for other work. If I kill the process, the session will be gone.

3) Session can never be directly attached with PGA always through process because process is having PGA.

4) In the PGA there is a "runtime area" and a "persistent area". I think the "persistent area" is the UGA. There are cursor text, bind info, and cursor state; I think these all reside in the "persistent area". In the runtime area, sorting, bitmap operations, and unions take place.


Tom Kyte
September 13, 2006 - 2:31 pm UTC

1) yes
2) why would you think that, it isn't the way it works.
3) huh?
4) tell you what - I describe memory architecture in my book Expert Oracle Database architecture, describe it over many pages - you might find it to be useful

when sorts go to disk

Manish Sharma, September 19, 2006 - 9:25 am UTC

Hi Tom,
In 8.1.7.4, the value of pga memory max in v$sesstat increases while Oracle is doing the operations in memory, and if those operations/sorts go to disk, that is not included in this parameter value.

Please confirm?

Thanks


Tom Kyte
September 19, 2006 - 2:44 pm UTC

what parameter? not sure what you are asking.

when sorts go to disk

Manish Sharma, September 20, 2006 - 9:56 am UTC

Hi Tom,
I mean, in 8.1.7.4, when you are analyzing/monitoring the PGA memory used by any process by using v$sesstat for that session, we check pga memory max & pga memory used.

I want to know: if during this process run sorts go to disk instead of staying in memory, would this pga memory max in v$sesstat for this session still reflect the usage of temp space, or does it only reflect the PGA which is used in memory?

Thanks


Tom Kyte
September 20, 2006 - 3:21 pm UTC

do you have access to my current book "Expert Oracle Database Architecture?"

there I describe how the memory allocation changes radically between 8i and 9i and how in 9i, with dynamic workareas, we can allocate and later deallocate and shrink the pga - because we are using very different methods for memory allocation.

whereas in 8i, you typically just see a pga grow - and never shrink.

when disks goes to sort

Manish Sharma, September 20, 2006 - 9:58 am UTC

I meant session pga memory max & session pga memory

Thanks

session pga memory and memory max

Roderick, September 20, 2006 - 6:56 pm UTC

Those are statistics, not parameters. "session pga memory" reflects the current size of a process's pga, while "session pga memory max" is the largest it ever was for that session (oracle processes can free memory too). It is not directly representative of some absolute limit on memory allocation. Parameters like sort_area_size or pga_aggregate_target are what determine how much can be sorted in memory before interim results are spilled to disk (as mentioned in Tom's book).

For example, you can make the pga memory size statistics for a session really large if you populate large PL/SQL arrays or do big bulk collections. That activity is independent of sorting and the sort parameters.

PGA USAGE IN DEDICATED SERVER AND MTS

rajesh, September 22, 2006 - 12:17 am UTC

Hi Tom,

Oracle version: 8.1.7.4
For a dedicated server connection:
1. What are the PGA components?
2. When does Oracle release PGA memory?
3. How can I find the memory usage for each component of the PGA?

For MTS sessions:
4. What are the PGA components?
5. When does Oracle release PGA memory?
6. How can I find the memory usage for each component of the PGA?

Thanks,
Rajesh


Tom Kyte
September 22, 2006 - 2:51 am UTC

do you have access to my book "Expert one on one Oracle" - things like this are covered.


pga memory in 8i typically "grows", but cannot shrink effectively because we are using things like sbrk() and malloc(). The OS would be responsible for paging out bits of pga memory we are not using, as needed.



PGA cache Hit ratio

A reader, September 25, 2006 - 11:13 am UTC

Hi Tom,

I have a 9i database working in MTS mode.

Could you please tell me the right way to determine the right size for hash_area_size?

I know that in MTS there should not be many sorts, but how can I guarantee that my hash_area_size parameter is set correctly?

2) In a DEDICATED-connection database, could you tell me how the pga cache hit % is calculated? In my statspack, I have in-memory sorts at 99% while the pga cache hit ratio is 86%. Does that make sense (could you please explain)?

3) Does the in-memory sorts statistic represent only sorting, or does it include hashing as well?

many thanks


PGA_ALLOC_MEM is larger than pga_aggregate_target - confused

A reader, September 29, 2006 - 10:29 am UTC

Hi Tom,
In your answer to the question "PGA_ALLOC_MEM is larger than pga_aggregate_target", you quoted part of your book "Expert Oracle Database Architecture".

I have this book, but that example is a little confusing. When you run the PL/SQL to fill the array, I can understand that the session pga should increase, but why did the session uga increase as well?

What I understood is that the PL/SQL will be done in the PGA, not the UGA - am I correct? Will you please explain?

You mentioned that "The PGA holds a lot more than sort areas. The PL/SQL data segment is there.
There are hundreds, thousands of things that could be there."
Now I can see the PL/SQL go into the UGA, so I am confused - will you please clear that up?

thanks

Tom Kyte
September 30, 2006 - 7:31 am UTC

plsql variables are allocated in the uga, the uga - in dedicated server is contained in the pga.


under shared server, that uga memory would be coming out of the sga, not the pga.

Pro*C PREFETCH and PGA MEMORY

Vinod, October 03, 2006 - 5:19 pm UTC

Tom,
Would the prefetch option of a Pro*C program affect the pga/uga memory consumption of that session? Would prefetch=100 cause more pga/uga memory usage than prefetch=10?

Thanks

Tom Kyte
October 03, 2006 - 7:16 pm UTC

when you benchmarked it..... what happened?

(something about fishing..)

Pro*C PREFETCH and PGA MEMORY

Vinod, October 04, 2006 - 1:47 pm UTC

I did not find much difference in PGA/UGA memory usage with different prefetch options. For a sample Pro*C program with the default prefetch:
SID STATISTIC# VALUE NAME
---------- ---------- ---------- ----------------------------------------------------------------
45 15 87072 session uga memory
45 16 87072 session uga memory max
45 20 255632 session pga memory
45 21 255632 session pga memory max

with prefetch=50

SID STATISTIC# VALUE NAME
---------- ---------- ---------- ----------------------------------------------------------------
89 15 88088 session uga memory
89 16 88088 session uga memory max
89 20 255632 session pga memory
89 21 255632 session pga memory max

with prefetch=500

SID STATISTIC# VALUE NAME
---------- ---------- ---------- ----------------------------------------------------------------
45 15 87072 session uga memory
45 16 87072 session uga memory max
45 20 255632 session pga memory
45 21 255632 session pga memory max



Tom Kyte
October 04, 2006 - 5:19 pm UTC

and there you go.

it'll affect it if the client is plsql.

the client memory is always affected (it has to cache those rows)

Total Allocated Memory (SGA+PGA)

Sandro, October 11, 2007 - 12:22 pm UTC

Hi Tom,
with this statement...
select nvl(d
          ,'Total Memory Usage....') "Mem Type"
       ,round(sum(b) / 1024 / 1024) "MB"
  from (select 'System Global Area:' d
              ,sum(bytes) b
          from v$sgastat
        union all
        select 'Program Global Area:' d
              ,value b
          from v$pgastat
         where name = 'total PGA allocated') s
 group by rollup(d)


- ...can I use it to calculate the total memory allocated by Oracle (from the "Oracle perspective") on my machine at a specific instant?
- In a Linux environment, how can I verify this memory allocation computation from the "Oracle perspective"?
- How can I calculate the MAX total memory allocated by Oracle (from the "Oracle perspective") on my machine since startup time?

Thanks in advance.

When to use PGA_AGGREGATE_TARGET, WORKAREA_SIZE_POLICY and SORT_AREA_SIZE

NAVENDU BADAL MOHANTY, December 16, 2009 - 6:57 am UTC

Hi Tom,

Please let me know when to use PGA_AGGREGATE_TARGET, WORKAREA_SIZE_POLICY and SORT_AREA_SIZE. What is the difference between using WORKAREA_SIZE_POLICY and SORT_AREA_SIZE? Can they be active at the same time?

Regards
Navendu
Reliance Industries Limited
Mumbai, INDIA
Tom Kyte
December 16, 2009 - 10:18 am UTC

https://www.oracle.com/technetwork/issue-archive/2009/09-sep/o59asktom-082712.html
"More on the PGA"

basically, use work area policy automatic - unless you are the only game in town (batch job, you own the machine and do not want to share across sessions since you are the ONLY THING), then you could if you want alter your session and use manual.

If you are interested in the differences - read the 11.2 concepts manual or Expert Oracle Database Architecture, both go into some level of detail.
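For that batch-job case, the session-level switch to manual might look like the following sketch; the 512MB sizes are purely illustrative and would come from your own sizing work:

-- illustrative values only; size these from your own testing
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 536870912;
alter session set hash_area_size = 536870912;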

How would i register in your Q&A forum

NAVENDU BADAL MOHANTY, December 16, 2009 - 6:59 am UTC

Hi Tom,

Please let me know how can i be a member of your Question and Answer forum...

Regards
Navendu
Reliance Industries Limited
Mumbai, INDIA

Tom Kyte
December 16, 2009 - 10:19 am UTC

done, you are in.

Seriously, it is just open to the public, anyone can come here.

Thanks a lot TOM

Navendu Mohanty, December 17, 2009 - 1:51 am UTC

Hi TOM,

Your co-operation is appreciated. You are a genius, and as it is well said, "A friend in need is a friend indeed".

With Respect
Navendu

Unable to reproduce the sort on disk after PGA_AGGREGATE_TARGET is exceeded

A reader, March 12, 2010 - 3:56 am UTC

I am using dedicated server mode; my setup is as follows:

[oracle@xponent ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jan 1 05:32:41 2002

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

sys@ORAXPO>show parameter pga_aggregate_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target big integer 165M

sys@ORAXPO>show parameter shared_servers

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_shared_servers integer 0
shared_servers integer 0

sys@ORAXPO>select name from v$shared_server;

no rows selected

sys@ORAXPO>conn scott/tiger
Connected.
scott@ORAXPO>
scott@ORAXPO>
scott@ORAXPO>
scott@ORAXPO>select a.name , to_char(b.value,'999,999,999') value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and a.name like '%ga memory%';

NAME VALUE
---------------------------------------------------------------- ------------
session uga memory 1,282,276
session uga memory max 1,405,728
session pga memory 1,896,020
session pga memory max 2,027,092

scott@ORAXPO>set autotrace traceonly statistics
40702 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
572 consistent gets
569 physical reads
0 redo size
2288502 bytes sent via SQL*Net to client
30228 bytes received via SQL*Net from client
2715 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
40702 rows processed

scott@ORAXPO>set autotrace off
scott@ORAXPO>select a.name , to_char(b.value,'999,999,999') value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and a.name like '%ga memory%';

NAME VALUE
---------------------------------------------------------------- ------------
session uga memory 1,282,276
session uga memory max 5,458,840
session pga memory 1,633,876
session pga memory max 5,893,716

scott@ORAXPO>begin
2 for i in 1..100000 loop
3 demo_package.g_data(i) := 'x';
4 end loop;
5 end;
6 /

PL/SQL procedure successfully completed.

scott@ORAXPO>select a.name , to_char(b.value,'999,999,999') value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and a.name like '%ga memory%';

NAME VALUE
---------------------------------------------------------------- ------------
session uga memory 313,021,844
session uga memory max 313,021,844
session pga memory 313,716,308
session pga memory max 313,716,308

scott@ORAXPO>set autotrace traceonly statistics;
scott@ORAXPO>select * from big_table order by 1,2,3,4;

40702 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
572 consistent gets
569 physical reads
0 redo size
2288502 bytes sent via SQL*Net to client
30228 bytes received via SQL*Net from client
2715 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
40702 rows processed

scott@ORAXPO>

I am not sure why this sort is still happening in memory even though I have already exceeded the pga_aggregate_target of 165M. Is there something I am missing?
Tom Kyte
March 12, 2010 - 3:36 pm UTC

It could well be because 40,000 rows isn't very big?

and we don't know how the query was processed

and we don't know if you are using automatic pga memory management
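To close those open questions in the same session, something along these lines could be run (a sketch only; big_table is the reader's own table from the transcript above, and the session needs privileges to display parameters):

show parameter workarea_size_policy
show parameter pga_aggregate_target

set autotrace traceonly explain
select * from big_table order by 1,2,3,4;
set autotrace off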

manual vs auto

Andrew Markiewicz, April 02, 2010 - 11:26 am UTC

Tom,
If the instance is running under workarea_size_policy=manual for many users, is the PGA memory used less efficiently or is it just easier to administer using auto?

In particular I am wondering if there is a difference between the manual and auto workarea modes for releasing the memory to be reused or if they will hang on to the allocated memory until the session ends. It seems to me the auto mode could use the memory more like a pool, but I can't find anything to verify that.

We currently run in manual mode and I don't have an available db to test the workarea_size_policy=auto.

Thanks
Andrew

Tom Kyte
April 05, 2010 - 1:17 pm UTC

the memory is probably being used less efficiently. not because of reuse/release - but because the memory used is static in size.

which means it is under utilizing memory when there are few users.
it is over utilizing memory when there are many users.
and only when there are a perfect number of users is it using it to maximum effect.

DBA

A reader, April 07, 2010 - 2:07 pm UTC

Setting PGA_AGGREGATE_TARGET to a nonzero value will automatically set WORKAREA_SIZE_POLICY parameter to AUTO by Oracle.

Setting PGA_AGGREGATE_TARGET to 0 will automatically set WORKAREA_SIZE_POLICY parameter to MANUAL. This means that SQL workareas are sized using the *_AREA_SIZE parameters.

It sounds like you didn't use/set PGA_AGGREGATE_TARGET. Otherwise it is hard to imagine setting PGA_AGGREGATE_TARGET to non-zero and setting WORKAREA_SIZE_POLICY = MANUAL.
Tom Kyte
April 13, 2010 - 7:33 am UTC

... Otherwise it is hard to
imagine to set PGA_AGGREGATE_TARGET to non-zero and set WORKAREA_SIZE_POLICY =
Manual....

really? why? people do non-intuitive things all of the time.


I'm not really sure who you were voicing this to, but - it is not out of the realm of belief.

pga_aggregate_target advice,

A reader, June 25, 2010 - 10:56 am UTC

In the 11gR2 documentation, I saw this line:
PGA_AGGREGATE_TARGET = 3 * SGA_TARGET.

I am on 10gR2 and our database (3-node RAC) is 20 TB in size. Do you recommend the same formula for us as well?

A couple of months ago, I executed a query on my 10gR2 database to advise how big my PGA should be, and it said 1.2 GB. If the above formula works for 10gR2 as well, will that not be a conflict? My current SGA_TARGET is 10 GB and the physical memory in my server is 128 GB.

Thanks,

Tom Kyte
July 06, 2010 - 10:51 am UTC

ok, in context

http://docs.oracle.com/docs/cd/E11882_01/server.112/e10578/tdpdw_system.htm#sthref34

that is specifically a data warehouse guide, that is definitely a "first cut in case you have no idea where to start".

It is a starting point, not a finishing point. The PGA advisor is best suited to tell you what size would benefit you the most - after it watches your system for a while.
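The advisor data itself lives in v$pga_target_advice (available in 9i and later, when pga_aggregate_target is set); a sketch of reading it after the system has been running a representative workload for a while:

select round(pga_target_for_estimate / 1024 / 1024) target_mb,
       estd_pga_cache_hit_percentage cache_hit_pct,
       estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;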

pga_aggregate_target,

A reader, July 06, 2010 - 4:35 pm UTC

Thanks Tom. In that case, I will continue with the same size (2 GB) for my PGA_AGGREGATE_TARGET rather than 30 GB (3 * 10 GB, where 10 GB is my SGA_TARGET size), since my PGA advisor recommended 1.2 GB.

Thanks,
-

persistent and runtime area

A reader, October 02, 2010 - 1:26 am UTC

Hi Tom,

I would like to ask why we need both a persistent area and a runtime area in the PGA. What is the basic difference between them, beyond the naming?
Tom Kyte
October 04, 2010 - 1:38 am UTC

it is really "session duration memory" (persistent) and "call duration memory" (runtime)

it just has to do with how long the memory is needed. Some memory (like sort areas) is only needed during the CALL (the execution of a sql statement that does sorting, for example), not the entire SESSION (plsql global variables would be needed for the entire session).
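
A minimal sketch of the difference (demo_state is a made-up name and the sizes are arbitrary): the package global below is session-duration memory and stays allocated until the session ends or it is cleared, while the sort in the last query only needs its work area for the duration of that one call.

create or replace package demo_state as
  type t_tab is table of varchar2(255) index by pls_integer;
  g_tab t_tab;   -- session-duration (persistent) memory
end demo_state;
/

begin
  -- filling the package global keeps this memory for the whole session
  for i in 1 .. 10000 loop
    demo_state.g_tab(i) := rpad('x', 255, 'x');
  end loop;
end;
/

-- the sort work area here is call-duration (runtime) memory: it is
-- needed only while the statement executes and is released afterwards
select owner, object_name
  from all_objects
 order by owner, object_name;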


Persistent and runtime area

A reader, October 04, 2010 - 1:15 pm UTC

Hi Tom,

Which of these areas is larger in the PGA, and why? Can we get their respective sizes?
Tom Kyte
October 05, 2010 - 1:54 am UTC

the only logical answer is "it depends", think about it - if you have lots of global variables in plsql and do not sort ever - which would be larger? If you don't use plsql at all and open 500 queries that all do large sorts - which would be larger?

it depends on what you are doing.

You can use the pga advisors to see information regarding memory use; v$mystat will give you pga information as well. You cannot really "see" the call duration stuff as it comes and goes rapidly. You can see your pga memory use at any time using the above methods.
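
For instance, a query like this (a sketch only) shows your own session's current and peak PGA:

select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name in ('session pga memory', 'session pga memory max');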

lost in statistics

Basil, November 08, 2010 - 10:17 am UTC

Hi Tom,

we have a Unix machine with 12 databases (dedicated server). I would like to sum SGA + PGA and find the RAM usage.
Which statistics should I use to evaluate the RAM usage?

I know
v$sysstat(session pga memory,session pga memory max)
v$process(pga_alloc_mem, pga_max_mem)
v$pgastat(total PGA allocated,maximum PGA allocated).

You mentioned that PGA memory, once allocated, is not necessarily returned to the OS - is this reflected in "session pga memory" or "session pga memory max"?

Anyway, both statistics seem to have less meaning in v$sysstat (all your examples refer to v$sesstat), so maybe it is better to use the corresponding stats from v$pgastat or v$process.

Here are some numbers which confuse me - which one reflects the RAM (virtual memory) usage: 'total PGA allocated' or 'max PGA v$process'?


select name,round(value/1024/1024) MB from v$pgastat where name in ('total PGA allocated','maximum PGA allocated')
union all
select 'alloc v$process',round(sum(pga_alloc_mem)/1024/1024) MB from v$process
union all
select 'max PGA v$process',round(sum(pga_max_mem)/1024/1024) MB from v$process
union all
select name,round(value/1024/1024) MB from v$sysstat where name like '%pga%'

NAME                          MB
----------------------  -------
total PGA allocated        16133
maximum PGA allocated      24159
alloc v$process            16121
max PGA v$process          18822
session pga memory        512003
session pga memory max   9166532

Thank you in advance.

regards,
Basil

Tom Kyte
November 08, 2010 - 12:09 pm UTC

... we have a Unix machine with 12 databases(dedicated server). ...

well, if you've read any amount of the stuff I've written, you already know what I think about that setup.


session pga memory is what you are using right now - if the os lets us shrink, it would be reflected there. session pga memory max is the biggest it has gotten at some point in the past.

v$pgastat would be fine to use on an aggregate level.


total pga allocated would be the current total pga allocated. see
http://docs.oracle.com/docs/cd/E11882_01/server.112/e17110/dynviews_2096.htm#REFRN30180

for definitions.


max's are just that - maximum values observed at SOME TIME. They wouldn't be what you are apparently looking for.
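
For a rough per-instance footprint, something like this (a sketch; run it in each database and add up the results) combines the currently allocated PGA with the SGA:

select (select value      from v$pgastat where name = 'total PGA allocated')
     + (select sum(value) from v$sga) as total_sga_plus_pga_bytes
  from dual;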

session uga memory

ajeet, September 13, 2012 - 6:15 am UTC

Hi Tom

I have the following statistics in one of the AWR reports of a database.
                      begin              end
session uga memory    5,341,722,223,440  5,376,095,948,504

The total memory available on the server is 200 GB.
How is it possible that at any given point in time the UGA value is ~5 TB? And is it really possible that session uga memory can increase by 32 GB in an hour?

In the top SQL, I see very simple SQLs that take less than 1 minute of elapsed time. They insert data into a table which has a LOB column.

server details

Linux x86 64-bit, 16 CPUs, 8 cores, 2 sockets, 188.92 GB memory

database version is oracle 11.2.0.3

Please give some insight on this.

Regards
Tom Kyte
September 14, 2012 - 6:42 pm UTC

... is it really possible that session uga memory can increase by 32 gb in an hour run? ...


it could do that in a few milliseconds.


does your machine appear to be "out of memory"?

how many processes/sessions do you have?
are they all active?


this is very possible if you have lots of connections but most are idle - they could just all be paged out. Real memory isn't the limit; your virtual memory is.

Jon T, December 12, 2012 - 8:04 am UTC

So are you saying that unless you're using MTS there's really no point in calling dbms_session.FREE_UNUSED_USER_MEMORY?
Tom Kyte
December 17, 2012 - 3:27 pm UTC

well, there could be cases today in the 21st century with automatic memory management where the pga memory is not malloc()'ed - but that would be for total automatic memory management (when you don't tell us the SGA and PGA size - just the memory target)
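
That configuration (the values here are illustrative, not a recommendation) looks roughly like this - only MEMORY_TARGET is set and the SGA/PGA sizes are left to the instance:

alter system set memory_target        = 4G scope=spfile;
alter system set sga_target           = 0  scope=spfile;
alter system set pga_aggregate_target = 0  scope=spfile;
-- restart the instance for the spfile changes to take effect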

Memory RAM

RD, September 04, 2013 - 4:56 am UTC

Hi Tom,
What I am struggling with is the idea that "memory used by the PGA will never be released back to the OS", as described in this thread. Does that mean I will require frequent reboots just because our server will eventually run out of memory (we use dedicated mode)?
Our PGA target is set to 6GB (on a server with 64GB RAM and 30GB free), yet the PGA has grown to 18GB and is still growing. In that case it is only a matter of time until the next reboot.
I don't believe this was the way Oracle was really designed to work ...

Tom Kyte
September 09, 2013 - 9:06 am UTC

no, you do not need frequent reboots because

a) processes come and processes go over time; typically, that memory is freed
b) OSes page out memory that is allocated but not used. so if you have a process that has not been used for a while - it won't really be consuming memory
c) for processes that are active, they need that memory - rebooting would just make them allocate it all over again.

rebooting would be a really bad idea.


how are you measuring the PGA use?

pga_aggregate_target

Brian, May 14, 2015 - 11:44 pm UTC

If, on a system, pga_aggregate_target was undersized (say, set to 1GB but actual usage was seen at 10GB), where the discrepancy was due to the database using a lot of PL/SQL memory arrays etc., would this misalignment of the target to reality cause OS-level swapping at all if the OS had more than enough RAM to hold the total allocated PGA?
As far as I can see, it would (possibly) just cause sorts/hashes to favour going to TEMP (if the requested space exceeded the global memory bound) instead of processing in RAM - is my understanding correct?
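
One way to check whether an undersized target is actually pushing work areas to TEMP (a sketch only) is to look at how work area executions have been completing:

-- optimal = entirely in memory; onepass/multipass = spilled to TEMP
select name, value
  from v$sysstat
 where name in ('workarea executions - optimal',
                'workarea executions - onepass',
                'workarea executions - multipass');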
