
Question and Answer

Connor McDonald

Thanks for the question.

Asked: September 04, 2008 - 10:53 am UTC

Last updated: April 20, 2017 - 8:03 am UTC

Version: 11.1.0

Viewed 10K+ times

You Asked

Hi Tom,
I am trying to understand Automatic Memory Management in 11g. We have the MEMORY_TARGET parameter.

1) Can you please explain how MEMORY_TARGET considers SGA_MAX_SIZE if automatic shared memory management is enabled.

If I keep sga_target and pga_aggregate_target at 0, it should use a 60/40 ratio respectively. But what if I had set sga_max_size - will it mean sga_target will go up to sga_max_size only?
Also, what happens if I don't set sga_max_size?
How does this work?

2) If I pre-set
shared_pool_size
db_cache_size
large_pool_size
java_pool_size
so that the instance will at least start with the minimum set values, and they will then be auto-tuned upward as needed but will not decrease below the pre-set values - is this statement correct?


3) Any pointers to docs for this.

Thanks

and Tom said...

If you enable automatic memory management of both sga and pga - do not expect any ratios; you are saying "go figure it out", and we do. We are free to do whatever we like - at whatever size we like.

If you set any of the memory parameters, that will set the "minimum" size for that component - yes.
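For example, a minimal sketch of both points (the parameter names are real; the sizes are illustrative assumptions, not recommendations):

-- fully automatic: one target for SGA + PGA combined
alter system set memory_target = 2G scope=spfile;

-- optional floor: auto-tuning will not shrink the shared pool below this
alter system set shared_pool_size = 256M scope=spfile;

-- restart the instance for the spfile-only changes to take effect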


http://docs.oracle.com/cd/B28359_01/server.111/b28310/memory003.htm#ADMIN11011


Comments

What about sga_max_size and memory_target

A reader, September 05, 2008 - 2:53 pm UTC

Can you please elaborate on this:
"But what if I had set sga_max_size, will it mean sga_target will go up to sga_max_size only?"

Is this true with automatic management in use and memory_target set?

So it will go up to sga_max_size at most, and the rest will be taken by the PGA?

Say I set sga_max_size to 1.5 gb
and memory_target is 2.5 gb.

Then sga_target can increase up to 1.5 gb at most?

I want to understand the relation between memory_target and sga_max_size.
With 10g we had only sga_max_size and sga_target.
Now with 11g we have memory_target and memory_max_target, so the sga and pga are auto-tuned. It's a bit tricky for me.
Tom Kyte
September 05, 2008 - 5:05 pm UTC

tell me, do you want totally automatic? then just set one or two parameters (memory_target, memory_max_target)

do you want to control the sga and pga separately and have them auto tuned? then just set the pga aggregate target and sga target/max size

do you want to set lower bounds in either mode? use the individual settings.

do you want to cap the sga? well, think about not using the memory target then, obviously you want to control it. If you were to "cap" it, you wouldn't really be letting it do its thing.

if you find yourself setting lots of individual memory parameters - you should perhaps think about "not using the automatic" stuff - you seem to want to control it.
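A side-by-side sketch of those three modes (as comments; every size is a placeholder assumption):

-- mode 1: fully automatic (AMM)
--   memory_target = 2G, optionally memory_max_target = 3G
-- mode 2: SGA and PGA controlled separately, each auto-tuned (ASMM)
--   sga_target = 1500M, sga_max_size = 2G, pga_aggregate_target = 1G
-- mode 3: either mode, plus lower bounds for individual components
--   shared_pool_size = 256M  -- a floor; still auto-tuned above it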

Minimum size of SGA for 10.2.0.1

Jozef Wardega, September 09, 2008 - 8:59 pm UTC

Hi Tom,
I need to create a few databases and I'm limited by memory on the server.
I was looking but could not find it:
the minimum SGA size for 10.2.0.1 on Solaris SPARC, if you could.
Regards
Jozef Wardega
Tom Kyte
September 11, 2008 - 10:33 am UTC

It will entirely depend on the feature set you enable.

You will basically set up a buffer cache (you size it; it'll be done in granules - 4, 8, or 16m granules). The size of a granule is determined by the size of your SGA (this sounds recursive to a degree, as the size of the SGA is dependent on the granule size). You can view the granule size used for each pool by querying V$SGA_DYNAMIC_COMPONENTS.
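For example (the view and columns are real; the output will of course vary by instance):

select component, granule_size, current_size
from v$sga_dynamic_components;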

You'll set up a shared pool.

you might be able to skip the java pool, but likely not - depending on feature set used.

You can skip the large pool and streams pool probably.

I'm running my seminar instance in a 512mb virtual machine with an sga target of 160mb. Likely - 128mb would be the SMALLEST you would practically want to consider.
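A minimal-memory sketch along those lines (the 160mb SGA figure is the seminar instance above; the PGA value is purely an assumption for illustration):

alter system set sga_target = 160M scope=spfile;
alter system set pga_aggregate_target = 64M scope=spfile;  -- assumption; size to your workload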

Automatic Memory, shared servers and Large Pool

Emad Kehail, October 16, 2008 - 7:43 am UTC

Hello Tom,

We have a 32-bit server with two quad-core CPUs and 4 GB RAM. We know the maximum memory Oracle can allocate on a 32-bit server is 3 GB, with the /3GB option.

We are using Oracle 9i R2 with shared servers and we are planning to upgrade to Oracle 11g R1 and use shared server as well.

Is it good to:

1. Let Oracle totally and automatically manage the memory
2. Let Oracle automatically manage the memory, but specify a minimum value for the large pool to be sure sessions' UGA memory lands in the large pool, not the shared pool
3. Manually manage the memory as in Oracle 9i

Thanks


Tom Kyte
October 17, 2008 - 8:52 pm UTC

the only answer is "it depends"

with such tight memory constraints, I'd probably use #3 - and review the advisors to see how the different SGA segments should be sized over time and manually resize online during a period of relative inactivity.

memory_target

joel, February 22, 2009 - 2:45 pm UTC

So Tom, correct me if I am wrong:
if I set memory_target to 8g and memory_max_target to 12g,
my instance can never grow in memory beyond 12g,
no matter how many clients I connect to the DB (all clients of course use PGA). Is there any other memory allocation for a client besides PGA? I ask because I do see memory on the box get eaten up if I connect a lot of clients.
(We use 11.1.0.7 and Solaris 10.)
thank you

Tom Kyte
February 22, 2009 - 5:43 pm UTC

nope, that will be a TARGET.

It will *try* to keep the maximum memory use under 12gb. It may or may not be able to do that, but that will be the TARGET.


If you want fine-grained control over this, you would use shared server, whereby the session memory (UGA), which is normally in the PGA, will be in the SGA instead. Then and only then could you put a hard limit on the amount of session memory (but still - not pga memory)
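A sketch of the shared server setup that gives you that lever (real parameters; the values are assumptions):

alter system set shared_servers = 5;
alter system set dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';
-- with a large pool configured, shared server UGA comes out of the
-- large pool - i.e. out of the SGA - rather than per-process memory
alter system set large_pool_size = 256M;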

MEMORY_TARGET and SGA_MAX_SIZE

Aneesh, September 30, 2009 - 7:05 am UTC

Hi Tom,

Could you please answer the two queries below? Thanks in advance.

Oracle Version :- 11g
OS :- HPUX 11.31


1)
I am unable to set SGA_MAX_SIZE to 0 so that the MEMORY_TARGET parameter takes control of the SGA limit. I was able to do ALTER SYSTEM SET sga_max_size=0 SCOPE=SPFILE, but after restarting the database it comes back with some value.

2) If MEMORY_MAX_TARGET = 10GB for an instance, are 10GB of shared memory segments allocated at instance startup, or allocated gradually as MEMORY_TARGET is used?


Regards,
Aneesh
Tom Kyte
October 07, 2009 - 8:27 am UTC

1) so? you are using automatic memory management - we set things (the caches, everything) to whatever we feel like. If you set memory_target, you need to set nothing else. Just don't set it at all. Unset it.


2) if using memory_target - which controls both sga and pga - it is highly "not ever likely" that the sga will be 10gb, as some of that memory would necessarily be set aside for the pga.

but it would be OS dependent as to whether a backing store for the maximum SGA size would be allocated upon instance startup. Even if it is, however, the OS is merely told that we want to be able to be this big - we don't touch that memory, we don't use it; it stays out there in the page files until we do. In other words - even if "allocated", it really doesn't matter; it doesn't get paged in or anything until we USE it.
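To unset it, something like this (real syntax; SID='*' is the usual single-instance default):

alter system reset sga_max_size scope=spfile sid='*';
-- restart the instance; with only memory_target set, Oracle derives
-- sga_max_size itself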

automatic memory in 11g

A reader, December 01, 2009 - 12:35 pm UTC

Hi Tom,

Our database version is 11.1.0.7 and os is sun solaris v10
We have 16g of RAM

My settings are 
SQL> show parameter memory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address             integer     0
memory_max_target                    big integer 12032M
memory_target                        big integer 8G
shared_memory_address                integer     0
SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 12032M
sga_target                           big integer 0
SQL> show parameter pga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target                 big integer 0
SQL> show parameter shared

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address             integer     0
max_shared_servers                   integer
shared_memory_address                integer     0
shared_pool_reserved_size            big integer 57042534
shared_pool_size                     big integer 192M
shared_server_sessions               integer
shared_servers                       integer     1


We are only setting the memory_target and memory_max_target parameters, as we want it to be completely automatic.

(I see that Oracle sets the shared pool size and sga_max_size anyway; I guess that's default behavior, since I don't set them in the parameter file - am I correct?)

What concerns me is that we are getting ORA-4030, while it seems we still have enough memory available.

I ran the top command and it showed 1.5g still free, and from the database I ran the following queries at the time I was getting the ORA-4030:

SQL> SELECT     substr(component,1,20) as component, current_size, min_size, max_size
  2  FROM       v$memory_dynamic_components
  3  WHERE      current_size != 0
  4  /

COMPONENT                                CURRENT_SIZE   MIN_SIZE   MAX_SIZE
---------------------------------------- ------------ ---------- ----------
shared pool                                1476395008  939524096 1811939328
large pool                                   67108864   67108864   67108864
java pool                                    67108864   67108864   67108864
SGA Target                                 5637144576 5637144576 5637144576
DEFAULT buffer cache                       3892314112 3556769792 4429185024
PGA Target                                 2952790016 2952790016 2952790016

6 rows selected.

SQL> select sum(PGA_USED_MEM)/1024/1024/1024 as used ,
  2  sum(PGA_ALLOC_MEM)/1024/1024/1024 as alloc,
  3  sum(PGA_FREEABLE_MEM)/1024/1024/1024 as freeable,
  4  sum(PGA_MAX_MEM)/1024/1024/1024  as max from v$process
  5  /

      USED      ALLOC   FREEABLE        MAX
---------- ---------- ---------- ----------
1.18489848 2.22516501 .266784668 3.48046755

I added the current_size of shared pool + large pool + java pool + DEFAULT buffer cache + sum(PGA_MAX_MEM);
it all adds up to around 9g, so there seems to be plenty of memory still available - it is below memory_max_target.

So I am a bit puzzled as to why we are getting ORA-4030. Shouldn't it be automatic?

Can you please clarify? 

Thank you 


Tom Kyte
December 01, 2009 - 1:53 pm UTC

what ulimits do you have set? for the oracle account? for users in general?



top might show you have 1.5 gb free right now, but - were you looking at top at the moment the 4030 happened? When the 4030 (out of process memory - the OS returned "no more memory for you") happened, that session and perhaps others would 'go away' and release perhaps a large amount of memory.


pga memory can be untunable - we can control sort/hash areas, but we cannot control a process that does something like:

declare
  -- an unbounded collection of 32,000-byte strings; every iteration
  -- allocates more PGA, until the OS finally refuses (ORA-4030)
  type array is table of long index by binary_integer;
  data array;
begin
  loop  -- deliberately no exit condition
     data( data.count+1 ) := rpad( '*', 32000, '*' );
  end loop;
end;
/

A reader, December 02, 2009 - 9:04 am UTC

%:> whoami
oracle
%:> ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 29995
virtual memory (kbytes, -v) unlimited


But it seems that from within the DB there was still enough memory, so why didn't the PGA grab some of it?
We see from sum(PGA_MAX_MEM) that it doesn't grab more PGA - it stays close to 3g. Why doesn't it expand?

Tom Kyte
December 04, 2009 - 9:08 am UTC

how did you connect - over the network (what account owns and runs the listener, and what is its ulimit?) or directly without a network (what is the ulimit of the account that ran this)?

You cannot tell from "within" the db whether there is enough PGA; PGA is OS memory.


A reader, December 09, 2009 - 11:01 am UTC

I was on the box, no network connection, and the ulimit output is from the account that ran it.

Also, what do you mean by "PGA is OS memory"?
Let me tell you my understanding of memory_target, and please correct me if I am wrong:

We tell the DB: I know you need two huge areas of memory, the SGA and the PGA. I am giving you memory_max_target=12g - make yourself comfortable with this amount, feel free to give and take from one area to the other, and if you exhaust all of it (12g in our case), go to the OS and grab more memory. (Which makes me wonder why, in our case, it tried to grab more memory when it still had enough.)

Is there any other memory segment, not included in memory_target, that uses OS memory outside of memory_target?
Does a client process have two portions - one that sits in the DB, allocated out of memory_target, and another that is OS memory like any other OS (unix) process (of course not shared by any other process), in which all the sorts happen and which can make the process grow and shrink? (If this theory is true, I would understand our 4030 error.)

Tom Kyte
December 10, 2009 - 2:30 pm UTC

PGA is not in the SGA, PGA is process memory allocated dynamically by the process (dedicated server) as it runs.


it isn't going to readjust the SGA to give more to the PGA for a single user. What was your process doing? It sounds like your single process was requesting gigabytes of memory - does that sound reasonable?

Automatic Memory Management and PGA and ORA-4030 Error

John, April 25, 2010 - 11:07 pm UTC

Tom,

I am having a hard time explaining the ORA-4030 errors to my boss. We are using Automatic Memory Management. Currently the "Total Memory Size" is at 13G and the "Maximum Memory Size" is at 13G. Every time we encounter ORA-4030 errors, I trace the problem down to one of two causes. First, WebLogic not releasing the sessions it has established; when this happens, the number of sessions goes up until Oracle runs out of memory. Second, some user running unthinkable SQL that uses a large amount of memory. My boss believes he can tune this by adjusting SGA_MAX_SIZE, SGA_TARGET, and PGA_AGGREGATE_TARGET. Am I wrong not to believe that this problem can be tuned?
Tom Kyte
April 26, 2010 - 9:02 am UTC

Unthinkable SQL should not be using "large memory" - it should use a reasonable amount and then start using temp.

If you have hundreds/thousands of orphan sessions, all with large PGAs - that could do it.

But - short of a bug - a single SQL statement should not consume a large amount of in-memory PGA without starting to use temp.
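A diagnostic sketch for the orphan-session case (the views and columns are real; what counts as "large" is your call):

select s.sid, s.username, s.program, s.logon_time,
       round(p.pga_alloc_mem/1024/1024) as pga_mb
from   v$session s, v$process p
where  s.paddr = p.addr
order  by p.pga_alloc_mem desc;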

during sorting

John, April 26, 2010 - 12:45 pm UTC

Tom,

Thank you very much for all the posts and helpful information. You are correct about a SQL statement consuming a large amount of PGA memory - I got mixed up with another database in my company. The ORA-4030 error only occurs on the database where the WebLogic server leaves orphan sessions. The database with the wicked SQL statement just gets very slow when a user executes it.

I have a question about the PGA. During a sort, does the server process copy the whole block, or just the rows requested by the user, into the PGA? Thank you.
Tom Kyte
April 26, 2010 - 1:06 pm UTC

the server process (dedicated/shared server process) will read blocks from the SGA/disk into the PGA - get what it needs, move that into a temporary structure (up to its allocated sort/hash workarea), and page the data as needed from this temporary structure in memory out to disk.

So, blocks are processed - to get the rows and columns we need - to be placed into temp to be sorted/hashed/aggregated/whatever'ed.
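You can watch a workarea spill as it happens with something like this (a real view; TEMPSEG_SIZE becomes non-null once an operation overflows to temp):

select operation_type,
       round(actual_mem_used/1024/1024) as mem_mb,
       round(tempseg_size/1024/1024)    as temp_mb
from   v$sql_workarea_active;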

explain please

A reader, April 26, 2010 - 1:12 pm UTC

Sir,
Can you please explain in a bit more detail?
"
the server process (dedicated/shared server process) will read blocks from the SGA/disk into the PGA - get what it needs, move that into a temporary structure (up to its allocated sort/hash workarea), and page the data as needed from this temporary structure in memory out to disk. "

Thank you very much


Tom Kyte
April 26, 2010 - 1:33 pm UTC

tell me what part needs more detail.

select deptno, count(*) from emp group by deptno.

say that full scans emp. You must therefore:

a) read blocks from the SGA or disk (into the SGA) to get what it needs - the data for emp, specifically the deptno column. So it reads blocks into its PGA and gets the deptno column

b) it must then manage the data it just retrieved in some structures so as to have the deptno data aggregated and a count maintained.

c) if the stuff in (b) exceeds the memory allocated to this process for that operation, the overflow is stuffed onto disk.

A reader, April 26, 2010 - 1:50 pm UTC

select deptno, count(*) from emp group by deptno.

say that full scans emp. You must therefore:

a) read blocks from the SGA or disk (into the SGA) to get what it needs - the data for emp, specifically the deptno column. So it reads blocks into its PGA and gets the deptno column

-----> If the table is stored in 1000 blocks, are all the blocks read into the PGA? How is this managed for both shared and dedicated connections? Does 'reading' mean just reading the block, or taking a copy of the block into PGA memory?

-----> If this is not an aggregate/order-by query - let's say a simple select * from emp - then is it necessary to read the blocks into the PGA, or can we send the rows directly to the client from the blocks in the SGA?


b) it must then manage the data it just retrieved in some structures so as to have the deptno data aggregated and a count maintained.

-----> Here, do we use some normal 'data structures' to do the aggregate/order by?

c) if the stuff in (b) exceeds the memory allocated to this process for that operation, the overflow is stuffed onto disk.
-----> Say it is a sort - do we send the rows to the client directly from the 'blocks', or from the 'data structures' in TEMP?

Thank you very much

Regards

Tom Kyte
April 26, 2010 - 1:56 pm UTC

the blocks are processed like they would be for any query - we retrieve blocks from the buffer cache (putting them there if need be) into our server process as we need them.


We use data structures our developers found helpful to organize the data, that is all - I don't know what a "normal" data structure is ;)


if we had to sort the data in memory/disk before sending the data back - it cannot possibly come straight from the blocks. It'll come from temp.

If we used an index to read the data sorted - then it could come back straight from the blocks, not from temp.
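A sketch of that distinction (emp and emp_no are from the thread; the index name is hypothetical, and this assumes emp_no is NOT NULL so the index can be used to avoid the sort):

create index emp_no_idx on emp(emp_no);

select * from emp order by emp_no desc;
-- if the optimizer uses the index: INDEX FULL SCAN DESCENDING plus
--   TABLE ACCESS BY ROWID - rows come back already ordered, no temp
-- without it: TABLE ACCESS FULL plus SORT ORDER BY - may spill to temp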

A reader, April 26, 2010 - 2:09 pm UTC

Many thanks Tom. By normal 'data structure' I meant to say
'linked lists/doubly linked lists, etc.'

ok thanks Tom once again you are very helpful

many thanks


Tom Kyte
April 26, 2010 - 2:19 pm UTC

it is in 'normal' data structures then - because that is pretty much everything in memory :) normal.


A reader, April 26, 2010 - 2:23 pm UTC

Sorry Tom, I might be irritating you a bit - extremely sorry for that, but I want to understand clearly:
select * from emp order by emp_no desc
(say the emp table is stored in 1000 blocks) - assume no block of emp is in the SGA.

1. Using multiblock I/O (say db_file_multiblock_read_count = 64), the server process will read the blocks into the SGA, 64 blocks at a time.

2. The server process will get those first 64 blocks into the PGA.

3. The server process will get the next 64 blocks into the SGA..........

I am not able to understand exactly how it happens.

Many many thanks



Tom Kyte
April 26, 2010 - 3:43 pm UTC

the server would get the blocks into the SGA with multiblock IO

then the server would do logical IOs, one at a time, to get the blocks it needs to process the query. The 64 blocks it wants from the SGA are not next to each other - they are scattered all over the buffer cache.

Brilliant article

Marty, November 16, 2012 - 5:39 am UTC

Thanks Tom, this article is very nice.

Would this "elevator explanation" be correct (explain the SGA/PGA in the time it takes to get to the next floor):

A user runs a query, which either finds the blocks it needs already in the SGA or fetches the blocks needed from disk into the SGA. Then the desired rows/data are taken from those blocks in the SGA and pulled over to the PGA for sorting and/or grouping. If more than a "reasonable amount" of memory is required, the sorting and/or grouping spills over to TEMP space on disk, and ultimately the rows are returned to the user.

More or less correct there?
Tom Kyte
November 19, 2012 - 10:03 am UTC

that is in general how it works, yes.

assumptions: conventional path reads (not direct path), dedicated server.

AMM

A reader, December 21, 2012 - 3:58 am UTC

Hello Tom,
I've seen a few articles now that recommend against using AMM; also, a colleague at work went on a database tuning course recently and was advised against using AMM by both the instructor and the students.
Are there any issues that you are aware of when using AMM?
What are the pros and cons of using AMM?
Thanks
Tom Kyte
January 02, 2013 - 12:40 pm UTC

If you have a team of DBAs that are going to be actively monitoring the database (e.g. it is a high-visibility database - more or less constantly monitored, big, heavily used), you probably don't want fully automatic memory management. This is a database you spend a lot of time making sure is 100% available, consistent, constant, reliable. You don't want it to reconfigure itself at 3pm on a Friday, for example. You want the DBAs to use the memory advisors to make informed decisions on how to set the memory - and then schedule those changes for some maintenance window.

If you have a database that sits in the corner, no one is on top of it 24x7, it is not high visibility - it is not worth your time to manually tune and monitor - then by all means, use it.

But do consider setting reasonable "lower end" values for the various pools if you do. For example, if you have 2gb of memory for the database, set db_cache_size to a reasonable value that we will not go below - same for the shared pool and the PGA memory. Give us the rest to figure out where to allocate; the floors put a limit on how small the various bits can get.
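A sketch of that advice for the 2gb case (the floor values are assumptions; use the advisor numbers instead):

alter system set memory_target        = 2G   scope=spfile;
-- floors below which auto-tuning will not shrink each area
alter system set db_cache_size        = 512M scope=spfile;
alter system set shared_pool_size     = 256M scope=spfile;
alter system set pga_aggregate_target = 256M scope=spfile;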

Hugepages

Sokrates, January 03, 2013 - 4:35 am UTC

say, we are on Linux

- what is the technical root cause of AMM not being compatible with HugePages? (See MOS note 749851.1.)
Tom Kyte
January 04, 2013 - 3:01 pm UTC

with automatic memory management (both SGA and PGA managed *together* - we are not talking about automatic SGA memory management or automatic PGA, but the feature whereby BOTH SGA+PGA are auto-tuned together with one memory setting), we allocate memory very differently - we have to be able to turn PGA memory into SGA memory and vice versa. The approach we take is not usable with huge pages; the memory allocation routines used to map memory are not huge-pages compatible in that mode.

this is also why this feature is not available on all operating systems - the ability to change between sga and pga memory is very OS specific.

AMM

Tony, January 11, 2013 - 9:23 am UTC

Thank you Tom, very interesting reply.

I had db_cache_size set to 2.5gb, as advised by the app vendors, and the SGA target is 4gb. I wanted to decrease db_cache_size down to 500mb to give memory management more room to move; when I did this, we started seeing memory issues:

WARNING: Heavy swapping observed on system in last 5 mins.
pct of memory swapped in [2.01%] pct of memory swapped out [2.49%].
Please make sure there is no memory pressure and the SGA and PGA
are configured correctly. Look at DBRM trace file for more details.

Setting it back to 2.5gb made the issue go away - very odd.
Why would memory management do this?

Thanks
Tom Kyte
January 15, 2013 - 9:11 am UTC

well, this is pure conjecture....


You had a db cache of 2.5 gb. You weren't using all of it - you were using some percentage of it. That left 1.5gb to be used by all of the other components. Let's assume the other components were active - touching all of their memory - while the buffer cache was touching maybe 75% of its. So in total you had an active SGA of ~3.4gb (1.5gb plus 75% of 2.5gb).

Now you downsized the buffer cache to 500mb - leaving 3.5gb for the rest. You had a very active shared pool (assumption for conjecture) - it used much of the 3.5gb and the rest was used by the other components. The 500mb of buffer cache was very active now - all of it. So you were now touching 4gb of memory.


If the conjecture is true (the db cache advisor would lend credence to it if it says that downsizing from 2.5gb would not affect caching), then you should really consider shrinking either the PGA or SGA, or getting more memory. That is because you are in a situation whereby if your buffer cache becomes "busy" and we start touching all of it - you'll start paging/swapping, as you've overcommitted the memory on this machine.
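To check that conjecture with the advisor, a sketch (a real view; the row with SIZE_FACTOR = 1 is the current cache size):

select size_for_estimate as cache_mb,
       size_factor,
       estd_physical_read_factor
from   v$db_cache_advice
where  name = 'DEFAULT'
order  by size_for_estimate;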

AMM Limitations

Suraj Sharma, April 02, 2014 - 4:31 pm UTC

Hi Tom,

I have been trying to implement AMM on our 11gR2 instances on our Exadata boxes, where we have 5-6 instances running on the same box, but I keep getting "no" for the following reasons:

1. AMM is not supported with HugePages [refer to notes 749851.1, 1134002.1]
Q. Are HugePages supported when we have multiple instances running on the same box?
2. AMM can overtake hosts in a manner that causes an outage.

Please let me know if you recommend AMM, considering it's an Exadata box with many instances running and HugePages set.

Thanks for always being helpful.

Suraj

AMM in 12.2

Rajeshwaran, April 19, 2017 - 11:59 am UTC

Team,

I am installing 12.2 on a 64-bit Windows machine, and while configuring memory during installation I checked the "Enable Automatic Memory Management" option and provided 1GB of allocated memory, but got the below error from the installer.

INS-35178 : The Automated Memory management option is not allowed when the Total physical memory is greater than 4GB

The total physical memory on that machine is 7GB.

Could you help us understand the technical reason behind this restriction in 12.2?
Connor McDonald
April 20, 2017 - 7:47 am UTC

There's a nice blog post by Franck on that:

https://blog.dbi-services.com/12cr2-dbca-automatic-memory-management-and-databasetype/

but I can't really add much to that, because I've never liked/used AMM anyway.

AMM in 12.2

Rajeshwaran, April 20, 2017 - 7:50 am UTC

....
because I've never liked/used AMM anyway
....


Could you please elaborate - why don't you like AMM?
Connor McDonald
April 20, 2017 - 8:03 am UTC

1) no HugePages support.... that's typically a show-stopper for any enterprise application

2) I don't think the co-location of the parameters makes sense. Except in extreme circumstances, I don't think I'd ever want to even consider an explosion of PGA being able to rob memory out of my SGA.