Automatic Shared Memory Management (ASMM)
Question and Answer

Tom Kyte

Thanks for the question, Robert.

Asked: April 29, 2008 - 3:25 pm UTC

Last updated: December 01, 2008 - 6:15 am UTC

Version: 10.2.0

Viewed 1000+ times

You Asked

Tom,

What are your thoughts on using Automatic Shared Memory Management (ASMM) in 10.2.0 database?

Do you use ASMM on 10.2 databases under your influence?

Thanks,

Robert.

and Tom said...

Depends on the nature of the database...

A database you don't have time to micromanage - absolutely, I'll use everything that starts with "A" that I can.

A database you have a team of DBAs watching round the clock - probably not as many "A"s. Automatic undo management - absolutely. Automatic PGA - yes, most of the time, with overrides for batch jobs perhaps (session by session). Segment space management - mostly auto. Locally managed tablespaces with automatic extent sizing - sure. Automatic SGA sizing - maybe not: I've got a good understanding of the needs of this database and would like to retain that control. I have the advisors to guide me and I'll make the changes on my own schedule (I can resize the SGA online), but I might not want the SGA to resize by itself at 3pm on Monday...

Rating

  (6 ratings)


Comments

Thanks, Tom, for sharing your experience!

Robert, April 30, 2008 - 1:35 pm UTC


ASMM - monitoring its usefulness

HF, November 24, 2008 - 11:52 pm UTC

We are on 10.2.0.3 and undertaking a round of performance tuning.

Looking through wait events at times where the system is performing poorly, I found a significant amount of "Buffer Exterminate" waits.

When I look at v$sga_resize_ops I see the memory manager shunting 64MB of RAM between the default and shared pools every minute, and then generally moving it straight back one minute later. Over time this does trend to a total shift of about 250MB between the two pools (i.e. at 09:00 the default pool:shared pool split is 2.5GB:1.5GB, at 12:00 the split is 2.75GB:1.25GB, then by 17:00 we are back to 2.5GB:1.5GB).
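(A query along these lines - a sketch only, using the documented 10.2 columns of v$sga_resize_ops - is the sort of thing that shows those operations and their timing:

select component, oper_type, status, start_time, end_time,
       round(initial_size/1024/1024) initial_mb,
       round(final_size/1024/1024)   final_mb
  from v$sga_resize_ops
 order by start_time;
)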

We are an OLTP system by day with sizeable amounts of data being processed / imported / exported / reported on during the overnight batch.

Anecdotally, on the web I can see people suggesting that ASMM should be tweaking a couple of times per day rather than tens of times an hour.

I am a little new to Oracle and so am struggling to come up with a good set of metrics to (dis)prove the efficacy of ASMM. Subjectively it is difficult to see the value in shuttling memory backwards and forwards every 60 seconds, but in CPU terms I guess 60 seconds is an eternity, so this may be a really good case of just-in-time memory management.

I know there are no hard and fast rules (otherwise they would be already codified in the manager), but I would appreciate your thoughts on things I could investigate / monitor if I were to turn this off in a test environment.

Perhaps asking the question in a different way: what are the symptoms of an undersized default buffer cache or an undersized shared pool? If I turn off ASMM, what metrics would suggest that we are being hurt by it?
Tom Kyte
November 25, 2008 - 10:25 am UTC

are you using bind variables properly in your OLTP system? 1.5GB sounds huge for the shared pool if you have a stable working set.
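(One rough way to check - a sketch only - is to group statements that differ only in their literals using the FORCE_MATCHING_SIGNATURE column 10.2 added to v$sql; many copies of the "same" statement usually means missing binds:

select force_matching_signature,
       count(*)      copies,
       min(sql_text) example_sql
  from v$sql
 where force_matching_signature > 0
 group by force_matching_signature
having count(*) > 10
 order by copies desc;
)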

I would concur that a memory movement every minute is rather exceptional - you must be right on the edge, a tipping point, for both pools and it is almost like a "tie" for the resources.

I would not want my SGA resizing that rapidly; it does take quite a while (CPU-wise) to do that reorganization - and doing it every few minutes is too often.

Over the course of the entire day - does it deviate very far from the 2.5/1.5 split? That seems like a pretty fair way to slice it up - and you would monitor the advisors (cache and shared pool) to see if a recommendation for "more" is being made, and what reduction in work that "more" is estimated to provide.
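(The advisors are just views you can query - a sketch, assuming only the DEFAULT pool and the standard block size are in use:

select size_for_estimate, size_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
 order by size_for_estimate;

select shared_pool_size_for_estimate, shared_pool_size_factor, estd_lc_time_saved
  from v$shared_pool_advice
 order by shared_pool_size_for_estimate;
)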

Also, measure your current workload (commits/second, executes/second, cpu time used, IO's performed - logical and physical...) now, make a change, measure again and make sure they are "better"

And better requires thought - if the cpu time goes UP after you make a change - that could be a) very good, b) indifferent, c) very very bad.

it could be good because you don't have a lot of sessions waiting on things, so they are doing more work.

it could be indifferent because it was just a trade off - you are doing the same amount of work - just using a little more cpu (contention was moved from a wait event to a latch spin event for example)

it could be really bad because contention was moved in a huge way to a latch spin event


If I had AWR/ADDM - I'd be using that to monitor the before/after. Otherwise, statspack will be useful - start taking a bunch of 15 minute snapshots - do change - take more - compare.
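(A sketch of that loop, assuming statspack is already installed under PERFSTAT; the v$sysstat sample is just a lightweight alternative for the workload counters mentioned above:

-- take a snapshot at the start and end of each 15 minute window
exec statspack.snap

-- later, report on a chosen pair of snapshots
@?/rdbms/admin/spreport.sql

-- or sample the raw workload counters yourself, before and after the change
select name, value
  from v$sysstat
 where name in ('user commits', 'execute count', 'CPU used by this session',
                'physical reads', 'session logical reads');
)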

Got Free Memory but want more...

HF, November 25, 2008 - 8:28 pm UTC

Thanks Tom. With respect to your question on the shared pool being 1.5GB, I took a timestamped snapshot of v$sgastat every minute for 30 minutes where pool = 'shared pool'.

During this time the memory manager continued moving 64MB between the default and shared pool every minute or so.

I looked at the top four "name" values when sorted by bytes descending in v$sgastat during this time.

During the 30 minute snapshot the same four names were in the top four: free memory, KGH: NO ACCESS, kglsim object batch, sql area.

Free memory ranged from 850MB through to 1000MB.
KGH: NO ACCESS started at 160MB and slowly declined to 110MB.
sql area spiked frequently between 40MB and 110MB.
kglsim stayed static at 46MB.
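(That breakdown comes straight out of v$sgastat - a sketch of the query:

select *
  from (select name, round(bytes/1024/1024) mb
          from v$sgastat
         where pool = 'shared pool'
         order by bytes desc)
 where rownum <= 4;
)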

I researched KGH: No Access and discovered that it was a casualty of "failed" ASMM transactions.

I thought that maybe we had set our minimum sizes for the pools too high and that the shared pool was being forced to be artificially high, but the shared_pool_size parameter is set to 480M.

I checked shared_pool_reserved_size, which is set to 90M. I am unsure whether the number of concurrent users is "suggesting" to ASMM that it needs to keep the shared pool at n x 90M, where n is the average number of users, or something along those lines.

Is there a way to determine why ASMM believes the shared pool needs so much free memory?

Tom Kyte
November 28, 2008 - 3:58 pm UTC

look into v$sql, you'll see what is basically in there.

not sure where the n*90m concept came into play? The reserved size is held for large allocations, it isn't what to reserve per user or anything.
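(A sketch of the sort of v$sql query that shows where that shared pool memory is going - the total first, then the biggest individual cursors:

select count(*) cursors,
       round(sum(sharable_mem)/1024/1024) total_mb
  from v$sql;

select sql_id, sharable_mem, executions, substr(sql_text, 1, 60) sql_text
  from (select sql_id, sharable_mem, executions, sql_text
          from v$sql
         order by sharable_mem desc)
 where rownum <= 20;
)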


ASMM and sga_target

A reader, November 25, 2008 - 11:07 pm UTC

Tom,
I was doing some testing with ASMM and have a question. Suppose I have a system with sga_target=2GB. I do not set sga_max_size; in that situation, sga_max_size automatically gets the sga_target value. Now I allocate 500MB to db_keep_cache_size. I was expecting that sga_target would be reduced to 1.5GB, with that memory given to the automatically tuned components. Instead, I found that sga_target remains unchanged at 2GB but 500MB is subtracted from the default buffer cache only. This is not the way it is explained in the docs. The docs led me to believe that the keep pool, being manually managed, is allocated 500MB and the remaining memory is given to sga_target so it can be allocated to the automatically managed memory structures.

Can you please clarify how ASMM would work in such a situation? Oracle 10.2.

Thanks...
Tom Kyte
November 28, 2008 - 4:10 pm UTC

when you use sga_target, it is documented that if you set the other memory parameters, they become the LOWER BOUND for that segment of the SGA.

It is working as expected. You said "Oracle, use 2gb of shared memory please - but also please make the block buffer cache at least 500mb of that - you are free to make it larger than 500mb (you are doing ASMM after all, with a bit of a hint from us where to start)"


... This is not the way it is explained in the docs ...

you would probably want to post a reference to the part you found confusing so we could read it in context.

The fact is - the keep pool is NEVER managed by ASMM, if you use the keep, recycle or different block size caches - they are ALWAYS manual.

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#CNCPT810

that is what I would suggest reading to get the idea behind it.
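(v$sga_dynamic_components shows this directly - a sketch; the KEEP buffer cache row stays pinned at whatever you set, while the auto-tuned components float:

select component,
       round(current_size/1024/1024)        current_mb,
       round(user_specified_size/1024/1024) user_specified_mb
  from v$sga_dynamic_components
 order by component;
)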

ASMM and SGA_TARGET

A reader, November 29, 2008 - 11:10 pm UTC

Tom,
I did read the documentation. Here is an excerpt from the Performance Tuning Guide:
==========================
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/memory.htm#sthref410

The memory for the KEEP pool is not a subset of the default pool.
==========================

Please also see the Metalink article ID 257643.1. Quote:
==============
Below are the manual SGA size parameters :

DB_KEEP_CACHE_SIZE
DB_RECYCLE_CACHE_SIZE
DB_nK_CACHE_SIZE (n = 2, 4, 8, 16, 32)
LOG_BUFFER
STREAMS_POOL_SIZE

Manual SGA parameters are specified by the user, and the given sizes precisely control the sizes of their corresponding components.

When SGA_TARGET is set, the total size of manual SGA size parameters is subtracted from the SGA_TARGET value, and balance is given to the auto-tuned SGA components.
==============

From the Concepts Manual that you pointed to:
===================
Manually Managed SGA Components

There are a few SGA components whose sizes are not automatically adjusted. The administrator needs to specify the sizes of these components explicitly, if needed by the application. Such components are:

Keep/Recycle buffer caches (controlled by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)

Additional buffer caches for non-standard block sizes (controlled by DB_nK_CACHE_SIZE, n = {2, 4, 8, 16, 32})

The sizes of these components is determined by the administrator-defined value of their corresponding parameters. These values can, of course, be changed any time either using Enterprise Manager or from the command line with an ALTER SYSTEM statement.

The memory consumed by manually sized components reduces the amount of memory available for automatic adjustment. For example, in the following configuration:
SGA_TARGET = 256M
DB_8K_CACHE_SIZE = 32M


The instance has only 224 MB (256 - 32) remaining to be distributed among the automatically sized components.

===================

My confusion stems from the following:

Why was all the memory allocated to the KEEP pool taken out of the buffer pool only? Since ASMM automatically manages the buffer pool, shared pool, large pool, java pool and streams pool, taking out 500MB should have had an effect on the sizes of the other pools also.

Thanks...
Tom Kyte
December 01, 2008 - 6:15 am UTC

The SGA is "a size"

you allocate various "non-managed components" like the keep pool or the 8k pool. That comes from the SGA - which is "a size"

The sga_target IS THAT SIZE.

So you said "sga - be 256m, by the way, take 32m of that and be the 8k pool, now manage the rest of the components using 256-32mb of memory"



Not entirely sure where the confusion comes from?

You tell us how large the sga should be - 256m

You tell us how large various components should start at, or should be (your 8k buffer) - that is sliced off of the SGA.


That is exactly what the bolded text you highlighted said:

...
When SGA_TARGET is set, the total size of manual SGA size parameters is subtracted from the SGA_TARGET value, and balance is given to the auto-tuned SGA components.
....


you highlighted the words that exactly describe what you saw...
then the rest is automagically assigned wherever we want.
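(The arithmetic from the documentation example, spelled out - a sketch; it assumes, as the doc example does, that the database block size is not 8K:

alter system set sga_target       = 256M scope=spfile;
alter system set db_8k_cache_size =  32M scope=spfile;
-- after restart, the 8K cache is fixed at 32M and ASMM distributes the
-- remaining 256M - 32M = 224M among the auto-tuned components
)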

Girish, April 03, 2014 - 5:16 am UTC

Hi Tom,

If I am correct, only four parameters are managed automatically when we use ASMM (Automatic Shared Memory Management) with SGA_TARGET set, and these are:
DB_CACHE_SIZE
SHARED_POOL_SIZE
LARGE_POOL_SIZE
JAVA_POOL_SIZE

And my question is: is DB_BLOCK_BUFFERS also part of ASMM, or does it need to be set manually?

Thanks,
Girish