Memory parameters - simple and auto tune


Question and Answer

Connor McDonald

Thanks for the question, Info.

Asked: June 09, 2019 - 12:35 pm UTC

Last updated: January 29, 2020 - 9:20 am UTC

Version: 12.2.0.1

Viewed 1000+ times

You Asked

Hi Tom,

We have 16 GB of memory on a Windows server. The database is Oracle 12.2.0.1. The Oracle kernel executable is using 6.6 GB.

We have set:
SGA_MAX_SIZE = 10016 M
SGA_TARGET = 6016 M
PGA_AGGREGATE_LIMIT = 6000 M
PGA_AGGREGATE_TARGET = 2900 M

I hope it's a simple question we sent. It is:
* how to set these parameters to give an application (which uses the Oracle DB) as much memory as possible, with automatic tuning of memory.

I have a feeling that we didn't set the parameters right, and that maybe some memory is unused/free.

Thank you, I really appreciate an answer - I have read a lot of documentation but I can't find the right numbers (I always "lock" myself out, making it impossible to start the database, and then I have to correct the parameters and start with an init.ora text file).
I am still struggling with parameters. Please help.

Regards,
Darko

and Connor said...

I think you might be running a bit close to the wire here.

Yes, our *target* is to keep utilization to sga_target + pga_aggregate_target (9G in your case), but there is also the capacity for us to *exceed* that - roughly up to sga_max_size + pga_aggregate_limit, which in your case comes to 16G. That is not leaving a lot of room for the OS and anything else on the box.

So without any other information, I'd be inclined to lower SGA_MAX_SIZE and PGA_AGGREGATE_LIMIT a little as a "just in case" measure.

Having said that, we actively track usage of the memory and have some advisors to give you recommendations on how your memory is going.

Check out

V$DB_CACHE_ADVICE
V$PGA_TARGET_ADVICE
V$PGA_TARGET_ADVICE_HISTOGRAM
V$SGA_TARGET_ADVICE
V$SHARED_POOL_ADVICE

views in the database from time to time whilst it's running. They can give you some information about how much memory is being used, but also *how* it's being used. These views can tell you if perhaps you need to shift memory from SGA to PGA, or vice versa.
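
For example, a minimal sketch of reading two of them (column names as documented for these V$ views; the figures they return are estimates produced by the advisors):

-- How estimated DB time and physical reads change as sga_target is resized
select sga_size, sga_size_factor, estd_db_time, estd_physical_reads
from   v$sga_target_advice
order  by sga_size;

-- How the PGA cache hit percentage changes as pga_aggregate_target is resized
select pga_target_for_estimate/1024/1024 as target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from   v$pga_target_advice
order  by pga_target_for_estimate;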


Comments

How to change parameters?

A reader, January 07, 2020 - 7:09 pm UTC

Hi Tom,
please help me a little bit.

Now we have following memory parameters:
SGA_MAX_SIZE = 10016 M
SGA_TARGET = 6016 M
PGA_AGGREGATE_LIMIT = 6000 M
PGA_AGGREGATE_TARGET = 2900 M

In the last follow-up you proposed that we decrease SGA_MAX_SIZE and PGA_AGGREGATE_LIMIT a little.

After the change
(new SGA_MAX_SIZE is 9516 M,
new PGA_AGGREGATE_TARGET is 2500 M,
and consequently new PGA_AGGREGATE_LIMIT is 5000 M), which results in:
SGA_MAX_SIZE = 9516 M
SGA_TARGET = 6016 M
PGA_AGGREGATE_LIMIT = 5000 M
PGA_AGGREGATE_TARGET = 2500 M

Should it be OK with those parameters?
sga_target + pga_agg_target = 6016 + 2500 = 8516 M
sga_max_size + pga_agg_limit = 9516 + 5000 = 14516 M

Maybe it could be better to increase the server memory from 16 GB to 32 GB or more?
Can we JUST double all the parameters if we get 32 GB of RAM (up from 16 GB)?

Can all of the above parameters cause a query to run in 50 seconds the first time, and in just 1 second the second time?
Can changing the parameters sga_max_size and pga_agg_limit make some change in query response times?

We checked the execution plans - they are good (with index searches, fast full scans, etc., BUT NOT FULL TABLE SCANS), and sometimes we use BIND VARIABLES, but it is always the same story - the first run of a query needs 50 seconds, and from the second run on, just 1 second.
How do we prevent the execution plan from "falling" out of memory? Can we "pin" execution plans so they are already there when the query is executed?

We read all V$DB_CACHE_ADVICE
V$PGA_TARGET_ADVICE
V$PGA_TARGET_ADVICE_HISTOGRAM
V$SGA_TARGET_ADVICE
V$SHARED_POOL_ADVICE
but I can't figure out what to change. What is the best way to observe them and make some changes?
Can I attach some files?

Another topic - the system admins claim that the Veeam backup for Oracle must run every 10 minutes, and that a new archivelog is created each time (every 10 min.).
I think that slows the database down a lot, because every 10 minutes all dirty blocks are flushed to disk (very expensive, because we don't have SSD disks)?
We have 4 x 500 MB REDO LOGS, but the archivelogs are just 10 - 40 MB in size when they are created every 10 minutes.
Can this also be an issue?

And how can we find a problem, or the advice, in the views:
V$DB_CACHE_ADVICE
V$PGA_TARGET_ADVICE
V$PGA_TARGET_ADVICE_HISTOGRAM
V$SGA_TARGET_ADVICE
V$SHARED_POOL_ADVICE

Regards,
Darko


Connor McDonald
January 08, 2020 - 1:09 am UTC

OK, as far as I can tell, you've asked 9 questions in this *followup* :-)

Maybe, could be better to increase server memory from 16 GB to 32 GB or more???


Nothing to stop you from doing that. RAM is cheap.

Can we JUST double all parameters if we get 32 GB of RAM (from before 16 GB)?


Roughly speaking yes, but the advisors can tell you the values.

Can all those above parameters cause us that queries is running first time in 50 seconds, second time just in 1 second?


*Possibly*. The first execution is entirely I/O, and the second one is getting all the data from memory. But I stress, that is only *one* possible reason.

Can change of parameters sga_max_size + pga_agg_limit do some change in queries responses?


If they were I/O bound and now are memory based, then *possibly*.

The advisors aren't hard to interpret

SQL> desc V$DB_CACHE_ADVICE
 Name                          Null?    Type
 ----------------------------- -------- -------------
 ID                                     NUMBER
 NAME                                   VARCHAR2(20)
 BLOCK_SIZE                             NUMBER
 ADVICE_STATUS                          VARCHAR2(3)
 SIZE_FOR_ESTIMATE                      NUMBER
 SIZE_FACTOR                            NUMBER
 BUFFERS_FOR_ESTIMATE                   NUMBER
 ESTD_PHYSICAL_READ_FACTOR              NUMBER
 ESTD_PHYSICAL_READS                    NUMBER
 ESTD_PHYSICAL_READ_TIME                NUMBER
 ESTD_PCT_OF_DB_TIME_FOR_READS          NUMBER
 ESTD_CLUSTER_READS                     NUMBER
 ESTD_CLUSTER_READ_TIME                 NUMBER
 CON_ID                                 NUMBER


It shows you, for each size_factor (ie, an increase or decrease relative to the current size), how much improvement to expect, ie, the estimated number of reads you would need to do from disk (ESTD_PHYSICAL_READS). That goes down as the size factor goes up - you then balance the benefit against the increase in memory.
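
For example, a sketch of that kind of query, using only the columns described above (the DEFAULT buffer pool is assumed here):

select size_for_estimate, size_factor, estd_physical_reads
from   v$db_cache_advice
where  name = 'DEFAULT'
and    advice_status = 'ON'
order  by size_for_estimate;

The row with size_factor = 1 is the current cache size; the rows above and below it estimate the effect of growing or shrinking it.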

Review of answer / parameters

A reader, January 08, 2020 - 10:45 am UTC

Hi Tom,

you are right - too many questions, because I am afraid, and aware, that changing these parameters can prevent the database from starting and mounting (on test I had to make several memory corrections before I successfully started the test database).

I have set on test database these parameters:
SGA_MAX_SIZE = 9516 M
SGA_TARGET = 6016 M
PGA_AGGREGATE_LIMIT = 5000 M
PGA_AGGREGATE_TARGET = 2500 M

And it took several restarts before I could get an appropriate ratio between PGA_AGGREGATE_TARGET and PGA_AGGREGATE_LIMIT.

Now, just one question (I hope the answer is simple):
- instead of working on these memory parameters individually, can I somehow set a MAX amount of GB that the database can spend on all the activities it needs (if there is 16 GB of RAM, I can go up to 80% of 16 GB = approx. 12 GB)? Which parameters should I set to 0, which to AUTO, and which one to 12 GB?
Can these changes stop the production database from starting up again after correcting the parameters?
Can SQL performance drop after changing to AUTO memory handling?

Thank you.

Regards,
Darko
Connor McDonald
January 10, 2020 - 3:51 am UTC

On the vast majority of databases I've seen at customers big and small, they set 2 parameters and 2 only:

sga_target
pga_aggregate_target

In rare cases, they may set db_cache_size to establish a minimum size for their buffer cache, but in most cases that is not required.

So I would be heading down that path... less complication = fewer chances of errors.
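
For example, a minimal sketch of that approach (the values below are illustrative only, not a recommendation for your system):

alter system set sga_target = 6016M scope=spfile;
alter system set pga_aggregate_target = 2500M scope=spfile;

followed by a restart, so that the sga_target change is not constrained by the current sga_max_size.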

Review on answers / 2

Darko, January 10, 2020 - 7:12 am UTC

Hi,

I think I'll get to it very soon.

So, I will leave these 2 parameters as they are (and maybe double them if I get more RAM):
sga_target = 6016 M
pga_aggregate_target = 2500 M



But, what must I do with parameters:
SGA_MAX_SIZE = ?
PGA_AGGREGATE_LIMIT = ?
MEMORY_MAX_TARGET = ?
MEMORY_TARGET = ?


Must I set them all to 0 (zero) to enable automatic memory management, and leave the database to do it all by itself?
Or must I put some values into those last 4 parameters? I have checked all the documentation on the web (Oracle, portals, discussions), and I can't find the right recipe to do it right. I don't want to destroy the production database.

Thank you -
after I have properly set all the database parameters, so that the database uses as much memory as needed, when it's needed, I will close this thread. I just want the database to use the memory it needs to fulfill all queries.

Regards,
Darko
Connor McDonald
January 13, 2020 - 3:13 am UTC

Leave them entirely out of your spfile, ie

alter system reset <param> scope=spfile;

Review on answer /3

A reader, January 13, 2020 - 2:30 pm UTC

Hi,

if I issue:
alter system reset SGA_MAX_SIZE scope=spfile;
alter system reset PGA_AGGREGATE_LIMIT scope=spfile;
alter system reset MEMORY_MAX_TARGET scope=spfile;
alter system reset MEMORY_TARGET scope=spfile;

and will have defined just:

sga_target = 6016 M
pga_aggregate_target = 2500 M

What then limits the memory used on the server - we have 16 GB of system memory. Where is the limit then? What if, due to a lot of SQLs, memory starts to fill until Oracle fails?

Regards,
Darko


Connor McDonald
January 14, 2020 - 2:35 am UTC

Oracle will not "fail". If there is insufficient memory to do things, we might run a bit slower, but you can monitor that with the advisors.
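
For example, a sketch of a quick check on PGA consumption (these rows exist in v$pgastat; the values are in bytes):

select name, round(value/1024/1024) as mb
from   v$pgastat
where  name in ('total PGA allocated', 'maximum PGA allocated');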

sga_max_size in NON-Solaris

LUIS, January 16, 2020 - 10:01 am UTC

Hi

I have seen many people set a high sga_max_size and a lower sga_target on Linux, Windows, AIX, HP-UX, etc.

I wonder what the point is, since sga_max_size pre-allocates the memory at the OS level. The only exception is Solaris, where DISM pre-allocates only sga_target and allows the SGA to be dynamically increased up to sga_max_size. Do you think sga_max_size is redundant on non-Solaris (non-DISM) platforms? If you think it is not redundant, can you share an explanation of using it on Linux, for example?

Thank you

Connor McDonald
January 20, 2020 - 2:46 am UTC

I have seen many people set a high sga_max_size and a lower sga_target on Linux, Windows, AIX, HP-UX etc.


I've seen that as well but never really understood the point of doing so. Others welcome to add their input.

I suppose *conceivably* if you had multiple instances, you could have something like:

- critical instances have an over-allocated sga_max_size
- non-critical instances can be shutdown in emergencies
- critical instance sga_target can now be sized upwards

but that seems a stretch.

sga_max_size

LSC, January 22, 2020 - 10:31 pm UTC

Hi

This cannot be done:

- critical instances have an over-allocated sga_max_size
- non-critical instances can be shutdown in emergencies
- critical instance sga_target can now be sized upwards


because on non-Solaris platforms the memory is pre-allocated when sga_max_size is set: you don't need to shut down non-critical instances to free memory and grow the critical instance up to its sga_max_size, because that physical memory is already pre-allocated.

For example

Critical Database has sga_target 4G and sga_max_size 10G
NON-Critical Database has sga_target 4G and sga_max_size 5G

the physical memory allocation will be 10G + 5G, and not 4G + 4G, so there is no point in using sga_max_size,

UNLESS we are talking about DISM on Solaris, where

Critical Database has sga_target 4G and sga_max_size 10G
NON-Critical Database has sga_target 4G and sga_max_size 5G

the physical memory allocation will be 4G + 4G, and you can increase it dynamically up to 10G + 5G, thanks to Dynamic ISM (DISM) -

the only OS where sga_max_size makes sense.

On the others it is useless and pointless, and the worst part is when you ask DBAs why they set it on Linux (for example): they say it is to increase the SGA when they need it - but hey, you are already *wasting* that memory because it's pre-allocated!




Connor McDonald
January 23, 2020 - 3:05 am UTC

As I said, "it's a stretch", but the theory being:

- Critical Database has sga_target 4G and sga_max_size 10G
- NON-Critical Database has sga_target 4G and sga_max_size 5G

(let's say on a 16G machine)

Yes, 15G allocated, and most probably some of it swapped/paged out because we're close to the limits of the machine's RAM. The aim would be that the unused memory between target and max size is the part most likely to be paged out.

Then... you clobber the non-critical instance, bump up the target on the critical one, and that paged-out memory is free to come back into RAM.

(But I said in my first review.... "I've seen that as well but never really understood the point of doing so")

Review on answers / 3

A reader, January 23, 2020 - 7:38 am UTC

Hi Tom,

something is confusing me.

After I reset:

alter system reset SGA_MAX_SIZE scope=spfile;
alter system reset PGA_AGGREGATE_LIMIT scope=spfile;
alter system reset MEMORY_MAX_TARGET scope=spfile;
alter system reset MEMORY_TARGET scope=spfile;

and will have defined:

sga_target = 6016 M (and *1.5 when we get 32 GB of RAM)
pga_aggregate_target = 2500 M (and *1.5 when we get 32 GB of RAM)

to what value will our process "Oracleinstance.exe" grow?
Now it is approx. 6900 MB in memory - to what value (and limit) can the Oracle process "Oracleinstance.exe" grow in server memory - what's the limit?

One more question: I have checked V$DB_CACHE_ADVICE.
So there would be some benefit in decreasing PHYSICAL READS FROM DISK by changing BUFFERS to a higher value.
Can it be wrong to increase "db_cache_size" TO A HIGHER VALUE while the database is running? Is it right to do that, given that we have set just the memory parameters "sga_target" and "pga_aggregate_target"?
Can I spoil something by changing that parameter?
Thank you!

Regards,
Darko

Connor McDonald
January 29, 2020 - 9:20 am UTC

You should see somewhere between:

6000M and 6000M + 2500M (ie, about 8500M)

this being the amount of memory based on the parameters you set.

However, don't forget that every connection to the database also uses up some memory (some of this is PGA, but other memory will be used as well).

So you might see it slightly larger.
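
If you want to compare that against what the instance itself reports, a quick sketch (both views report values in bytes):

select round(sum(value)/1024/1024) as sga_mb
from   v$sga;

select round(value/1024/1024) as pga_mb
from   v$pgastat
where  name = 'total PGA allocated';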

If you set db_cache_size, then this sets a LOWER bound on that component of the SGA.

So for example, you might see:

sga_target = 6000M

and the db cache is using (say) 4200M of that (as automatically decided by Oracle).

You might decide "I *always* want at least 4500M", then setting db_cache_size to 4500M means we will never go lower than that.

(Of course, this means we have less flexibility in sizing the other pools in the SGA.)
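
For example, a sketch of setting that floor (4500M is just the illustrative figure from above; db_cache_size is dynamic, so this takes effect without a restart, assuming there is room within sga_target):

alter system set db_cache_size = 4500M scope=both;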
