

Question and Answer

Connor McDonald

Thanks for the question, Andre.

Asked: June 18, 2002 - 3:20 pm UTC

Last updated: May 10, 2017 - 1:27 am UTC

Version: 9.0.1

Viewed 10K+ times

You Asked

How are the ON, READY and OFF values supposed to be used? At first I had the impression I had to do OFF -> READY -> ON, so READY would allocate memory to be used. Should I do OFF -> ON -> READY (to maintain memory allocated) -> ON?

and Tom said...

Here are the settings as you would make them in the spfile for a startup:

If you don't want advice -- set to off. No memory allocated in the SGA, no overhead of computing the advice. If you have it off -- you MAY not be able to set it to READY or ON without restarting the database.

If you BELIEVE you may want advice but don't want to incur the overhead of actually doing the computations right now -- set it to READY. Memory is set aside, but not maintained. You can turn it ON later (and then back to READY to analyze the results). When you boot up with READY, you can go READY/ON/READY/ON at will without error.

If you want it on, set it on. You can set it to READY if you like later.
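
In ALTER SYSTEM terms, the three settings described above look roughly like this (a sketch; verify the syntax on your release):

```sql
alter system set db_cache_advice = off;    -- no memory allocated, no overhead
alter system set db_cache_advice = ready;  -- memory set aside, advice not maintained
alter system set db_cache_advice = on;     -- memory allocated, advice computed
```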


Rating

  (42 ratings)


Comments

cont'd

Andre, June 18, 2002 - 4:51 pm UTC

Thanks! Only a bit awkward... but okay.



cont'd

Andre, June 19, 2002 - 7:54 am UTC

Does db cache advice replace the old calculation to find the buffer cache hit ratio (consistent reads, physical reads, block gets from v$sysstat)? Can these values from v$sysstat still be used, combined with db cache advice, to tune the db?

Tom Kyte
June 19, 2002 - 8:07 am UTC

apples and toaster ovens.

You have a cache hit ratio (that is what consistent read etc will give you).

No, once you have that cache hit ratio -- tell me -- what would be the cache hit ratio if we increased the size of the buffer cache? what would it be if we lowered it? You don't know, you have insufficient data. You can guess but that is it.

db cache advice is a more intelligent guess based on facts. It'll tell you if you increase the buffer cache, expect this. If you lower it, expect that.

One computes a cache hit ratio.

The other tells you the likely outcome of resizing the buffer cache, what effect it'll have on physical IO.
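
The "cache hit ratio math" referred to here can be done straight from v$sysstat; a sketch (the statistic names are the standard v$sysstat names):

```sql
-- classic buffer cache hit ratio:
-- 1 - physical reads / (db block gets + consistent gets)
select 1 - phy.value / (db.value + con.value) as cache_hit_ratio
  from v$sysstat phy, v$sysstat db, v$sysstat con
 where phy.name = 'physical reads'
   and db.name  = 'db block gets'
   and con.name = 'consistent gets';
```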

Forget about v$sysstat?

Andre, June 19, 2002 - 10:15 am UTC

Are you telling me to forget about the old v$sysstat method and use only db cache advice? That's what I always thought, at least.

Tom Kyte
June 19, 2002 - 9:43 pm UTC

No.

I'm saying apples and oranges.

Do you want to know your cache hit ratio? Do the math.

Do you want to know what impact adding 50% to the buffer cache would most likely have on your physical reads? Use db_cache_advice.

One thing gives you a number -- you are getting an XX.YY% cache hit.

The other says, "you know, if you reduced the buffer cache by 20% you would get the same number of physical IO's" (for example)

cont'd

Andre, June 19, 2002 - 10:46 am UTC

What I understand from above and the documentation is that:

(1) once in possession of the cache hit ratio,
(2) you can use the db cache advice to know if resizing the buffer cache would be effective.

Is that correct?

Tom Kyte
June 19, 2002 - 9:43 pm UTC

no.

you can do #2 without knowing anything at all about #1.

cont'd

Andre, June 20, 2002 - 12:26 pm UTC

I see your point, but there are some things I am missing.

Up to 8i, the buffer cache hit math was used to know whether resizing the cache would be effective. Now in 9i, this functionality is achieved by the db cache advisory, which I suppose has rendered the old db buffer pool statistics obsolete. I think the math is a mere "curiosity": just to "know" what the XX.YY% is. Period.

1) In short, do you see any usefulness for the v$sysstat db cache hit ratio math, now that db cache advice seems to be replacing its primary role?

2) When do I know I won't have any benefits increasing the db cache ? Is it when the physical read factor stays very close to 1.00 ?


Tom Kyte
June 20, 2002 - 1:04 pm UTC

how was the cache hit ratio effective in that regard?

If I told you I had a cache hit ratio of 50% -- tell me,

a) would increasing the buffer cache have any effect?
b) would decreasing the buffer cache have any effect?

The only reasonable answer to a) and b) is "maybe, maybe not, I don't know, insufficient data"

They are (as I keep saying) apples and oranges. One is a metric (cache hit ratio) -- an overused, abused and many times misleading (based on advice from 'experts') metric. But -- it is just a metric.

The other is "advice". If you do this, you can expect that. If you do this other thing, you can expect this outcome.

1) i disagree with your analysis. Neither is a replacement for the other. The METRIC can be used to validate the advice. The advice is not driven by the metric. The metric might cause you to ask for advice. They are just totally different things.

2) if the number of estimated physical reads = number of reads in the real cache, this ratio becomes 1, so yes -- that would be a point at which you would not consider going.

shared pool advice

Andre, June 20, 2002 - 12:37 pm UTC

In 9iR2, should I forget about all the previous calculations to find the best sizing of all the shared pool components? Up to 8i, we would have to compute those percentages (lib cache misses, etc) -- don't I have to do any of that any more?

Tom Kyte
June 20, 2002 - 1:07 pm UTC

this is DB cache advice, not shared pool advice. Nothing different there, other than you can resize it online.

v$shared_pool_advice and v$pga_target_advice

Andre, June 20, 2002 - 1:41 pm UTC

Tom, finally you told me what I wanted to know. As you said, I think the only useful thing about the buffer cache metric NOW is "validating" the resizing done by using the db cache advice -- different from 8i, where I HAD TO USE the metric to perform all the sizing stuff. See my point ?

As to v$shared_pool_advice, from the tuning guide 9iR2:

[Quote]

Shared Pool Advisory Statistics
The amount of memory available for the library cache can drastically affect the parse rate of an Oracle instance. With Oracle9i, Release 2 (9.2) or higher, the shared pool advisory statistics provide a database administrator with information about library cache memory and predict how changes in the size of the shared pool can affect the parse rate.

The shared pool advisory statistics track the library cache's use of shared pool memory and predict how the library cache will behave in shared pools of different sizes. Two fixed views provide the information to determine how much memory the library cache is using, how much is currently pinned, how much is on the shared pool's LRU list, as well as how much time might be lost or gained by changing the size of the shared pool.

The following views of the shared pool advisory statistics are available. These views display any data when shared pool advisory is on. These statistics reset when the advisory is turned off.

V$SHARED_POOL_ADVICE
This view displays information about estimated parse time savings in different sizes of shared pool. The sizes range from 50% to 200% of current shared pool size, in equal intervals. The value of the interval depends on current shared pool size.

Parse time saved refers to the amount of time saved by keeping library cache memory objects in the shared pool, as opposed to having to reload these objects.

[End Quote]

There's also v$pga_target_advice, for pga_aggregate_target.

Again, what I would like to know about shared pool advice is if I can forget the previous calculations to find all those percentages (lib cache hit ratio, etc) and rely only on the advice to find a better sizing. We can forget the db cache ratio to find a better sizing, now that we use db cache advice. Can we do the same with v$shared_pool_advice?

Tom Kyte
June 20, 2002 - 4:03 pm UTC

Haven't played with that at all -- no comment yet.
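
For readers landing here later: the shared pool advisory quoted above can be queried directly; a sketch, with column names taken from the 9iR2 reference (worth verifying on your release):

```sql
select shared_pool_size_for_estimate,
       shared_pool_size_factor,
       estd_lc_size,
       estd_lc_time_saved,
       estd_lc_time_saved_factor
  from v$shared_pool_advice
 order by shared_pool_size_for_estimate;
```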

A reader, August 26, 2002 - 8:14 pm UTC

Hi Tom

What is your Opinion regarding

http://groups.google.com/groups?dq=&hl=en&lr=&ie=UTF-8&threadm=1030379681.28629.0.nnrp-14.9e984b29%40news.demon.co.uk&prev=/groups%3Fhl%3Den%26group%3Dcomp.databases.oracle.server

Jonathan mentions consistent_gets examinations.
What is that?
Thanks

Tom Kyte
August 27, 2002 - 7:25 am UTC

Opinion? It is mostly statement of fact in that thread - opinions don't count.

Anyway, it is when we look at a block.

the result is opposition

sunbird, October 07, 2003 - 12:34 am UTC

My operating system is Windows 2000 and Oracle is 9.2.
I can switch between off/ready/on at random (off -> ready -> on, on -> ready -> off, etc.), whichever of 'off', 'ready' or 'on' is set at startup, and I don't get the ORA-4031 error message.
Why? Thanks

Tom Kyte
October 07, 2003 - 7:56 am UTC

well, you don't want the ora-4031?

do you mean you "get ora-4031's"? if so, all that means is "shared pool was set too small for what you were trying to do, make it bigger"

advisories

Reader, December 24, 2003 - 7:38 pm UTC

Are the advisories for buffer cache, shared pool, pga target, and mttr available only in 9iR2 Enterprise Edition, and only if statistics_level=typical (the default) is set? Is my understanding correct, Tom? If so, is it correct that these advisories are not available in 9iR1 because statistics_level is a 9iR2 parameter? Thanks.

Tom Kyte
December 25, 2003 - 8:25 am UTC

the advisors are there in 9ir1, just more sophisticated and feature rich in 9ir2.



then when should we increase db cache?

A reader, February 07, 2004 - 3:13 am UTC

Hi

After reading the thread I am getting worried. There is no way to know when we should increase the db cache in 8i...?!

I don't advise anyone to use v$db_cache_advice, by the way -- every day you see a different advice (and the database was not shut down)!

Tom Kyte
February 07, 2004 - 2:40 pm UTC

if you believe your sql to be "as good as it gets", use the cache advisor to size the buffer cache for your working set.

you can use it before then, but as you identify and tune your sql -- just refer to the cache advisor to see how much you can trim off over time.

DB CACHE

Jayesh, April 06, 2004 - 1:28 am UTC

----------------------------------------
session logical reads
physical reads
physical reads direct
physical reads direct (lob)
----------------------------------------
Can you please explain the above terms? I went through the documentation but couldn't get the proper meaning. What do you mean by bypassing the buffer cache and direct reads?


Tom Kyte
April 06, 2004 - 9:08 am UTC

well, they are what they sound like --

session logical reads -- LIO's performed by a session -- reads from the buffer cache.

physical reads - physical IO's from the data files, reading blocks into the buffer cache.


the last two are the only ones that might be not 100% clear but....

physical reads direct -- reads from temp segments for sorting, no buffer cache.
physical reads direct (lob) -- reads from LOB segments where the lobs are NOCACHE (no buffer cache)
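
Those four statistics can be pulled directly; a sketch using the exact names above:

```sql
select name, value
  from v$sysstat
 where name in ('session logical reads',
                'physical reads',
                'physical reads direct',
                'physical reads direct (lob)');
```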

Advice Views

Robert Ware, May 13, 2004 - 11:40 am UTC

Tom,

If we make modifications to our db_cache_size based on the advice from the v$db_cache_advice view do we need to toggle the db_cache_advice parameter from on to ready and back if we made the change via an init.ora change followed by a shutdown/startup?

It is unclear to me whether the advice provided will be skewed by the previous db_cache_size setting and workload information that was collected if this is not done.

Also, does the statistics_level parameter need to be toggled to reset the advice information in the v$pga_target_advice and v$pga_target_advice_histogram views or is this information accrued only since instance startup?

Thanks!

Tom Kyte
May 13, 2004 - 2:18 pm UTC

no, you don't need to toggle anything. v$ stuff wipes out after a shutdown -- it all resets.



DB_CACHE_ADVICE

Reader, June 15, 2004 - 4:11 am UTC

Hi Tom

I am new to Oracle9i.
I have set db_cache_advice=on in my Oracle9i on SunOS.
When I see the data in the v$db_cache_advice view, it gives me many rows, and I don't know how to interpret the result. What is the optimum db_cache_size?

SQL>  select
  2  SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READ_FACTOR, ESTD_PHYSICAL_READS
  3* from v$db_cache_advice;

SIZE_FOR_ESTIMATE BUFFERS_FOR_ESTIMATE ESTD_PHYSICAL_READ_FACTOR ESTD_PHYSICAL_READS
----------------- -------------------- ------------------------- -------------------
           1.5352                  393                    1.5474               10476
           3.0703                  786                    1.4405                9752
           4.6055                 1179                    1.1711                7929
           6.1406                 1572                    1.0624                7192
           7.6758                 1965                     1.045                7075
           9.2109                 2358                    1.0322                6988
          10.7461                 2751                    1.0253                6941
          12.2813                 3144                    1.0127                6856
          13.8164                 3537                    1.0051                6804
          15.3516                 3930                         1                6770
          16.8867                 4323                     .8653                5858

SIZE_FOR_ESTIMATE BUFFERS_FOR_ESTIMATE ESTD_PHYSICAL_READ_FACTOR ESTD_PHYSICAL_READS
----------------- -------------------- ------------------------- -------------------
          18.4219                 4716                     .8528                5773
           19.957                 5109                     .7642                5174
          21.4922                 5502                     .7642                5174
          23.0273                 5895                     .7642                5174
          24.5625                 6288                     .7642                5174
          26.0977                 6681                     .7642                5174
          27.6328                 7074                     .7642                5174
           29.168                 7467                     .7642                5174
          30.7031                 7860                     .7642                5174

20 rows selected.

Please explain it to me in detail.

Regards
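
One mechanical way to read output like the above (a sketch; the 1% threshold is an arbitrary illustration, and the name/block_size filters assume the DEFAULT pool with an 8K block size):

```sql
-- find the smallest estimated cache size whose estimated physical reads
-- are within ~1% of the best estimate -- past that point, adding memory
-- buys essentially nothing
select min(size_for_estimate) as smallest_useful_size_mb
  from v$db_cache_advice
 where name = 'DEFAULT'
   and block_size = 8192
   and estd_physical_reads <=
       1.01 * (select min(estd_physical_reads)
                 from v$db_cache_advice
                where name = 'DEFAULT'
                  and block_size = 8192);
```

In the listing above, for example, the estimated reads bottom out at 5174 from roughly 20MB onward, so sizes beyond that show diminishing returns.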
 

Reader, June 17, 2004 - 12:28 am UTC


A reader, September 01, 2004 - 10:17 am UTC

Tom,

I wanted to know, what are all the advisories available in 9iR1?

The advisories for buffer cache, shared pool, pga target, and mttr ..anything else?

BUFFER CACHE -- V$DB_CACHE_ADVICE
PGA TARGET -- V$PGA_TARGET_ADVICE AND V$PGA_TARGET_ADVICE_HISTOGRAM VIEWS
SHARED POOL -- V$SHARED_POOL_ADVICE
MTTR --?

what is the view name for mttr? ...

and in the below link, you have specified
Buffer Pool Advisory for DB: db1 Instance: db1 End Snap: 2

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:7143307018863#19345097992704


and in that you have said that it is a direct select from the view. Is it the same for all -- can we interpret the views directly as we do for V$DB_CACHE_ADVICE?


Tom Kyte
September 01, 2004 - 10:49 am UTC

v$db_cache_advice is the only view that existed in 9ir1 from that list.

views for advisories in 9ir2

A reader, September 02, 2004 - 7:56 am UTC

Tom,,

Can you please let us know the advisories in 9ir2 and its corresponding views.

Thanks for your time

Tom Kyte
September 02, 2004 - 9:43 am UTC

use statspack, easiest way -- or OEM.  

otherwise, just query the data dictionary to see views that include the word "advice"

ops$tkyte@ORA9IR2> select view_name from all_views where view_name
  2  like '%ADVICE%';
 
VIEW_NAME
------------------------------
GV_$DB_CACHE_ADVICE
GV_$MTTR_TARGET_ADVICE
GV_$PGATARGET_ADVICE_HISTOGRAM
GV_$PGA_TARGET_ADVICE
GV_$SHARED_POOL_ADVICE
V_$DB_CACHE_ADVICE
V_$MTTR_TARGET_ADVICE
V_$PGA_TARGET_ADVICE
V_$PGA_TARGET_ADVICE_HISTOGRAM
V_$SHARED_POOL_ADVICE
 
10 rows selected.
 

Thanks tom ...

A reader, September 02, 2004 - 11:25 am UTC


Finding the usage of various pools

Bipul, January 26, 2005 - 9:51 am UTC

Hi Tom.

v$db_cache_advice gives us the estimated information on how much more or less physical I/O will be required as a result of changing a POOL size. Prior to changing the pool size, what I would be interested in is

A. finding out how much of a particular pool is used? For example, I have a KEEP pool of 208MB; I want to know whether all of the memory assigned to the KEEP pool is being utilized or not. How do I do that?

Based on the statspack data, I found that 52% of the physical I/O in the system is due to one particular SQL statement. The table accessed by this SQL statement is used as a queue table [but it is not an AQ table; it's a 3rd party product and we can't change it]. Some process populates this table, then others read it, do some work and then delete the data from this table. So at any time, there is not a huge number of rows in this table. I would like to know if this kind of object [where the data is transient] would make a good candidate for the KEEP pool.

Thanks
bipul

Tom Kyte
January 26, 2005 - 10:32 am UTC

v$bh shows what blocks are in the cache(s). Those blocks belong to segments, and segments have a buffer pool associated with them.

v$buffer_pool and v$buffer_pool_statistics will be of interest to you.
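
A sketch of the v$bh approach Tom describes -- counting cached blocks per segment (the join on data_object_id is assumed; verify column names on your release):

```sql
-- how many blocks of each segment are currently in the buffer cache(s)
select o.owner, o.object_name, count(*) as cached_blocks
  from v$bh b, dba_objects o
 where b.objd = o.data_object_id
 group by o.owner, o.object_name
 order by cached_blocks desc;
```

From there, dba_tables/dba_indexes (buffer_pool column) tell you which pool each segment is assigned to.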

Am I using all the cache

A reader, January 27, 2005 - 10:21 am UTC

Hi Tom.

I gathered some statistics from my database. These are

Allocated pool size:

Default Buffer Pool 560MB
Recycle Buffer Pool 208MB
Keep Buffer Pool 208MB

Block Size - 8KB

Usage as found from v$bh [counting the # of blocks for each segment from v$bh and dba_objects, and then using dba_tables and dba_indexes to find out which segment is in which pool]

Default Buffer Pool - 33763 Blocks = 263.77MB
Keep Buffer Pool - 1070 Blocks = 8.35 MB
Recycle Buffer Pool - 1045 Blocks = 8.16MB

Am I correct in assuming that the entire cache is not being used? And if so, how do I make use of the remaining space in buffer pools?

A table can be "cached" using an ALTER TABLE <table name> CACHE; statement -- how can an index be cached? Or is it always cached, i.e. put at the MRU end of the LRU chain?

Thanks
bipul

Tom Kyte
January 27, 2005 - 10:30 am UTC

how to make use of the space?

query bigger tables I guess? we cache that which you query.

a table is not "cached" by that. how the blocks are managed in the buffer cache as the result of a full scan is changed. period. it does not automagically "cache" the blocks.


since indexes are read single blocks at a time, they are cached as they are read. If you full scan a large table, we try to age out (if needed) those blocks fast, rather than have them push out all of the other blocks in the cache. Indexes don't do that, so there is no option.

and if the cache is big enough, the full scan wouldn't age anything out anyway.

Applying your good information...

Michael, February 10, 2005 - 3:55 pm UTC

We are on 9.2.0.5 with all bells and whistles, on a Solaris Sunfire box with 8G ram and 8 cpus. The storage is on a very fast Hitachi SAN. We have memory to burn but will likely get more cpus soon, because we have a bunch of big partitioned tables with 16 parallel query servers maxed out on them a lot of the time. My sga_max_size is about 1.3G right now, so we are not taxing this box at all. We share it with nothing else. I believe there is such a thing as making the sga and its components too big, but would like to know your opinion on what I am seeing. Our workarea_size_policy is auto and the pga_aggregate_target is about 490m.

I found this thread most interesting, and read the link you placed above showing how to interpret the results of the query :

select
SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, SIZE_FACTOR, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READ_FACTOR,
ESTD_PHYSICAL_READS
from v$db_cache_advice;

and here is our result:

sz4_est siz_fctr bufs4_est est_p_rd_fctr est_p_rds
------- ---------- ----------- ------------- ------------
16 .0833 1,008 1.8692 233,022,616
32 .1667 2,016 1.4862 185,281,178
48 .25 3,024 1.3169 164,176,887
64 .3333 4,032 1.1989 149,464,620
80 .4167 5,040 1.1345 141,437,397
96 .5 6,048 1.1028 137,478,610
112 .5833 7,056 1.0782 134,408,245
128 .6667 8,064 1.0496 130,850,940
144 .75 9,072 1.0303 128,439,539
160 .8333 10,080 1.0189 127,017,055
176 .9167 11,088 1.0092 125,813,175
192 1 12,096 1 124,664,196
208 1.0833 13,104 .9914 123,592,823
224 1.1667 14,112 .9838 122,647,652
240 1.25 15,120 .977 121,801,960
256 1.3333 16,128 .971 121,048,783
272 1.4167 17,136 .9655 120,359,396
288 1.5 18,144 .9603 119,710,264
304 1.5833 19,152 .9552 119,079,911
320 1.6667 20,160 .9476 118,128,380

so if I got your explanation, if I increase my db_cache_size
from its current size of 200m to 320m, I will only see physical reads drop by about 6 million, or about 6 percent.

On the other hand, we have the memory to spare, so would it hurt to say, double the db_cache_size? The reason I ask is that even though my tuning reports say that the buffer cache hit ratio is in the upper 90's, we have really wild dips throughout the day. If I use the OEM buffer cache hit % monitor, it will be cruising along near 100%, then it starts to look like my EKG while fleeing from an angry mob. It starts fluctuating wildly all over the chart. We have a few very large tables that are accessed throughout the day, and that data seems to blow everything else out of memory, and the hit ratio drops dramatically. Should I consider keeping that big table data in another pool instead? It would likely be pretty large, but how would I be able to see at any given time which objects have data in memory?

By the way, I ran the Oracle Expert Reports in OEM and it did not recommend any parameter changes, but I'm just a tad skeptical....

Two more facts:

NAME Shared Pool Size
------------------ ----------------
Database Buffers 201,326,592
Fixed Size 733,064
Redo Buffers 8,208,384
Variable Size 1,124,073,472
****************** ----------------
sum 1,334,341,512

POOL NAME Free Bytes
----------- ------------------ ----------------
shared pool free memory 23,668,920
large pool free memory 32,571,392
java pool free memory 167,772,160

The java pool setting was required for last patchup, the docs warned that catpatch would not run properly if not at least 150m for java and shared pools.

Tom Kyte
February 11, 2005 - 7:42 pm UTC

well, it depends (I say that a lot :)

PQ -- it tends to want to global checkpoint and read right from disk (blow off that buffer cache; it just adds overhead to a large parallel query to have to constantly check whether we have to read it or not).

A large buffer cache could take a lot longer to checkpoint.


Maybe you truncate lots -- we flush the buffer cache of blocks in the table to be truncated -- so with a bigger cache, maybe more blocks are cached, and maybe truncates start taking much longer.

Alter table drop/truncate/whatever partition -- same thing.

If you are doing lots of full scans -- are you really expecting to find that stuff in the cache?


Now, on the other hand -- 200meg to 320meg is not that large -- ratio wise, it is huge, but raw number wise, in the grand scheme of things, it is small.

You might want to try it, you can always set it right back down -- but I would not really anticipate a problem with just another 100m.

Of course!

Michael, February 17, 2005 - 1:18 pm UTC

"
PQ -- it tends to want to global checkpoint and read right from disk (blow off that buffer cache; it just adds overhead to a large parallel query to have to constantly check whether we have to read it or not).

A large buffer cache could take a lot longer to checkpoint.
"

That makes so much sense, but it sure was not obvious to me that a lot of PQs against many table partitions would execute more efficiently without using the buffer cache. It perfectly explains the wild swings in my buffer hit ratio during the day.


What should be answer to this question ?

parag jayant patankar, March 02, 2005 - 10:09 am UTC

Hi Tom,

Somebody asked me following question

Which two statements are true regarding the use of DB_CACHE_ADVICE init.ora parameter? (Choose two)

A. Setting the parameter to READY reserves space in the buffer cache to
store information about different buffer cache sizes, but no CPU overhead is incurred.

B. Setting the parameter to READY reserves space in the shared pool to store information about different buffer cache sizes, but no CPU overhead is incurred.

C. Setting the parameter to ON reserves space in the buffer cache to store information about different buffer cache sizes, and CPU overhead is incurred as statistics are collected.

D. The V$DB_CACHE_ADVICE view contains information that predicts the estimated number of physical reads for different cache sizes for each buffer cache setup in the SGA.

In this question which only 2 answers are right ?

regards & thanks
pjp

Tom Kyte
March 02, 2005 - 10:13 am UTC

OCP questions... hmm, I always feel bad about looking at them (you are supposed to work them out..)

you should be able to research this in the documentation, no?

OCP question,

Parag Jayant Patankar, March 02, 2005 - 10:34 am UTC

Hi Tom,

Yes, I agree with your point that we should research the documents to arrive at the right answer. But many times we do not come to the right conclusion, and at those times we require an "Oracle Guru"'s help to clear our doubts. This is one of those cases for me, and I came to my Oracle guru to understand. According to me, all the answers are right. Please help me by giving the answers to this question with an explanation.

regards & thanks
pjp

Tom Kyte
March 02, 2005 - 11:13 am UTC

tell me your answers and why you think they are right and I'll let you know if you are.

(look it up; this is fully documented. The two guides you need:

performance and tuning guide -- to learn about how this feature works and where it gets its memory

reference guide -- to learn the meaning of the v$ view and what the different settings mean)



OCP Question

Parag Jayant Patankar, March 05, 2005 - 4:07 am UTC

Hi Tom,

regarding OCP question I have asked you in this thread

Which two statements are true regarding the use of DB_CACHE_ADVICE init.ora
parameter? (Choose two)

A. Setting the parameter to READY reserves space in the buffer cache to
store information about different buffer cache sizes, but no CPU overhead is
incurred.

B. Setting the parameter to READY reserves space in the shared pool to store
information about different buffer cache sizes, but no CPU overhead is incurred.

C. Setting the parameter to ON reserves space in the buffer cache to store
information about different buffer cache sizes, and CPU overhead is incurred as
statistics are collected.

D. The V$DB_CACHE_ADVICE view contains information that predicts the estimated
number of physical reads for different cache sizes for each buffer cache setup
in the SGA.

According to me the correct answers are B and D, because the advisory requires memory to be allocated from the shared pool, not from the buffer cache (approx 1000 bytes for each buffer -- it means if I set default, keep and recycle buffers then I require approx 3000 bytes).

Pl tell me my answers are correct or wrong.

regards & thanks
pjp


Tom Kyte
March 05, 2005 - 7:23 am UTC

Well done.

reader

A reader, August 16, 2007 - 7:40 am UTC

Could you confirm if the db_cache_size parameter is getting deprecated in the near future?
Tom Kyte
August 20, 2007 - 9:59 pm UTC

no, it isn't.

Can db_cache_size be too big?

Rich, October 29, 2008 - 3:03 pm UTC

Hi Tom,

Your input on this puzzling question would be really appreciated! We tested a procedure in PRE-PROD. The procedure looks like this:

For x in (select....)
LOOP
Insert into table a;
Insert into table b;
Update table c;
END LOOP;
END;

The select statement returns around 500,000 rows, so a total of 1 million INSERTs and 500,000 UPDATEs are done (we know this is not the optimal way of doing it, but the script is provided by the software provider and we cannot change a line in it...)

Testing gave us surprising results:

With db_cache_size=1GB
Test 1> Elapsed: 00:32:08.04
Test 2> Elapsed: 00:38:57.00

With db_cache_size=5GB
Test 1> Elapsed: 00:56:40.06
Test 2> Elapsed: 00:58:39.05

So, on the exact same server and instance (not running any other processes or instances), running the same procedure with a much bigger db_cache_size gives a decrease of 50% in performance. How can this be explained?

Tom Kyte
October 29, 2008 - 4:24 pm UTC

well, there could be a myriad of reasons.

1) possible cause: 5gb exceeded real memory on the machine - couple the 5gb cache with the rest of the SGA and the other stuff on the machine and you started paging like mad, maybe even swapping.

2) possible cause: some percentage of the cache size is used to determine how blocks from a full scan are treated in the cache. As you change the cache size, you change the algorithms by which Oracle does things. When you change the approach taken, one of three things can happen - a) it'll go faster, b) it'll go slower, c) it'll stay the same. Maybe (b) happened this time.

3) with the smaller cache, dbwr had to be more aggressive to keep it clean, you did the same write IO's but more continuously and they in general happened in the background (you didn't wait). Now, all of a sudden, you had a bigger cache, you cached 5 times the amount of dirty data and had to checkpoint - you had to wait (and wait and wait) for 5gb of data to be written out - all at the same time - instead of a maximum of 1gb of data.


and so on - without a statspack for the period of observation or a tkprof with waits - it'll be hard to say.

db_cache_size too big?

Rich, November 05, 2008 - 9:57 am UTC

Hi Tom,

Here's the Top 5 Timed Events while running the exact same script with 1, 5 and 10GB db_cache_size. As you can see, CPU time increases significantly as db_cache_size is set higher (1,584s at 1GB, 3,204s at 5GB, 9,458s at 10GB).

What do you think about it? Can you guide us on where to look to solve this problem?

1GB Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
---------------------------- ------------ ----------- --------
CPU time 1,584 64.49
db file sequential read 1,254,943 793 32.3
log file parallel write 20,384 37 1.51
log file sequential read 4,060 29 1.2
db file parallel write 6,019 6 0.23
---------------------------------- ------------- ------------ --


5GB Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
---------------------------- ------------ ----------- --------
CPU time 3,204 79.05
db file sequential read 1,169,200 738 18.22
log file parallel write 19,791 33 0.81
log file sequential read 4,003 31 0.76
process startup 36 21 0.52
---------------------------------- ------------- ------------ --



10GB Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
---------------------------- ------------ ----------- --------
CPU time 9,458 96.31
db file sequential read 98,869 127 1.29
log file parallel write 39,088 57 0.58
control file parallel write 5,338 51 0.52
db file parallel write 99,854 44 0.45
---------------------------------- ------------- ------------ --

Tom Kyte
November 11, 2008 - 1:07 pm UTC

well, using the code button so we can actually read the numbers:

CPU time                              1,584    64.49
db file sequential read    1,254,943    793    32.3


CPU time                              3,204    79.05
db file sequential read     1,169,200   738    18.22

CPU time                              9,458    96.31
db file sequential read        98,869   127    1.29



so, what you have to do is think about "what happens when we do IO".

First we attempt to find that block in the buffer cache. Steps in short are conceptually:

a) take the DBA (data block address, not good old Jim from the cube next to us) and hash it into a value 1 through N (where N is the number of cache buffers chains lists we have - the bigger the cache, the bigger N is)

b) take that hash value and go to that list

c) attempt to get a cache buffers chains (cbc) latch to safely walk the list. A latch is typically acquired with a spin lock, so the more people going after the same list, the more CPU you'll burn trying to get the latch.

d) walk the list looking for that DBA. If we find it - done - else

e) perform the physical IO, get the list again (latch - spin lock), put the block onto the list, then done
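The chain lengths in step (d) can actually be observed. This is a rough, undocumented sketch only - x$ fixed tables require a SYS connection and their layout varies by version, so treat it as illustrative, not as a supported interface:

```sql
-- Illustrative only: x$bh is an undocumented fixed table (SYS access
-- required). HLADDR is the address of the cache buffers chains latch
-- protecting each buffer, so grouping by it shows how the cached
-- buffers are spread across the cbc latches/lists.
SELECT hladdr, COUNT(*) AS buffers
FROM   x$bh
GROUP  BY hladdr
ORDER  BY buffers DESC;
```

A larger cache showing many more buffers per latch is consistent with the "longer lists, longer latch holds" effect described in the steps above.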



So.... As you increase the size of the cache, you have more lists, but bigger lists too (you will not have 5 times as many lists with 5 times the buffer cache).


So, as you increased the size of the buffer cache, you quite possibly made finding a block in the buffer cache take longer - latch holders held the latch a little longer and everyone else spun waiting for it. To confirm, page down in the report to the latching section and review your cbc latches: see whether the misses and sleeps went up. If they did, you were spinning waiting for the latch, and that eats CPU.
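Between runs, a quick sanity check on the cbc latch might look like the following (the statspack latch section shows the same data as interval deltas; v$latch figures are cumulative since instance startup, so snapshot before and after the test and diff the numbers):

```sql
-- Cumulative since instance startup: compare before/after values.
-- Rising misses and sleeps mean sessions were spinning, burning CPU.
SELECT name, gets, misses, sleeps,
       ROUND(100 * misses / NULLIF(gets, 0), 4) AS pct_miss
FROM   v$latch
WHERE  name = 'cache buffers chains';
```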


That is the likely culprit from 1 to 5GB, since you didn't really change the IO patterns at all - you just had a bigger cache (pretty much the same number of physical IOs).


From 5 to 10, you really reduced the IO, but you made the cache even larger (and the list longer). Look again at the latch section.

But in this last one, another thing might be at work. There is a chance you actually did a lot more work. (Actually, this is true in both cases - in the second case you could have done about the same number of physical IOs because you did N times as much work. Not because the cache wasn't big enough to hold your original set of blocks, but because the cache was holding far more blocks, since you did far more work.)


For that, go to the top of the report and look at the "things done per second" - executions, transactions, parses. If they went up, you were simply doing more stuff because you were doing less physical IO.

Increase time running with a bigger db_cache_size

Rich, December 08, 2008 - 5:05 pm UTC

Hi Tom,

We are concentrating more on 1GB and 10GB difference for the moment (the 5GB test was only to show progression).
As you mentioned, we took a look at the cbc latches, in fact, cbc latches went down with the 10GB execution.

                                           Pct    Avg   Wait                 Pct
                              Get          Get   Slps   Time       NoWait NoWait
Latch (1GB)                 Requests      Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
cache buffers chains        224,236,668    0.0    0.1      0    1,500,431    0.0

Latch (10GB)                Requests      Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
cache buffers chains        224,609,519    0.0    0.8      4      337,246    0.0



As you mentioned also, we then took a look at the section Load Profile.

I am having a real hard time explaining how the load profile per second and per transaction could change so much, since we ran the exact same script (rolling back the changes afterwards) on the exact same machine, changing only db_cache_size.

Can you give us a little hint on how to analyze those numbers?

The test with 10GB ran in about 4 hours, while it took 38 minutes on the 1GB instance. What conclusion can we draw from this?

For example, redo size per second was 5x greater on the 1GB instance. Perhaps, since the script ran about 5 times faster and the total redo size was the same overall, redo size per second had to be higher on the smaller instance?

As for the load profile per transaction... well, I have no clue about that one; I can't figure out a good interpretation of those numbers.

Please advise! How can we analyze this?

Thanks again for your help.


Load Profile 1GB
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:          1,674,806.12         28,327,116.46
              Logical reads:             28,918.80            489,122.94
              Block changes:             12,691.21            214,654.94
             Physical reads:                541.14              9,152.59
            Physical writes:                456.99              7,729.37
                 User calls:                  0.26                  4.38
                     Parses:                 16.70                282.45
                Hard parses:                  0.09                  1.49
                      Sorts:                  0.76                 12.88
                     Logons:                  0.03                  0.57
                   Executes:              1,232.27             20,842.22
               Transactions:                  0.06

Load Profile 10GB
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            296,267.12      2,050,761,000.00
              Logical reads:              4,933.99         34,153,061.50
              Block changes:              2,176.49         15,065,689.50
             Physical reads:                  7.62             52,764.00
            Physical writes:                 87.07            602,731.00
                 User calls:                  0.17              1,204.50
                     Parses:                  2.91             20,116.50
                Hard parses:                  0.02                171.50
                      Sorts:                  0.29              2,031.00
                     Logons:                  0.00                 15.00
                   Executes:                209.66          1,451,296.00
               Transactions:                  0.00


Tom Kyte
December 09, 2008 - 2:01 pm UTC

I don't see them going down, I see the get requests constant - but having more contention (longer waits)


it would appear as if you did very much a different workload the second time around. A very different workload.

Are you sure this was a valid apples to apples comparison?

Same Script...

Rich, December 09, 2008 - 2:43 pm UTC

Hi Tom,

Same script, same database (varying only db_cache_size, from 1GB to 10GB), same server, only one connected user. Very puzzling for the moment.

I tried opening a SR with Oracle, but for now, they seem to think it is an OS problem, without giving any clear indications why....

What do you think?

Rich

Here's the tkprof output:


1GB
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    38405      5.19       5.19         65       2590          0           0
Execute 2895607   1418.80    2045.58     917474    3514832   61735629     1465949
Fetch   1912004    121.62     150.85     349933    4645173      29757     1909652
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   4846016   1545.63    2201.63    1267472    8162595   61765386     3375601

10GB
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    38405      5.19       5.19         65       2590          0           0
Execute 2895607   1418.80    2045.58     917474    3514832   61735629     1465949
Fetch   1912004    121.62     150.85     349933    4645173      29757     1909652
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   4846016   1545.63    2201.63    1267472    8162595   61765386     3375601



Rich
Tom Kyte
December 09, 2008 - 4:23 pm UTC

it seems highly unusual to me that the numbers would be *identical* here. So unusual as to be "not probable".

so, backup and explain the testing process from start to finish.

for example....

Ok, so we start by restoring from backup and doing an incomplete recovery to SCN 12345632. Then we modify the init.ora and start the database up. Next we take a statspack snapshot and then we run the test from start to finish. We conclude by taking a snapshot and then shutting down.

We follow the same process but using a different init.ora setting.




(a process like that would let you directly compare statspack to statspack - if you just take two 15 minute snapshots at some arbitrary points - and compare them - they are not really comparable)


Same script, different execution time.

Rich, December 09, 2008 - 2:45 pm UTC

Oops... I pasted the 1GB totals twice above. Here are the correct numbers:

1GB
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    38405      5.19       5.19         65       2590          0           0
Execute 2895607   1418.80    2045.58     917474    3514832   61735629     1465949
Fetch   1912004    121.62     150.85     349933    4645173      29757     1909652
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   4846016   1545.63    2201.63    1267472    8162595   61765386     3375601

10GB
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    38579      6.35       6.61          0       1339          0           0
Execute 2895805   9106.99    9841.06      98557    1654313   61982877     1466111
Fetch   1912157    145.79     287.17       1696    4645184      29742     1909607
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   4846541   9259.14   10134.86     100253    6300836   62012619     3375718


Same script...

Rich, December 10, 2008 - 10:58 am UTC

Hi Tom,

Okay, let me give you more details:

The testing was done on a Linux server with 16GB of RAM, running Oracle 9.2.0.8. During the testing, only one database was up and running, with only one session open (the one we used for the test). We tested the procedure with 1GB, 5GB and 10GB db_cache_size, changing no OS or Oracle parameter besides db_cache_size. (The testing procedure is described below.)

----------------testing procedure--------------------------------

CONNECT SYSTEM/xxx@vkf0
SELECT value/1024/1024/1024 FROM v$parameter
WHERE name = 'db_cache_size';
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SESSION SET TIMED_STATISTICS = TRUE; 
ALTER SESSION SET STATISTICS_LEVEL=ALL; 
ALTER SESSION SET MAX_DUMP_FILE_SIZE=UNLIMITED; 

EXEC PERFSTAT.STATSPACK.SNAP;

DECLARE
   CURSOR C_PTNMETER   IS ...

BEGIN
      FOR RC_PTNMETER IN C_PTNMETER LOOP
         INSERT INTO EMS.DEVTRANS ...
         INSERT INTO EMS.DTRANSNOTE ...
         UPDATE EMS.PTNMETER...
      END LOOP;
END;
/


At the end, in another session, we issue 
EXEC PERFSTAT.STATSPACK.SNAP;

And then, coming back in the original session, we issue:

ROLLBACK;

------------------------------------------------

Results


Execution results were (3 executions per test case)
-----------------------------------
With db_cache_size=1GB  Elapsed: 00:32:08.04 
With db_cache_size=5GB  Elapsed: 01:04:40.06 
With db_cache_size=10GB  Elapsed: 03:42:46.03

Since testing with 5GB was done solely to show the progression of execution time, further analysis was conducted only on the 1GB and 10GB db_cache_size cases:


Some results taken from STATSPACK:
<code>

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ 
CPU time (1GB)                        1,584    64.49
db file sequential read    1,254,943    793    32.3

CPU time (10GB)                       9,458    96.31
db file sequential read        98,869   127    1.29


Latch Activity for DB
~~~~~~~~~~~~~~~~~~ 
                                           Pct    Avg   Wait                 Pct
                              Get          Get   Slps   Time       NoWait NoWait
Latch (1GB)                 Requests      Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
cache buffers chains        224,236,668    0.0    0.1      0    1,500,431    0.0

Latch (10GB)                Requests      Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
cache buffers chains        224,609,519    0.0    0.8      4      337,246    0.0



Load Profile 1GB
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:          1,674,806.12         28,327,116.46
              Logical reads:             28,918.80            489,122.94
              Block changes:             12,691.21            214,654.94
             Physical reads:                541.14              9,152.59
            Physical writes:                456.99              7,729.37
                 User calls:                  0.26                  4.38
                     Parses:                 16.70                282.45
                Hard parses:                  0.09                  1.49
                      Sorts:                  0.76                 12.88
                     Logons:                  0.03                  0.57
                   Executes:              1,232.27             20,842.22
               Transactions:                  0.06

Load Profile 10GB
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            296,267.12      2,050,761,000.00
              Logical reads:              4,933.99         34,153,061.50
              Block changes:              2,176.49         15,065,689.50
             Physical reads:                  7.62             52,764.00
            Physical writes:                 87.07            602,731.00
                 User calls:                  0.17              1,204.50
                     Parses:                  2.91             20,116.50
                Hard parses:                  0.02                171.50
                      Sorts:                  0.29              2,031.00
                     Logons:                  0.00                 15.00
                   Executes:                209.66          1,451,296.00
               Transactions:                  0.00






What do you think, where should we try to push forward this investigation?

Thanks again for such a great site!

Rich</code>
Tom Kyte
December 10, 2008 - 1:50 pm UTC

you issued ROLLBACK???!?!?!?!


there you go.

You do understand rollback is a logical operation.

If you

a) create a table

and then


b) full scan it (small, fast)
c) insert 1,000,000,000 rows into it (big now)
d) rollback

you do know the next time you full scan it - it will be LARGE and SLOW right?

And indexes will be slammed.

And so many other things
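A small scratch-schema sketch of the point (the table name `t` is just an example):

```sql
-- Rollback undoes the rows, not the space: after insert + rollback the
-- table keeps its extended high-water mark, and a full scan must still
-- read all of those (now empty) blocks.
CREATE TABLE t AS SELECT * FROM all_objects WHERE 1 = 0;

INSERT INTO t SELECT * FROM all_objects;   -- segment grows
ROLLBACK;                                  -- rows gone, space kept

SELECT blocks
FROM   user_segments
WHERE  segment_name = 'T';                 -- still the post-insert size
```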


You need to

a) restore from backup
b) startup
c) snap - test - snap

and go to (a) again.

If you want to compare (or use flashback database to put it back the way it was)

Same Script

Rich, December 10, 2008 - 3:23 pm UTC

Yes I understand that there would be a difference (space allocation and many more).

But I am able to do the following:

1- Run test with 1GB (38min)
2- Rollback;
3- Run test with 10GB (3h42min)
4- Rollback;
5- Run test with 1GB (38min)
..

I thought this was proof that space allocation (and all the other little things happening) was not a factor in the execution time...

In fact, is there a difference in rolling back the same transaction under different db_cache_size settings?


Tom Kyte
December 10, 2008 - 4:19 pm UTC

I would suggest NOT rolling back - use flashback or restore.

I would want you to compare the tkprofs (the ones you showed above show "same work done"; they do not seem to match up with the statspack)

and many things change with the size of the buffer cache - for example: block cleanouts happen at a level of 10% the size of the buffer cache (if you have expert Oracle database architecture - see the chapter on redo and undo)


Tell you what - I'm curious, compress and email those two statspack to me - the LAST TWO (10gb followed by 1gb)

thomas dot kyte at oracle dot com

do not name the file something.zip - name it something.not_a_zip - else it will be rejected.

A reader, May 03, 2010 - 4:27 pm UTC

Hi Tom,

We know that most of the time we need to concentrate on reducing logical I/O, which in turn takes care of physical I/O.

1. When do we need the buffer cache hit ratio to be calculated? In what real-world situations would we need to concentrate on increasing this ratio? Do we really need to bother with it in the real world?

2. I have never flushed the database buffer cache - I cannot work out when we would actually need to flush it. If it is not really required, why was the concept introduced in Oracle?

If these are already answered in any of the previous posts, can you please forward the link?
Tom Kyte
May 06, 2010 - 12:36 pm UTC

1) only so as to develop a trend, so that when it (the ratio) changes, you can ask yourself "why"

so, if you measure it constantly and find "it is always about 92.34%", and it suddenly changes to 98.7%, you can ask "why"

it could be because a query that used to efficiently full scan was forced to use an index by a misguided developer - and the rise in the ratio implies something really bad happened.

it could be because you increased the buffer cache size and physical IO's went way down, and this is a good thing.

so, do you need to bother with this in the real world? Yes, sure.

But what you DO NOT do with it is say "it should be x.y%"...

What you DO DO WITH IT is say "it has been X, it is now Y, what changed - what caused that change and is it good or bad?" In other words, treat it like a barometer - when a barometer changes, you investigate the change and forecast what is to come from that change.
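For the record, the classic ratio from the v$sysstat counters mentioned at the top of this thread (cumulative since instance startup - trend the deltas, never chase an absolute target):

```sql
-- Buffer cache hit ratio from the three v$sysstat counters.
-- Useful only as a barometer over time, not as a goal in itself.
SELECT ROUND(100 * (1 - phy.value / (db.value + con.value)), 2) AS hit_pct
FROM   v$sysstat db, v$sysstat con, v$sysstat phy
WHERE  db.name  = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';
```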


2) you never do, I wish they hadn't introduced it, they added it because of popular misguided demand.

Search this site for

"secondary sga"

for why I think it is just a bad idea.

A reader, May 06, 2010 - 5:37 pm UTC

Thank you very much Tom for your reply

Isn't the buffer cache empty when the instance starts up?
If it isn't, what fills the buffer cache immediately after the instance is started? Because, in general, we may not execute any SQL statements immediately after the instance starts.


Tom Kyte
May 06, 2010 - 9:04 pm UTC

yes, but so what? so what if the buffer cache is empty right after you startup, is that the normal state of being? No, it is not - you hardly ever startup.

and even if you did, please do search for

"secondary sga"

that secondary sga is still in effect unless you reboot the entire server itself.

Increasing SGA

akas, May 07, 2010 - 2:05 am UTC

Hi Tom,
We are getting richer in Oracle knowledge by going through your site, and I appreciate that a respected consultant like you can spare some of his time to educate others.

Here is one of my doubts.
DB_CACHE_ADVICE suggests that if we increase our buffer cache from 2,384 MB to 4,256 MB we will save around 5,000,000,000 disk reads.

My questions:
1. Is that the number of disk reads it will save over a day, a week, or a month?
2. What are the negative impacts of increasing the buffer cache? Let's say we have sufficient free memory available for this 2 GB increase.
3. Since Oracle manages memory automatically, which value should I set for the SGA so that the buffer cache will automatically grow to 4,256 MB?

Tom Kyte
May 07, 2010 - 8:23 am UTC

1) What was your report interval? The estimate is for that interval of time - it is saying "if you had set it to X instead of Y, we would have done A physical IOs instead of B physical IOs during that period".


2) A larger cache takes longer to manage. You may see increased contention due to latching, and logical IO might take longer because it can take longer to discover that a block is not in the cache yet. You might find the machine more sluggish if you have overcommitted memory.

All of these are 'mights' and 'maybes' and 'coulds' - not "this will happen"

3) you would have to set the sga target to some number larger than that, but it may or may not grow to that size - it depends on the demands placed on the other caches as well. The only way to ensure it gets that much would be to allocate it manually

Thanks

akas, May 08, 2010 - 3:37 am UTC

Thanks Tom, that was very informative.

However, I've one doubt.

I was going through an AWR report covering 60 minutes (start time 30-Nov-09 17:00:28, end time 30-Nov-09 18:00:29); the DB machine has 16 CPU cores. The total disk reads from the top disk-read SQL is 4,873,787, but the Buffer Cache Advisory says "Estimated Physical Reads" is 11,053,070,439.
This 11 billion number looks too big to me for a 1-hour period.

I also looked at another AWR report for the same instance, start time 29-Nov-09 00:00:38, end time 29-Nov-09 10:00:50 (duration 600 minutes). Its "Estimated Physical Reads" is 10,864,251,394, which is less than in the 1-hour report above. How is this possible?

I'm getting confused here.

Tom Kyte
May 08, 2010 - 7:49 am UTC

keyword = estimated.

You compared a sunday to a monday, that would lead me to think "big differences" too.


But, the number you are interested in would likely be the percentage reduction. Look at that number and read the report as "if I made my buffer cache 1.4 times larger (140%), it would probably do X% (est phys read factor) less physical IO"
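Reading the advisory that way straight from the view might look like this (size_factor = 1 is the current cache size; an estd_physical_read_factor of, say, 0.75 reads as "about 25% fewer estimated physical reads"):

```sql
-- Each row estimates the physical reads had the cache been that size
-- during the sampled interval; compare each factor to the
-- size_factor = 1 row (the current size) rather than reading the raw
-- estimated-read counts in isolation.
SELECT size_for_estimate          AS cache_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    advice_status = 'ON'
ORDER  BY size_for_estimate;
```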

alternative to db_cache_advice

josh, June 27, 2013 - 6:54 am UTC

Tom,

Can you please tell me how to calculate the estimated db cache size? Currently we don't have the db_cache_advice parameter set to ON (because of performance issues caused by its overhead). Regardless, I would like to know the correct way to determine the estimated buffer cache size. Is there a way to get this apart from db_cache_advice? If so, can you please provide the steps?

Thank you for the help.
Tom Kyte
July 01, 2013 - 9:03 pm UTC

can you give me some metrics on the overhead you observed? It is very lightweight and shouldn't cause a problem - do you have real numbers showing it impacted you measurably, or did someone "feel" it was an issue?

the only way to know if increasing or decreasing the cache size would make sense is to enable the database to track these things. Without it, you can only resize, measure, resize, measure, resize, measure, and then compare. That would be way more overhead than db_cache_advice.
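If the concern is steady-state overhead, one option (assuming the instance was not started with the advice OFF - see the top of this thread) is to enable it only for a measurement window:

```sql
-- READY keeps the memory set aside without doing the bookkeeping;
-- ON actually gathers the advice. If the instance was started with
-- db_cache_advice = OFF, switching away from OFF may require a restart.
ALTER SYSTEM SET db_cache_advice = ON;
-- ... run a representative workload, read v$db_cache_advice, then:
ALTER SYSTEM SET db_cache_advice = READY;
```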

alternative to db_cache_advice

josh, July 01, 2013 - 11:10 pm UTC

Thank you Tom.

From your quote "resize measure resize measure" - please tell me how to "measure". What can we look at in the AWR report that tells us the current db_cache_size is good? Can we check any "physical %" metrics in AWR to determine a threshold value for db_cache_size? I much appreciate your inputs.
Tom Kyte
July 02, 2013 - 4:50 pm UTC

you would be looking at physical IO's performed before and after using the same identical workload. You'd look at real response time to see if it improved, if you are doing less physical IOs, the same, or more. You'd be looking at all of the performance related metrics before and after (under the *same* workload)

meaning, this isn't going to be something you can easily accomplish - do you really want to resize your SGA in your production system and replay workloads?


and still no feedback on what it was that led you to believe (incorrectly, in my opinion/experience) that db cache advice was somehow causing an outage? I'd really like to understand that better.

alternative to db_cache_advice

josh, July 02, 2013 - 6:05 pm UTC

Thank you Tom.

Regarding the outage, I am sorry but I couldn't get that information. I recently joined this company and found the comments in init.ora. As a standard, they disabled this parameter in most of the environments :-( Once I get the details, I will let you know.

db cache size advice

Dusan, May 08, 2017 - 3:02 pm UTC

Hi Tom, I am new to 11g administration, so can you please advise me about a few things...

here are my parameters

NAME TYPE VALUE
------------ ----------- ------
sga_max_size big integer 14016M
NAME TYPE VALUE
----------------------------- ----------- ------
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 0
memory_target big integer 0
parallel_servers_target integer 192
pga_aggregate_target big integer 1500M
sga_target big integer 10016M

db_cache_size is set to 0!

These settings were left by the previous administrators, and the database is having issues and running slow from time to time.

this is the advice from db_cache_advice (columns: size_for_estimate in MB, buffers_for_estimate, estd_physical_read_factor, estd_physical_reads)

640 78800 15.7351 5867668433
1280 157600 7.8151 2914273180
1920 236400 4.6182 1722147506
2560 315200 3.0691 1144472758
3200 394000 2.2479 838260657
3840 472800 1.767 658927275
4480 551600 1.4674 547200070
5120 630400 1.2699 473535710
5760 709200 1.1324 422268202
6400 788000 1.033 385215514
6688 823460 1 372902596
7040 866800 0.9595 357782493
7680 945600 0.9027 336634767
8320 1024400 0.856 319215648
8960 1103200 0.8159 304264825
9600 1182000 0.7812 291307306
10240 1260800 0.7509 280001373
10880 1339600 0.7233 269724222
11520 1418400 0.6964 259680005
12160 1497200 0.6728 250886565
12800 1576000 0.6491 242056826

The database server has 24GB of physical RAM, with about 3GB always free.
First, is AMM active or not, and do you recommend setting db_cache_size to some value? Sorry if these questions are dumb; I could not find any usable answer before.
Thank you very much in advance
Connor McDonald
May 09, 2017 - 2:21 am UTC

sga_target being non zero means *we* are controlling the db cache size.

You can keep an eye on V$DB_CACHE_ADVICE to see how well your cache is doing.
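To see what the automatic SGA management has actually granted each pool right now:

```sql
-- With sga_target set, the individual pools are auto-sized; this shows
-- the current allocation of each dynamic SGA component, including the
-- DEFAULT buffer cache.
SELECT component, ROUND(current_size / 1024 / 1024) AS current_mb
FROM   v$sga_dynamic_components
WHERE  current_size > 0;
```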

db cache size advice

Dusan, May 09, 2017 - 7:15 am UTC

You saw the output of db cache size advice above - based on that, do you suggest setting some value for db_cache_size? And when sga_target is non-zero, does that mean AMM is not turned on?
Connor McDonald
May 10, 2017 - 1:27 am UTC

If sga_target is non zero, then you only need to set db_cache_size if you want a lower limit below which the cache cannot fall. But except in niche circumstances, just having sga_target should be sufficient.
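For example (the 4G here is purely illustrative - pick a floor based on the advisory, if you set one at all):

```sql
-- Under sga_target, a non-zero db_cache_size is a minimum, not a fixed
-- size: the automatic tuning can still grow the cache above this value,
-- it just can never shrink it below.
ALTER SYSTEM SET db_cache_size = 4G SCOPE = BOTH;
```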