Alexander, January 14, 2013 - 1:09 pm UTC
Hello,
I have two memory-related questions:
1) How does Oracle decide which cache is more "important" if, say, a SQL statement (shared pool) and a block or set of blocks (buffer cache) are equally used? There is an LRU algorithm within each cache to age things out, but what about across them?
2) I saw you make brief comments somewhere about how the new max memory parameters are only really fit for smaller systems. I have seen some rather anecdotal, difficult-to-prove memory-related issues (like not using the PGA and spilling to disk when it shouldn't...), so I'm suspicious of them. I was hoping you could go into as much detail on this subject as you can and either explain or refute what I think I read. Thanks.
January 15, 2013 - 10:27 am UTC
1) It is using its advisors - the same things you might use to decide whether to change the cache sizes.
As the database is running, it looks at the advisors and says "hey, if I were to increase the shared pool by X, we'd benefit from that - and over here it says that if I decrease the buffer pool by X, we wouldn't have a serious increase in physical I/Os - therefore, I think it would be good to move X from the buffer cache to the shared pool."
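You can look at the same information the instance uses by querying the advisor views yourself - a minimal sketch (these V$ views are standard, but which columns matter most varies by version):

select size_for_estimate, size_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT';

select shared_pool_size_for_estimate, shared_pool_size_factor, estd_lc_load_time
  from v$shared_pool_advice;

Rows with a size factor of 1 represent the current sizes; the surrounding rows estimate the physical reads (buffer cache) or library cache load time (shared pool) you would see at other sizes.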
2) "... are only really fit for smaller systems." By smaller I mean in importance, in monitoring, in how the system is used.
If you have a mission-critical system that is used constantly by a large group of people, and you have a DBA staff charged with monitoring that system and ensuring it is always available - you probably don't want to use fully automatic memory management (sort of like a race car driver would not use a fully automatic transmission). You don't want your SGA or your PGA to resize itself at 3pm on a busy Friday. What you want to do is take the information from the advisors and make a balanced decision (maybe even after testing with Real Application Testing, for example) on how to resize things and WHEN to resize them (probably not at peak usage).
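A planned, off-peak adjustment might then look like this (a hypothetical sketch - the sizes are invented, and with SGA_TARGET set these parameters act as minimums rather than fixed sizes):

alter system set shared_pool_size = 2G;  -- raise the shared pool floor
alter system set db_cache_size = 8G;     -- set the buffer cache floor

run from a script during a maintenance window, rather than left to happen on its own at peak load.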
If you have a 'small' system that you don't monitor all of the time - used by a smaller audience of people, perhaps, and not running thousands of transactions per second - fully automatic memory management could definitely be the way to go, perhaps with minimum reasonable sizes set on each of the areas, allowing the database to move the remaining memory around.
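That setup might look something like this (a hypothetical sketch with invented numbers; when MEMORY_TARGET is set, non-zero values of SGA_TARGET and PGA_AGGREGATE_TARGET act as minimums for those areas):

alter system set memory_max_target = 4G scope=spfile;       -- hard ceiling, takes effect at restart
alter system set memory_target = 3G scope=spfile;           -- total memory to auto-manage
alter system set sga_target = 1G scope=spfile;              -- floor for the SGA
alter system set pga_aggregate_target = 512M scope=spfile;  -- floor for the PGA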
Alexander, May 21, 2013 - 5:22 pm UTC
Hi Tom,
Can you have a look at this please:
SQL> select pool, name, bytes/1024/1024
  2  from gv$sgastat
  3  where name = 'KGH: NO ACCESS';

POOL         NAME                       BYTES/1024/1024
------------ -------------------------- ---------------
shared pool  KGH: NO ACCESS                  3765.83868
shared pool  KGH: NO ACCESS                  6243.47711
streams pool KGH: NO ACCESS                   63.992218
We are using MEMORY_MAX_TARGET (not sure if that has anything to do with this), and we are hitting a bug (I've lost track of the # at the moment) that is causing that KGH allocation to grow unexpectedly high. This causes a lot of problems: it can crash the entire instance with an ORA-4031, and it can cause everything to hang waiting on latches while querying X$KSMSP. We wanted to get ahead of this until we have a real fix, so we tried flushing the shared pool, but it did not release anything. Do you know why? Do you know what we can do - is there anything else we can run? Thank you.
May 22, 2013 - 12:49 pm UTC
Please work this via support - I don't know your system, your conditions, your OS, your version, etc. (Please don't post it here; I don't really do support issues like this...)
memory_target
Arvind Mishra, May 07, 2014 - 11:29 pm UTC
Hi Tom,
We are using 11gR2. Our DBA says we are not using memory_target and memory_max_target because there are many Oracle bugs related to these parameters. I do not believe it, but I just wanted to confirm it with you. Is it true?
Regards,
Arvind