
Question and Answer

Tom Kyte

Thanks for the question, Anandkumar.

Asked: March 30, 2009 - 2:04 am UTC

Answered by: Tom Kyte - Last updated: February 14, 2012 - 9:03 am UTC

Category: Database - Version: 9.2.0.8

Viewed 50K+ times!

You Asked

Hi Tom

My question is related to sizing the KEEP buffer pool. The reason for my question is that, in my production database, lots of full table scans are happening, and the table segments involved are very small, usually between 2 and 10 MB.

My question is: when I assign any segment to the keep buffer pool, do I need to set an appropriate DB_KEEP_CACHE_SIZE based on the sizes of the various segments?

A few more questions related to the keep buffer pool:

1: Is DB_KEEP_CACHE_SIZE part of the SGA, or is it a separate area?

2: When we assign any segment to the KEEP buffer pool, data will always be
available in the buffer cache, so that we can avoid physical reads whenever
data is required from that segment. When we insert, update, or delete rows
in the segment, how is this reflected in the buffer pool? Are all rows from
the segment reloaded into the buffer pool whenever we do an insert, update,
or delete, or are only the new or changed rows updated in the buffer pool?


Thanks In Advance

Anandkumar S.M - India

and we said...

I would not use the keep pool, the default buffer cache is almost certainly better than good enough.


The keep buffer pool would definitely be in the SGA; it is just another buffer cache.


When we assign any segment to KEEP buffer pool, data will be always available
in buffer cache so that we can avoid physical reads


that is absolutely NOT correct. The keep pool is just a buffer cache. There is no "data will always be available" about it; it is just like the default pool (it caches blocks)


<quote src = Expert Oracle Database Architecture>

Block Buffer Cache


So far, we have looked at relatively small components of the SGA. Now we are going to look at one that is possibly huge in size. The block buffer cache is where Oracle stores database blocks before writing them to disk and after reading them in from disk. This is a crucial area of the SGA for us. Make it too small and our queries will take forever to run. Make it too big and we'll starve other processes (e.g., we won't leave enough room for a dedicated server to create its PGA, and we won't even get started).

In earlier releases of Oracle, there was a single block buffer cache, and all blocks from any segment went into this single area. Starting with Oracle 8.0, we had three places to store cached blocks from individual segments in the SGA:

* Default pool: The location where all segment blocks are normally cached. This is the original (and previously only) buffer pool.

* Keep pool: An alternate buffer pool where by convention you would assign segments that were accessed fairly frequently, but still got aged out of the default buffer pool due to other segments needing space.

* Recycle pool: An alternate buffer pool where by convention you would assign large segments that you access very randomly, and which would therefore cause excessive buffer flushing but would offer no benefit because by the time you wanted the block again it would have been aged out of the cache. You would separate these segments out from the segments in the default and keep pools so that they would not cause those blocks to age out of the cache.

Note that in the keep and recycle pool descriptions I used the phrase 'by convention.' There is nothing in place to ensure that you use the keep pool or the recycle pool in the fashion described. In fact, the three pools manage blocks in a mostly identical fashion; they do not have radically different algorithms for aging or caching blocks. The goal here was to give the DBA the ability to segregate segments into hot, warm, and do-not-care-to-cache areas. The theory was that objects in the default pool would be hot enough (i.e., used enough) to warrant staying in the cache all by themselves. The cache would keep them in memory since they were very popular blocks. You might have had some segments that were fairly popular, but not really hot; these would be considered the warm blocks. These segments' blocks could get flushed from the cache to make room for some blocks you used infrequently (the 'do not care to cache' blocks). To keep these warm segments' blocks cached, you could do one of the following:

* Assign these segments to the keep pool, in an attempt to let the warm blocks stay in the buffer cache longer.

* Assign the 'do not care to cache' segments to the recycle pool, keeping the recycle pool fairly small so as to let the blocks come into the cache and leave the cache rapidly (decrease the overhead of managing them all).

This increased the management work the DBA had to perform, as there were three caches to think about, size, and assign objects to. Remember also that there is no sharing between them, so if the keep pool has lots of unused space, it won't give it to the overworked default or recycle pool. All in all, these pools were generally regarded as a very fine, low-level tuning device, only to be used after most other tuning alternatives had been explored (if I could rewrite a query to do one-tenth the I/O rather than set up multiple buffer pools, that would be my choice!).
</quote>
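For concreteness, here is a minimal sketch of how the three pools are configured and how segments are assigned to them. The sizes and table names below are purely illustrative, not recommendations:

```sql
-- Illustrative only: sizes and table names are hypothetical.
ALTER SYSTEM SET db_keep_cache_size    = 64M;  -- creates/sizes the keep pool
ALTER SYSTEM SET db_recycle_cache_size = 16M;  -- creates/sizes the recycle pool

-- By convention: "warm" segments to KEEP, scan-once scratch segments to RECYCLE
ALTER TABLE warm_lookup    STORAGE (BUFFER_POOL KEEP);
ALTER TABLE scratch_detail STORAGE (BUFFER_POOL RECYCLE);
```

Nothing in the database enforces that convention; the assignment only changes which of the separate caches a segment's blocks land in.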



Read also
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:253415112676


Based on your misunderstanding of what the keep pool is/does, I would really recommend you do not use it. It does not do what you imagine.


If your tables are less than about 2% the size of the buffer cache - they will be cached normally in the default pool. If they exceed 2% of the buffer cache, they will not be cached "normally" (read the above link) in order to avoid blowing out the buffer cache.

and you rated our response

  (18 ratings)


Reviews

keep buffer pool

April 07, 2009 - 5:13 am UTC

Reviewer: A

Hello Tom,
What is the difference between a table placed in the keep buffer pool and a table cached using ALTER TABLE CACHE? Which one is better?

Moreover, if a frequently queried and frequently updated table is placed in the keep buffer pool, what happens when a commit takes place? Does it update directly in the keep buffer pool, or does it read into the default buffer pool and then update there?

Or can I place only indexes in keep buffer pool?
Tom Kyte

Followup  

April 13, 2009 - 11:44 am UTC

re-read the answer above, the keep, recycle and default pools are just three separate buffer pools - they do not really manage blocks *any differently*. They are just three different caches.

How about this - instead of calling them default, keep and recycle - just call them pool1, pool2, pool3. Pool1 will not share blocks with pool2 and pool3 and vice versa. They are just three separate areas that do not overlap, do not share.


alter table cache - that changes the way blocks that are read into the pool as the result of a large table full table scan (tables bigger than about 2% of the buffer cache) are treated - period.

alter table cache - does not cause a table to be cached. I wish they had not called it that at all, very misleading.

Normally, if a table is larger than 2% of the buffer cache in size, when you full scan it - the blocks you read in are aged out very rapidly - so that you do not "blow out" the cache by reading a big table. You would use and reuse the same set of blocks over and over - instead of wiping out the entire cache. So, when you full scan a large table - it will typically result in the data NOT remaining in the cache afterwards.

If you "alter table cache", however, the blocks are read into the cache as if they were single-block IO blocks - and are cached using the least recently used algorithm.

ops$tkyte%ORA10GR2> create table t1
  2  as
  3  select level id, rpad('*',4000,'*') data, rpad('*',2000,'*') data2
  4    from dual
  5   connect by level <= 15000;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2
  2  CACHE
  3  as
  4  select level id, rpad('*',4000,'*') data, rpad('*',2000,'*') data2
  5    from dual
  6   connect by level <= 15000;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select count(*) from t1;

  COUNT(*)
----------
     15000

ops$tkyte%ORA10GR2> select count(*) from t2;

  COUNT(*)
----------
     15000

ops$tkyte%ORA10GR2> set autotrace traceonly statistics;
ops$tkyte%ORA10GR2> select count(*) from t1;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
       7672  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> select count(*) from t1;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
      14936  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> select count(*) from t2;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
          1  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> select count(*) from t2;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
          0  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off



note the differences in physical IO requests - the large table T2 is cached as if it were the size of DUAL; table T1, on the other hand, is treated so as to NOT wipe out the entire buffer cache.

Remember, this will vary with your cache size - so if you run this test on your system, you might see different numbers based on your block size (mine was 8k, one row per block) and cache size (sga_target was 512m on my system)
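To check how your own instance's caches are sized before repeating the test, you could query the dynamic SGA components (a sketch; assumes you have access to the V$ views):

```sql
-- Show the current size of each SGA cache component, in MB
SELECT component, ROUND(current_size/1024/1024) AS mb
  FROM v$sga_dynamic_components
 WHERE current_size > 0
 ORDER BY mb DESC;
```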

Thanks

April 14, 2009 - 5:07 am UTC

Reviewer: A

Hello Tom,
Thanks for the reply.

The second part of my question was, if any frequently queried and frequently updated table is placed in keep buffer pool, what happens when commit takes place? Does it update directly in keep buffer pool or it reads in default
buffer pool and then updates it?

This is what I understand: there's no difference when a commit happens, whether the table is cached in the keep pool or the default pool.

What about this, can I place only indexes in keep buffer pool?


Tom Kyte

Followup  

April 14, 2009 - 11:38 am UTC

But I answered that already:



re-read the answer above, the keep, recycle and default pools are just three separate buffer pools - they do not really manage blocks *any differently*. They are just three different caches.



they are just three different, non-overlapping caches. If you place a segment in the keep pool, it will be buffered in the keep pool - period. It will not migrate to the default pool.


Can you place only indexes in the keep pool? Sure, you assign SEGMENTS to one of the three pools.

That said, *should you* place only (or all) indexes in the keep pool? Almost certainly not. In fact, unless you have a really truly powerful reason for even having a keep/recycle pool - don't. Default is better than good enough for almost every implementation.

keep pool sizing and content viewing

September 18, 2009 - 4:23 am UTC

Reviewer: Fredx from Croatia

Hi Tom,

Why does Oracle not return an error when I create a new table and put it into the KEEP buffer pool, although my DB_KEEP_CACHE_SIZE = 0 (which means that the keep buffer pool is not defined)?

Does it mean that Oracle just ignores the storage clause and puts the table in the default buffer cache?

When sizing the keep buffer pool to hold all dimension tables (data warehousing), is it enough to sum up (using, for example, your show_space procedure) the actual space used by the dimension tables, or is it necessary to add some overhead?

Is there any view to see the contents of the keep buffer pool?

I’m using automatic PGA (pga_aggregate_target<>0) and SGA (sga_max_size<>0 and sga_target <>0).

Tom Kyte

Followup  

September 18, 2009 - 11:11 am UTC

... Why does the Oracle not return any error when I create a new table and put the
table into KEEP BUFFER_POOL, although my DB_KEEP_CACHE_SIZE = 0 (which means
that keep buffer pool is not defined)? ...

because we decided "no harm done, no big deal, we'll just put it in default if keep is not there"


... or is it necessary to add
some overhead?
...

it is highly doubtful that all dimensions would be cached - if there is any modification going on at all - there will be multiple versions of blocks in there and yes, there will be overhead (our blocks used to manage your blocks).

Also, bear in mind that if you are using parallel query (any release) or current releases of the database (like 11g), we know that the buffer cache can be as slow as sin for full scans, and we avoid it altogether - preferring direct IO to the datafiles. Parallel query tends to checkpoint the segment OUT of the buffer cache just to avoid having to process it there. It is much more efficient to read from disk. So, be aware of how the data is actually accessed - unless you do lots of single-block IO on it (index reads), we probably don't want to use the cache.


keep pool sizing and content viewing 2

September 21, 2009 - 4:26 am UTC

Reviewer: A reader

Hi Tom,

1.
You convinced me not to use the keep pool for caching dimensions, but what do you mean by "if there is any modification going on at all, there will be multiple versions of blocks in there"? Could you give me a more detailed explanation?

2.
If I update one of the dimension tables stored in the keep pool, does Oracle simply update the blocks in the keep pool automatically?

3.
Should I then have a reason to be concerned that I am going to get wrong results?

4.
How can I flush only the keep pool (I know it's possible to flush the entire buffer cache, but that would clear all pools), and is it possible to see the contents of the keep pool?


Thanks.


Tom Kyte

Followup  

September 28, 2009 - 12:56 pm UTC

1) read:

http://www.oracle.com/technology/oramag/oracle/05-may/o35asktom.html

especially the link to the documentation


2) that question makes no sense to me.

If i update, why does Oracle update

that is what I read out of it.

3) no, of course not. why would you even think that?

4) you don't ever need to clean it. It takes care of itself, by itself.
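As for seeing what the pools are doing, the per-pool statistics view gives a high-level picture (a sketch; one row appears per configured pool, and access to the V$ views is assumed):

```sql
-- One row per configured buffer pool (DEFAULT, KEEP, RECYCLE)
SELECT name, block_size, set_msize,
       physical_reads, db_block_gets, consistent_gets
  FROM v$buffer_pool_statistics;
```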

load objects into keep buffer pool

September 29, 2009 - 4:03 pm UTC

Reviewer: A reader

Tom,

Having tried to rewrite my queries to reduce i/o and various other tuning alternatives, I'm at the point now where I would like to benchmark what difference it would make to performance,if any, by "priming" the system with data into the keep buffer pool.

Table A (col1 number, col2 varchar2(10), col3 number);

To load all of table A into the pool, would this do it;
select count(col3) into :b1 from A; -- I'm guessing that by accessing the last column defined in the table, it would automatically bring in all the data for the row? I'm trying to avoid having to select every column and every row back (by doing a SUM) as the real table has many columns.

Would a CLOB have to be treated differently, e.g.:
Table B (col1 number, col2 varchar2(10), col3 CLOB);



Tom Kyte

Followup  

October 07, 2009 - 7:36 am UTC

we only cache blocks; we do not cache "rows". You don't have to touch anything in particular - you just have to cause block X to be read and it'll be in the cache (assuming we do not use direct IO on it; in that case it goes into your PGA and not into the cache)

and if you update this stuff - don't forget to add infinite space for the multiple versions we'll need to keep out there as well - you cannot just allocate space for the segment as it exists on disk, if you update - it'll take a lot more space...

clobs are stored out of line typically (not in the table). clobs by default are not cached (you'd have to explicitly tell us to do that) - the clob has a pointer in the table, that pointer references a lob index, that lob index references a lob segment.

you would have to touch every byte of every clob in order to 'get them into the cache' - and that would only work if you changed the defaults.

keep pool and recycle pool

August 30, 2010 - 1:48 pm UTC

Reviewer: Arindam from India

Hi Tom,

I went through the above few topics discussed and got an idea about keep pool and recycle pool:

1) The keep pool is a separate cache area, apart from the default pool and recycle pool, initialized by DB_KEEP_CACHE_SIZE. Unless and until I set that parameter, the warm/frequently accessed blocks that are not always at the MRU end of the LRU list (I think only hot/most recently accessed blocks are in that part) will be wiped out from the default cache. If I now set DB_KEEP_CACHE_SIZE to some value, those blocks will be wiped out from the default cache and go into the keep pool, where the LRU algorithm is again used and the blocks stay for some more time.
2) The recycle pool is initialized with the DB_RECYCLE_CACHE_SIZE parameter. Unless that parameter is set, undesired blocks (like those read from a table in order to update another table) will hold extra buffers which we don't want at all.

Now my questions are:

A) Please correct me if the above understandings are not correct.
B) Who decides which blocks go to the recycle or keep pool? For example, how do undesired blocks get to the recycle pool? Is it internal functionality of the Oracle server, or is it the user's choice?
Tom Kyte

Followup  

September 09, 2010 - 11:25 am UTC

the keep and recycle pools are two separately configured buffer caches, separate from each other and from the default pool.

they only exist if you configure them.

that they exist or not does not affect the default pool in any way shape or form. So, their existence does not change the default pool in any way.

a) read above...

b) you do. you assign segments to the keep/recycle pools, not us.

Keep and recycle pool

September 10, 2010 - 12:26 pm UTC

Reviewer: Arindam from India

Thanks a ton for clearing up that confusion. That means no blocks in the default pool will ever be affected by the keep or recycle pool configuration.

Now can you show me an example of how we can assign a segment so that it goes to the keep or recycle pool?
That will be real fun. :)
Tom Kyte

Followup  

September 13, 2010 - 7:52 am UTC

an alter table statement would be fun???

ops$tkyte%ORA11GR2> alter table t storage ( buffer_pool keep );

Table altered.



have you read about this stuff at all in the concepts guide, the admin guide and the sql language reference guide? It is all documented...
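To confirm where a segment landed after such an ALTER, the BUFFER_POOL column of the dictionary views can be checked (a sketch, using the table T from the example above):

```sql
-- Verify the pool assignment of table T
SELECT table_name, buffer_pool
  FROM user_tables
 WHERE table_name = 'T';
```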

keep and recycle pool

September 13, 2010 - 10:40 pm UTC

Reviewer: Arindam from India

Hi Tom,

I am new to this field, so to me the discussions are like exciting and at the same time some serious learning. You are the only teacher for me.
Now my queries are:
1) How do the blocks in the keep pool age out? Does the LRU algorithm work for the keep pool as well?
2) But what is the logic behind the recycle pool, such that blocks are flushed out as quickly as they come in? Can the recycle pool also hold DML-operated blocks, or only scanned blocks?

Thanks,
Tom Kyte

Followup  

September 14, 2010 - 7:02 am UTC

You might be interested in my book "Expert Oracle Database Architecture" (see home page here for link). I cover pretty much all of this stuff in some detail.

briefly:

1) same as the default pool - keep and recycle are just names, they don't really do anything differently - they could have been called A and B just as well.

2) they aren't really, they are just cached - but cached separately so as to NOT flush out blocks in the default or keep pools. By convention, the recycle pool would be set on the small side; the keep pool would be some number LARGER than the objects you want to cache in there (remember multi-versioning: we keep multiple versions of blocks); and the default pool would be whatever you want it to be.

if the recycle pool is small, things come in and out very fast. If the keep pool is larger than large enough, the things that you want cached will be cached (after reading them, of course). If the default pool is sized right (see statspack/AWR reports for help on sizing that), then it is OK too.


99.999999999999% of the world does not need, want, nor desire a keep and recycle pool, they add to your administrative overhead and are for very special edge cases where that extra 1% matters in tuning.

Keep and recycle pool

September 14, 2010 - 2:53 pm UTC

Reviewer: Arindam from India

Thanks a ton Tom. That ends my query for this topic. I understood it finally.

Buffer cache can be slow as sin

November 21, 2011 - 7:57 am UTC

Reviewer: Matt Birchall from Bristol, England

Your comment from Sept 2009 puzzles me. If you do a FTS, then Oracle buffers the data at the LRU end of the cache (11gR2 SQL Language Reference) with NOCACHE, and this data is most likely to be discarded when other data is required by an application. Your comment implies that Oracle may (at its discretion) use direct I/O to the datafiles when doing an FTS. I thought direct I/O was from a different layer, as an alternative to OS data I/O.
Your Expert book touches on this topic for LOBs, and it seems to me that NOCACHE for LOBs is a different subject compared to NOCACHE for 'normal' tables and indexes. Could you clarify whether direct I/O can happen for an FTS of a non-LOB object, please?
Tom Kyte

Followup  

November 21, 2011 - 2:11 pm UTC

I thought direct i/o was from a
different layer as an alternative to OS data i/o.


no, it is direct IO in the sense the IO is performed directly into your PGA instead of being read into the buffer cache (which would then be followed by a logical IO to get it out).



It would be the same sort of 'direct IO' done for a non-cached lob. A non-cached lob requires a physical IO to read any portion of the lob segment. That physical IO goes into your PGA (process) memory, not the SGA.


NOCACHE for lobs is very different from NOCACHE for tables. NOCACHE for lobs means "not going into the buffer cache, ever". NOCACHE for tables means "if you full scan a large table, do not allow it to flood the cache - you'll cache some of it, but not too much".


FTS bypasses SGA

November 22, 2011 - 5:45 am UTC

Reviewer: Matt Birchall from Bristol, England

Are you saying that the documentation is incomplete - i.e., that in 11g an FTS 'usually' goes to the LRU end of the cache, but Oracle may decide that this is a waste of time, e.g., if it expects the FTS to be bigger than the available SGA cache? In that case it simply reads the data into the PGA and doesn't bother with the overhead of enabling sharing, since it estimates that the data is unlikely to be used by another query.
Does this behaviour show up in XPLAN?
Tom Kyte

Followup  

November 22, 2011 - 8:43 am UTC

I would not say the documentation is incomplete,

http://docs.oracle.com/docs/cd/E11882_01/server.112/e25789/glossary.htm#CNCPT44477

<quote>A scan of table data in which the database sequentially reads all rows from a table and filters out those that do not meet the selection criteria. All data blocks under the high water mark are scanned.</quote>


http://docs.oracle.com/docs/cd/E11882_01/server.112/e16638/optimops.htm#PFGRF94600

longer definition - follow link to read.



It doesn't really go into much detail about the block placement either way.



This behavior does not show up in the plan. A way to observe it would be to look at the waits experienced, you'll either see db file scattered read (buffer cache) - or direct path reads (straight to pga).
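One way to make that observation, as a sketch, is to inspect your own session's wait events after running the full scan (assumes access to the V$ views; SYS_CONTEXT('USERENV','SID') returns the current session's SID in recent releases):

```sql
-- After the full scan, see which read path this session actually used
SELECT event, total_waits
  FROM v$session_event
 WHERE sid = SYS_CONTEXT('USERENV', 'SID')
   AND event IN ('db file scattered read', 'direct path read');
```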

Thanks for the info

November 22, 2011 - 11:27 am UTC

Reviewer: Matt Birchall from Bristol, England

The glossary definition doesn't help clarify about caching behaviour but the second link distinguishes between Small/Medium/Large objects and parallel query, which is explicit about caching. I sit informed and corrected and have enough in my pipe to smoke for a while. :-)
Thanks for your time, it seems to me that if Oracle's official documentation were clearer and more accessible (to mere mortals) then we might get more value out of the product and you might have less to do in filling in and expanding on where the docs leave off.
Tom Kyte

Followup  

November 22, 2011 - 1:16 pm UTC

all i did was type

full table scan

http://www.oracle.com/pls/db112/search?remark=quick_search&word=full+table+scan

into the search dialog on otn.oracle.com.... :) Except for the concepts guide reference, I didn't know where exactly to look right off the bat!

Trust me, I have not memorized the documentation or anything...

I like to KEEP & recycle

January 17, 2012 - 6:10 am UTC

Reviewer: Reader

I believe these are nice features. I'm working on two projects just now: in one we have large tables that we desperately need to keep in memory because they are being used all the time, plus LOBs which are viewed only once and never used again. So it's quite nice that we've created a RECYCLE pool for the LOBs (100M, 2% of MEMORY_TARGET), and the default pool is used only for indexes and tables, which is OK. In the other system we have a night load which clears important tables out of memory while some people are working on them. So this time we are going to use KEEP to separate the cache for the indexes we would like kept from the rest of the system. Reading your answer, I am losing my confidence that it's a good idea.
Tom Kyte

Followup  

January 17, 2012 - 3:29 pm UTC

by default, lobs are NOT cached in the first place, by default they do not hit the buffer cache. they would only hit the cache if your developers made them do that - which, if they are "only once viewed and never used again." - would not be smart.

so, if they are at the default of nocache, the memory you set aside for your recycle pool is not being used.

if they are cached, you are using your recycle pool - but you didn't need to since you could have just left them uncached and use the recycle pool memory for something better.


they
are being used all the time,


the buffer cache would keep them cached then. If you are using them all of the time, they will be in the cache. What won't be in the cache are things that

a) you haven't accessed
b) you don't keep accessing after accessing it


I'm not a fan of them in general, they complicate things and only rarely do they provide any sort of large benefit.

let's see

January 18, 2012 - 2:22 am UTC

Reviewer: A reader

well, you are right (nothing new :-) ). We'll see whether this keep pool helps us (and maybe we'll compare it with the CACHE option for indexes); in any case, I'll update the thread.
regards,
Tom Kyte

Followup  

January 18, 2012 - 7:36 am UTC

what is this cache option with indexes you speak of?

ups

January 18, 2012 - 2:35 am UTC

Reviewer: A reader

we won't compare -- there is no option CACHE for indexes
Tom Kyte

Followup  

January 18, 2012 - 7:36 am UTC

indeed ;)

January 20, 2012 - 3:35 am UTC

Reviewer: A reader

Is an index fast full scan treated like a full table scan with regard to block caching, or like normal single-block index I/O?
Tom Kyte

Followup  

January 20, 2012 - 9:48 am UTC

it is a full scan and treated the same as a full scan.

recycle pool size 0?

February 13, 2012 - 8:55 am UTC

Reviewer: Josh A from Rochester, NY

Hi Tom,
I'm working in a warehouse environment. We have many tables that are short lived. We do one insert, commit, one select, then truncate. I was planning on changing these tables to use the recycle pool. There are currently no tables set up to use the recycle pool. Since we are only ever doing one select, is there any benefit of setting the recycle pool to a size greater than zero?

Thanks,
Josh
Tom Kyte

Followup  

February 13, 2012 - 9:06 am UTC

... I was planning on
changing these tables to use the recycle pool. ...

don't bother, they are too small to worry about.


you'd have to actually configure a recycle pool in order to actually use it, 0 would not cut it. But - don't go down that path.


any reason these are not global temporary tables?

Recycle pool size

February 14, 2012 - 7:43 am UTC

Reviewer: Josh A from Rochester, NY

The tables aren't that small. One from a smaller load right now shows about 2.5 million records. Larger loads could be ten times that size. Toad shows the table size to be 5GB.

I think that size is significant enough that I don't want these tables using the default pool. But if I'm only reading them once does it matter what size the recycle pool is?

There are reasons that these are not global temp tables. We do one whole table insert into the table followed by many partition specific selects from the table, not in the same session.
Tom Kyte

Followup  

February 14, 2012 - 9:03 am UTC

they won't chew up the buffer cache anyway. The single insert should be a direct path insert (insert /*+ APPEND */ - skip undo, and redo if you want). That doesn't use the buffer cache.

In 11g, you will almost certainly be using a direct path read (no buffer cache) for the full scans anyway.

But go ahead if you want to - I'm dubious. Make the pool at least a few megabytes; if we do use it, you want it to be usable. You don't want a teeny tiny one; you'd end up getting massive buffer busy waits while waiting to clear blocks out of the cache.
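The direct path insert Tom mentions would look something like this (a sketch; table names are hypothetical, and note the commit is required before the same session can read the table again):

```sql
INSERT /*+ APPEND */ INTO load_tab     -- direct path: writes above the high
SELECT * FROM source_tab;              -- water mark, bypassing the buffer cache

COMMIT;  -- required, or the session gets ORA-12838 when it queries load_tab
```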