Question and Answer

Tom Kyte


Asked: May 30, 2000 - 3:03 pm UTC

Last updated: March 04, 2011 - 8:54 am UTC

Version: 8.1.5

Viewed 10K+ times

You Asked

table scans (short tables)

Long (or conversely short)
tables can be defined by
optimizer hints coming down
into the row source access layer
of Oracle. The table must have
the CACHE option set.
What are short tables ?
Can you elaborate on this part of the documentation (O8 Reference) ?
If there are lots of long table scans, what are the implications?
Bad performance of queries ?

Thanks
SL


and Tom said...

A "short" table is defined here as one with a high water mark lower than:

4 blocks or 2% of the db block buffer cache for normal tables

OR

less than CACHE_SIZE_THRESHOLD for 'cache' tables in v7.x (buffer pools in 8.x eliminate the need for this parameter -- the intent is that you use buffer pools to cache individual objects and control the size they consume in the cache in that fashion)

In 8i, if you alter table cache -- it'll be considered a small table regardless of the HWM.
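
As a sketch of that rule (the 4-block and 2%-of-cache figures come from the definition above; MY_LOOKUP is a hypothetical table name, and counting V$BH rows is one rough way to estimate the number of buffers in the cache):

```sql
-- Hypothetical check: is MY_LOOKUP "short" by the 4-block / 2%-of-cache rule?
-- (Gather statistics first so USER_TABLES.BLOCKS reflects the high water mark.)
select t.table_name,
       t.blocks,
       c.buffers,
       case when t.blocks <= greatest(4, 0.02 * c.buffers)
            then 'short' else 'long'
       end as scan_class
from   user_tables t,
       ( select count(*) as buffers from v$bh ) c
where  t.table_name = 'MY_LOOKUP';

-- In 8i, marking the table CACHE makes it "short" regardless of the HWM:
alter table my_lookup cache;
```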


Lots of long table scans do not necessarily imply bad performance -- a 10-block table with no indexes (a frequently used lookup table, perhaps) will make this number jump up -- at virtually no cost.

It's a statistic that can be used in conjunction with other facts that you and you alone know about your database to make some judgements as to how well it's performing. Full table scans are not necessarily a "bad" thing.



Comments

Which tables are involved in the table scans (short tables)?

Danny, February 24, 2005 - 2:12 pm UTC

How can you determine which tables are involved in the table scans (short tables)? In other words, how could you find those tables so that you could put them into the KEEP cache?

Tom Kyte
February 24, 2005 - 5:20 pm UTC

hey -- if you use them, you won't lose them -- if you scan them lots, you know where they will be: in the buffer cache.

you can use the V$SEGMENT_STATISTICS view to see which segments you are doing lots of physical IO against; then you can judge whether they make sense for special consideration.
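
For example (a sketch; the statistic name 'physical reads' is as exposed in V$SEGMENT_STATISTICS from 9i onward):

```sql
-- Segments in your schema with the most physical reads -- candidates
-- for special treatment if they are small, frequently scanned tables.
select object_name, statistic_name, value
from   v$segment_statistics
where  owner = user
and    statistic_name = 'physical reads'
order  by value desc;
```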

Touch Count bug

Jonathan Lewis, February 25, 2005 - 9:16 am UTC

Actually there is a bug with the touch count that was only fixed in 10g. For tablescans (and index fast full scans), the touch count is not incremented.

This means that small tables will fall off the end of the LRU fairly promptly, no matter how much you use them, even though they have been loaded into the midpoint rather than the end of the LRU.



Alberto Dell'Era, February 25, 2005 - 4:35 pm UTC

> Actually there is a bug with the touch count that was only fixed in 10g.
> For tablescans (and index fast full scans), the touch count is not
> incremented.

> This means that small tables will fall off the end of the LRU fairly promptly,
> no matter how much you use them, even though they have been loaded in to the
> midpoint rather than the end of the LRU.

There they fall, my hash-joined lookup tables ...

A workaround may be to put them into the KEEP pool and make it bigger than the sum of the lookup table sizes, correct?

Jonathan, could you please post the bug number?

TIA
Alberto

Bug Number

Jonathan Lewis, February 25, 2005 - 6:39 pm UTC

Sorry.

It's something Steve Adams mentioned to me two or three years ago, so I've been testing it on every upgrade since. I never bothered to see if there was a bug number for it.

Yes, until 10g, if you really have to scan small tables regularly, then putting them into the KEEP pool would be good enough. If the tables are subject to updates, don't forget to size the pool to allow extra space for CR clones of CURrent blocks, rather than just using sum(table sizes).

If you are using file system buffering, though, there may be little point in worrying about a KEEP pool, as the file system buffer may be protecting you from the worst costs of the tables falling out of the buffer.
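
A sketch of the KEEP-pool setup Jonathan describes (the size and table name are hypothetical; in 9i+ the pool is sized with DB_KEEP_CACHE_SIZE, in 8i with BUFFER_POOL_KEEP):

```sql
-- Size the KEEP pool larger than sum(table sizes),
-- leaving extra room for CR clones of current blocks.
alter system set db_keep_cache_size = 16m;   -- hypothetical size

-- Assign the frequently scanned lookup table to it.
alter table my_lookup storage (buffer_pool keep);

-- Verify the assignment.
select table_name, buffer_pool
from   user_tables
where  table_name = 'MY_LOOKUP';
```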



Tom Kyte
February 25, 2005 - 7:12 pm UTC

Thanks -- appreciate the input always...

Touch count for small tables

yas, December 16, 2005 - 9:58 am UTC

Jonathan, can you give the test case you use to show that table scans do not increment the touch count? I am trying this with a table with one row, but I see that the TCH column of X$BH increases after each scan.

Tom Kyte
December 16, 2005 - 12:57 pm UTC

@connect /

drop table t;

create table t pctfree 90 pctused 10 as select * from all_users where 1=0;
insert into t select * from all_users;
select distinct dbms_rowid.rowid_block_number(rowid) from t;

begin
    for i in 1 .. 10
    loop
        for x in ( select * from t )
        loop
            null;
        end loop;
    end loop;
end;
/
set autotrace traceonly statistics
exec dbms_lock.sleep(3)
select * from t;
exec dbms_lock.sleep(3)
select * from t;
exec dbms_lock.sleep(3)
select * from t;
exec dbms_lock.sleep(3)
select * from t;

column data_object_id new_val d
select data_object_id from user_objects where object_name = 'T';
@connect "/ as sysdba"
select file#, dbablk, tch from x$bh where obj = &D;
@connect /



I get this:

old 1: select file#, dbablk, tch from x$bh where obj = &D
new 1: select file#, dbablk, tch from x$bh where obj = 36320

FILE# DBABLK TCH
---------- ---------- ----------
12 9 4
12 10 1
12 11 1


(9206)

Very useful

Michel Cadot, December 16, 2005 - 1:38 pm UTC

I didn't see this thread before.
Very interesting.

Regards
Michel


Test case works

yas, December 17, 2005 - 6:01 pm UTC

Sorry, it seems I was looking at the touch count of the header block. I tried this case on 9.2.0.7 and got the same results you provided: the touch count for the header block increments with each table scan, but the touch counts of the other blocks do not. Thanks.

Different results among different 10g versions

Stefan Knecht, May 05, 2006 - 4:53 am UTC

Hi tom,

very interesting topic. I've been running the above test on 3 different versions of 10g, and it seems that the "bug" is still not fixed in some:

in 10.1.0.3 I get

FILE# DBABLK TCH
---------- ---------- ----------
8 6194 0
8 6193 0
8 6195 0

whereas in 10.1.0.5 I get the much better

FILE# DBABLK TCH
---------- ---------- ----------
4 18 10
4 17 10
4 19 6

10.2.0.2 also nicely keeps them in the pool and happily increments the touch counts:

FILE# DBABLK TCH
---------- ---------- ----------
4 243 7
4 242 7
4 241 7

Stefan



different types of table scans

-Aditi, March 04, 2011 - 4:03 am UTC

Hello Tom,

In the Oracle 11g Reference manual I see the following types of table scan statistics. Can you please tell us the performance implications of these statistics, and when we should consider them alarming -- or rather, from what perspective we need to analyze them (I see them in a couple of my AWR reports as well)?

Also, are there any changes in 11g to the definition of table scans (short tables) [which you explained at the top of this thread]?

table fetch by rowid
table fetch continued row
table scan blocks gotten
table scan rows gotten
table scans (direct read)
table scans (long tables)
table scans (rowid ranges)
table scans (short tables)

If I see too many table fetch continued row, does this mean there are row migration issues in some of the tables?

Thanks
Tom Kyte
March 04, 2011 - 8:54 am UTC

table fetch by rowid typically comes from an index range scan. We get a rowid and read a single row from the table.


table fetch continued row is a row that was fetched (be it by rowid or scanning) that either didn't fit on a block or, in the case of a rowid fetch, has migrated. It does not MEAN the row was migrated; it could be that the row is just too large to fit on a single block.
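
To tell migration from genuinely oversized rows, one sketch (the CHAINED_ROWS table comes from the standard utlchain.sql script; MY_TABLE is a hypothetical table name):

```sql
-- Populate the standard CHAINED_ROWS table
-- (created by $ORACLE_HOME/rdbms/admin/utlchain.sql).
analyze table my_table list chained rows into chained_rows;

select count(*) as chained_or_migrated
from   chained_rows
where  table_name = 'MY_TABLE';

-- If the rows are merely migrated (i.e. they would fit in one block),
-- rebuilding the table and its indexes eliminates the migration:
-- alter table my_table move;
-- alter index my_table_pk rebuild;
```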

table scan blocks/rows gotten - self defining, just counters of rows/blocks retrieved during full scans


direct read scans are full scans that bypass the buffer cache (much like parallel query always used to -- though in 11g parallel query might not anymore; it depends). We read directly from disk into your PGA to process the data; the buffer cache is deemed far too inefficient.

table scans long tables - big tables that were full scanned and placed into the buffer cache in a manner so as to not fill up the buffer cache. Typically, as you are scanning the table, you are reusing a very small set of blocks in the buffer cache over and over -- so the table basically overwrites itself in the cache. This prevents a 5GB full table scan from obliterating a 4GB db cache. That 5GB scan would only use a very small section of the buffer cache.

table scans rowid ranges - self describing, scanning a portion of a table based on rowid ranges.


table scans short tables - table is deemed small enough to fit comfortably in the buffer cache so unlike the long table full table scan - it does not overwrite itself in the cache.


the method for determining a small table is constantly being revised -- so the process is a little different from the 7.x days, but the net result is the same. Consider any table greater than 5% of the size of the cache to be a large table; anything less is probably small.
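
A rough sketch of that rule of thumb (the 5% figure is from the answer above; counting V$BH rows is one crude way to estimate the buffer cache size in blocks):

```sql
-- Flag tables larger than ~5% of the current buffer cache as "large".
select t.table_name,
       t.blocks,
       case when t.blocks > 0.05 * c.buffers
            then 'large' else 'small'
       end as size_class
from   user_tables t,
       ( select count(*) as buffers from v$bh ) c
order  by t.blocks desc;
```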

A reader, March 04, 2011 - 9:22 pm UTC

Thank you, this was very helpful for understanding the concepts behind these table scan statistics.
