Question and Answer

Tom Kyte

Thanks for the question, wallacel.

Asked: March 05, 2001 - 2:44 pm UTC

Last updated: August 28, 2013 - 5:26 pm UTC

Version: 8.1.6

Viewed 10K+ times

You Asked

Tom:

if "table fetch continued row" statistics is high in v$sysstat.
we need to analyze table list chained rows.
but if there are lots of table, how do i know which one
i should examine? or examine all?
it will very time-consuming.

Regards,

and Tom said...

Well, first of all -- make sure it is in fact "high"

For example, I have:


table fetch continued row 10,417,181

is that high? Hard to say in isolation. When I consider that the sum of table scan rows gotten and table fetch by rowid is 34,419,835,366 -- it doesn't look so high anymore.

If you have longs and long raws in your tables, you will have this regardless. If you have tables with more than 255 columns, you will have this regardless (they are stored in 255-column row pieces -- never as a single big row).

You will have to either analyze them or you can do a select to find out (see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1339202657182 for how to do that). Pick your heaviest-hit tables and start with them.
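For reference, a minimal sketch of the analyze approach -- T is a placeholder table name, and the CHAINED_ROWS holding table comes from the utlchain.sql script shipped with the database:

-- create the CHAINED_ROWS table once
@?/rdbms/admin/utlchain.sql

-- list the chained/migrated rows for one table
analyze table t list chained rows into chained_rows;

select count(*)
  from chained_rows
 where table_name = 'T';

-- or simply analyze and look at CHAIN_CNT in the dictionary
analyze table t compute statistics;

select table_name, num_rows, chain_cnt
  from user_tables
 where table_name = 'T';

Note that CHAIN_CNT counts migrated rows as well as truly chained ones.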

Alternatively, you could instrument your popular application: have it select the table fetch continued row statistic from v$sesstat, either at the end of its run or periodically during its execution, and stash that somewhere with a description of the application. Now you'll know which applications are hitting this most and can concentrate on their tables.
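A minimal sketch of that kind of instrumentation, run from within the application's own session via v$mystat (the current session's slice of v$sesstat). The app_stat_log table, its columns and the 'ORDER_ENTRY' label are made-up names, purely for illustration:

-- stash the current session's value of the statistic, tagged with the app name
insert into app_stat_log ( app_name, captured_at, continued_fetches )
select 'ORDER_ENTRY', sysdate, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'table fetch continued row';

commit;

The value in v$mystat is cumulative for the session, so difference the figures captured at the start and end of a run to see the work done by that run.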




Comments

Percentage of Table fetch continued row

Vivek Sharma, July 04, 2003 - 7:40 am UTC

Dear Tom,

My Statspack report shows 24,220,415 table fetch continued row, which works out to 7,696.4 per second and which I think is too high. The sum of table fetch by rowid and table scan rows gotten is 559,364,228, so table fetch continued row is about 4% of that (24,220,415 / 559,364,228).

Statistic                              Total   per Second   per Trans
table fetch by rowid             114,997,281     36,541.9     5,212.2
table fetch continued row         24,220,415      7,696.4     1,097.8
table scan blocks gotten           7,572,955      2,406.4       343.2
table scan rows gotten           535,143,813    170,048.9    24,255.3
table scans (long tables)                285          0.1         0.0
table scans (short tables)         1,993,184        633.4        90.3

At what point, in your view, should I start concentrating on table fetch continued row? Secondly, what do these numbers indicate? I think they all indicate the number of rows returned. Correct me if I am wrong.

Thanks in Advance

Tom Kyte
July 04, 2003 - 8:44 am UTC

how long was that statspack for (15 minutes is good, 30 minutes is very long, anything longer isn't good).

But in any case -- you did the right thing here.

If I simply told you that on my last car trip I stopped at 50 red lights -- what could you say about that?  Is that good or bad?

If you say "oh, that is too high", I'll respond -- but I drove from Washington DC to San Francisco, it was 2,500 miles.  I think it was very good.

If you say "oh, that is really good", I'll respond -- but all I did was drive two city blocks in Manhatten during rush hour.  It was horrible.

If you say "insufficient data to draw any meaningful conclusions" -- you got the answer right.


So, the number 24,220,415 "seems" high but as an overall percentage of rows retrieved in your system -- it is pretty low.


The numbers you see are detailed row counts of rows processed -- not the number of rows returned.  Consider:

ops$tkyte@ORA920> @mystat "rows gotten"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('rows gotten')||'%'

NAME                                VALUE
------------------------------ ----------
table scan rows gotten                 29

ops$tkyte@ORA920> select count(*) from big_table;

  COUNT(*)
----------
   1000000

ops$tkyte@ORA920> @mystat "rows gotten"
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('rows gotten')||'%'

NAME                                VALUE
------------------------------ ----------
table scan rows gotten            1000029

ops$tkyte@ORA920>


I had 1 row returned -- yet, I had over 1,000,000 rows gotten from the table.
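The @mystat script itself is not shown above; judging by the substitution line it echoes, it is presumably something along these lines (a sketch, not necessarily the exact script):

-- mystat.sql: report current-session statistics matching a name fragment
column name  format a30
column value format 999,999,999,999

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and lower(a.name) like '%' || lower('&1') || '%'
/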


Does your system have any tables with a row width > block size?  If so, they are chained and will remain chained FOREVER (well, a bigger block size may correct it).  Tables with longs?  Pretty much forget about it -- they are chained.
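A quick way to look for candidate tables is to compare AVG_ROW_LEN against the block size. A sketch only -- it assumes statistics have been gathered (AVG_ROW_LEN and CHAIN_CNT are otherwise null), that you can query v$parameter, and that every tablespace uses the default block size:

select t.table_name, t.avg_row_len, t.chain_cnt,
       to_number( p.value ) as db_block_size
  from user_tables t,
       ( select value from v$parameter where name = 'db_block_size' ) p
 where t.avg_row_len > to_number( p.value )
 order by t.avg_row_len desc;

Bear in mind AVG_ROW_LEN is an average -- individual rows can still exceed the block size in tables this query does not flag.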


These might be migrated rows (analyze ... list chained rows can help you figure that out).  If so, you can fix them BUT -- and this is very, very important -- have you

a) got a performance issue?
b) identified this to be the cause?


if not, don't do anything. 
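If both are true, the usual fix for migrated rows is to rebuild the table. A minimal sketch for an ordinary heap table (T, T_PK and T_IDX1 are placeholder names; every index on the table must be rebuilt, because MOVE changes the rowids):

alter table t move;            -- rewrites the rows, eliminating the migration

alter index t_pk rebuild;      -- each index on T goes unusable after the move
alter index t_idx1 rebuild;

Consider raising PCTFREE as part of the rebuild, otherwise subsequent updates may simply migrate the rows again.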

About table fetch continued row

Karthik, March 05, 2007 - 10:56 am UTC

Hello Sir,
I was trying to understand the explanation you gave to the previous question. Here is our Statspack output, taken over a 30-minute interval.

Statistic                              Total   per Second   per Trans
table fetch by rowid               1,397,065        388.1     1,401.3
table fetch continued row         14,005,777      3,890.5    14,047.9
table scan blocks gotten           9,759,750      2,711.0     9,789.1
table scan rows gotten            19,781,359      5,494.8    19,840.9
table scans (long tables)                 92          0.0         0.1
table scans (rowid ranges)                 0          0.0         0.0
table scans (short tables)             4,387          1.2         4.4

We have 4 tables which have more than 255 columns. Each of these has around 100 thousand rows. The top SQL queries with high logical and physical reads involve these 4 tables. One of the tables has around 600 columns.

There are 450 tables which have a column of type CLOB, LONG or LONG RAW. In all, some 800 columns are one of these types.

Is our 'table fetch continued row' value a real concern? What do you suggest -- should I try to reorganize the tables with high chain_cnt values?


Tom Kyte
March 05, 2007 - 2:19 pm UTC

if you select * from a table with more than 255 columns, you will get a table fetch continued row - no matter what.

if you have tables with longs and long raws, you will almost certainly get the table fetch continued row if you retrieve the long/long raw (but not clob/blob - totally different implementations)


since you do that, table fetch continued row is a FACT of life, no avoiding it.
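For reference, a sketch of how you might see the 255-column effect Tom describes for yourself -- wide_t is a made-up table and @mystat is the script used earlier in this thread:

declare
    l_sql varchar2(32767) := 'create table wide_t ( c1 number';
begin
    -- build a 300-column table
    for i in 2 .. 300 loop
        l_sql := l_sql || ', c' || i || ' number';
    end loop;
    execute immediate l_sql || ' )';
    -- give a column past the 254th a value, forcing a second row piece
    execute immediate 'insert into wide_t ( c1, c300 ) values ( 1, 1 )';
    commit;
end;
/

@mystat "continued row"
select * from wide_t;
@mystat "continued row"

The statistic should be higher on the second report, because fetching the trailing columns requires the second row piece.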

does this still hold

Sokrates, August 19, 2013 - 4:12 pm UTC

if you select * from a table with more than 255 columns, you will get a table fetch continued row

is this still true in current releases (>= 11.2)?

Thanks
Tom Kyte
August 28, 2013 - 5:26 pm UTC

yes. as long as one of the columns past 254 has a value - yes.

found it

Sokrates, August 19, 2013 - 6:17 pm UTC

... http://docs.oracle.com/cd/E16655_01/server.121/e17633/logical.htm#CNCPT1055
Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks.