Question and Answer

Tom Kyte

Thanks for the question, Prince.

Asked: June 26, 2002 - 9:47 pm UTC

Last updated: February 21, 2008 - 4:50 pm UTC

Version: 8.1.7.4

Viewed 1000+ times

You Asked

Dear Tom,

I have a table with 25% chained rows.

I am trying to determine the impact of this chaining and rebuild it if needed.

To determine whether I need to reorg this table, I did the following test (logically, what I am doing seems correct).

Though the statistics show there are a lot of chained rows, the "table fetch continued row" statistic from autotrace doesn't validate it.

Could you please give me an explanation of this behaviour? Am I doing anything wrong here?

Note: Though I didn't choose the "explain" option of autotrace, OEM showed that it was doing a full table scan (FTS).

Thanks,

SQL> select count(*) from chained_rows where table_name = 'MYTABLE';

COUNT(*)
----------
258683

SQL> select table_name, last_analyzed, avg_row_len, num_rows, chain_cnt from dba_tables
where owner = 'DB_USER' and table_name = 'MYTABLE'

TABLE_NAME LAST_ANALYZED AVG_ROW_LEN NUM_ROWS CHAIN_CNT
------------------------------ ---------------- ----------- ---------- ----------
MYTABLE 2002-06-25 11:35 35 1061399 277139


SQL> select a.name, b.value
from v$statname a, v$mystat b
where a.statistic#=b.statistic#
and name like 'table%';


NAME VALUE
----------------------------------------------------------------- ----------
table scans (short tables) 28
table scans (long tables) 7
table scans (rowid ranges) 0
table scans (cache partitions) 0
table scans (direct read) 0
table scan rows gotten 2329430
table scan blocks gotten 104510
table fetch by rowid 1573
table fetch continued row 99


SQL> set autotrace traceonly stat ;
/* there is index only on the CNY# col */
SQL> select /*+ full */ CNY#, PAYMENTDATE from DB_USER.MYTABLE ;

1064428 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
13 db block gets
81903 consistent gets
8377 physical reads
180 redo size
56728523 bytes sent via SQL*Net to client
7877096 bytes received via SQL*Net from client
70963 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1064428 rows processed


SQL> select a.name, b.value
from v$statname a, v$mystat b
where a.statistic#=b.statistic#
and name like 'table%';

9 rows selected.


Statistics
----------------------------------------------------------
136 recursive calls
0 db block gets
34 consistent gets
0 physical reads
0 redo size
1245 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
12 rows processed

SQL> set autotrace off
SQL> /

NAME VALUE
----------------------------------------------------------------- ----------
table scans (short tables) 30
table scans (long tables) 8
table scans (rowid ranges) 0
table scans (cache partitions) 0
table scans (direct read) 0
table scan rows gotten 3393860
table scan blocks gotten 186411
table fetch by rowid 1583
table fetch continued row 99

9 rows selected.



and Tom said...

This is a cool question actually -- lots to be learned from this one about how the data is processed.

I'll guess that 100% of your rows that are chained are actually MIGRATED, which is a special kind of "chaining". (if you have my book, I go into great detail on this -- with pictures and everything).

My other guess would be that CNY#, PAYMENTDATE are near the "front" of your table (they come first in the create table statement).

Now let's see why.... We need a little background first.

We will migrate a row when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently). A migration means that the entire row will move and we just leave behind the "forwarding address". So, the original block just has the rowid of the new block and the entire row is moved.

Then there is the true "chained row" (well, a migrated row is a specialized case of a chained row...). With a true "chained row", part of the DATA of a row is on one block and part of it is on another block. So, instead of just having a forwarding address on one block -- and the data on another (that's a migrated row), we have data on two or more blocks....


When we FULL SCAN a table, we actually *ignore* the forwarding addresses (what we call the head rowpiece for a row). We know that as we continue the full scan, we'll eventually get to that row, so we can ignore the forwarding address and just process the row when we get there. Hence, in a full scan, migrated rows don't cause us to do any real extra work -- they are meaningless. Oh sure, the forwarding address is consuming a couple of bytes on the block -- it is overhead -- but frankly, it is meaningless.

When we INDEX READ into a table -- then a migrated row will cause additional IO's. That is because the index will tell us "go to file X, block Y, slot Z to find this row". But when we get there we find a message that says "well, really go to file A, block B, slot C to find this row". We have to do another IO (logical or physical) to find the row.


"truly" chained rows affect us differently. Here, it depends on the data we need. If we had a row with two columns that was spread over two blocks -- the query:

select column1 from t

where column1 is the "first" column -- would not cause any table fetch continued row. It would not actually have to get column2 -- it would not follow the chained row all of the way out. On the other hand, if we ask for

select column2 from t

then you would in fact see a table fetch continued row.



This behavior is so predictable that setting up an illustrative example is easy. That is what I'll do here. I am using an 8k block size, so if you use something different you'll have to adjust the column sizes appropriately (or add more columns, as the case may be).

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t ( x int primary key, a char(2000), b char(2000), c char(2000), d char(2000), e char(2000) );

Table created.

that is our table. The char(2000)'s will let us easily cause rows to migrate or chain. I used 5 columns a, b, c, d, e so that the total rowsize can grow to about 10k -- bigger than my block, ensuring I can truly chain a row...

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t(x) values ( 1 );
1 row created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t(x) values ( 2 );
1 row created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t(x) values ( 3 );
1 row created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.

we don't care about seeing a, b, c, d, e -- just fetching them. They are really wide, so we'll suppress their display....

ops$tkyte@ORA817DEV.US.ORACLE.COM> column a noprint
ops$tkyte@ORA817DEV.US.ORACLE.COM> column b noprint
ops$tkyte@ORA817DEV.US.ORACLE.COM> column c noprint
ops$tkyte@ORA817DEV.US.ORACLE.COM> column d noprint
ops$tkyte@ORA817DEV.US.ORACLE.COM> column e noprint

ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

X
----------
1
2
3

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 0

Now that is to be expected -- the rows came out in the order we put them in (Oracle full scanned this query; it processed the data as it found it). Also expected: the table fetch continued row (TFCR from now on) count is zero. This data is so small right now, we know that all three rows fit on a single block. No chaining.

Now, let's do some updates in a specific way. I want to demonstrate the MIGRATION issue and how it affects (or rather DOESN'T affect) the full scan:



ops$tkyte@ORA817DEV.US.ORACLE.COM> update t set a = 'x', b = 'x', c = 'x' where x = 3;
1 row updated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> update t set a = 'x', b = 'x', c = 'x' where x = 2;
1 row updated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> update t set a = 'x', b = 'x', c = 'x' where x = 1;
1 row updated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.

Note the order of updates -- I did last row first, first row last...

ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

X
----------
3
2
1

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 0

Interesting, the rows came out "backwards" now (yet further proof that rows do NOT generally come out in the order of insertion -- but that's another question). That is because we updated row 3 first. It did not have to migrate, but it filled up block 1. We then updated row 2. It migrated to block 2 -- with row 3 hogging all of the space, it had to. We then updated row 1; it migrated to block 3. We migrated rows 2 and 1, leaving 3 where it started (you can analyze the table to confirm this yourself).

So, when Oracle full scanned the table, it found row 3 on block 1 first, row 2 on block 2 second and row 1 on block 3 third. It IGNORED the head rowid piece on block 1 for rows 1 and 2 and just found the rows as it scanned the table. That is why the TFCR = 0 still.

So, let's see a migrated row affecting the TFCR...


ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t where x = 3;

X
----------
3

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 0

I'm using RBO and RBO is notorious for being index happy -- hence, this was an index range scan / table access by rowid. We didn't increment the TFCR yet since row 3 isn't migrated...

ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t where x = 1;

X
----------
1

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 1

but row 1 is! and there it is. Using the index, we forced a TFCR...

Now, let's see the effect of a CHAINED row:


ops$tkyte@ORA817DEV.US.ORACLE.COM> update t set d = 'x', e = 'x' where x = 3;
1 row updated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.


row 3 no longer fits on block 1. With d and e set, the rowsize is about 10k. It is truly chained...

ops$tkyte@ORA817DEV.US.ORACLE.COM> select x, a from t where x = 3;

X
----------
3

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 1

fetch something from the "head" of the table and lo and behold -- it will NOT cause a TFCR. Column A is on the "head rowpiece" block. No extra IO to get it....


ops$tkyte@ORA817DEV.US.ORACLE.COM> select x, d, e from t where x = 3;

X
----------
3

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 2

But going after columns D and E via the index does increment the TFCR. We had to put the row back together from its head to its tail to get that data.

Interestingly the full scan is now affected as well:


ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

X
----------
3
2
1

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 3

the TFCR was incremented here because of row 3 -- we had to assemble it in its entirety to get the trailing columns. Rows 1 and 2, even though they are "chained" -- migrated really -- don't add to the TFCR since we full scanned.

continuing on:



ops$tkyte@ORA817DEV.US.ORACLE.COM> select x, a from t;

X
----------
3
2
1

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 3


No TFCR, since we didn't have to assemble row 3; we just needed the first two columns.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select x, e from t;

X
----------
3
2
1

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 4

but by asking for d and e -- we did.....

there you go. You most likely have only MIGRATED ROWS but even if they are truly chained, the columns you are selecting are at the front of the table.

Migrated rows affect OLTP systems which use indexed reads to read singleton rows. In the worst case, you can add an extra IO to all reads which would be really bad.

Truly chained rows -- well, they affect everyone.

So, how can you decide if you have migrated or truly chained? glad you asked...


ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(e) from t;

COUNT(E)
----------
1

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat continued
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
2 from v$statname a, v$mystat b
3 where a.statistic# = b.statistic#
4 and lower(a.name) like '%' || lower('&1')||'%'
5 --and b.value > 0
6 /

NAME VALUE
------------------------------ ----------
table fetch continued row 5


Just count the LAST column in that table -- That'll force us to construct the entire row. Now, we can see that the count did 1 TFCR but we have:

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select chain_cnt from user_tables
2 where table_name = 'T';

CHAIN_CNT
----------
3

three rows that are chained. Apparently, 2 of them are MIGRATED (rows 1 and 2) and one is truly chained (row 3).






Comments

Dave, June 27, 2002 - 9:37 am UTC

Now that was a real education. Everything you need to know about chaining and migration in one response.

Excellent explanation, useful information

Alan, June 27, 2002 - 9:59 am UTC


wow

A reader, June 27, 2002 - 10:04 am UTC

I agree. Talk about thorough. And fun to read too. Nice answer.

Thanks Tom.


Fabulous explanation - especially liked change in FT scan order

Bill Bunke, June 27, 2002 - 11:48 am UTC

Tom,

Simple (yet powerful) examples that are easy to follow as the scenario unfolds.

The real-world example of how the query order of rows can change is especially helpful; I always knew this was true, but had nothing to back it up with. Now I can refer to your data when the topic arises.

Excellent!

Prince, June 27, 2002 - 1:29 pm UTC

Wow! Perfect explanation.

It makes a lot of sense now. I was puzzled by the output.

I am very sure that it is caused by the migrated rows (the avg rowlen is 35 and the table has only a few columns). I guess it was caused by adding another column with a default value after the table had been in use (populated) for a while. Also, this table undergoes lots of updates. (Maybe I have to increase the PCTFREE further.)

Also I made sure to take the first and last column of the table while I was doing the test.

As always, you covered the subject so thoroughly and answered a lot of other (related) questions as well.

Just a question: I am thinking export/import (lots of rows) is the best way to reorg this table, if I have to. Do you see any better solution? (I have 8.1.7.4 SE)

Thanks,


Tom Kyte
June 27, 2002 - 1:42 pm UTC

Alter table T move;

then rebuild the indexes.

Much better than export/import.
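A minimal sketch of that sequence (MYTABLE and MYTABLE_PK are illustrative names, not from the question). The move rewrites the table, so the rowids change and every index on it becomes UNUSABLE until rebuilt:

alter table mytable move;

-- the rowids changed, so all indexes on the table are now UNUSABLE:
select index_name, status from user_indexes where table_name = 'MYTABLE';

alter index mytable_pk rebuild;

-- re-analyze so chain_cnt reflects the reorganized table:
analyze table mytable compute statistics;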

How did I forget that?

Prince., June 27, 2002 - 1:48 pm UTC

sigh! Why didn't I think of this one? (In fact, I did this once for another purpose.) Maybe my brain has been overwritten by some of the docs I read on Metalink on row-chaining and I wasn't thinking of other alternatives.

Thank you very very much for the suggestion.

Finding table chained rows

PN, July 17, 2002 - 12:02 am UTC

Hi Tom
Please look at the queries and their output:

1.
SELECT 'Chained or Migrated Rows = '||value
FROM v$sysstat
WHERE name = 'table fetch continued row';

output=Chained or Migrated Rows = 652572

2.
select chain_cnt
from user_tables
where CHAIN_CNT >0;
output = 0 records.
I don't know whether I am right or not in finding the table names for chained and/or migrated rows.
Please help with how to find them for corrective action.
Rgds
PN

Tom Kyte
July 17, 2002 - 10:14 am UTC

You are comparing apples with toaster ovens here.

V$sysstat tells you how many times, since the system (database) was started, you did a table fetch continued row -- over ALL TABLES.

user_tables tells you, IMMEDIATELY AFTER AN ANALYZE (it will be null otherwise), how many rows in a table are chained.

You could have 1 table with 1 chained row that was fetched 652,572 times.
You could have 652,572 tables -- each with a chained row -- each of which was fetched once.
You could have any combination of the above -- any combo.

Also, 652k -- maybe that's good, maybe that's bad. It is a function of
a) how long the db has been up
b) how many rows this is as a percentage of total fetched rows (eg: if 0.001% of your fetches are table fetch continued row -- WHO CARES); a sketch of that computation follows
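A hedged sketch of computation (b), assuming 'table fetch by rowid' plus 'table scan rows gotten' is a fair approximation of total rows fetched (the statistic names are real; the ratio definition is an assumption):

select cont.value tfcr,
       round( cont.value / (by_rowid.value + scan_rows.value) * 100, 4 ) pct_of_fetches
  from v$sysstat cont, v$sysstat by_rowid, v$sysstat scan_rows
 where cont.name      = 'table fetch continued row'
   and by_rowid.name  = 'table fetch by rowid'
   and scan_rows.name = 'table scan rows gotten';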


A reader, July 17, 2002 - 1:10 pm UTC

Excellent Explanation!!!!

I have this data from v$sysstat.
We are using the Siebel CRM application. Major tables (Account, Contact, Opportunity, Activity, Service Request, etc.) in the application have more than 200 columns.

We have 8k block size.

STATISTIC# NAME CLASS VALUE
---------- ------------------------------ ---------- ----------
150 table scans (short tables) 64 1471099
151 table scans (long tables) 64 11391
152 table scans (rowid ranges) 64 0
153 table scans (cache partitions) 64 0
154 table scans (direct read) 64 0
155 table scan rows gotten 64 1980298337
156 table scan blocks gotten 64 58991204
157 table fetch by rowid 64 1960637410
158 table fetch continued row 64 11510010 <-----

What is your opinion on this (row chaining and migration and performance)?

Since major tables have more than 200 columns, should we increase the block size to 16K or more?

Thanks for very good above explanation.




Tom Kyte
July 17, 2002 - 2:10 pm UTC

One way to look at that is that 0.58% of your fetches result in a table fetch continued row....

OLTP systems should not go to 16k blocks.

Look at the avg row size for that table - is it in fact that table?

steps for you:

a) determine that this is really an issue you must solve (0.5% -- hmmm)
b) isolate the table(s) first (a query like the sketch below can help), then
c) figure out the average row size and see if you can fix it
d) if you can and you still want to -- just fix those rows that are chained
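For step (b), a minimal sketch -- remember chain_cnt is only populated by ANALYZE, so analyze the suspect tables first (the table name below is illustrative):

analyze table my_big_table compute statistics;

select table_name, num_rows, chain_cnt, avg_row_len,
       round( chain_cnt/num_rows*100, 2 ) pct_chained
  from user_tables
 where chain_cnt > 0
 order by pct_chained desc;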

Capital Tom!!!!!

Ashok, July 17, 2002 - 9:22 pm UTC


Great

A reader, July 17, 2002 - 10:27 pm UTC

Got It.
Thanks
PN

over 96% rows chained

Mike, August 31, 2002 - 11:48 am UTC

Tom:
I found that one of the tables in my production database (a datamart) has over 96% of its rows chained; the block size is 4k:

select CHAIN_CNT, chain_cnt/num_rows pct_chained, AVG_ROW_LEN, PCT_FREE, PCT_used, table_name
  from dba_tables where CHAIN_CNT/num_rows > 0.6 and chain_cnt > 0;


                       AVG
  CHAIN                ROW
    CNT PCT_CHAINED    LEN   PCT_FREE   PCT_USED TABLE NAME
------- ----------- ------ ---------- ---------- ------------------------------
 39,222  .964918323    529         10         90 SF75_VIEW


Here is the table definition, which does not look like a fat table. Why does this table chain so easily?

SQL> desc sf75_view
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 BAA_SSAN                                  NOT NULL VARCHAR2(9)
 QHB_CURR_APPRAISAL_DATE                            DATE
 QHC_CURR_APPRAISAL_RATING                          VARCHAR2(1)
 QBJ_CURR_APPRAISAL_RATING                          VARCHAR2(1)
 QBJ_CT263_DESC                                     VARCHAR2(90)
 QBK_APPR_PATTERN_CURR                              VARCHAR2(1)
 QBK_CT270_DESC                                     VARCHAR2(80)
 QHB_PRIOR_APPRAISAL_DATE1                          DATE
 QHC_PRIOR_APPRAISAL_RATING1                        VARCHAR2(1)
 QHC_CT269_DESC1                                    VARCHAR2(80)
 QHJ_PRIOR_APPR_TYP_RATING1                         VARCHAR2(1)
 QHJ_CT263_DESC1                                    VARCHAR2(90)
 QHK_APPR_PATTERN_HIST1                             VARCHAR2(1)
 QHK_CT270_DESC1                                    VARCHAR2(80)
 QHB_PRIOR_APPRAISAL_DATE2                          DATE
 QHC_PRIOR_APPRAISAL_RATING2                        VARCHAR2(1)
 QHC_CT269_DESC2                                    VARCHAR2(80)
 QHJ_PRIOR_APPR_TYP_RATING2                         VARCHAR2(1)
 QHJ_CT263_DESC2                                    VARCHAR2(90)
 QHK_APPR_PATTERN_HIST2                             VARCHAR2(1)
 QHK_CT270_DESC2                                    VARCHAR2(80)
 QHB_PRIOR_APPRAISAL_DATE3                          DATE
 QHC_PRIOR_APPRAISAL_RATING3                        VARCHAR2(1)
 QHC_CT269_DESC3                                    VARCHAR2(80)
 QHJ_PRIOR_APPR_TYP_RATING3                         VARCHAR2(1)
 QHJ_CT263_DESC3                                    VARCHAR2(90)
 QHK_APPR_PATTERN_HIST3                             VARCHAR2(1)
 QHK_CT270_DESC3                                    VARCHAR2(80)
 QHB_PRIOR_APPRAISAL_DATE4                          DATE
 QHC_PRIOR_APPRAISAL_RATING4                        VARCHAR2(1)
 QHC_CT269_DESC4                                    VARCHAR2(80)
 QHJ_PRIOR_APPR_TYP_RATING4                         VARCHAR2(1)
 QHJ_CT263_DESC4                                    VARCHAR2(90)
 QHK_APPR_PATTERN_HIST4                             VARCHAR2(1)
 QHK_CT270_DESC4                                    VARCHAR2(80)

SQL>
 

Tom Kyte
August 31, 2002 - 12:34 pm UTC

I'll guess -- hypothesize....

The life of a row in this table starts with the row being skinny. Makes sense, based on the column names -- the rows look like they store history, so when first inserted, almost all of the trailing columns (the history) are NULL.

So, you insert a bunch of skinny rows.... You get hundreds per block. Now, you start modifying these rows over time -- adding the history as the present becomes the past. The row gets fatter (hey, sort of like us, the older we are...). Now, where 100 rows used to fit -- only 6 or 7 do (avg rowsize = 500 bytes over time; 4k blocks, 3500 bytes and it's full). The other 95 or so rows MIGRATE to other blocks to live. No more room for them on the block.

So, now that you know that about 6/7 rows fit per block on average here -- what should you do?  Well, you should set the PCTFREE much much higher so that only 6/7 rows can get inserted per block.  Consider:


ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t ( x int, y varchar2(4000) );
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t
  2  select object_id, null
  3    from all_objects
  4   where rownum <= 662*3;

1986 rows created.

<b>I picked 662*3 because I knew I would get 662 rows/block on my system and I wanted just 3 blocks of data to start... We'll end up with 7 rows/block soon</b>


ops$tkyte@ORA817DEV.US.ORACLE.COM> select dbms_rowid.rowid_block_number(rowid), count(*)
  2    from t
  3   group by dbms_rowid.rowid_block_number(rowid)
  4  /

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  74        662
                                  75        662
                                  76        662

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;
Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select CHAIN_CNT,
  2         round(chain_cnt/num_rows*100,2) pct_chained,
  3         AVG_ROW_LEN, PCT_FREE , PCT_used
  4    from user_tables
  5   where table_name = 'T'
  6  /

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
---------- ----------- ----------- ---------- ----------
         0           0           8         10         40

<b>No chaining of course.  Our rows started small, so now we update them</b>

ops$tkyte@ORA817DEV.US.ORACLE.COM> update t set y = rpad( '*', 1000, '*' );
1986 rows updated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;
Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select CHAIN_CNT,
  2         round(chain_cnt/num_rows*100,2) pct_chained,
  3         AVG_ROW_LEN, PCT_FREE , PCT_used
  4    from user_tables
  5   where table_name = 'T'
  6  /

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
---------- ----------- ----------- ---------- ----------
      1986         100        1017         10         40

<b>There we go, we have the worst case, all of them moved -- all of them!</b>

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t move;
Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select dbms_rowid.rowid_block_number(rowid), count(*)
  2    from t
  3   group by dbms_rowid.rowid_block_number(rowid)
  4  /

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                               53834          7
                               53835          7
                               53836          7
                               53837          7
...
                               54115          7
                               54116          7
                               54117          5

284 rows selected.

<b>after unchaining the table -- we see -- 7 rows/block basically. And none of them are chained (they all fit on the block itself)</b>

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;
Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select CHAIN_CNT,
  2         round(chain_cnt/num_rows*100,2) pct_chained,
  3         AVG_ROW_LEN, PCT_FREE , PCT_used
  4    from user_tables
  5   where table_name = 'T'
  6  /

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
---------- ----------- ----------- ---------- ----------
         0           0        1011         10         40


<b>So, armed with that knowledge, what can you do?  Well, if you KNOW you'll insert lots of rows with little data and be updating it over time, you can reserve most of the blocks for these updates.  I played around a bit to get the right pctfree</b>

ops$tkyte@ORA817DEV.US.ORACLE.COM> drop table t;
Table dropped.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t
  2  ( x int, y varchar2(4000) )
  3  pctfree 99 pctused 1;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t
  2  select object_id, null
  3    from all_objects
  4   where rownum <= 35;

35 rows created.

<b>and put in 5 blocks of data..</b>

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select dbms_rowid.rowid_block_number(rowid),
count(*)
  2    from t
  3   group by dbms_rowid.rowid_block_number(rowid)
  4  /

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  74          7
                                  75          7
                                  76          7
                                  77          7
                                  78          7

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;
Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select CHAIN_CNT,
  2         round(chain_cnt/num_rows*100,2) pct_chained,
  3         AVG_ROW_LEN, PCT_FREE , PCT_used
  4    from user_tables
  5   where table_name = 'T'
  6  /

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
---------- ----------- ----------- ---------- ----------
         0           0           8         99          1

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> update t set y = rpad( '*', 1000, '*' );

35 rows updated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select CHAIN_CNT,
  2         round(chain_cnt/num_rows*100,2) pct_chained,
  3         AVG_ROW_LEN, PCT_FREE , PCT_used
  4    from user_tables
  5   where table_name = 'T'
  6  /

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
---------- ----------- ----------- ---------- ----------
         0           0        1011         99          1

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM>

<b>and now you can see there is no migration -- the rows can expand.  I've set the space aside in the beginning</b>

So, what should you do? I would recommend an alter table MOVE to unmigrate these rows (and then alter index rebuilds as well -- since the indexes will be unusable after this operation). AFTER the alter table move, you would do an ALTER TABLE to up the pctfree setting for all subsequent inserts. (You don't necessarily want to play with pctfree on the move itself if the existing rows are as "fat" as they are going to get. On the other hand, if the existing rows are only about half as fat as they will get, you could use a higher pctfree on the alter table move, like 40% or 50%, to leave room for their updates.)

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t pctfree 75
  2  ;

Table altered.


 

Avoiding Row Migration

Basharat, September 01, 2002 - 10:07 am UTC

Tom, what's the best technique to avoid row migration: breaking the big columns into smaller ones, or re-creating the database with a larger block size? Albeit neither of the above guarantees zero row migration.

Isn't it possible for the Oracle server to update the rowid of a migrated row, instead of keeping a pointer to the new location, to avoid the additional logical/physical IO?

Tom Kyte
September 01, 2002 - 10:23 am UTC

You would have to go to each index and update it as well. Meaning, we would have to take the before image of the row and do an index range scan on every index to find the single leaf entry that points back to us, and move it in the index (the rowid is changing, and the rowid is part of the entry in the leaf node -- so we most likely need to move the index entry as well).

Applications that use rowids like forms, bc4j and dozens of others would be totally broken as well. They count on the rowid being stable for the life of the row.

The best way to avoid it is to have some understanding of the life of a typical row in your table and set pctfree accordingly. (or realizing that it is generally only the truly exaggerated cases like this one that cause concern -- just worry about it on tables that need worrying)

Another cool trick...

Connor, September 02, 2002 - 6:03 am UTC

... is to use minimise_records_per_block when (re)loading a table that has disparate row sizes throughout.

(Courtesy: JL's book)

Cheers
Connor

Tom Kyte
September 02, 2002 - 10:20 am UTC

Yeah, I thought about that BUT, it is a little trickier.

What you would have to do is:

rebuild the table
then minimize records/block.


But if you were INSTALLING your app, you would have to

create table
insert N rows where N = number of rows/block you want
alter minimize records/block
delete the rows

So, for a rebuild -- it works well. For a create, PCTFREE is perhaps "better".
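A sketch of that install-time sequence on a hypothetical table T, wanting at most 7 rows per block:

create table t ( x int, y varchar2(4000) );

-- seed exactly 7 rows so the fullest block holds 7:
insert into t select rownum, null from all_objects where rownum <= 7;
commit;

-- record the current maximum rows/block as a permanent cap:
alter table t minimize records_per_block;

-- throw the seed rows away; the cap stays for all future inserts:
delete from t;
commit;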

Very useful!

Cynthia Pinchot, September 25, 2002 - 9:30 pm UTC


Hats off to you Tom.

A reader, October 16, 2002 - 5:10 pm UTC


Chained Rows and Autotrace

Venkat, October 27, 2002 - 5:01 am UTC

Other than Tom, who can explain like this???

A reader, February 28, 2003 - 10:55 pm UTC

Thanks for your great explanation on row migrating/chaining.

I have a similar question. I have a table “A” with 70% row chaining. The table size is 400 Mb. When I created a copy of that table, i.e.,

CREATE TABLE B AS SELECT * FROM A;

Then the table B size is 35 Mb.

I am sure that table got populated with tons of inserts and updates; there were 0 deletes on that table.

Is row chaining/migrating going to increase the table size very badly?

Here are the Table storage parameters::

TABLESPACE myTablespace NOLOGGING
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 65536
NEXT 65536
PCTINCREASE 0
MINEXTENTS 1
MAXEXTENTS 10000
FREELISTS 1 FREELIST GROUPS 1 )
NOCACHE;


Tom Kyte
March 01, 2003 - 9:56 am UTC

do you have the original storage parameters for the TABLE in question -- not the tablespace but the table.

The tablespace provides defaults which can and many times are overridden by the create table itself.

Reader

A reader, March 01, 2003 - 1:03 pm UTC

"
ops$tkyte@ORA817DEV.US.ORACLE.COM> select CHAIN_CNT,
2 round(chain_cnt/num_rows*100,2) pct_chained,
3 AVG_ROW_LEN, PCT_FREE , PCT_used
4 from user_tables
5 where table_name = 'T'
6 /

CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------
0 0 8 10 40
"

The average row length = 8, yet the maximum size of each column is actually 3 (plus null):
DUMP(X)
-----------------------
DUMP(Y)
-----------------------
Typ=2 Len=3: 194,20,87
NULL

What overhead is included in the avg_row_len value?

Tom Kyte
March 01, 2003 - 1:09 pm UTC

max size is not 3 -- it is much larger (numbers are varying width strings)

ops$tkyte@ORA920> create table t ( x int );

Table created.

ops$tkyte@ORA920> insert into t select object_id from all_objects;

30540 rows created.

ops$tkyte@ORA920> insert into t values ( rpad('9',38,'9') );

1 row created.

ops$tkyte@ORA920> insert into t values ( '-'||rpad('9',38,'9') );

1 row created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select max(to_number(rtrim(substr( dump(x), instr(dump(x),'Len=')+4, 2 ),':'))) d,
  2         max(vsize(x))
  3    from t;

         D MAX(VSIZE(X))
---------- -------------
        21            21



but anyway -- null/not null flags, length bytes, some row overhead.


 

A reader, March 01, 2003 - 8:12 pm UTC

>>do you have the original storage parameters for the TABLE in question -- not the
tablespace but the table.

>>The tablespace provides defaults which can and many times are overridden by the
create table itself.

The table doesn't have any storage parameters of its own; it inherited the tablespace default parameters.

Thanks,

Tom Kyte
March 02, 2003 - 9:53 am UTC

which could have been different, altered.


No, I do not see how simple row chaining could account for a 400meg down to 35meg occurrence.

Something else is at work here -- storage parameters were different, or someone allocated many more extents manually, or the data actually was deleted, or the data was allowed to grow really big and then shrunk back (I'll demo that one below) -- but without an example, or all of the details, we cannot really say.


Consider:

ops$tkyte@ORA920> create table t
  2  ( a varchar2(4000) default rpad('*',4000,'*'),
  3    b varchar2(4000) default rpad('*',4000,'*'),
  4    c varchar2(4000) default rpad('*',4000,'*'),
  5    d varchar2(4000) default rpad('*',4000,'*'),
  6    e varchar2(4000) default rpad('*',4000,'*'),
  7    f varchar2(4000) default rpad('*',4000,'*'),
  8    g varchar2(4000) default rpad('*',4000,'*'),
  9    h varchar2(4000) default rpad('*',4000,'*'),
 10    i varchar2(4000) default rpad('*',4000,'*'),
 11    j varchar2(4000) default rpad('*',4000,'*')
 12  );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t (a)
  2  select rpad('*',4000,'*')
  3    from all_objects
  4   where rownum <= 1000;

1000 rows created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> exec show_space( 'T' );
Free Blocks.............................
Total Blocks............................6144
Total Bytes.............................50331648
Total MBytes............................48
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................9
Last Used Ext BlockId...................118664
Last Used Block.........................128

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> update t set b='x', c='y', d='z', e=null, f='a',g=1,h=2,i=3,j=4;

1000 rows updated.

ops$tkyte@ORA920> drop table t2;

Table dropped.

ops$tkyte@ORA920> create table t2 as select * from t;

Table created.

ops$tkyte@ORA920> exec show_space( 'T2' )
Free Blocks.............................
Total Blocks............................1024
Total Bytes.............................8388608
Total MBytes............................8
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................9
Last Used Ext BlockId...................102024
Last Used Block.........................128

PL/SQL procedure successfully completed.


So, T was 45meg -- T2 is 8meg.... same data -- why? Because T got really fat, then really skinny, but a table never decreases in size until re-orged. So.... T2 is smaller than T.

Great explanation!!!! (as usual)

A reader, July 07, 2003 - 4:57 pm UTC


Row Migration

Vinnie, June 08, 2004 - 4:25 pm UTC

When a row is migrated is it deleted from the original block?

Tom Kyte
June 08, 2004 - 5:23 pm UTC

no, a head row piece is left behind with the forwarding address.

so, the row data isn't there, but its address is.

review

Sven, June 16, 2004 - 10:53 am UTC

Hi Tom,

I was playing with pctfree and pctused for one of my tables and came up with the following "strange" results:
DB block size 8k
number of rows in table 9999
Avg Row Len 1156
Blocks 1806
Empty blocks 112
Avg Space 1563
Chain Count 0 (after analyzing table)
pctfree 12
pctused 75

There is an index on the PK column -- DSF_ID -- and the table has 310 columns.

So, since the chain count is 0, that means no chained or migrated rows, correct?
If I execute query like

select "column number 10"
from table_a
where DSF_ID = 100

I have 3 LIO's which is OK (two index, one table)

But, for a query

select "column number 310"
from table_a
where DSF_ID = 100

I have 4 LIO's - two index, two table.

If I execute:

select count("column number 10")
from table_a;

and after that:

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic#=b.statistic#
and name = 'table fetch continued row';

result is 0;

but for

select count("column number 310")
from table_a;

and after that:

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic#=b.statistic#
and name = 'table fetch continued row';

result is 9999.

According to this, all rows are migrated, but statistics show me different information.
How should I interpret this?

Thanks,






Tom Kyte
June 16, 2004 - 1:11 pm UTC

every row that has more than 255 columns is stored in multiple row pieces.

columns beyond 255 are accessed via the table fetch continued row. They are generally on the same block, so the hit is nominal.
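A quick dictionary query shows which of your tables cross the 255-column boundary and therefore always store rows in multiple row pieces:

select table_name, count(*) column_count
  from user_tab_columns
 group by table_name
having count(*) > 255
 order by table_name;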

review

Sven, June 16, 2004 - 3:22 pm UTC

Does this mean that in tables which have more than 255 columns, every full row access (select * from ...) incurs one (or more than one) additional LIO?

Tom Kyte
June 16, 2004 - 3:44 pm UTC

a cheap one, yes (you most likely already have the block you need)

Chained Rows...

r00ter, August 12, 2004 - 10:32 pm UTC

Tom!
You ROCK!!
You really do, man!
I have been working with Oracle for quite some time, but not being a DBA I did not have to deal with chained rows. Your explanation was a very good "patch" for my skills.

rebuilding

Dave, September 01, 2004 - 11:22 am UTC

Hi Tom,

I have a problem with chained rows where my pctfree setting was too low (default 10%) and the rows start skinny and grow to be terribly fat, so we got a lot of migrated rows.

Now I'm trying to fix this with this strategy.

Reorg the table with a pctfree setting of 20% so that the current fat rows don't waste too much space, and the current skinny rows can grow a bit (and maybe some will migrate, which is ok).

Then do alter table xxx pctfree 60; (which should apply to new rows)

So now new rows will have the correct space and old rows will be ok?

or have I misunderstood what alter table xxx.. actually does?

I don't want to set it to 60% before the reorg, as the current fat rows will have a lot of wasted space.

Will this work? (9.2.0.5)

Thanks

Tom Kyte
September 01, 2004 - 11:49 am UTC

what you'll have after the rebuild is

a) nothing really on the freelist
b) blocks that are 80% full (20% for growth)
c) any newly added block (from above HWM) will be 60% free space after inserts
d) pctused will govern when and if a block from b) goes back onto the freelist.

yes, it would work (assuming no ASSM in place for this example):

ops$tkyte@ORA9IR2> create table t ( x int, y char(50) ) pctfree 20;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t select rownum, 'x' from all_objects where rownum <= 323;
 
323 rows created.
 
ops$tkyte@ORA9IR2> select dbms_rowid.rowid_block_number(rowid) , count(*) from t
  2  group by dbms_rowid.rowid_block_number(rowid) order by 1;
 
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  10        109
                                  11        107
                                  12        107
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
<b>that represents your "after rebuild" state</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table t pctfree 60;
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t select -rownum, 'x' from all_objects where rownum <= 323;
 
323 rows created.
 
ops$tkyte@ORA9IR2> select dbms_rowid.rowid_block_number(rowid) , count(*) from t
  2  group by dbms_rowid.rowid_block_number(rowid) order by 1;
 
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  10        109
                                  11        107
                                  12        107
                                  13         53
                                  14         53
                                  15         52
                                  16         52
                                  17         52
                                  18         52
                                  19          9
 
10 rows selected.
 
ops$tkyte@ORA9IR2> select dbms_rowid.rowid_block_number(rowid) , count(*) from t
  2  where x < 0
  3  group by dbms_rowid.rowid_block_number(rowid) order by 1;
 
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  13         53
                                  14         53
                                  15         52
                                  16         52
                                  17         52
                                  18         52
                                  19          9
 
7 rows selected.
 
<b>newly inserted data will use the pctfree 60, old data blocks will be pctfree 20</b> 

Dave, September 01, 2004 - 3:23 pm UTC

thanks for that.

More research here and elsewhere also gives me this option:

empty out, insert sample data to fill a block as much as I want, and then alter table xxx minimize records_per_block and set pctfree to 0.

Seems like a useful tool (albeit not what it was designed for)

do you have a preference of what you would do here, knowing more data will be inserted (data is never deleted from this table) and no ASSM.

Wasting space in a block isn't a big issue, as disk space isn't my premium concern (reading more blocks into the buffer cache and aging old stuff out could be, though).

Tom Kyte
September 01, 2004 - 8:04 pm UTC

if you know the max rows/block you want to have -- it would be a very viable option.



Suresh, September 01, 2004 - 3:55 pm UTC

Very useful information. I need some help in understanding the results below, which somehow do not completely match the theories discussed above:

I have a person table with 282 columns. We have a requirement to add a number(10) field to this table. The moment we add this new field and update it, every row gets "truly" chained (I will prove below why truly chained). And the size/number of blocks used by the table just about doubles.
I understand tables with more than 255 columns have intra-block chaining, but then why should the number of blocks increase?

SQL> drop table TEMP_CHAIN;

Table dropped.

SQL> create table TEMP_CHAIN as (select * from my_person where rownum < 21); 

Table created.

SQL> analyze table TEMP_CHAIN compute statistics;

Table analyzed.

SQL> select avg_row_len, BLOCKS, chain_cnt, PCT_FREE, PCT_used from user_tables where table_name = 'TEMP_CHAIN';

AVG_ROW_LEN     BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
----------- ---------- ---------- ---------- ----------
        716          3          0         10         40
        

===> Note that no chaining at this point. And the Blocks used to store this table:


SQL> select dbms_rowid.rowid_block_number(rowid),  count(*) from TEMP_CHAIN group by dbms_rowid.rowid_block_number(rowid);

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                              117762         10
                              117763          9
                              117764          1
                              


SQL> alter table TEMP_CHAIN add (xx number(10));

Table altered.

SQL> update TEMP_CHAIN set xx = 1;

20 rows updated.

SQL> commit;

Commit complete.

SQL> analyze table TEMP_CHAIN compute statistics;

Table analyzed.

SQL> select avg_row_len, BLOCKS, chain_cnt, PCT_FREE, PCT_used from user_tables where table_name = 'TEMP_CHAIN';

AVG_ROW_LEN     BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
----------- ---------- ---------- ---------- ----------
        772          5         20         10         40

===> See what a difference.... So, two questions:

QUESTION#1: Why chaining? There should have been enough room in the block (10% of the 8K block size, approx. 800 bytes) to accommodate 1 byte for every row, and since there were 10 rows (max) it just needed 10 bytes.

QUESTION#2: What's going on with AVG_ROW_LEN? Why has it increased by 56 Bytes?

===> Let me try to prove it's true row chaining not migration.

SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and lower(a.name) like '%continued%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 0

===> I am forcing TFCR by building the row, if it's in pieces.

SQL> select count(xx) from TEMP_CHAIN;

 COUNT(XX)
----------
        20

SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and lower(a.name) like '%continued%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                20

===> And it is in pieces. Per the above discussion, TFCR increases when there is true chaining or we access the table via an index (which I am certainly not doing). So, another question:

QUESTION#3: Why row chaining not row migration?

Other information:
Block Size: 8K
Oracle: 8.1.7.4
OS: AIX 5

Thanks in advances for all your help. 

Tom Kyte
September 01, 2004 - 8:16 pm UTC

without a test case that I and the rest of the world can run to reproduce, I'll have to fall back on "a hunch"

your table has 282 columns

the trailing columns are NULL -- only columns 1..255 or fewer actually have a value in your 20 row example.

trailing null columns in a table take ZERO bytes.

You added a new "last column" and updated it to not null.

so, the table has a 283rd column. The intervening columns now all take 1 byte (for the null flag; they go from zero to 1).

Now, the rows are truly chained, since in a table with more than 255 columns any row whose column(s) past 255 are not null is stored in multiple row pieces.


here is the example:

ops$tkyte@ORA9IR2> declare
  2          l_stmt long := 'create table t ( x int primary key';
  3  begin
  4          for i in 1 .. 250
  5          loop
  6                  l_stmt := l_stmt || ', c' || i || ' number default 1';
  7          end loop;
  8          for i in 251..281
  9          loop
 10                  l_stmt := l_stmt || ', c' || i || ' number ';
 11          end loop;
 12          execute immediate l_stmt || ')';
 13  end;
 14  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> insert into t (x) values ( 1);
 
1 row created.
 
ops$tkyte@ORA9IR2> analyze table t compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2> select avg_row_len, BLOCKS, chain_cnt, PCT_FREE, PCT_used from user_tables
  2  where table_name = 'T';
 
AVG_ROW_LEN     BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
----------- ---------- ---------- ---------- ----------
        756          5          0         10
 
<b>nothing chained, row length of 756..</b>

ops$tkyte@ORA9IR2> update t set c281 = 42;
 
1 row updated.
 
ops$tkyte@ORA9IR2> analyze table t compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2> select avg_row_len, BLOCKS, chain_cnt, PCT_FREE, PCT_used from user_tables
  2  where table_name = 'T';
 
AVG_ROW_LEN     BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
----------- ---------- ---------- ---------- ----------
        795          5          1         10
 
<b>1 row chained -- because of the trailing column being NOT NULL - and avg row length goes up</b>


 

Suresh, September 01, 2004 - 9:37 pm UTC

Thanks Tom. This helped a lot in clarifying my doubts. And my apologies for not providing a test case; I will keep that in mind in the future.

A few more doubts:
This means tables with more than 255 columns will always have chained rows, if columns beyond 255 are populated with values. Right?

So, now when I perform the operations below on this test table (I had to rename it to tt), why does the chaining go away?

SQL> alter table tt drop primary key ;

Table altered.

SQL> alter table tt move;

Table altered.

SQL> analyze table tt compute statistics;


Table analyzed.

SQL> select avg_row_len, BLOCKS, chain_cnt, PCT_FREE, PCT_used
  2  from user_tables where table_name = 'TT';

AVG_ROW_LEN     BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
----------- ---------- ---------- ---------- ----------
        795          1          0         10         40

And another question: how should we tackle this table in production, which will blow away space and may cause performance degradation when we add the 283rd column and populate it? The table is already 50 GB fat.

Thank you very much for much needed help,

Suresh 

Tom Kyte
September 02, 2004 - 7:55 am UTC

tables with more than 255 columns are always stored in multiple row pieces, yes.


looks like an "issue with chain_cnt", for the row IS chained (if you wanted to go as gory-deep as dumping the block, you could see it physically). I can show it using the statistic 'table fetch continued row'.

picking my example up where I left off:


ops$tkyte@ORA9IR2> alter table t move;
Table altered.
 
ops$tkyte@ORA9IR2> analyze table t compute statistics;
Table analyzed.
 
ops$tkyte@ORA9IR2> select avg_row_len, BLOCKS, chain_cnt, PCT_FREE, PCT_used
  2   from user_tables
  3  where table_name = 'T';
 
AVG_ROW_LEN     BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
----------- ---------- ---------- ---------- ----------
        795          1          0         10         40

<b>according to that -- no chaining, but:</b>

 
ops$tkyte@ORA9IR2> @mystat continued
ops$tkyte@ORA9IR2> column value new_val V
ops$tkyte@ORA9IR2> define S="&1"
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> select a.name, b.value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&S')||'%'
  5  /
old   4: and lower(a.name) like '%' || lower('&S')||'%'
new   4: and lower(a.name) like '%' || lower('continued')||'%'
 
NAME                                VALUE
------------------------------ ----------
table fetch continued row               6
 
ops$tkyte@ORA9IR2> select c1 from t;
 
        C1
----------
         1
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> select a.name, b.value V, b.value-&V diff
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&S')||'%'
  5  /
old   1: select a.name, b.value V, b.value-&V diff
new   1: select a.name, b.value V, b.value-         6 diff
old   4: and lower(a.name) like '%' || lower('&S')||'%'
new   4: and lower(a.name) like '%' || lower('continued')||'%'
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
table fetch continued row               6          0

<b>fetching c1 incurred no table fetch continued row, but getting this column</b>
 
ops$tkyte@ORA9IR2> select c281 from t;
 
      C281
----------
        42
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> select a.name, b.value V, b.value-&V diff
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&S')||'%'
  5  /
old   1: select a.name, b.value V, b.value-&V diff
new   1: select a.name, b.value V, b.value-         6 diff
old   4: and lower(a.name) like '%' || lower('&S')||'%'
new   4: and lower(a.name) like '%' || lower('continued')||'%'
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
table fetch continued row               7          1

<b>does. The row is in fact chained -- for whatever reason, the analyze did not report it. What I observed physically is that

a) in the first case, the row was physically chained over two blocks
b) in the second case, the row was physically chained *on the same block*

so chain_cnt must be a count of rows that are chained over 2 or more blocks, as well as migrated rows -- but not the special case of a row stored in multiple row pieces on the same block. The alter table t move put the two row pieces on the same block.
</b>


So, before the move, it took two IOs of two different blocks to construct the row; after, it took two IOs of the *same* block.


As for the table in production -- that is going to be a problem. It is going to get "fatter" and it'll probably cause many of the rows to physically span two blocks.

I would probably *not* alter this table and add a new column with a default -- If the column is not null and frequently accessed, I would:

  1  create table
  2  new_table
  3  ( x, c1, c2, new_column not null, c3, c4 )
  4  as
  5* select x, c1, c2, cast( 'x' as varchar2(20) ), c3, c4 from t
ops$tkyte@ORA9IR2> /
 
Table created.
 
ops$tkyte@ORA9IR2> alter table new_table modify new_column default 'x';
 
Table altered.
 
ops$tkyte@ORA9IR2> desc new_table;
 Name                                     Null?    Type
 ---------------------------------------- -------- ----------------------------
 X                                                 NUMBER(38)
 C1                                                NUMBER
 C2                                                NUMBER
 NEW_COLUMN                               NOT NULL VARCHAR2(20)
 C3                                                NUMBER
 C4                                                NUMBER
 
ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2> rename new_table to t;
 
Table renamed.


This places the NOT NULL column up in the first 255 columns -- letting the trailing columns remain trailing NULL columns (you might even look at the table as it exists and determine the optimum set of columns to have "trailing" -- the ones that are most frequently all NULL).

If the order of columns is 'relevant' to someone, I would:

create table new_table as select ...
drop table t;
create view t as select <columns in any order you like> from new_table;
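For instance -- a minimal sketch using the column names from the session above (the view just re-presents the columns in whatever order the application expects, here with new_column last, as an ordinary ADD COLUMN would have left it):

create table new_table
( x, c1, c2, new_column not null, c3, c4 )
as
select x, c1, c2, cast( 'x' as varchar2(20) ), c3, c4 from t;

drop table t;

create view t as
select x, c1, c2, c3, c4, new_column from new_table;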


 

A reader, September 02, 2004 - 2:31 pm UTC

Great explanation... as always. THANK YOU VERY MUCH TOM.

Very informative.

Carl Bruhn, October 27, 2004 - 8:50 am UTC

Excellent overview and helped me solve a performance issue

Excellent TOM, you are really great

Neeraj,India, April 25, 2005 - 2:35 am UTC

Tom,
What kind of solution is needed if I have the following situation? Please tell me whether this will degrade the performance of the application using this table.

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED TABLE_NAME
    325997        0.16         250         10            SSL_IN_PREP_ACCT_DTLS


Does this mean that we have to increase the data block size in order to prevent the row chaining?

There are 2,091,935 rows in the current table SSL_IN_PREP_ACCT_DTLS.






Tom Kyte
April 25, 2005 - 7:13 am UTC

do you know if

a) they are chained
b) they are migrated

and

c) is this affecting anything -- do you see lots of "table fetch continued row" operations due to this?
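A quick check for (c) -- a sketch against v$sysstat (system-wide and cumulative since instance startup):

select name, value
  from v$sysstat
 where name = 'table fetch continued row';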

Chained Rows and Autotrace

neeraj, April 25, 2005 - 10:26 am UTC

As discussed above :
you insert a bunch of skinny rows.... You get hundreds / block. Now, you start modifying these rows over time -- adding the history as the present becomes the past. The row gets fatter (hey, sort of like us, the older we are...). Now, where 100 rows used to fit -- only 6 or 7 do (avg rowsize = 500 bytes over time, 4k blocks 3500 bytes and its full). the other 95 or so rows MIGRATE to other blocks to live. No more room for them on the block.



You said that modifying these rows over time is how the rows get fatter.
If one field is a status flag (T/F) and we change that status frequently, how do the rows get fatter?
Also, how do I check/count chained rows and migrated rows individually? I am new to these concepts. Please get back to me.


Tom Kyte
April 25, 2005 - 10:44 am UTC

I gave you the techniques in some detail for seeing if a set of rows is chained or migrated above.

but an easier way:

analyze table ... list chained rows -- that gives you their rowids.

Look at the length of those rows: if it is less than your block size, they are MIGRATED; if it is more, they are truly CHAINED (and no amount of reorg will fix them in that blocksize).

If you just update a T/F field -- the rows are not getting fatter.
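A sketch of that check (it assumes the CHAINED_ROWS table has been created with utlchain.sql; the table name T and the columns c1, c2 are just illustrative):

analyze table t list chained rows into chained_rows;

-- rows whose data is much smaller than the block size are MIGRATED;
-- rows longer than the block size are truly CHAINED
select c.head_rowid,
       nvl(vsize(t.c1),0) + nvl(vsize(t.c2),0) approx_row_bytes
  from t, chained_rows c
 where t.rowid = c.head_rowid
   and c.table_name = 'T';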

reader

A reader, July 06, 2005 - 12:45 pm UTC

Does alter table t move preserve

1. synonyms
2. grants on the table
3. roles associated with the table

How are transactions on the table handled during the
"move"?
Except for PKs, FKs, and indexes, is there anything else to
worry about?

Tom Kyte
July 06, 2005 - 12:48 pm UTC

yup

you won't be doing transactions on a table undergoing a move.

just INDEXES need to be rebuilt after, pk's/fk's are already OK.
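A minimal sketch of the sequence (the index name t_pk is hypothetical):

alter table t move;

-- the move gives every row a new rowid, so the indexes go UNUSABLE:
select index_name, status from user_indexes where table_name = 'T';

alter index t_pk rebuild;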

reader

A reader, July 06, 2005 - 1:26 pm UTC

is "alter table move .." preferable to
"alter table move online .."

In the later case, will there be performance hit on
TXs on the table

Tom Kyte
July 06, 2005 - 2:26 pm UTC

"alter table move online" only works with IOTs (index-organized tables), not regular tables -- it is like an online index rebuild.

online is not available for "normal" tables.

reader

A reader, July 06, 2005 - 2:49 pm UTC

Do the triggers on the table not become
invalid either during "alter table move ..."?

Tom Kyte
July 07, 2005 - 8:22 am UTC

negatives....

Yes, the triggers do NOT become invalidated --

the triggers are 100% unaffected by this.
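Easy enough to verify yourself -- a sketch (the table and trigger names are hypothetical):

alter table t move;

select object_name, status
  from user_objects
 where object_type = 'TRIGGER'
   and object_name = 'T_TRG';   -- per the answer above, still VALID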

Finding number of blocks occupied by table

Vikram Romeo, June 06, 2006 - 5:44 pm UTC

Hi Tom,

Consider:

create table t (x) as select 1 from all_objects;
analyze table t compute statistics;

Aren't the following 2 queries supposed to give the same result?

select blocks from user_tables where table_name='T';

select count(distinct dbms_rowid.rowid_block_number(rowid)) num_blocks_used from t;

Appreciate your response.



Tom Kyte
June 06, 2006 - 9:36 pm UTC

No.

not at all.


you are counting the HEAD rowid piece.

rows can chain, rows can migrate, rows can be deleted...



ops$tkyte@ORA10GR2> create table t ( x int ) tablespace manual;

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> insert into t
  2  select null from all_objects;

49906 rows created.

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select blocks, (select count(distinct dbms_rowid.rowid_block_number(rowid)) from t) from user_tables where table_name = 'T';

    BLOCKS (SELECTCOUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))FROMT)
---------- ----------------------------------------------------------------
        76                                                               76

ops$tkyte@ORA10GR2> update t set x = rpad('9',38,'9');

49906 rows updated.

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select blocks, (select count(distinct dbms_rowid.rowid_block_number(rowid)) from t) from user_tables where table_name = 'T';

    BLOCKS (SELECTCOUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))FROMT)
---------- ----------------------------------------------------------------
       433                                                               76


ops$tkyte@ORA10GR2> delete from t;

49906 rows deleted.

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2>  select blocks, (select count(distinct dbms_rowid.rowid_block_number(rowid)) from t) from user_tables where table_name = 'T';


    BLOCKS (SELECTCOUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))FROMT)
---------- ----------------------------------------------------------------
       433                                                                0



And if you use ASSM (automatic segment space management), you'll typically find more than one block initially under the high water mark for free space management.  We use blocks in your table to manage space for your table.

 

Thanks Tom

Vikram Romeo, June 07, 2006 - 1:27 pm UTC

Thanks a lot Tom ...

However, for you, at least right after the table creation, they seemed to match. For me, even right after the table creation, the number of blocks from user_tables was double the value from the dbms_rowid function.

Does row migration happen even when the table is first created, or does it happen only when we update/delete from the table?

Thanks once again for your response.

Regards,
Vikram Romeo

Tom Kyte
June 07, 2006 - 3:32 pm UTC

Because I used "manual segment space management" and you probably used "automatic" (ASSM) -- and as I said...

<quote>
And if you use ASSM (automatic segment space management), you'll typically find
more than one block initially under the high water mark for free space
management. We use blocks in your table to manage space for your table.
</quote>


rows sometimes migrate after being updated (they do not always migrate, and they can in fact even migrate back!)



Thanks Tom

Vikram Romeo, June 08, 2006 - 11:38 am UTC

Thanks for your patience in answering even trivial questions like this Tom :-)

Can you please tell me how to find out whether I am using ASSM or not?

Regards,
Vikram Romeo

Tom Kyte
June 08, 2006 - 11:43 am UTC

ops$tkyte@ORA10GR2> select tablespace_name, segment_space_management from dba_tablespaces;
 
TABLESPACE_NAME                SEGMEN
------------------------------ ------
SYSTEM                         MANUAL
SYSAUX                         AUTO
USERS                          AUTO
UNDOTBS2                       MANUAL
EXAMPLE                        AUTO
BIG_TABLE                      AUTO
TEMPSPC                        MANUAL
X_8M                           MANUAL
Y_48K                          MANUAL
MSSM                           AUTO
 
10 rows selected.
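To check the tablespace of one specific table, a sketch (the table name T is illustrative):

select t.table_name, t.tablespace_name, ts.segment_space_management
  from user_tables t, user_tablespaces ts
 where t.tablespace_name = ts.tablespace_name
   and t.table_name = 'T';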
 
 

Cause of Chaining

Murali, June 28, 2006 - 9:56 pm UTC

Hello Tom,

I heard that bouncing the database very frequently in the middle of a batch program can cause chaining (i.e., the program doing the inserts is aborted frequently due to the database bounce). I have a table with 70 million records, out of which 80,000 records are chained. At this point I am not sure whether these records are queried any more, but does this indicate too much chaining?

Does alter table move reduce chaining, or should I do a CTAS? Please advise.

Murali

Tom Kyte
June 29, 2006 - 7:17 am UTC

well, while bouncing the database very frequently is in itself a rather silly thing to do - it won't really be the cause of chaining.

row chaining happens when

a) a row is created that is larger than the Oracle block size, it must be chained and no amount of reorganization will change that fact

b) a row is updated and the block it was on originally no longer has sufficient free space to hold the entire row. We move the row to a new block and just leave behind a pointer to it. This type of chaining can be "removed"/"fixed"
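
Cause (b) is easy to reproduce -- a sketch (the table name, column sizes, and row counts are all illustrative):

create table mig_demo ( id number, pad varchar2(2000) ) pctfree 0;

insert into mig_demo
select rownum, 'x' from all_objects where rownum <= 1000;
commit;

-- fatten every row; with pctfree 0 there is no free space left on the blocks
update mig_demo set pad = rpad('x', 2000, 'x');
commit;

analyze table mig_demo compute statistics;

select chain_cnt from user_tables where table_name = 'MIG_DEMO';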


If you have a measly 80,000 records out of 70,000,000 chained, I'm not sure I'd even think about "fixing" it. Before you even consider fixing it, you should

a) make sure the rows can fit on a single oracle block. If not, just drop the idea of fixing it, you'd need a bigger block.

b) make sure you have "high" table fetch continued row statistics and that this table is the root cause.


And if you believe A) is not true and B) is true - then you can:

o analyze table to list chained rows, we'll populate a table with rowids of chained rows

o create table temp as select * from original_table where rowid in (that set of rowids) - copy the chained data out

o delete from original table where rowid in (that set) -- remove the chained rows

o insert /*+ append */ into original table select * from temp; -- put them back

o commit;
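
In SQL, that would look something like this sketch (it assumes the CHAINED_ROWS table from utlchain.sql, and the table name is illustrative):

analyze table original_table list chained rows into chained_rows;

create table temp as
select * from original_table
 where rowid in ( select head_rowid from chained_rows
                   where table_name = 'ORIGINAL_TABLE' );

delete from original_table
 where rowid in ( select head_rowid from chained_rows
                   where table_name = 'ORIGINAL_TABLE' );

insert /*+ append */ into original_table select * from temp;
commit;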



What to do now?

Rakesh Arora, August 07, 2006 - 6:56 pm UTC

Hi Tom,

Thanks for your response to the question on row chaining/migration (RC and RM). I have been going through a lot of Oracle documentation on Metalink and notes on this site as well, and have found 2 reasons for RC and RM:
a) A row is so long that it cannot fit into a single block (i.e. RC)
b) A row is updated after insert and, due to an insufficient value of PCTFREE, it now becomes RM (migrated).

I have a third case where I have reorged a table (3 million rows) and it had 0 chained rows, and there is a daily process that inserts the data (there is NO UPDATE) -- and now, after the reorg using alter table move, 20% of the rows that were inserted in the last 3 days are chained.
So I am not sure what is causing this; the avg_row_len is 10 bytes whereas the block size I am using is 16K.

So I am totally lost as to how to explain this behavior. I'd appreciate any insight you can provide.

In addition, regarding your comment "
make sure you have "high" table fetch continued row statistics and that this
table is the root cause."

What is "high"? I know it's subjective, but is there any magic number I should be looking for, and how do I relate this stat to specific tables? In one of our databases the count is 70 million, so what does this mean?

Thanks
Rakesh

Tom Kyte
August 07, 2006 - 10:03 pm UTC

I think something else is happening that you are not mentioning (eg: you think the rows are not updated - but they are, you just don't know they are).

can you reproduce with a test case?

what to do now?

Rakesh Arora, August 07, 2006 - 7:02 pm UTC

Sorry about the typo, Tom.

Please read the following:
"So I am not sure what is causing this; the avg_row_len is 10 bytes whereas the
block size I am using is 16K."
as corrected (the row length was 100 bytes):

"So I am not sure what is causing this; the avg_row_len is 100 bytes whereas the block size I am using is 16K."
Thanks
Rakesh


avg_row_len=100

Michel Cadot, August 08, 2006 - 12:42 am UTC

Hi Rakesh,

If your avg_row_len is always 100 maybe you encountered a known bug.
Have a look at Metalink note 252361.1.

Regards
Michel


Still looking for an answer

Rakesh Arora, August 08, 2006 - 2:35 am UTC

Hi Michel,

Thanks for your reply and for directing me to this note -- yes, it does explain why I am seeing the avg_row_len as 100 bytes.

But I am still looking for an answer to the following 2 questions.

Q1: I have a third case where I have reorged a table (3 million rows) and it had 0 chained rows, and there is a daily process that inserts data (there is NO
UPDATE) -- and now, after the reorg using "alter table move", 20% of the rows that were inserted in the last 3 days are chained.

So I am not sure what is causing these chained rows.
The avg_row_len is 100 bytes whereas the block size I am using is 16K.

So I am totally lost as to how to explain this behavior. I'd appreciate any insight you can provide.

In addition, regarding your comment "
make sure you have "high" table fetch continued row statistics and that this table is the root cause."

What is "high"? I know it's subjective, but is there any magic number I should be looking for, and how do I relate this stat to specific tables? In one of our
databases the count is 70 million, so what does this mean?

Thanks for your reply.

Regards,
Rakesh

Tom Kyte
August 08, 2006 - 7:51 am UTC

I still think there is an update you are just not aware of. Perhaps you can enable auditing on this table to see what really happens to it over time.
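For example -- a sketch (the schema/table names are illustrative, and the audit_trail init parameter must be enabled):

audit insert, update, delete on db_user.mytable by access;

-- later, see what actually happened to the table:
select username, action_name, timestamp
  from dba_audit_trail
 where obj_name = 'MYTABLE';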

70,000,000 in what period of time?
And what is the number of rows retrieved that DIDN'T suffer from this?

Follow-up

Rakesh Arora, August 08, 2006 - 8:42 am UTC

Hi Tom,

Thanks for your reply.

Actually there are 2 things that I got mixed together.

In database A, I have this issue of migrated rows (20% of the rows inserted in the last 3 days), and I have verified the code -- there is no update statement.

In database B, which is a totally different database instance, I ran the query to get the "TFCR" from v$sysstat and it gave me a count of 70 million when the DB uptime was only 1 day. It is a DB that gets lots of inserts/updates and READS, and there are around 800-plus tables in this database. So I am not sure whether this TFCR value of 70 million is for one table or for all 800 tables.

As I am using DBMS_STATS, which doesn't populate the chain_cnt column in DBA_TABLES, I am not sure if I have 1 table with RC/RM or all 800 tables. Plus case I (i.e. database A) is different from this.

My apologies for mixing up the two.

I'd appreciate it if you could help in clearing this up.

Thanks
Rakesh

Tom Kyte
August 09, 2006 - 9:47 am UTC

sorry, but inserted rows won't migrate unless they are changed. It just doesn't happen, see. There is something else afoot here. You are missing part of the puzzle.


the system statistics are for the "system".

So -- what are your "table fetch by rowid" and "table scan rows gotten" values for this system?
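That is, something like this sketch:

select name, value
  from v$sysstat
 where name in ( 'table fetch by rowid',
                 'table scan rows gotten',
                 'table fetch continued row' );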

Looking for Help

Rakesh, August 08, 2006 - 9:05 pm UTC

Hi Tom,

I am looking for your help on this one.

I'd appreciate it if you could help me understand this behavior of Oracle.

Thanks
Rakesh

Tom Kyte
August 09, 2006 - 10:36 am UTC

(I have a day job.....)

I'm convinced you are missing a piece of the puzzle. Inserts don't chain unless the row doesn't fit on a block.

Hence, I truly believe something else is afoot.

Behaviour in 10g

egonzafe, January 17, 2008 - 9:28 am UTC

Hello, Tom

could you explain this behaviour in 10g?


drop table t;

declare
    l_stmt long := 'create table t ( x int';
begin
    for i in 1 .. 250
    loop
        l_stmt := l_stmt || ', c' || i || ' number default 1';
    end loop;
    for i in 251 .. 281
    loop
        l_stmt := l_stmt || ', c' || i || ' number';
    end loop;
    execute immediate l_stmt || ') pctfree 0';
end;
/

insert into t (x, c281)
select rownum, 42 -- all rows with intra-block chaining
from all_objects
where rownum<=1000;

commit;

analyze table t compute statistics;

select num_rows, avg_row_len, blocks, empty_blocks, chain_cnt, pct_free, pct_used
from user_tables
where table_name = 'T';


       NUM_ROWS AVG_ROW_LEN     BLOCKS EMPTY_BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
     ---------- ----------- ---------- ------------ ---------- ---------- ----------
9=>        1000         796        102           33          0          0         40
10=>       1000         796        100            3          0          0         40



select count(c281) from t;

...


9=> table fetch continued row = 1000
10=> table fetch continued row = 99 (like blocks with intra-block chaining - 1 ?)


drop table t;

declare
    l_stmt long := 'create table t ( x int';
begin
    for i in 1 .. 250
    loop
        l_stmt := l_stmt || ', c' || i || ' number default 1';
    end loop;
    for i in 251 .. 281
    loop
        l_stmt := l_stmt || ', c' || i || ' number';
    end loop;
    execute immediate l_stmt || ') tablespace tbs_t pctfree 0';
end;
/

insert into t (x)
select rownum
from all_objects
where rownum<=500;

insert into t (x, c281)
select rownum, 42
from all_objects
where rownum<=500;

commit;

analyze table t compute statistics;

select num_rows, avg_row_len, blocks, empty_blocks, chain_cnt, pct_free, pct_used
from user_tables
where table_name = 'T';


       NUM_ROWS AVG_ROW_LEN     BLOCKS EMPTY_BLOCKS  CHAIN_CNT   PCT_FREE   PCT_USED
     ---------- ----------- ---------- ------------ ---------- ---------- ----------
9=>        1000         776        102           33          0          0         40
10=>       1000         776        100            3          0          0         40

select count(c281) from t;

..

9=> table fetch continued row = 500
10=> table fetch continued row = 49 (like blocks with intra-block chaining - 1 ?)


Thanks



table fetch continued row

deba, February 20, 2008 - 9:40 pm UTC

Hi,

My database is 10.2, 64-bit, on Solaris. The "table fetch continued row" statistic has a huge value in v$sysstat. Now, the analyze command is deprecated in 10g. If I want to know which objects have chained/migrated rows, what should I do? v$segment_statistics is not showing a value for the statistic "table fetch continued row" against any segment.

Thanks
Deba
Tom Kyte
February 20, 2008 - 10:42 pm UTC

define "huge" in the context of time here, please.

v$ tables are 'as of the time of instance startup'.


Are they really "huge", or just "well, they add up over a long time"?


And remember: if you have LONG or LONG RAW columns, rows larger than the blocksize, or tables with more than 255 columns - this is a fact of life, you will have continued row fetches - forever.

table fetch continued row

deba, February 21, 2008 - 12:10 pm UTC

Hi Tom,

Thanks for the reply. My question :

1) If there is a value for "table fetch continued row" in v$sysstat, should I expect to see some value for "table fetch continued row" in v$segment_statistics against some object(s)? Am I right?

2) To find the tables having chained or migrated rows, what is the procedure other than analyze, since analyze is being deprecated?

Thanks
Deba
Tom Kyte
February 21, 2008 - 4:50 pm UTC

1) well, consider this...

a) you see it in v$sysstat
b) you don't in v$segment_statistics

So, no, you shouldn't. "select distinct name from v$segment_statistics" shows the small set of statistics maintained there.

2) analyze as a method to gather statistics is deprecated, but analyze itself will still exist.
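The query mentioned in 1), as a sketch:

select distinct name
  from v$segment_statistics
 order by name;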
