Question and Answer

Tom Kyte

Thanks for the question, Howie.

Asked: March 13, 2001 - 10:39 am UTC

Last updated: November 12, 2018 - 3:24 pm UTC

Version: 8.0.5

Viewed 10K+ times

You Asked

Hi Tom

What is row chaining/migration? What are the consequences of row chaining/migration? How can I find out whether it is present in my database? And if it is, how do I get rid of it?

Thanks in advance

Howie


and we said...

For other background on this:

http://asktom.oracle.com/pls/ask/f?p=4950:7:::::F4950_P7_SEARCH_CRITERIA:chaining
http://asktom.oracle.com/pls/ask/f?p=4950:7:::::F4950_P7_SEARCH_CRITERIA:migrated

A migrated row is a row that was moved to another block because an update made it too large to fit on its original block with the other rows there.  We cannot just "move" the row -- we have lots of indexes pointing to the original block.  We therefore leave behind a forwarding address on the original block and migrate the updated row to a new block.  Now, when you access that row, we discover it is not really there -- it is migrated -- and we read the other block to get it.
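The standard fix for migrated rows, sketched below under the assumption of a table named t with one index t_idx (both hypothetical names), is to rebuild the table -- and, since moving changes every rowid, to rebuild its indexes afterwards:

```sql
-- Rebuild the segment; migrated rows are re-inserted whole.
ALTER TABLE t MOVE;

-- The move changes every rowid, leaving t's indexes UNUSABLE,
-- so each one must be rebuilt.
ALTER INDEX t_idx REBUILD;
```

Raising PCTFREE before the move leaves more room in each block for future updates, making re-migration less likely.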

A chained row is a row that is too large to fit on a single block.  If you have a 2k block size but your row size is 4k, we will use 3 blocks to store that row in pieces.  Any table with a LONG/LONG RAW will have chained rows.  Any table whose row size exceeds the block size will have chained rows.  Any table with more than 255 columns will have chained rows (we break really wide tables up).

You can see if you "have it" by looking at the system wide statistic:

sys@OSI816> select * from v$sysstat where name like 'table fetch cont%';

STA NAME                                CLASS      VALUE
--- ------------------------------ ---------- ----------
158 table fetch continued row              64    2267554


that shows that in that database, since it was started, I had 2,267,554 requests for rows that were migrated or chained.  You have to take that with a grain of salt, however, since

155 table scan rows gotten                 64 8544809978
156 table scan blocks gotten               64  190361236
157 table fetch by rowid                   64  419291785
158 table fetch continued row              64    2267577

that is a VERY small percentage of my total rows.
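That sanity check can be wrapped in one query -- a sketch using the same v$sysstat statistics shown above, expressing continued-row fetches as a percentage of all rows fetched:

```sql
-- Continued-row fetches as a percentage of total rows fetched
-- (system-wide, since instance startup).
SELECT ROUND( 100 * cont.value / (scans.value + by_rowid.value), 4 ) AS pct_continued
  FROM (SELECT value FROM v$sysstat WHERE name = 'table fetch continued row') cont,
       (SELECT value FROM v$sysstat WHERE name = 'table scan rows gotten')    scans,
       (SELECT value FROM v$sysstat WHERE name = 'table fetch by rowid')      by_rowid;
```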

You can analyze a table to find the chained rows.  See
http://docs.oracle.com/cd/A81042_01/DOC/server.816/a76992/ch20_io.htm#3552
for more details, including "how to get rid of it".
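In outline, that ANALYZE approach looks like this (a sketch; t is a hypothetical table name, and utlchain.sql is the script shipped in $ORACLE_HOME/rdbms/admin):

```sql
-- Create the CHAINED_ROWS holding table.
@?/rdbms/admin/utlchain.sql

-- List every chained/migrated row of t into it.
ANALYZE TABLE t LIST CHAINED ROWS INTO chained_rows;

-- The head rowids identify the offending rows.
SELECT head_rowid FROM chained_rows WHERE table_name = 'T';
```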




Rating

  (21 ratings)


Comments

Intra-block Chaining

Vivek Sharma, May 28, 2004 - 5:34 am UTC

Dear Tom,

Chapter 10 of Oracle 9i Document

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm

under the heading "Row Format and Size" describes intra-block chaining, which occurs when a table has more than 255 columns. The documentation says:

"When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."


Is it true? I ask because we have an application from a 3rd party vendor that has certain tables with more than 255 columns.

If it is true then I have a few questions:

1. Since the pieces are linked using rowids, does a row have more than one rowid, depending upon the number of columns?

2. If a block is completely empty and a new row (with more than 255 columns) is inserted into it, will it be chained? If yes, what will be the layout of the block? i.e. a row piece for the first 255 columns, then free space, then a row piece for the remaining columns?

3. Scenario: block size is 8k. A block is 20% free and is on the freelist (assuming I have not enabled ASSM). The total size of a new row with more than 255 columns is 2k, but the size of the first 255 columns is about 600 bytes. In this case, will Oracle insert the first 255 columns into this block and the rest into a new block, causing row chaining? Or will it look for a block with 2k or more of free space?

4. The documentation says there is no overhead from intra-block chaining. Is that so?

Awaiting your valuable comments.

Thanks and Regards
Vivek



Tom Kyte
May 28, 2004 - 11:09 am UTC

yes, it is true.


1) a row has a rowid.  row pieces have their own rowids but you'll never ever see them.

2) yes.  you could dump the block if you are really interested in what your specific version will do -- but it really doesn't matter very much to you and me.

3) at first I thought you were going to make me dump a block ;)  but this shows what happens without doing that:


ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          l_txt long;
  3  begin
  4          l_txt := 'create table t ( x int';
  5          for i in 1 .. 300
  6          loop
  7                  l_txt := l_txt || ', c' || i || ' varchar(4000)';
  8          end loop;
  9          l_txt := l_txt || ') ' ;
 10          execute immediate l_txt;
 11  end;
 12  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t (x,c1,c2) values ( 1, rpad('*',4000,'*'), rpad('*',2300,'*') );
 
1 row created.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks.............................               1
Total Blocks............................               8
Total Bytes.............................          65,536
Total MBytes............................               0
Unused Blocks...........................               6
Unused Bytes............................          49,152
Last Used Ext FileId....................               6
Last Used Ext BlockId...................               9
Last Used Block.........................               2
 
PL/SQL procedure successfully completed.

<b>8 blocks allocated -- one is overhead for the segment, one has data.  6 blocks never used.  first block is still on the freelist</b>


 
ops$tkyte@ORA9IR2> insert into t (x,c1,c300) values ( 2, rpad('*',6,'*'), rpad('*',1,'*') );
 
1 row created.


ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks.............................               1
Total Blocks............................               8
Total Bytes.............................          65,536
Total MBytes............................               0
Unused Blocks...........................               6
Unused Bytes............................          49,152
Last Used Ext FileId....................               6
Last Used Ext BlockId...................               9
Last Used Block.........................               2
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select dbms_rowid.rowid_block_number(rowid) from t;
 
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
                                  10
                                  10

<b>that shows the row will go onto the first block when it entirely fits.  As soon as we bump it out however:</b>


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          l_txt long;
  3  begin
  4          l_txt := 'create table t ( x int';
  5          for i in 1 .. 300
  6          loop
  7                  l_txt := l_txt || ', c' || i || ' varchar(4000)';
  8          end loop;
  9          l_txt := l_txt || ') ' ;
 10          execute immediate l_txt;
 11  end;
 12  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t (x,c1,c2) values ( 1, rpad('*',4000,'*'), rpad('*',2300,'*') );
 
1 row created.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks.............................               1
Total Blocks............................               8
Total Bytes.............................          65,536
Total MBytes............................               0
Unused Blocks...........................               6
Unused Bytes............................          49,152
Last Used Ext FileId....................               6
Last Used Ext BlockId...................               9
Last Used Block.........................               2
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> insert into t (x,c1,c300) values ( 2, rpad('*',6,'*'), rpad('*',1400,'*') );
 
1 row created.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks.............................               1
Total Blocks............................               8
Total Bytes.............................          65,536
Total MBytes............................               0
Unused Blocks...........................               5
Unused Bytes............................          40,960
Last Used Ext FileId....................               6
Last Used Ext BlockId...................               9
Last Used Block.........................               3
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select dbms_rowid.rowid_block_number(rowid) from t;
 
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
                                  10
                                  11
 
<b>we can see the second insert went into a new block -- we know the head of the row is on the second block since the block number is different.  The database considered the length of the entire row - not the row pieces in this case</b>


4) it is as little as you are going to be able to achieve.  Overhead is that which you can remove -- we cannot remove this :)  By definition it is not overhead, just the cost of having more than 255 columns. 

Can an column span row pieces?

Logan Palanisamy, September 15, 2004 - 4:11 pm UTC

Tom,

On the last line of page 5-5 of Oracle 10g Concepts Manual (
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref694
), it says

"Individual columns can also span row pieces and, consequently, data blocks."

How can a column, say c1, span row pieces?

Tom Kyte
September 15, 2004 - 8:25 pm UTC

x varchar2(4000)

2k block size

4000 won't fit in 2k

To the point!!!!

A reader, September 16, 2004 - 10:46 am UTC


255 Columns

Deepak, June 28, 2005 - 1:09 pm UTC

Hi Tom,

You said that "Tables having more than 255 columns will ALWAYS have chained rows". Is there any specific reason for that? Will it occur even if we have very big database blocks?

Tom Kyte
June 28, 2005 - 1:21 pm UTC

because the limit on columns historically was 255, there is a "byte" controlling this stuff -- a byte isn't big enough to hold a count of more than 255. So, in order to increase the limit to 1000, they just chained the row pieces in 255-column chunks.

Regardless of the block size.
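A quick way to spot the tables in your own schema that are affected -- a sketch against the standard dictionary view:

```sql
-- Tables wide enough that Oracle must chain their rows into
-- 255-column pieces, regardless of block size.
SELECT table_name, COUNT(*) AS column_count
  FROM user_tab_columns
 GROUP BY table_name
HAVING COUNT(*) > 255
 ORDER BY column_count DESC;
```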

Chained Rows

mAg, September 22, 2005 - 7:46 am UTC

Hi Tom

We are using an Oracle 9i data warehouse under Unix with an 8k block size; there are chained rows in a few tables...

gmbpm@GMBPMDBP> SELECT table_name, COUNT(*) FROM chained_rows GROUP BY table_name ORDER BY table_name;

TABLE_NAME                       COUNT(*)
------------------------------ ----------
PRODUCT                                15
PRODUCT1                              137
PRODUCT5                             3621
SADTEST                            245622

gmbpm@GMBPMDBP> SELECT table_name, num_rows FROM user_tables WHERE table_name IN (SELECT UNIQUE table_name FROM chained_rows) ORDER BY table_name;

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
PRODUCT                             73263
PRODUCT1                             2277
PRODUCT5                            13259
SADTEST                              2804

Which is the best way to avoid chained rows?
1) Recreate the database with 16k?
or
2) Putting these tables in 16k tablespace and allocate new db cache size?
or
3) Remove the chained rows and insert back into the table?

Thanks for your time.

Tom Kyte
September 22, 2005 - 2:03 pm UTC

are the rows CHAINED or MIGRATED?

That is, are the rows > 8k in width? If not, they are migrated and using a larger block isn't relevant.

Or do the rows have more than 255 columns? In which case, you can rebuild until the cows come home and they'll still be chained.


What causes the chaining in your case - need to know that *first*

Some more info...

mAg, September 22, 2005 - 5:36 pm UTC

Hi Tom,
Thanks lot for your input.

On collecting the following information, I'm make sure it's chaining as I told you its DW application.
No updating is going on. And table is with default storage options.

TABLE_NAME     NUM_ROWS  CHAIN_CNT AVG_ROW_LEN LAST_ANAL   PCT_FREE   PCT_USED
------------ ---------- ---------- ----------- --------- ---------- ----------
PRODUCT1           2277         12         423 21-SEP-05         10         40

So, is it a good option to create a new tablespace with a block size of 16k or 32k and move these tables to it?
I'm also seeing a high value for the 'table fetch continued row' statistic.

Here is a question, please correct me if I'm wrong:

The avg_row_len is 423k; on allocating a block size of 16k or 32k, will a row not be chained again when it does not fit in a single block?

And could you elaborate: is there any rule for sizing db_16k_cache_size or db_32k_cache_size when we choose a non-standard block size tablespace?

Here is the structure info this table and there are few more tables like this:

CREATE TABLE PRODUCT1
(
PRODUCTKEY NUMBER(10) NOT NULL,
WAVEID NUMBER(2) NOT NULL,
CATEGORYKEY NUMBER(10) NOT NULL,
PID NUMBER(6),
BID NUMBER(6),
LEVEL1 VARCHAR2(100 ),
LEVEL2 VARCHAR2(200 ),
LEVEL3 VARCHAR2(300 ),
LEVEL4 VARCHAR2(400 ),
LEVEL5 VARCHAR2(500 ),
LEVEL6 VARCHAR2(600 ),
LEVEL7 VARCHAR2(700 ),
LEVEL8 VARCHAR2(800 ),
ATTR1 VARCHAR2(100 ),
ATTR2 VARCHAR2(100 ),
ATTR3 VARCHAR2(100 ),
ATTR4 VARCHAR2(100 ),
ATTR5 VARCHAR2(100 ),
ATTR6 VARCHAR2(100 ),
ATTR7 VARCHAR2(100 ),
ATTR8 VARCHAR2(100 ),
PRODDESC VARCHAR2(100 ),
PACK VARCHAR2(100 ),
UPCDESC VARCHAR2(100 ),
SYS NUMBER(5),
GEN NUMBER(5),
VEN NUMBER(5),
ITE NUMBER(10),
UPC10 VARCHAR2(10 ),
UPC12 VARCHAR2(12 ),
UPC13 VARCHAR2(25 ),
PG1 NUMBER(4),
PG2 NUMBER(4),
PG3 NUMBER(4),
PG4 NUMBER(4),
DATECREATED DATE NOT NULL,
DATEUPDATED DATE,
UPC12DESC VARCHAR2(100 )
)


Thanking you.

Regards,
mAg


Tom Kyte
September 22, 2005 - 9:36 pm UTC

... I'm make sure it's chaining as I told
you its DW application. ...


sorry -- but I don't see A (chaining) being implied by B (data warehouse) at all??? I don't see that implication being true even "in general".


your average row len is 423 BYTES (not K)


what is your MAXIMUM true row length (select MAX(vsize(c1)+vsize(c2)+...) from table)

Please, correct me on my understanding...

mAg, September 23, 2005 - 6:09 am UTC

  Hi Tom,
 Thank you so much for your help and for correcting me. I am proud to have a mentor like you.
 
 Please correct me again if I'm wrong.
 
 My understanding is that this is CHAINING, since there are ONLY INSERT operations and PCTFREE is 40,
 so rows will not MIGRATE. And here is my test case:
  
  
  SQL> select max (nvl(vsize(PRODUCTKEY),0)+
    nvl(vsize(WAVEID),0)+
    nvl(vsize(CATEGORYKEY),0)+
  .
  .
    nvl(vsize(DATEUPDATED),0)+
    nvl(vsize(UPC12DESC),0)) from gmbpm.product1;
  
  MAX(NVL(VSIZE(PR
  ----------------
               464
  1 row selected
  

  SQL> alter table product1 move;  
 
  SQL> /*
  DOC>Please comment on this statement. Doesn't it reorganize the SEGMENT again?
  DOC>*/
  SQL>
 
  SQL> analyze table product1 list chained rows;
  SQL> select count(1) from chained_rows where table_name='PRODUCT1';

  COUNT(1)
  ----------
        12
  1 row selected

  SQL> analyze table PRODUCT1 compute statistics
  for table
  for all indexes
  for all indexed columns;

  Table analyzed
  
  SQL> select table_name, avg_row_len from user_tables where table_name='PRODUCT1';
  
  TABLE_NAME                     AVG_ROW_LEN
  ------------------------------ -----------
  PRODUCT1                               422
  1 row selected
  
  
  SQL> select dbms_rowid.ROWID_BLOCK_NUMBER(rowid), count(1) from PRODUCT1
  group by dbms_rowid.ROWID_BLOCK_NUMBER(rowid);
  
  DBMS_ROWID.ROWID_BLOCK_NUMBER(   COUNT(1)
  ------------------------------ ----------
                           70290         16
                           70291         16
                           70292         15
                           70293         16
                           70294         17
                           70295         17
                           70296         17
                           70297         16
                           70298         16
                           70299         16
                           70300         16
                           70301         16
                           70302         16
                           70303         17
                           70304         18
                           70305         17
                           70306         18
                           70307         18
                           70308         18
                           70309         17
                           70310         18
                           .
                           .
                           .
                           70340         17
                           70341         16
                           70342         17
                           70343         18
                           70344         16
                           
138 rows selected

SQL> select avg(count(1)) from PRODUCT1
group by dbms_rowid.ROWID_BLOCK_NUMBER(rowid);

AVG(COUNT(1))
-------------
         16.5
1 row selected                           



And another conclusion for me from this ROWID_BLOCK_NUMBER output is:

avg_row_len=422
avg_no_row_per_block=16.5
avg_space_used_in_block= 16.5*422=6963

And, where PCTFREE is 40 .


So my understanding is as above; please correct my limited knowledge of this test case.
And I would appreciate it if you could guide me on how to overcome this problem.

And also, please comment on, going for a non-standard tablespace.

Thanking you.

mAg
 

Tom Kyte
September 23, 2005 - 9:24 am UTC

if your largest row is really 464 bytes and has no more than 255 columns - it should not be chained.

I notice sometimes you use a schema name and sometimes not - are we always looking at the SAME table?

you didn't accidentally miss a column or two in the vsize calls?

and they are all nvl'ed to zero right?

And - there are 12 rows here, have you *looked* at them?

Row chaining with lots of null columns

Michael Friedman, October 11, 2005 - 3:03 am UTC

"Tables having more than 255 columns will ALWAYS have chained rows"

Will rows be chained even if the trailing columns over the 255 column limit are Null?

We need to implement user-definable field functionality similar to Oracle EBS flexfields, with multi-lingual support for up to 5 simultaneous languages.

If we follow the Oracle lead and support up to 30 user-definable fields, then we need to add about 190 columns per table (unlike Oracle, we will store foreign keys in our segments but will denormalize the multi-lingual names of the referenced objects into our user-definable field structure).

Obviously we will often have more than 255 columns per table if we do this.

If we structure our tables so that the multi-lingual columns come last, in order of the languages used (in other words, all L1 columns, then L2 columns, ... then L5 columns), so that the trailing columns are all null unless all 5 languages are being used -- will all rows still be chained no matter how many languages are in use?

Say the rows are chained, and we make sure our queries select trailing columns only if they are being used in that installation -- so we never query any of the columns past the 255-column limit. Will the DB still fetch the chained block, or is it smart enough to read only the blocks that contain the columns we are selecting?

If chaining will always happen, and if queries will read chained blocks even when we don't select the columns in them, then we need to reconsider our design so we don't slow down bi- and tri-lingual installations just to support very rare 5-language installations.

If not then our planned approach will work.

Tom Kyte
October 11, 2005 - 6:45 am UTC

tables having more than 255 columns AND having a non-null value in a column past 255 will ALWAYS have chained rows...


the database will only walk the chain if you ask for a column that makes it walk the chain.
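One way to see that for yourself -- a sketch assuming a wide table t whose trailing column is c300 (both hypothetical names); check the session-level statistic before and after each query:

```sql
-- Current session's continued-row fetch count.
SELECT s.value
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'table fetch continued row';

SELECT x FROM t;      -- leading column only: should not bump the statistic
SELECT c300 FROM t;   -- trailing column: forces the chain to be walked
```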




Very high row chain count

Mette Stephansen, January 10, 2007 - 6:28 am UTC

I have a table like this:

CREATE TABLE TXLIFEREQUEST
(
ID NUMBER(28),
INMESSAGEID NUMBER(28),
CREATEDDATE DATE,
TRANSTYPE NUMBER(10),
TRANSREFGUID VARCHAR2(256 BYTE) CONSTRAINT REFG_NOT_NULL NOT NULL
)
TABLESPACE USERS
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
NOMONITORING;

There are 2.6 million records and 560,000 of them are chained/migrated.

I have tried a simple "alter table xx move" -> just the same number of chained/migrated records.

There are no LONG/RAW/LOB columns, not more than 255 columns, a db block size of 8k, and a max record size far under the limit given the 10% free space.

The avg rowlength is 30 (the max length of the varchar2 field is 80).

What am I missing in this equation?

regards
Mette

Can you help a Dane in distress?

Mette, February 13, 2007 - 6:36 am UTC

Hi Tom

I have the above mentioned table, and we still do not know how to make the table have no row chaining.

What have we missed?

regards
Mette
Tom Kyte
February 13, 2007 - 10:00 am UTC

if you did the alter table t move, I don't believe you have any chained rows.

Perhaps your process of measuring chained rows was flawed. Tell us precisely what you did.


(most important question I have is "so, why do you care"? have you identified this as being a problem that truly needs to be fixed or are you grasping at straws hoping for something, anything - magic - to give you a bit more speed)

more than 255 columns. not all rows are chained

VeredM, December 11, 2007 - 7:41 am UTC

Hi Tom,
We have a third party vendor table that has 324 columns.
One of the columns is a long.
the long is always null.
When we started dealing with the table, we had 6,000,000 records chained/migrated out of 7,000,000.
When we rebuilt the table using create table as select, we came down to 210,000 chained/migrated rows.
I could not find any specific characteristic of the migrated/chained rows.
1. Can I be sure that we are speaking about chained rows (and not migrated), since those records were inserted?
2. Is this intra-block chaining? When I took a few records as an example and calculated their length, it was about 700 bytes (we are speaking about an 8k block size).
3. The rows I took in my example had non-null values in columns beyond the 255th column, but I have found other rows which were the same and still not chained. So what can be the reason for the difference between those two cases?

thanks in advance,

Vered
Tom Kyte
December 11, 2007 - 9:28 am UTC

1) if after a create table as select you have chained rows, you can be sure they are "truly chained" and not migrated rows.

2) they could be on the same or across block chaining. The row pieces may or may not be on the same block.

3) not following you. "I have found other rows which were the same" - the same as what?

4) what release - looks like things changed between 9i and 10g with regards to how this works..

chained rows with more than 255 columns-continue

VeredM, December 17, 2007 - 5:21 am UTC

Hi,
2. In my example I was speaking about a row that is 700 bytes in size.
Created by insert as select.
Has more than 255 columns.
If I understand the way Oracle inserts records, then:
if the row size is less than the block size (8k), and Oracle chooses an empty block to insert the row into, then, if it is still chained, the chain will probably be within the same block.
Am I wrong?
3. By saying "I have found other rows which were the same", I meant I have found other rows which had non-null values in columns beyond the 255th column.
Still, those rows were not chained.
So how can it be that in some cases like this the row is chained and in others not?
4. The release we have is 10.2.0.3.0 - 64bit

Thanks in advance,

Vered


Tom Kyte
December 17, 2007 - 10:58 am UTC

see #4 above - things changed. It no longer reports rows as chained when all of the row pieces are contained in the same block (apparently; I did not dump blocks, but you could if you wanted to - it certainly appears that way)
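If you do want to verify this at the block level, a sketch (t, and the file/block numbers 4 and 130, are purely illustrative; the dump lands in your session's trace file):

```sql
-- Locate the file and block holding a row of interest.
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
  FROM t WHERE ROWNUM = 1;

-- Dump that block; in the trace file, "nrid" entries are the
-- next-rowid pointers linking one row piece to the next.
ALTER SYSTEM DUMP DATAFILE 4 BLOCK 130;
```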

Row chaining and migration

Arindam, September 19, 2010 - 10:38 am UTC

Hi Tom,

I tested this thing in my personal database which is in 9i. The db_block_size is 8kb in my database.
I created a table:
 create table test (id number primary key,a char(2000),b char(2000),c char(2000),d char(2000),e char(2000));


Observation 1) In that table, first of all, I inserted values for the ID column only:
ID=1, ID=2 and ID=3. I got 1, 2, 3 for row1, row2, row3 respectively when I full scanned the table.

Observation 2) Then for ID=3,2,1 I updated a,b,c column respectively (i.e. in reverse order). When I full scanned the table I got row3,row2,row1 from top to bottom in the order I updated along with their other values respectively. I knew that row2 and row1 were migrated, row3 is in the same block. But I couldn't find any increment of value in v$sysstat view with name='table fetch continued row'.

Observation 3) Then I also updated the d and e columns, but only for row3. Now, while doing
select * from test where id=3
I saw the v$sysstat value increase by 1 for name='table fetch continued row'.

Observation 4) While doing a full table scan I saw the v$sysstat value increase by a further 1 for name='table fetch continued row'.

Now, why for observation 2 was the v$sysstat value not incremented during the full table scan, even though my row1 and row2 were migrated?
To me observation 3 was as expected. But why did observation 4 increment the value by only 1, when I thought it should be incremented by 2?

Tom Kyte
September 20, 2010 - 2:23 pm UTC

during a full scan, you will NOT see any table fetch continued row. When full scanning - if we hit a migrated row head piece, we ignore it. We KNOW the real row will appear later - we read it then. That is why you got the rows in a different order - we hit the head pieces in order 1, 2, 3 - but we ignored them - and then we hit the real rows in order 3, 2, 1 and you saw them.


when you did id=3, you probably did an index unique scan, we read the index, got a rowid, that rowid pointed us to a row, we read that row, it was the head rowpiece which pointed us to another row and then we read that.



as for observation 4 - show your work, I want to see what you did step by step from the beginning -you know, just like I always do for you guys.

Rowmigration and Rowchaining

Arindam, September 21, 2010 - 6:11 am UTC

Hi Tom,

I am providing the screenshot with the observations I made:

Observation1) Table creation:
SQL> create table test (id number primary key,a char(2000),b char(2000),c char(2000),d char(2000),e
char(2000));
SQL> insert into test(id) values(1);
SQL> insert into test(id) values(2);
SQL> insert into test(id) values(3);
SQL> commit;
SQL> select id from test;

        ID
----------
         1
         2
         3

Observation2) Updating Column A,B,C for 3 rows reversely:
SQL> select name,value from v$sysstat where name='table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                91
SQL> update test set a='a3',b='b3',c='c3' where id=3;
SQL> update test set a='a2',b='b2',c='c2' where id=2;
SQL> update test set a='a1',b='b1',c='c1' where id=1;
SQL> commit;
SQL> select id from test;

        ID
----------
         3
         2
         1

So definitely Row migration for Row2 and Row1. Now Full table scan:
SQL> select id,a,b,c from test;
SQL> select name,value from v$sysstat where name='table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                94

I got the value incremented by 3. What is the reason?
Observation3) I updated row3 which is not migrated till now.
SQL> update test set d='d3',e='e3' where id=3;
SQL> commit;
SQL> select name,value from v$sysstat where name='table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                97

What is the reason behind the increase by 3?
Observation4) Full table scan:
SQL> select * from test;
SQL> select name,value from v$sysstat where name='table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                               102

Now why this?
Can you please explain?
Tom Kyte
September 21, 2010 - 4:01 pm UTC

one wonders why this example doesn't match what you wrote the first time at all.

first time you wrote (after updating)

.... Observation 2) Then for ID=3,2,1 I updated a,b,c column respectively (i.e. in reverse order). When I full scanned the table I got row3,row2,row1 from top to bottom in the order I updated along with their other values respectively. I knew that row2 and row1 were migrated, row3 is in the same block. But I couldn't find any increment of value in v$sysstat view with name='table fetch continued row'. ...

this time you write (for the "same" sequence of steps)

... I got the value incremented by 3. What is the reason? ...


interesting that, isn't it????

a 4k blocksize - ugh, you make it hard to demonstrate things with...

Now, here is a direct cut and paste from my 8k blocksize database - and this is what we would expect:

ops$tkyte%ORA11GR2> create table t (id number primary key,a char(2000),b char(2000),c char(2000),d char(2000),e char(2000));

Table created.

ops$tkyte%ORA11GR2> insert into t(id) values(1);

1 row created.

ops$tkyte%ORA11GR2> insert into t(id) values(2);

1 row created.

ops$tkyte%ORA11GR2> insert into t(id) values(3);

1 row created.

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> column a noprint
ops$tkyte%ORA11GR2> select id, a from t;

        ID
----------
         1
         2
         3

<b>that would be a full scan (selecting id, a would make it go to the table, not the index, you can verify with autotrace if you like).  The rows appear "in order".  Now, let's update:</b>

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> update t set a='a',b='b',c='c',d='d',e='e' where id=3;

1 row updated.

ops$tkyte%ORA11GR2> delete from chained_rows;

3 rows deleted.

ops$tkyte%ORA11GR2> analyze table t list chained rows;

Table analyzed.

ops$tkyte%ORA11GR2> select count(*) from chained_rows;

  COUNT(*)
----------
         1

<b>we can see and verify - row is chained...</b>

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> update t set a='a',b='b',c='c',d='d',e='e' where id=2;

1 row updated.

ops$tkyte%ORA11GR2> update t set a='a',b='b',c='c',d='d',e='e' where id=1;

1 row updated.

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> delete from chained_rows;

1 row deleted.

ops$tkyte%ORA11GR2> analyze table t list chained rows;

Table analyzed.

ops$tkyte%ORA11GR2> select count(*) from chained_rows;

  COUNT(*)
----------
         3

<b>and now all three are - note that we updated in reverse order...</b>

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> @mystat "table fetch continued row"
ops$tkyte%ORA11GR2> set echo off

NAME                        VALUE
---------------------- ----------
table fetch continued           0
row


ops$tkyte%ORA11GR2> column a noprint
ops$tkyte%ORA11GR2> select id, a from t;

        ID
----------
         3
         2
         1

ops$tkyte%ORA11GR2> @mystat2
ops$tkyte%ORA11GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
table fetch continued           0                0
row

<b>that is what we expect - to have zero - and you want to verify that we are doing a FULL SCAN (if we used an index - you would get the table fetch continued row...)</b>


ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select id, a from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     3 |  6045 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T    |     3 |  6045 |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)

ops$tkyte%ORA11GR2> set autotrace off
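
(The @mystat and @mystat2 scripts used above are small helper scripts that are not shown in this thread. The exact scripts are an assumption on my part, but the idea is just to snapshot a session statistic so the second script can report the delta. A sketch of @mystat:

set echo off
set verify off
column value new_val V
select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and lower(a.name) like '%' || lower('&1') || '%'
/
set echo on

@mystat2 then re-runs the same query and subtracts the saved &V value to produce the DIFF column.)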

Rowchaining & Rowmigration

Arindam, September 21, 2010 - 6:17 am UTC

Sorry, the DB_BLOCK_SIZE is 4KB in my database.

Jaruwan Na Ranong, June 05, 2012 - 4:05 pm UTC

Hi Tom,
I have a few questions I would like to clear up.
1. During an insert, will Oracle try to fit the row into a currently used block with available space first, or find a brand new empty block, to avoid a chained row?
2. Will a chained row ONLY happen when we insert a row?
3. Will a migrated row ONLY happen when we update a row?
4. If there is a high "db file sequential read" wait event during index creation, does that mean the table is fragmented, or something else? My understanding is that the wait event should be "db file scattered read".
5. Is it true that a full table scan will not be affected by chained or migrated rows?
6. Will only a full index scan be affected by chained or migrated rows?
7. Does the column dba_tables.chain_cnt reflect only chained rows, and not migrated rows?

Thank you in advance.
Tom Kyte
June 05, 2012 - 5:42 pm UTC

1) We will only chain a row during insert if the row is too big to fit on a block, or if it has more than 255 columns.

If a block on the free list does not have sufficient space for the row, but the row could fit on a block (and has 255 or fewer columns), we will not chain it across blocks -- we will simply place it on some other block with enough room.


2) False. There are three cases in which an update could chain a row:

a) the row is updated to be larger than a block.
b) the row is updated to be too large to fit on its current block (this is also known as a migrated row, but a migrated row IS a chained row).
c) the row was inserted with only the first 255 columns (or fewer) filled in and the rest NULL. If you then update a column beyond column 255 to be non-null, we will chain that row, since row chaining always happens when a row has more than 255 columns.
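
Case (c) is easy to demonstrate for yourself. A sketch -- the table name T300 is made up, and the PL/SQL block just generates a 300-column table:

declare
    l_ddl long := 'create table t300 ( c1 number';
begin
    for i in 2 .. 300 loop
        l_ddl := l_ddl || ', c' || i || ' number';
    end loop;
    execute immediate l_ddl || ' )';
end;
/
insert into t300 (c1) values (1);       -- only column 1 set: one short row piece
update t300 set c300 = 1 where c1 = 1;  -- a non-null value past column 255
commit;
analyze table t300 list chained rows;
select count(*) from chained_rows;      -- the row should now show up as chained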

3) Correct.

4) No. That just means we are reading blocks from disk in order to add them to the index structure -- or the index structure is so large that it does not fit in the cache (or the cache is busy with lots of other things and cannot hold the entire index), so we need to re-read portions of the index in order to add more rows to them.

5) Pretty much. In general, during a full scan we ignore the head piece of a migrated row and just wait until we hit the row itself later. In the case of a row in many pieces -- because it has more than 255 columns OR because it is larger than a block -- that could be a problem, depending on which columns you access.

6) A full index scan would not be affected at all. It is a TABLE ACCESS BY INDEX ROWID that might be. If we just full scanned an index, we wouldn't even touch the table, and it takes touching the table to see an impact.

7) False; a migrated row is just a special type of chained row.
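
One caveat on (7): dbms_stats does not populate chain_cnt -- only ANALYZE does. So to see the count (which includes migrated rows) for a table T, you would do something like:

analyze table t compute statistics;

select chain_cnt, num_rows
  from user_tables
 where table_name = 'T';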

Chain Count Percentage

djb, July 10, 2012 - 9:02 am UTC

Tom, all the queries I see to calculate chaining seem to rely on analyzing the table to get various statistics including the number of chained rows.
Suppose the tables aren't analyzed but instead the chained_rows table has been populated by 'list chained rows'. How would you construct a query using count(*) for each distinct table in chained_rows and num_rows from dba_tables to give a % chained ?

Thanks
Tom Kyte
July 11, 2012 - 5:00 pm UTC

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select x.owner_name, x.table_name, x.cnt, y.num_rows
  2    from (
  3  select owner_name, table_name, count(*) cnt
  4    from chained_rows
  5   group by owner_name, table_name
  6         ) x,
  7         dba_tables y
  8   where y.owner = x.owner_name
  9     and y.table_name= x.table_name
 10   order by x.owner_name, x.table_name
 11  /

no rows selected
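
To get the percentage, you can extend that query with a ratio column -- a sketch; num_rows is only as current as the last statistics gather, and nullif guards against division by zero:

select x.owner_name, x.table_name, x.cnt, y.num_rows,
       round( 100 * x.cnt / nullif( y.num_rows, 0 ), 2 ) pct_chained
  from ( select owner_name, table_name, count(*) cnt
           from chained_rows
          group by owner_name, table_name ) x,
       dba_tables y
 where y.owner      = x.owner_name
   and y.table_name = x.table_name
 order by x.owner_name, x.table_name
/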

Thanks

DJB, July 12, 2012 - 3:54 am UTC

Tom - most grateful.

Does a migrated row ever return to its original block?

Richard, May 20, 2016 - 3:16 pm UTC

If a row is migrated during an update, because it's been made too big to fit in the block, then will it return to the original block if it's made small enough to fit during a later update? Or, does the row remain forwarded for as long as it exists?
Chris Saxon
May 21, 2016 - 1:38 am UTC

Once it's moved, it won't move back.
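
If you want to eliminate the migrated rows, you can repack the table by rebuilding it. A sketch -- the table and index names here are placeholders, and note that MOVE changes rowids, so every index on the table must be rebuilt afterwards:

alter table t move;
alter index t_pk rebuild;
exec dbms_stats.gather_table_stats( user, 'T' );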

row chaining

Boris, November 11, 2018 - 5:28 am UTC

I am confused with the below statements made by Tom

Statement 1:

"A row CHAINS over multiple blocks when it is too big to fit on a block. If you have a table:

create table t ( x varchar2(4000) );

and you use a 2k block size
and you put 4000 characters in there

you will have a chained row -- that is not "fixable" by any means short of putting it into a database with a larger block size."

statement 2: Make sure the rows can fit on a single oracle block. If not, just drop the idea of fixing it, you'd need a bigger block.

Could you please explain the reason why this issue can't be resolved with the larger block size?
Chris Saxon
November 12, 2018 - 3:24 pm UTC

You can fix these by increasing the block size. If the column width is larger than the block size, then using larger blocks is the only way to avoid row chaining.
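
As a sketch of that fix -- the sizes, datafile path, and names here are assumptions, and a buffer cache for the non-default block size must exist before the tablespace can be created:

alter system set db_16k_cache_size = 64m;

create tablespace ts_16k
  datafile '/u01/oradata/ts_16k_01.dbf' size 100m
  blocksize 16k;

alter table t move tablespace ts_16k;
-- and rebuild the table's indexes, since MOVE changes rowids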

row chaining

Boris, November 14, 2018 - 12:50 am UTC

Thanks for the clarification Chris. Much appreciated.