Sagi
A reader, October 30, 2001 - 11:15 am UTC
Hi Tom,
I am also one of those who thank you for the valuable time and effort you put into answering all our questions.
In the explanation above, you said that the rows in T1 are migrated and the rows in T2 are chained after you executed the query using VSIZE. How can we, from seeing VSIZE values of 4002 or 10001 and 10002, say that they are migrated or chained? Is it because the VSIZE is constant in T1 whereas it varies in T2?
Please explain.
Thank you.
October 30, 2001 - 1:16 pm UTC
sorry -- i should have been more clear.
I'm using an 8k block size. I KNOW a row that is 4002 bytes fits in an 8k block. I KNOW Oracle will not chain a row that can fit on a block; it will migrate it instead. Hence, these rows that could fit on a block are migrated.
10001 is bigger than a block, hence I KNOW the row is chained, not migrated.
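A hedged sketch of that check (assuming the T1/T2 tables from the original answer, with their large column named Y, and an 8k block size -- adjust the names to your own case):

select vsize(y),
       case when vsize(y) < 8000   -- roughly one 8k block less overhead
            then 'fits in a block => migrated at most'
            else 'bigger than a block => truly chained'
       end as diagnosis
  from t1;

-- and the block size itself, if you are not sure:
select value from v$parameter where name = 'db_block_size';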
100% migrated... how is this possible
rob, November 13, 2003 - 1:47 pm UTC
Tom,
I came across a table whose recent statistics show 100% chaining. AVG_ROW_LEN is 1200, with an 8k block size and a segment PCTFREE of 10. Statistically, wouldn't at least a single row somewhere be in a block without a pointer?
I guess it would have been possible that, with a small PCTFREE (10%), an update that affected every row could cause every row in the table to migrate? Does that make sense?
Part of this is the loader's fault, I think. The loader inserts a pk and then updates 20 columns in subsequent statements. Since loading is a serial row-by-row process (not multi-threaded; another row isn't loaded until the last one finished), I can see how a large percentage of a block would be migrated by this practice, but the first 4 or 5 rows in the block should have gone in without migration. Right?
This leads me back to the fact that it was probably an update, but that single-column update would have had to set a column to a length of at least 10% of the block size, since 10% should have been free in the block anyway, and I don't think that was the case.
November 13, 2003 - 2:33 pm UTC
well, they can all be migrated (simple test case below).
it is well within the realm of possibility that they would all migrate. that "loader" is really bad.
ops$tkyte@ORA920LAP> create table t ( x int, y varchar2(4000) ) tablespace manual pctfree 10;
Table created.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> insert into t select rownum, null from all_objects where rownum <= 660;
660 rows created.
ops$tkyte@ORA920LAP> select dbms_rowid.rowid_block_number( rowid ), count(*) from t
2 group by
3 dbms_rowid.rowid_block_number( rowid )
4 /
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) COUNT(*)
------------------------------------ ----------
98 660
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> analyze table t compute statistics;
Table analyzed.
ops$tkyte@ORA920LAP> select chain_cnt, avg_row_len from user_tables where table_name = 'T';
CHAIN_CNT AVG_ROW_LEN
---------- -----------
0 7
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> update t set y = rpad( '*', 1200, '*' );
660 rows updated.
ops$tkyte@ORA920LAP> analyze table t compute statistics;
Table analyzed.
ops$tkyte@ORA920LAP> select chain_cnt, avg_row_len from user_tables where table_name = 'T';
CHAIN_CNT AVG_ROW_LEN
---------- -----------
660 1216
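A follow-up query (a sketch against the same T) makes the migration visible: the rowids still point at the original block 98, because a migrated row leaves a forwarding address behind, yet the segment has grown well beyond one block:

select dbms_rowid.rowid_block_number( rowid ), count(*)
  from t
 group by dbms_rowid.rowid_block_number( rowid );
-- still reports the single original block, while...

select blocks from user_tables where table_name = 'T';
-- ...shows the table now occupies many more blocks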
Pls. Explain
A reader, November 13, 2003 - 5:08 pm UTC
Tom,
Assume the block size is 8k and PCTFREE is 10%, so the free space available for rows is 8k - (8k * 10%) - block overhead, which should be around 7300 bytes.
Initially, the total bytes occupied after the insert is 660 * 7 (avg_row_len) = 4620 bytes.
So around 2500 bytes should still be available after the insert.
The update sets column Y to 1200 bytes, so the remaining space after the update of the first row is 2500 - 1200 = 1300. The second row will then occupy another 1200 bytes, which leaves only around 100 bytes in the first block. In this way, two rows can live in the same block, so the chain count should only be 330.
Can you please explain how the chain_cnt became 660 after the update, i.e. how all the rows migrated?
November 13, 2003 - 9:23 pm UTC
there is overhead for each row -- you are looking at avg_row_len, and that is NOT all that is there. There is an "are we null or not null" byte, for example, and other stuff.
you do not have that free space (else row 661 would fit on that block -- if you insert 661 rows, you'll find 2 blocks are needed)
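That is easy to verify with a sketch against the same T:

truncate table t;
insert into t select rownum, null from all_objects where rownum <= 661;

select count( distinct dbms_rowid.rowid_block_number( rowid ) ) blocks_used
  from t;
-- expect 2: row 661 spills into a second block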
Chained rows stats in user_tab_partitions
Bogy, January 26, 2004 - 7:23 am UTC
Hi Tom,
I am a bit confused about information I get from view user_tab_partitions in columns:
EmptyBlocks, AvgSpace and ChainCount.
The story is as follows:
I created a partitioned table (named my_part_table) and inserted around 500,000 records into one of its partitions, with an AvgRowLen of 1100 on an 8k block size. I did some additional data manipulation in partition p_11, and after analyzing the table I got the following for partition p_11:
EmptyBlocks: 63
AvgSpace : 1157
ChainCount : 19446
After this I did following:
CREATE TABLE t_1 as
select *
from my_part_table partition (p_11);
alter table my_part_table truncate partition p_11 drop storage update global indexes;
Then I analyzed with the dbms_stats package: compute statistics for partition p_11, for all columns.
After this I have no rows in partition p_11, but the statistics still show for this partition:
EmptyBlocks: 63
AvgSpace : 1157
ChainCount : 19446
I do not understand how I can still have chained rows in this partition.
If I execute:
analyze table my_part_table partition (p_11) list chained rows into chained_rows
I got no chained rows.
Could you please shed some light on this?
Thanks a lot!
Bogy
January 26, 2004 - 9:05 am UTC
those statistics are not collected by dbms_stats as they are not used by the CBO to develop plans.
they are only gathered by ANALYZE.
You can see this with a regular table as well:
ops$tkyte@ORA920PC> create table t ( x int, y char(2000) );
Table created.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into t (x) select rownum from all_users;
42 rows created.
ops$tkyte@ORA920PC> commit;
Commit complete.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> update t set y = 'x';
42 rows updated.
ops$tkyte@ORA920PC> commit;
Commit complete.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> analyze table t compute statistics;
Table analyzed.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> select num_rows, empty_blocks, avg_space, chain_cnt from user_tables where table_name = 'T';
NUM_ROWS EMPTY_BLOCKS AVG_SPACE CHAIN_CNT
---------- ------------ ---------- ----------
42 4 3768 39
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> truncate table t;
Table truncated.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA920PC> select num_rows, empty_blocks, avg_space, chain_cnt from user_tables where table_name = 'T';
NUM_ROWS EMPTY_BLOCKS AVG_SPACE CHAIN_CNT
---------- ------------ ---------- ----------
0 4 3768 39
ops$tkyte@ORA920PC> analyze table t compute statistics;
Table analyzed.
ops$tkyte@ORA920PC> select num_rows, empty_blocks, avg_space, chain_cnt from user_tables where table_name = 'T';
NUM_ROWS EMPTY_BLOCKS AVG_SPACE CHAIN_CNT
---------- ------------ ---------- ----------
0 8 0 0
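So, applied to the partitioned table above, a hedged way to get rid of the stale ANALYZE-only numbers while keeping DBMS_STATS for the optimizer would be (object names taken from the question; verify on your own version first):

analyze table my_part_table partition (p_11) delete statistics;
exec dbms_stats.gather_table_stats( user, 'MY_PART_TABLE', partname => 'P_11' )
-- chain_cnt / empty_blocks / avg_space for p_11 no longer show stale values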
Gathering Statistics of Chained Rows
Sergey Isenko, June 16, 2004 - 7:10 am UTC
Dear Tom!
I'd like to ask whether there are any other ways to populate chained-row statistics
into DBA_TABLES.
As I understand it, if you want to count/rebuild chained rows, you have to collect them with the ANALYZE command into your CHAINED_ROWS table.
And if you only need to estimate a trend of chained rows, it would be better
to use a global repository for it (I have DBA_TABLES in mind), IMHO. Am I right?
Well, I can certainly use V$SYSSTAT, but it keeps data only while the instance is up.
But I couldn't find any procedure to do this at the schema level; you see,
ANALYZE TABLE ... LIST CHAINED ROWS INTO CHAINED_ROWS doesn't populate
these statistics into DBA_TABLES.
I can do it only with ANALYZE TABLE COMPUTE/ESTIMATE STATISTICS.
Furthermore:
>>>
Oracle Corporation strongly recommends that you use the DBMS_STATS package rather
than ANALYZE to collect optimizer statistics. That package lets you collect
statistics in parallel, collect global statistics for partitioned objects,
and fine tune your statistics collection in other ways.
Further, the cost-based optimizer will eventually use only statistics that have been
collected by DBMS_STATS.
However, you must use the ANALYZE statement rather than DBMS_STATS for
statistics collection not related to the cost-based optimizer, such as:
* To use the VALIDATE or LIST CHAINED ROWS clauses
* To collect information on freelist blocks
>>>
Does this mean I should collect this kind of statistic with my own procedure/script,
using the ANALYZE command?
Does Oracle plan to include a procedure in the DBMS_STATS package in future releases
to simplify this process?
Please explain.
Thanks In Advance.
This is a little testcase for clarification:
SQL>
SQL> Analyze Table VIP.MIKE_CDA Delete Statistics;
Table analyzed.
SQL> select table_name, chain_cnt, num_rows,
2 to_char(last_analyzed,'dd-MM hh24:mi:ss') LAST_ANAL
3 from dba_tables where owner = 'VIP' and table_name = 'MIKE_CDA';
TABLE_NAME CHAIN_CNT NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------- --------------
MIKE_CDA
SQL>
SQL> Analyze Table VIP.MIKE_CDA Compute Statistics;
Table analyzed.
SQL> select table_name, chain_cnt, num_rows,
2 to_char(last_analyzed,'dd-MM hh24:mi:ss') LAST_ANAL
3 from dba_tables where owner = 'VIP' and table_name = 'MIKE_CDA';
TABLE_NAME CHAIN_CNT NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------- --------------
MIKE_CDA 2009 13394 15-06 13:44:12
SQL>
SQL> Analyze Table VIP.MIKE_CDA Delete Statistics;
Table analyzed.
SQL> exec dbms_stats.gather_table_stats (ownname => 'VIP', tabname => 'MIKE_CDA');
PL/SQL procedure successfully completed.
SQL> select table_name, chain_cnt, num_rows,
2 to_char(last_analyzed,'dd-MM hh24:mi:ss') LAST_ANAL
3 from dba_tables where owner = 'VIP' and table_name = 'MIKE_CDA';
TABLE_NAME CHAIN_CNT NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------- --------------
MIKE_CDA 0 13394 15-06 13:44:13
SQL>
SQL> Analyze Table VIP.MIKE_CDA Delete Statistics;
Table analyzed.
SQL> truncate table chained_rows;
Table truncated.
SQL> Analyze Table VIP.MIKE_CDA List Chained Rows into CHAINED_ROWS;
Table analyzed.
SQL> select table_name, chain_cnt, num_rows,
2 to_char(last_analyzed,'dd-MM hh24:mi:ss') LAST_ANAL
3 from dba_tables where owner = 'VIP' and table_name = 'MIKE_CDA';
TABLE_NAME CHAIN_CNT NUM_ROWS LAST_ANAL
------------------------------ ---------- ---------- --------------
MIKE_CDA <--- !!!!
SQL> select owner_name, table_name, count(*) from chained_rows
2 group by owner_name, table_name order by count(*) desc;
OWNER_NAME TABLE_NAME COUNT(*)
------------------------------ ------------------------------ ----------
VIP MIKE_CDA 2009
June 16, 2004 - 12:47 pm UTC
the question is: are "table fetch continued row" operations a problem on your system?
the only way to get this information is (and will be) ANALYZE.
but unless a very large percentage of your row fetches incur a table fetch continued row -- you don't even need to begin looking for them.
and if they are a problem, you analyze the individual tables and fix it, perhaps by reorging them with a better pctfree/pctused setting.
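There is no supplied schema-level procedure for this; if you need one, a small PL/SQL loop is a reasonable sketch (beware: ANALYZE ... COMPUTE also overwrites optimizer statistics, so you may want to re-gather with DBMS_STATS afterwards):

begin
    for t in ( select table_name from user_tables ) loop
        execute immediate
            'analyze table "' || t.table_name || '" compute statistics';
    end loop;
end;
/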
confused
SZB, November 28, 2006 - 3:28 pm UTC
Apps owns T1. Why does LIST CHAINED ROWS not work?
It's a 9iR2 database.
apps> analyze table t1 compute statistics
2 /
Table analyzed.
apps> analyze table t1 list chained rows;
analyze table t1 list chained rows
*
ERROR at line 1:
ORA-01495: specified chain row table not found
apps> analyze table t1 list chained rows into
2 chained_rows;
analyze table t1 list chained rows into
*
ERROR at line 1:
ORA-01495: specified chain row table not found
apps>
November 28, 2006 - 8:03 pm UTC
well, umm, do you have that table created???
the one called chained_rows that you refer to
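For reference, the CHAINED_ROWS table is created by a script shipped with the database, after which the ANALYZE works:

@?/rdbms/admin/utlchain.sql

analyze table t1 list chained rows into chained_rows;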
I got it.
SZB, November 28, 2006 - 4:14 pm UTC
Can we have chained rows at database level?
Star Nirav, December 11, 2006 - 6:35 am UTC
Dear Tom,
Can we have chained rows at database level....? I am running the following query....
SELECT (select value from v$sysstat where name = 'table fetch continued row') / ((select value from v$sysstat where name = 'table scan rows gotten') + (select value from v$sysstat where name = 'table fetch by rowid'))
2 *100 FROM DUAL;
Value
-------
.294952668
What we believe is that chained rows are always at the table level. But some tools are reporting them w.r.t. the database level.
Pls. suggest at the earliest.
Regards,
Star Nirav
December 11, 2006 - 8:14 am UTC
define "database level", what does that mean to you.
I don't know why you did your computation the way you did.
table scan rows gotten will not be affected by MIGRATED rows.
table fetch by rowid - will.
so it does not really mean anything to use the table scan rows gotten here, as it avoids the migrated row issue altogether.
You are not looking at something at "the database level". You are looking at instance statistics and computing a ratio. The statistic "table fetch continued row" is an aggregate number of times we did that activity - across all sessions, across all tables. Not sure what you mean by "at database level" at all in this context.
A chained or migrated row is a "table thing", indeed. You are looking at aggregate numbers across all tables.
you should really avoid unnecessary acronyms/abbreviations - makes communication hard. Plug things like pls and wrt into www.google.com...
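If you want to see who is doing the continued-row fetches, a sketch that breaks the aggregate down per session (standard v$ views):

select s.sid, s.username, st.value
  from v$sesstat st, v$statname n, v$session s
 where st.statistic# = n.statistic#
   and st.sid = s.sid
   and n.name = 'table fetch continued row'
   and st.value > 0
 order by st.value desc;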
HPOV tool is giving output with this query...
Star Nirav, December 14, 2006 - 11:17 am UTC
Our HPOV tool is giving this information.
The customer wants justification for whether it should be at the database level or the table/index level...
Regards,
Star Nirav
December 15, 2006 - 8:24 am UTC
what the heck is an HPOV tool.
I know I've mentioned to you specifically more than once that acronyms just do not make a good conversation tool and should be avoided.
I can assure you it is NOT an index thing, as - well, only rows in tables are chained or migrated - think about it.....
Nikhil, March 02, 2007 - 10:24 am UTC
Hello Tom,
In our testing environment I have observed that many tables have a lot of chained rows. Could the presence of chained rows be the reason for high buffer reads and disk reads? Even though bind variables are used, parsing is also very high. Instead of tweaking the shared pool, should I try to move the table using ALTER TABLE T1 MOVE? I want to stay within the same tablespace.
Thanks
Anil
March 04, 2007 - 6:13 pm UTC
the presence of chained rows may or may not be the cause of high logical and physical IO.
First, you need to ascertain whether these rows are MIGRATED or TRULY CHAINED.
A truly chained row will not become unchained by moving. A truly chained row is a row that exceeds your block size or has more than 255 columns. Moving the table would do nothing.
A migrated row, on the other hand, can cause an additional IO on a rowid (index) access to the row. Migrated rows can be "fixed".
So first - determine if you are truly chained or migrated. Second, determine if this is a "problem" (do you have lots of table fetch continued rows???? a system statistic).
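Checking that system statistic is a one-liner:

select name, value
  from v$sysstat
 where name = 'table fetch continued row';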
Now, the talk about parsing is another topic altogether - it has nothing to do with this. Parsing is what Oracle does when a programmer tells us to, the presence or lack of bind variables notwithstanding. If you say "PARSE THIS", we do. The only way - repeat, the ONLY WAY - to reduce parsing is to fix the application that is telling us "parse parse parse parse parse..."
With bind variables, at least they are the hugely expensive soft parse - not the horrendously expensive hard parse. Using session cached cursors (init.ora parameter) could turn the hugely expensive soft parse into a really expensive softer parse - but that is about it.
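For completeness, a sketch of enabling that at the session level (the parameter can also be set instance-wide in the init.ora; 100 is purely an illustrative value):

alter session set session_cached_cursors = 100;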
Thanks
Nikhil, March 05, 2007 - 4:52 am UTC
Hello Tom,
Thanks for the reply. For migrated rows, would moving the table solve the problem?
Thanks
March 05, 2007 - 12:56 pm UTC
I'm asking you to
a) confirm you have a problem
b) confirm you have a solvable problem
yes, moving the table would un-migrate rows - but if you are not suffering from a migrated row problem OR the migrated rows are infrequently accessed - you are done already.
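If you do confirm a migrated-row problem on frequently accessed rows, the fix described above looks roughly like this sketch (T and T_PK are placeholder names; remember a move leaves indexes unusable):

alter table t move pctfree 20;       -- rebuild the segment with more free space
alter index t_pk rebuild;            -- indexes are invalidated by the move
analyze table t compute statistics;
select chain_cnt from user_tables where table_name = 'T';  -- should have dropped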
Thanks
Kashif, August 21, 2008 - 12:42 pm UTC
Hi Tom,
You said that only truly chained rows cannot be fixed (truly chained = row size > block size, or number of columns > 255).
Have a look at my example.
We have a big 700GB partitioned table. For simplicity I'll take only one partition as an example.
ANALYZE TABLE P LIST CHAINED ROWS INTO CHAINED_ROWS;

SELECT count(head_rowid) row_count
FROM chained_rows
WHERE table_name = 'P' and partition_name = 'PP'
GROUP BY table_name, partition_name;

ROW_COUNT
----------
441754
Then I used the MOVE command to get rid of the chained rows,
e.g.
Alter table P move Partition pp;
ANALYZE TABLE P LIST CHAINED ROWS INTO CHAINED_ROWS;
SELECT count(head_rowid) row_count
FROM chained_rows
WHERE table_name = 'P' and partition_name = 'PP'
GROUP BY table_name, partition_name;

ROW_COUNT
----------
28845 (nearly 412,909 rows fixed)
The maximum row size in this table is:

SELECT 20 * round( (nvl(vsize(NO_AUGMENTED_CARE_PERIODS),0)+1 + ... ) ) row_size
FROM P GROUP BY row_size ORDER BY 1;

Row_size_in_bytes    Number of rows
1760                 5

So 1760 < block size, i.e. 16k.
Then I created another test table selecting all rows from this partition
Create table PT as SELECT * FROM P PARTITION (PP);
ANALYZE TABLE PT LIST CHAINED ROWS INTO CHAINED_ROWS;

ROW_COUNT
----------
28838 (a further 7 rows were fixed)
Create table ptemp as
Select * from PT
Where rowid in (SELECT head_rowid FROM chained_rows WHERE table_name = 'PT');
ANALYZE TABLE ptemp LIST CHAINED ROWS INTO CHAINED_ROWS;
ROW_COUNT
----------
143 (After CTAS 28695 rows were fixed)
Delete from PT
Where rowid in (SELECT head_rowid from chained_rows where table_name = 'PT');
commit;
Insert into PT select * FROM ptemp;
Commit;
ANALYZE TABLE PT LIST CHAINED ROWS INTO CHAINED_ROWS;
ROW_COUNT
----------
1486 (these rows still remain)
My questions are: if it was not true row chaining, then why didn't the first MOVE command fix the issue? And if it was true row chaining, why did the subsequent CTAS/delete/insert commands fix it? Is it a bug?
Thanks
Chirag, April 22, 2010 - 2:08 pm UTC
Hi Tom,
Thanks for the details on how to reduce the migration of rows.
I have a table in my schema and I ran the steps you mentioned, but every time I run the ANALYZE, I still find records in the CHAINED_ROWS table. So I carry out the same process again, and each time the number drops by a small amount. Is this normal?
And how long should I continue doing this??
Thanks for all the help,
Chirag
April 22, 2010 - 2:17 pm UTC
what process did you run - are you sure you don't have a "T2" situation? See my original answer...
A reader, September 26, 2016 - 10:08 am UTC
Does row chaining on a table with more than 300 rows affect the performance of the database or not?
If yes, how can we determine it?
Can we see statistics regarding row chaining?
September 27, 2016 - 12:10 am UTC
Keep an eye on the "table fetch continued row" statistic - this reflects the number of times you had to "follow" a row throughout the chain. If it's low, then you probably don't need to be concerned.
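A sketch for putting that number in context (per the earlier discussion, compare it against rowid fetches, since full scans are not affected by migrated rows):

select (select value from v$sysstat where name = 'table fetch continued row') /
       nullif( (select value from v$sysstat where name = 'table fetch by rowid'), 0 ) * 100
       as pct_continued
  from dual;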