High "table fetch continued row" delta when using a view

Question and Answer

Chris Saxon

Thanks for the question, Myron.

Asked: March 03, 2016 - 10:55 pm UTC

Last updated: March 08, 2016 - 1:30 am UTC

Version: 11.2.0.4

Viewed 1000+ times

You Asked

I have noticed in our warehouse database that we are getting a large number of "table fetch continued row" counts. I've checked our tables for migrated/chained rows and didn't find any. What I discovered was that when I select from a particular view, the "table fetch continued row" count climbs very fast, but if I select from each individual table used by the view, the count doesn't change. The purpose of the view is to flatten the model so the user doesn't have to know how to do the joins. The view joins 24 tables and returns 151 columns, with an average data size of 787 bytes per row and a max row size of 838 bytes. The plan for the view does all nested loop joins on the 24 tables. There is no GROUP BY or ORDER BY in the statement in the view. The query is a simple "select * from view".

Why does the "table fetch continued row" count climb fast when using a view when none of the underlying tables have migrated rows?

and Connor said...

What also matters is how you *access* the data.

If I do a full scan on a table with migrated rows, it might not be necessary to follow the chain, e.g. (statistic# 697 is "table fetch continued row" in 12c):


SQL> drop table t1 purge;

Table dropped.

SQL>
SQL> create table T1 ( id int, x varchar2(80), y int);

Table created.

SQL>
SQL> insert into t1
  2  select rownum, 'x', rownum
  3  from dual
  4  connect by level <= 1000;

1000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> update t1
  2  set x = x || 'lots and lots of extra data to force migration';

1000 rows updated.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select * from v$mystat where statistic# = 697;

       SID STATISTIC#      VALUE     CON_ID
---------- ---------- ---------- ----------
        17        697          0          0

1 row selected.

SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
      1000

1 row selected.

SQL>
SQL> select * from v$mystat where statistic# = 697;

       SID STATISTIC#      VALUE     CON_ID
---------- ---------- ---------- ----------
        17        697          0          0

1 row selected.

SQL>
SQL> select * from t1
  2  where y = 678;

        ID X
---------- --------------------------------------------------------------------------------
         Y
----------
       678 xlots and lots of extra data to force migration
       678


1 row selected.

SQL>
SQL> select * from v$mystat where statistic# = 697;

       SID STATISTIC#      VALUE     CON_ID
---------- ---------- ---------- ----------
        17        697          0          0

1 row selected.

SQL>
SQL> analyze table t1 compute statistics;

Table analyzed.

SQL>
SQL> select chain_cnt from user_tables where table_name = 'T1';

 CHAIN_CNT
----------
       876

1 row selected.

SQL>
SQL> create index T1_IX on T1 ( id ) ;

Index created.

SQL>
SQL> select y from t1
  2  where id = 456;

         Y
----------
       456

1 row selected.

SQL>
SQL> select * from v$mystat where statistic# = 697;

       SID STATISTIC#      VALUE     CON_ID
---------- ---------- ---------- ----------
        17        697          1          0

1 row selected.

SQL>


In the full scan, I knew that eventually I'd hit the migrated rows anyway, whereas via the index, I had to follow the chain.
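Note that the statistic# for "table fetch continued row" differs between Oracle versions, so rather than hard-coding 697 it is safer to look the statistic up by name. A minimal sketch, joining the standard V$STATNAME and V$MYSTAT views:

```sql
-- Look up "table fetch continued row" for the current session by name,
-- so the query works regardless of the version-specific statistic#.
select s.name, m.value
from   v$statname s
       join v$mystat m
         on m.statistic# = s.statistic#
where  s.name = 'table fetch continued row';
```

Running this before and after the test query gives the delta without having to know the statistic number in advance.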


Comments

High "table fetch continued row" delta when using a view

Myron Higerd, March 07, 2016 - 3:22 pm UTC

Thanks for your reply and you were correct.

In my testing, I knew I had to get it to use indexes to see the statistic climb, and I made sure on the first table I looked at that the plan was using an index. Unfortunately, that table didn't have any migrated rows, and when I cloned the process to look at other tables I made a mistake and the plan was a full table scan, which didn't reveal the problem. I also didn't use ANALYZE TABLE because of the size of the database.
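As an aside, if full ANALYZE runs are too expensive on a large database, individual suspect tables can still be checked with ANALYZE ... LIST CHAINED ROWS, which only records the rowids of chained/migrated rows rather than recomputing statistics. A sketch, assuming the CHAINED_ROWS table has been created with the standard $ORACLE_HOME/rdbms/admin/utlchain.sql script:

```sql
-- Record the rowids of chained/migrated rows in T1 without
-- touching the table's optimizer statistics.
analyze table t1 list chained rows into chained_rows;

-- Inspect the offending rows.
select owner_name, table_name, head_rowid
from   chained_rows
where  table_name = 'T1';
```

This makes it practical to check the 24 tables behind the view one at a time.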
Chris Saxon
March 08, 2016 - 1:30 am UTC

Glad you got to the bottom of it.