Question and Answer

Chris Saxon

Thanks for the question, Bap.

Asked: November 12, 2020 - 8:39 am UTC

Last updated: November 17, 2020 - 11:23 am UTC

Version: 11.2.0.4

Viewed 1000+ times

You Asked

We are using Oracle version 11.2.0.4. We have a table that already has ~270 columns, and we just got a request from the dev team to add a couple more. But considering the row chaining that occurs beyond ~255 columns, we are asking the team not to add the new columns but instead to plan on dropping existing columns so that the total number of columns stays within ~255.

But the dev team is asking whether we can show some evidence of the overhead we are currently bearing from those additional columns in that table. I was trying to see if we can somehow relate the statistic "table fetch continued row" to database time.

I do see in dba_hist_sysstat that ~400 million "table fetch continued row" events are recorded per hourly AWR snapshot. But I am not sure how that can be converted into the amount of DB time it contributes. Is there any way to do this? Also, I am not able to associate this statistic with any sql_id, so I wanted to know if there is an AWR/ASH view which stores "table fetch continued row" for specific sql_ids.

Secondly, I manually ran a query which in reality gets executed millions of times per day from the application. But when I fetched the "table fetch continued row" statistic from v$sesstat for that session, I saw a value of "0". So it means that, at least at the current stage, this table is not suffering from row chaining. But how can we make sure that after adding a couple more columns we will still be safe and won't suffer from row chaining?


The table is range partitioned on column CRT_DT, holds ~1 TB of data with NUM_ROWS = 1,400,752,800 and AVG_ROW_LEN of 236 at the global level, and spans ~73 partitions.


SELECT * FROM TRAN_TAB WHERE ID = :B2 AND CRT_DT = to_date(:B1 ,'MM/DD/YYYY HH24:MI:SS');

Plan hash value: 2186597613

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                    |       |       |     3 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE            |                    |     1 |   265 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| TRAN_TAB           |     1 |   265 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    INDEX UNIQUE SCAN               | TRAN_TAB_UK        |     1 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------------------------




I see a few blogs describing issues with columns beyond ~255, but I am still trying to relate this to our database: whether we are going to be affected by such issues in the future (if we add N more columns), or whether we are already being impacted without knowing it.

https://jonathanlewis.wordpress.com/2018/02/28/255-columns-3/

and Chris said...

I was trying to see if we can somehow relate the statistic "table fetch continued row" to database time.

I'm not aware of a way to tie this specific stat to database time. You could assess the impact on specific queries on tables with many chained rows by running:

- select first_col
- select last_col

And comparing the difference.
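For example (a sketch: COL_1 and COL_270 are hypothetical stand-ins for the table's first and last columns), you can compare the work the two queries do:

```sql
-- COL_1 / COL_270 are placeholders for TRAN_TAB's first and last columns
SET AUTOTRACE TRACEONLY STATISTICS

-- An early column lives in the first row piece
SELECT col_1
FROM   tran_tab
WHERE  id = :b2 AND crt_dt = :b1;

-- The last column may force Oracle to walk every row piece;
-- compare the consistent gets, and check "table fetch continued row"
-- in v$mystat before and after each query
SELECT col_270
FROM   tran_tab
WHERE  id = :b2 AND crt_dt = :b1;
```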

~400 million "table fetch continued row" events are recorded per hourly AWR snapshot

While that does sound high, how does it compare to other performance metrics such as consistent gets?
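To put the number in context, you could pull both statistics from the same AWR snapshots, along these lines (the column names are as documented for dba_hist_sysstat; values are cumulative since instance startup, so diff consecutive snapshots for per-interval figures):

```sql
-- Compare "table fetch continued row" with consistent gets per snapshot
SELECT sn.snap_id,
       MAX(CASE WHEN st.stat_name = 'table fetch continued row'
                THEN st.value END) AS tbl_fetch_cont_row,
       MAX(CASE WHEN st.stat_name = 'consistent gets'
                THEN st.value END) AS consistent_gets
FROM   dba_hist_sysstat  st
JOIN   dba_hist_snapshot sn
ON     sn.snap_id         = st.snap_id
AND    sn.dbid            = st.dbid
AND    sn.instance_number = st.instance_number
WHERE  st.stat_name IN ('table fetch continued row', 'consistent gets')
GROUP  BY sn.snap_id
ORDER  BY sn.snap_id;
```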

Also, I am not able to associate this statistic with any sql_id, so I wanted to know if there is an AWR/ASH view which stores "table fetch continued row" for specific sql_ids.

Again, I don't know of a way to tie this stat to the execution of a specific SQL statement, sorry.

But how can we make sure that after adding a couple more columns we will still be safe and won't suffer from row chaining?

Given that you already have 270 columns in the table and you're seeing lots of "table fetch continued row" activity, you're already suffering from this problem!

I don't see the benefit of blocking requests to add more columns at this point.

Particularly as the actions you could take to avoid it are potentially lots of work:

Ensure queries only select values from position (last column - 255) onward when necessary

Assuming you're following good programming practice and only selecting the columns you need, this may make little/no difference.

If this is not the case, changing the queries to only access the columns needed is worth doing regardless of any row chaining issues:

- It may enable the optimizer to use covering indexes
- It reduces the volume of data transferred over the network
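For instance (a sketch: this assumes the application only needs columns that already sit in the unique key), the optimizer can then skip the table access entirely:

```sql
-- If the query only references columns in TRAN_TAB_UK (here assumed
-- to be ID and CRT_DT), the index alone can answer it - the chained
-- row pieces in the table are never read
SELECT id, crt_dt
FROM   tran_tab
WHERE  id     = :b2
AND    crt_dt = to_date(:b1, 'MM/DD/YYYY HH24:MI:SS');
-- Plan should show only the INDEX UNIQUE SCAN,
-- with no TABLE ACCESS BY LOCAL INDEX ROWID step
```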

Drop unnecessary columns so you get under the 255 column limit

This avoids the problem, but it's only possible if you have deprecated columns. If there are any, there may be lots of development effort needed to remove them from the code. Plus there's the risk that it turns out you need to keep the columns for some reason (e.g. auditing/reporting) after you've dropped them!
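One way to hunt for candidates (a sketch; it assumes optimizer statistics are reasonably current, and YOUR_SCHEMA is a placeholder owner):

```sql
-- Columns that are NULL in every row according to the stats are
-- possible deprecation candidates - verify with the dev team
-- before dropping anything!
SELECT c.column_name, c.num_nulls, c.num_distinct
FROM   dba_tab_columns c
JOIN   dba_tables      t
ON     t.owner      = c.owner
AND    t.table_name = c.table_name
WHERE  c.owner      = 'YOUR_SCHEMA'
AND    c.table_name = 'TRAN_TAB'
AND    c.num_nulls  = t.num_rows;
```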

Split the table into two, each with < 255 columns

This means you'll need to join the two tables to reconstruct the full row. That join is likely to be slower than any overhead you have from row chaining. Plus it adds lots of development work to support the change.
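A sketch of what the split would look like (RARE_COL1/RARE_COL2 are hypothetical names for rarely used columns you'd move out):

```sql
-- Keep hot columns in TRAN_TAB; move cold ones to a 1:1 overflow table
CREATE TABLE tran_tab_overflow (
  id        NUMBER NOT NULL,
  crt_dt    DATE   NOT NULL,
  rare_col1 VARCHAR2(100),
  rare_col2 NUMBER,
  CONSTRAINT tran_tab_overflow_pk PRIMARY KEY (id, crt_dt)
);

-- Reconstructing the full row now costs a join every time:
SELECT t.*, o.rare_col1, o.rare_col2
FROM   tran_tab          t
JOIN   tran_tab_overflow o
ON     o.id     = t.id
AND    o.crt_dt = t.crt_dt
WHERE  t.id     = :b2
AND    t.crt_dt = to_date(:b1, 'MM/DD/YYYY HH24:MI:SS');
```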

If there are performance issues in the application, investigate what these are. If it turns out that row chaining is a significant cause of these, then start looking into ways to avoid it.

It's likely there are other changes you can make which are easier and give bigger performance gains.

Also note - just because a table has > 255 columns doesn't automatically mean you have chained rows. And if you do, it's the last 255 columns of the row that are chained, so you can hit this much earlier than you'd expect.
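You can count the chained rows directly; for example (the CHAINED_ROWS table is created by $ORACLE_HOME/rdbms/admin/utlchain.sql, and P1 is a placeholder partition name - on a 1 TB table you'll want to analyze one partition at a time):

```sql
-- LIST CHAINED ROWS does not overwrite the optimizer statistics
ANALYZE TABLE tran_tab PARTITION (p1)
  LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*)
FROM   chained_rows
WHERE  table_name = 'TRAN_TAB';
```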

https://asktom.oracle.com/pls/apex/asktom.search?tag=table-255-columns-row-chaining
