Question and Answer

Connor McDonald

Thanks for the question, ezio.

Asked: March 02, 2016 - 7:23 am UTC

Last updated: March 05, 2016 - 1:41 am UTC

Version: 9i 10g

Viewed 1000+ times

You Asked

Hi all:
I have been given a task these days: identify which tables have 'fragmentation' or row chaining. After searching for a few days, I have some questions and hope you can help me.
1. select owner, table_name,
          round((blocks*8),2) "size (kb)",
          round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
          (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
   from   dba_tables
   where  (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
   order  by 3 desc


The statement above shows the wasted space, but the table has to be analyzed (statistics gathered) first - is that right?


2. After I get those tables, I have two methods to remove the fragmentation: one is move, the other is imp (impdp). I want to know which one is better and why.
3. The segment space management of my tablespace is MANUAL. Should I change it to AUTO? Can that help to reduce fragmentation?

4. I wonder whether Oracle has the concept of 'fragmentation', or whether it is perhaps called row chaining - can you give me some docs?
5. Please give me some advice on how to identify tables that have 'fragmentation', and how often I should handle it (using move, shrink, or imp).

Lastly, my English is not good; I hope you can understand it. Waiting for your reply.

and Connor said...

OK, we've got some mixed up terminology there - we'll sort that out first.

1) chaining/migration

This is when a row is updated and is too large to fit where it was originally stored. Normally not a large concern - you want to check the "table fetch continued row" statistic. If this is *large*, then you might want to consider a reorganisation of affected tables. You would do this (for example) with:

alter table T move
(and then rebuild any indexes for T)

in an available maintenance window. There are ways of doing it online as well.
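
For example, a minimal sketch of checking the statistic at the instance level and for your own session (v$sysstat, v$mystat and v$statname are the standard views; nothing else is assumed):

-- instance-wide count of continued-row fetches since startup
select name, value
from   v$sysstat
where  name = 'table fetch continued row';

-- the same statistic for just your current session
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'table fetch continued row';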

2) fragmentation

This is when you've deleted rows in a table, and you now have pockets of free space littered throughout the table blocks. Once again, normally not a large cause for concern. We will automatically re-use that space when more rows are inserted. It's really only worth looking at more carefully when

a) the table is now static (ie, no more rows expected). In this case, you might do: alter table T move compress, to make it as dense and as compact as possible.

b) you did something 'abnormal', for example, you deleted millions of rows, so you've got huge amounts of free space that you may never re-use. In this case, you might do: alter table T move

But you typically need only worry about it in these cases, ie, an activity that is not an ordinary event. For just standard insert, delete, update etc - we will take care of re-using that free space for you.
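
As a sketch of the 'abnormal delete' case (the table name t and index t_ix are just placeholders; shrink needs an ASSM tablespace and row movement enabled):

-- rebuild the table compactly in a maintenance window, then fix its indexes
alter table t move;
alter index t_ix rebuild;

-- online alternative: reclaim space below the high water mark
alter table t enable row movement;
alter table t shrink space;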


Comments

Some more questions

ezio, March 03, 2016 - 5:24 am UTC

First, thanks for answering me so quickly. Following your answer, I have some more questions:
1. I will check the "table fetch continued row" statistic, but I want to know if there is a baseline for what is normal.
For example, is it normal if this value increases by about 1000 in an hour?
And how do I identify which tables have chaining/migration?

2. You said there are two situations that will leave a table fragmented.
Does that mean I cannot identify the fragmentation using Oracle views?
Is the only thing I can do to ask the application (AP) team which tables are in that situation?

3. Can MSSM take care of our free space?


Thanks
Connor McDonald
March 03, 2016 - 7:27 am UTC

1) Some good information here on Tanel's blog

http://blog.tanelpoder.com/2009/11/04/detect-chained-and-migrated-rows-in-oracle/

But there is a good chance you might not need to be overly concerned with it, if you are getting 1000 per hour.

2) Not really. You could actually count the rows per block in each table, eg see this post for an example

https://asktom.oracle.com/pls/apex/f?p=100:11:::NO:RP:P11_QUESTION_ID:9526802100346241789

3) MSSM will be ok, but ASSM is probably preferred.
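
As an illustration of both points (t is a placeholder table; the chained_rows table is created by the utlchain.sql script shipped in $ORACLE_HOME/rdbms/admin):

-- list chained/migrated rows for one table
analyze table t list chained rows into chained_rows;

select count(*)
from   chained_rows
where  table_name = 'T';

-- rough picture of how densely packed each block of the table is
select dbms_rowid.rowid_block_number(rowid) blk, count(*) cnt
from   t
group  by dbms_rowid.rowid_block_number(rowid)
order  by cnt;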

some more

ezio, March 04, 2016 - 7:04 am UTC

Thanks for the answer. I still have some confusion:
1. What is the minimum unit of the high water mark? Is it the extent?
2. As I understand it, in ASSM Oracle has a low HWM to improve the performance of table scans. But will the low HWM drop automatically when some rows are deleted?
3. When a full table scan occurs, what is the minimum unit Oracle scans? An extent?

I hope you can answer me; that will be very helpful for me to understand row chaining and fragmentation.
Connor McDonald
March 05, 2016 - 1:41 am UTC

1) No, if I create a table with initial=512m, and add 1 row to it, I do not have to scan 512m. The hwm is more granular than that.

2) https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:492636200346818072

3) It could be as low as a block. Typically we will try to read 'db_file_multiblock_read_count' blocks at a time. But we might find some of the blocks are already in the buffer cache. So we might *plan* to read blocks 1-128 from a file, but block 2 is in the buffer cache, so we'll read block 1, then blocks 3-128.

This is less likely nowadays since most table scans are done with direct reads.
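
As a rough way to see where the HWM sits relative to the allocated space (the table name 'T' is a placeholder; run it as the segment owner; a full scan reads up to the HWM, not the whole allocation):

set serveroutput on
declare
  l_total_blocks   number;
  l_total_bytes    number;
  l_unused_blocks  number;
  l_unused_bytes   number;
  l_file_id        number;
  l_block_id       number;
  l_last_block     number;
begin
  dbms_space.unused_space(
    segment_owner             => user,
    segment_name              => 'T',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);

  -- blocks above the HWM have never been formatted/used
  dbms_output.put_line('allocated blocks : '||l_total_blocks);
  dbms_output.put_line('blocks above HWM : '||l_unused_blocks);
  dbms_output.put_line('blocks below HWM : '||(l_total_blocks - l_unused_blocks));
end;
/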
