# How to calculate the actual size of a table?

Thanks for the question, Reji.

Asked: June 02, 2000 - 9:46 am UTC

Last updated: September 10, 2012 - 6:24 pm UTC

Version: 8.1.5 & 8.1.6

Viewed 100K+ times!

Tom:
How to calculate the actual size of a table?
vsize and that may be one way of doing column by column
but if there is a tool to do at once that will be a great
help.

TIA
Reji

## and Tom said...

See the linked discussion of the use of ANALYZE and DBMS_SPACE to this end. Basically -- you can find out how many blocks are allocated to the table (whether used or NOT), how many blocks have never been used (subtract those from the above), and, on the blocks that are used, the average free space.
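Where you would rather not ANALYZE, DBMS_SPACE can report the allocated-vs-never-used numbers programmatically. A minimal sketch, assuming a table named T owned by the current user (the table name is a placeholder):

```sql
-- Sketch: report allocated blocks vs blocks never used (above the HWM)
-- for table T, using DBMS_SPACE.UNUSED_SPACE.
declare
    l_total_blocks   number;
    l_total_bytes    number;
    l_unused_blocks  number;
    l_unused_bytes   number;
    l_file_id        number;
    l_block_id       number;
    l_last_block     number;
begin
    dbms_space.unused_space
    ( segment_owner             => user,
      segment_name              => 'T',
      segment_type              => 'TABLE',
      total_blocks              => l_total_blocks,
      total_bytes               => l_total_bytes,
      unused_blocks             => l_unused_blocks,
      unused_bytes              => l_unused_bytes,
      last_used_extent_file_id  => l_file_id,
      last_used_extent_block_id => l_block_id,
      last_used_block           => l_last_block );
    dbms_output.put_line( 'allocated blocks: ' || l_total_blocks );
    dbms_output.put_line( 'never used:       ' || l_unused_blocks );
end;
/
```

Unlike the avg_space column in user_tables, this does not depend on statistics having been gathered; it reads the segment header directly.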

For example, in the linked discussion we see:

```
ops$tkyte@DEV8I.WORLD> create table t ( x int,
  y char(2000) default '*' )
  2  storage ( initial 40k next 40k minextents 5 )
  3  tablespace system;

Table created.
```

A table that will consume ~2k for each row inserted -- makes it easy to do the math.

```
ops$tkyte@DEV8I.WORLD> insert into t (x) values ( 1 );
1 row created.

ops$tkyte@DEV8I.WORLD> analyze table t compute statistics;
Table analyzed.

ops$tkyte@DEV8I.WORLD> compute sum of blocks on report
ops$tkyte@DEV8I.WORLD> break on report
ops$tkyte@DEV8I.WORLD> select extent_id, bytes, blocks
  2  from user_extents
  3  where segment_name = 'T'
  4  and segment_type = 'TABLE'
  5  /

 EXTENT_ID      BYTES     BLOCKS
---------- ---------- ----------
         2      81920         10
         3     122880         15
         4     163840         20
         0      40960          5
         1      40960          5
                      ----------
sum                           55

ops$tkyte@DEV8I.WORLD> clear breaks
ops$tkyte@DEV8I.WORLD> select blocks, empty_blocks,
  2  avg_space, num_freelist_blocks
  3  from user_tables
  4  where table_name = 'T'
  5  /

    BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
         1           53       6091                   1
```

Ok, the above shows us:

o we have 55 blocks allocated to the table
o 53 blocks are totally empty (above the HWM)
o 1 block contains data (the other block is used by the system)
o we have an average of about 6k free on each block used.

Therefore, our table

o consumes 1 block
o of which 1 block * 8k blocksize - 1 block * 6k free = 2k is used for our data.

Now, let's put more stuff in there...

```
ops$tkyte@DEV8I.WORLD> insert into t (x)
  2  select rownum
  3  from all_users
  4  where rownum < 50
  5  /
49 rows created.

ops$tkyte@DEV8I.WORLD> analyze table t compute statistics;
Table analyzed.

ops$tkyte@DEV8I.WORLD> compute sum of blocks on report
ops$tkyte@DEV8I.WORLD> break on report
ops$tkyte@DEV8I.WORLD> select extent_id, bytes, blocks
  2  from user_extents
  3  where segment_name = 'T'
  4  and segment_type = 'TABLE'
  5  /

 EXTENT_ID      BYTES     BLOCKS
---------- ---------- ----------
         2      81920         10
         3     122880         15
         4     163840         20
         0      40960          5
         1      40960          5
                      ----------
sum                           55

ops$tkyte@DEV8I.WORLD> clear breaks
ops$tkyte@DEV8I.WORLD> select blocks, empty_blocks,
  2  avg_space, num_freelist_blocks
  3  from user_tables
  4  where table_name = 'T'
  5  /

    BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
        19           35       2810                   3
```

Ok, the above shows us:

o we have 55 blocks allocated to the table (still)
o 35 blocks are totally empty (above the HWM)
o 19 blocks contain data (one more block is used by the system)
o we have an average of about 2.8k free on each block used.

Therefore, our table

o consumes 19 blocks of storage in total.
o of which 19 blocks * 8k blocksize - 19 blocks * 2.8k free = 98k is used for our data.

Given our rowsize, this is exactly what we expected.
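The arithmetic above can be rolled into a single query against user_tables, run after the ANALYZE. A sketch -- it assumes the 8k block size and the table T from the example; substitute your own db_block_size:

```sql
-- Sketch: approximate bytes of real data in the used blocks.
-- blocks * blocksize gives gross space below the HWM;
-- subtracting blocks * avg_space removes the free space per block.
select blocks                             used_blocks,
       blocks * 8192                      bytes_in_used_blocks,
       blocks * 8192 - blocks * avg_space approx_data_bytes
  from user_tables
 where table_name = 'T';
```

For the 19-block example this returns roughly 100k, matching the hand calculation.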

## Rating

(100 ratings)


### Well described, thank you tom.

Gururaj Kulkarni, January 30, 2002 - 2:54 pm UTC

### avg_row_len vs vsize

Prince, August 05, 2002 - 4:55 pm UTC

Tom,

In the following example, "avg_row_len" from all_tables shows 1009, while the row size calculated from vsize is 1002. Why the difference?

Does avg_row_len include rowid?

Also, is avg_row_len the number of characters as we see them, or the bytes needed for the physical storage?

ex. vsize(100000) = 2.
Is the avg_row_len in this case 2 or 6?

```
create table t ( col1 number, col2 char(1000) default '*' );

insert into t (col1) select rownum from all_tables where rownum < 50;

analyze table t compute statistics;

select NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, AVG_ROW_LEN, NUM_ROWS, NUM_FREELIST_BLOCKS
  from all_tables where owner = 'DBUSR' and table_name = 'T';

  NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE AVG_ROW_LEN   NUM_ROWS NUM_FREELIST_BLOCKS
---------- ---------- ------------ ---------- ----------- ---------- -------------------
        49         20           11       1530        1009         49                   4

select sum(vsize(rowid)), sum(vsize(col1)), sum(vsize(col2)), avg(vsize(col1)), avg(vsize(col2)) from t;

SUM(VSIZE(COL1)) SUM(VSIZE(COL2)) AVG(VSIZE(COL1)) AVG(VSIZE(COL2))
---------------- ---------------- ---------------- ----------------
              98            49000                2             1000
```

August 07, 2002 - 8:48 am UTC

vsize is not taking into consideration the null/not null flags and the leading byte count on numbers/chars.

it is not the rowid (the rowid is the ADDRESS of the row -- the file, block and slot on the block). The rowid is INFERRED data, not data physically present in the row: it is not stored with the row but rather derived from the row's location.

Thorsten, August 16, 2002 - 4:39 am UTC

This is the first time I have looked for information here ...
and I'm impressed!
Perfect. You have the ability to guide people through.
Thanks.

### Calculate total size of "keep" buffer pool

A reader, September 29, 2003 - 4:09 pm UTC

Tom, how can I calculate the current consumed size of the keep buffer pool? (This is in order to determine if I should increase the size of my keep buffer pool). Thanks.

September 29, 2003 - 4:42 pm UTC

well, i don't see how the currently consumed size will tell you that? We keep consistent read versions of blocks in there so the same block could be there over and over -- some of them never to actually be used. So, you could be "full" but only really use 5% of the data in there. Would you add more??

but you can look at v$buffer_pool_statistics to sort of calculate a "hit ratio", not that a hit ratio will be meaningful (our goal is actually to REDUCE LIO's -- my tuning approach actually LOWERS the cache hit!!)

you could use v$bh to count blocks as well -- there is an objd column (the database object number), so you can find all of the cached blocks for the objects in the keep pool
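As a sketch of that v$bh approach (view and column names as in 9i; requires access to the V$ and DBA views, and assumes non-partitioned segments):

```sql
-- Sketch: count distinct cached blocks per object, for segments
-- assigned to the KEEP buffer pool.
select o.object_name,
       count(distinct b.file# || '.' || b.block#) cached_blocks
  from v$bh b, dba_objects o
 where b.objd = o.data_object_id
   and (o.owner, o.object_name) in
       ( select owner, segment_name
           from dba_segments
          where buffer_pool = 'KEEP' )
 group by o.object_name;
```

Remember the caveat above: because of consistent-read clones, the same block can appear more than once in the cache, so a "full" keep pool does not mean all of that data is distinct.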

### I used this based on your earlier solution

A reader, September 29, 2003 - 4:46 pm UTC

I used this SQL based on your earlier solution:
select sum(bytes), sum(user_extents.blocks)
from user_extents, user_tables
where segment_name=table_name and buffer_pool='KEEP';

BTW, if I alter storage of certain tables to buffer_pool keep, is there a way to automatically move all the indexes on these tables to the keep pool as well? Or do I have to "alter index" buffer_pool keep for each index?

Thanks.

September 29, 2003 - 4:52 pm UTC

??? you should let a cache cache. it looks like you are trying to allocate enough space to hold "all data" -- that'll never work (multi-versioning).

anyway... remember logical IO is darn expensive, hugely so. don't look to the cache as "fast=true". look to reduce IO across the board.

you assign segments to the buffer pools, like you assign segments to tablespaces.

### Growth rate calculations

Arun Gupta, June 29, 2004 - 9:06 am UTC

Tom,
Every three months, we have to find out the growth rate of all tables and indexes in all of our production databases. Based on these findings, we estimate the disk requirements. We have heap organized tables, partitioned tables, B*Tree indexes, partitioned indexes, bitmap indexes, IOTs and function based indexes. My thought was to create some script to capture a baseline size for all these objects in a table and in a separate set of columns, keep updating the size on a daily basis. The difference in baseline size and updated size on any day will give us the growth rate over that period of time. My questions are:
a) Is this the right approach? If not, please suggest a good way of doing this.
b) For disk space planning purpose, should I consider actual space used by the table as you demonstrated above or just the total segment space allocated to the object?

We are on 9ir2.

Thanks

June 29, 2004 - 3:54 pm UTC

why every day?

I would just one time:

create table sizes_of_things
as
select sysdate the_date, a.* from dba_segments a

and every three months, just:

insert into sizes_of_things select sysdate, a.* from dba_segments a;

now you got everything you need.

I would only consider space actually allocated to an object -- that space needs to exist, any number smaller than that isn't meaningful.
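Once sizes_of_things holds two or more snapshots, the growth per segment falls out of one query. A sketch, using the the_date column defined above:

```sql
-- Sketch: growth in bytes per segment between the earliest and
-- latest snapshots recorded in sizes_of_things.
select segment_name,
       max(bytes) keep (dense_rank last  order by the_date)
     - max(bytes) keep (dense_rank first order by the_date) growth_bytes
  from sizes_of_things
 group by segment_name
 order by growth_bytes desc;
```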

### Growth rate calculations

Arun Gupta, June 30, 2004 - 9:48 am UTC

Tom,
I will do as you advise. I just wanted to run the program everyday for a couple of weeks to make sure that there are no points of failure.

Please see Metalink note 10640.1 to estimate table/index sizes in 8i. Will it be correct to estimate table/index size as described in the note in Oracle 9i with LMT and ASSM? What is a good way to estimate table/index sizes given the number of rows expected to go into the table with LMT and ASSM?

Thanks

June 30, 2004 - 10:33 am UTC

i know of no reliable way to estimate these things short of:

a) create them
b) load them with some % of REPRESENTATIVE data
c) measure their size
d) multiply

everything else gives you a "max" sort of estimate. worst case.
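Steps (a) through (d) can be sketched with the SAMPLE clause when a populated, representative source already exists (the table names here are placeholders):

```sql
-- Sketch: load ~1% of representative data, measure, multiply up.
create table t_sized as
  select * from source_table sample (1);      -- (a)+(b): ~1% of the rows

exec dbms_stats.gather_table_stats( user, 'T_SIZED' );   -- (c) measure

select blocks * 100 estimated_blocks_full     -- (d) multiply by 1/sample%
  from user_tables
 where table_name = 'T_SIZED';
```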

### Storage used by numbers

A reader, November 01, 2004 - 12:09 pm UTC

I thought that a NUMBER column used a fixed amount of storage in Oracle.

But

select vsize(to_number(1)) from dual; gives 2
select vsize(to_number(1.1)) from dual; gives 3
select vsize(to_number(1.1111)) from dual; gives 4

What gives?

Thanks

November 02, 2004 - 5:58 am UTC

no, numbers are a varying length "string" from 0 to 22 bytes in length.

it has always been that way (written about many times on this site as well).

the concepts guide goes into this as well:
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c13datyp.htm#784
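You can see the varying-length encoding directly with DUMP, which prints the type, length, and the bytes Oracle actually stores:

```sql
-- DUMP exposes the internal representation of NUMBER values;
-- the Len= value in its output matches vsize.
select vsize(1)   v1, dump(1)   d1,
       vsize(1.1) v2, dump(1.1) d2
  from dual;
```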

### Reason for size difference

Arun Gupta, November 09, 2004 - 1:28 pm UTC

Tom,
I have a test database and a production database. The test database is completely identical to production database. In both databases, there is a table T1. In test database this table is taking up 384MB less space than in production (calculated by adding up the bytes in dba_extents). On checking the number of rows, in production database, T1 has 2048 rows more than in test database. My question is how can 2048 rows account for 384MB size difference? The average row length is around 3500 and pct_free is 10. Even if I assume that one row fits each block, this makes up only 2048 blocks. With 8k block size, this is 16MB. In both the databases, the tablespaces are LMT with ASSM and automatic extent allocation. This is on 9ir2. Please guide me on what to look for.
Thanks

November 09, 2004 - 1:35 pm UTC

"completely identical"

but different :)

different sizes
different numbers of rows.

different everything.

i'll guess: in production, the table got big over time due to inserts/updates and such. along came some deletes. in test, this never happened.

So, the table is bigger in production because at some point in its past it was bigger.

Assuming other things are "constant", such as the extent management, extent sizes, etc.

### Difference between caching a table and keeping it in buffer_pool_keep

Anurag, December 11, 2004 - 5:37 am UTC

Hi Tom,

I am not clear about the difference between putting a table in the cache with the alter table ....cache command and using execute ....keep('table')

best regards,

Anurag

December 11, 2004 - 7:58 am UTC

and that relates to the calculating the actual size of a table in what fashion?

chong, January 05, 2005 - 9:00 pm UTC

Hi tom,
Does avg_space from user_tables include the table's pctfree space?

thanks

January 06, 2005 - 10:00 am UTC

avg_space is the avg free space, all free space, on the block

### how to estimate space for new project

Sadiq, February 16, 2005 - 2:36 am UTC

Thanks for the inputs.

In a new project, how can we calculate the space for each table
(the sum over all tables being the space required for the whole database, if only this software is running)?

One way I tried: I inserted dummy data in all tables and used your logic above to estimate the space. (I generated the dummy data based on the client's input on how many transactions to expect per month, etc...)

February 16, 2005 - 8:05 am UTC

well, your "one way I tried" -- that would be "my view" exactly; that is the way I've suggested dozens of times on this site.

### Wrong size using stats from user_tables if xmltypes in table

Flemming Andersen, March 30, 2005 - 10:52 am UTC

```
Hi Tom
I have a table containing a XMLTYPE column
SQL> desc xml_transformer
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
ID                                        NOT NULL NUMBER(10)
NAME                                      NOT NULL VARCHAR2(100 CHAR)
MSK_ID                                    NOT NULL NUMBER(10)
DESCRIPTION                                        VARCHAR2(1000 CHAR)
STATUS                                    NOT NULL VARCHAR2(15 CHAR)
TYPE                                      NOT NULL VARCHAR2(25 CHAR)
TMP_TRANSFORMER                                    CLOB
TRANSFORMER                                        XMLTYPE

I have computed the size of each column in all rows of the table
(simply using vsize and length of xml-data)

DESCRIPTION.............................1465 B / 1,43 KB
ID......................................54 B
MSK_ID..................................132 B
NAME....................................566 B
STATUS..................................145 B
TMP_TRANSFORMER.........................0 B
TRANSFORMER.............................78635 B / 76,79 KB
TYPE....................................163 B

Total size..............................81160 B / 79,26 KB

PL/SQL procedure successfully completed.

SQL> select blocks, avg_space, avg_row_len, num_rows
from user_tables where table_name = 'XML_TRANSFORMER';

BLOCKS  AVG_SPACE AVG_ROW_LEN   NUM_ROWS
---------- ---------- ----------- ----------
7       2740        1382         27

This should give a size of (8-2,7)*7 = 37.1 KB or 27*1.4 = 37,8 KB
Although not exactly comparable, this is a much smaller number than my own computation

If I look at the length of the xml-data, then the average is much larger than avg_row_len

SQL> select length(a.transformer.getclobval()) from xml_transformer a;

LENGTH(A.TRANSFORMER.GETCLOBVAL())
----------------------------------
1198
1665
1120
6001
1008
1080
730
6481
531
1895
1078
536
4835
536
8255
4530
10079
4537
1072
1023
686
739
903
903
2512
11073
3629

27 rows selected.

SQL> select avg(length(a.transformer.getclobval())) from xml_transformer a;

AVG(LENGTH(A.TRANSFORMER.GETCLOBVAL()))
---------------------------------------
2912,40741

The xml-column is stored with default options
(enable storage in row chunk 8192) pctfree is 10
Some of the xml-data is probably not stored in row, because of the size,
but can't we rely on the stats in user_tables if we have xmltype (lobs) ?

Best Regards Flemming Andersen
```

March 30, 2005 - 11:05 am UTC

the avg_row_len from dbms_stats is not taking LOBS into consideration as of this writing.
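Out-of-line LOB (and XMLTYPE) data lives in its own segments, so one way to get the full footprint is to add the LOB and LOB-index segment sizes from user_segments to the table's own segment. A sketch against the XML_TRANSFORMER table from the question:

```sql
-- Sketch: table segment plus its LOB data and LOB index segments.
select s.segment_name, s.segment_type, s.bytes
  from user_segments s
 where s.segment_name = 'XML_TRANSFORMER'
    or s.segment_name in ( select l.segment_name from user_lobs l
                            where l.table_name = 'XML_TRANSFORMER' )
    or s.segment_name in ( select l.index_name  from user_lobs l
                            where l.table_name = 'XML_TRANSFORMER' );
```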

### Computing avg_row_len for clobs

Flemming Andersen, March 30, 2005 - 12:48 pm UTC

Thanks Tom
Your answer gave me the impression that this could be a bug, so I searched Metalink. Bug 1954150 mentions using analyze, which must have worked for their test case (a LONG column), but I don't see any difference when I use analyze in my example. Do you have some additional information, or should I file a TAR?
/Flemming

March 30, 2005 - 1:07 pm UTC

there is a filed bug, Bug 1954150. analyze is not the thing to use for stats, so I'm not sure whether you need to file a bug on it -- that would be your call. they fixed dbms_stats

### Actual table space used

Ranko Mosic, September 11, 2005 - 2:56 pm UTC

Your quote 'I would only consider space actually allocated to an object -- that space needs
to exist, any number smaller than that isn't meaningful. '

This will give a wrong projection for segments that are currently empty but have initial extents allocated.

Also in one of the postings you advocate that all estimates should be done at block level.
I disagree. Let's say you are dealing with new project where there is no history of data growth. You need to estimate space growth. Inputs you might have are: limited simulation and numbers of transaction per year.
It is much easier to deal with transactions at row level than at block level. You can say sale table will have 2 new rows for each sale, as opposed to 0.03 blocks.

Regards, Ranko

September 11, 2005 - 6:24 pm UTC

wrong projection for what exactly?

Someone asked -- how big is my table? That is the allocated space; that is the amount of space they need on disk for that table right now.

where do I advocate that? Let's see the context, you cannot disagree with it unless you have it at your finger tips because you might be remembering it "not 100% as it was said".

(sizing by blocks - no; by rows? absolutely not -- I've said elsewhere that if you want to project future growth, watch your extents: how often do you add a new one? that's the metric you want to look at)

### Capacity planning

Ranko Mosic, September 12, 2005 - 5:20 pm UTC

I will repeat: it will give you a wrong projection for
tables that have allocated extents (any table has at least the initial one) but no data inserted yet.
If you base your estimate on space allocated and the table is actually empty, it is not a correct estimate, right?
--------
Pointer:

'And average row length would be an EXCESSIVELY poor predictor of actual space
used. The only way to do it is look at the blocks, that is all that matters.'

September 12, 2005 - 5:46 pm UTC

who is saying anything about projections here?

I'm measuring SPACE USED ON DISK, ask me the question "how much space does this table T take"

Let's say I count those rows up and say "it takes 1m"

You say "well, why do you have 50m of space allocated, whats up with that"

I'm reporting to you the SPACE ACTUALLY allocated, that is all. nothing more, nothing less.

### Actual space used

Ranko, September 12, 2005 - 10:34 pm UTC

You are using word 'predictor' above, linked to average row size as being 'EXCESSIVELY poor predictor'.
dictionary.com says:
predictor

n 1: someone who makes predictions of the future (usually on the basis of special knowledge) [syn: forecaster, prognosticator, soothsayer] 2: information that supports a probabilistic estimate of future events; "the weekly bulletin contains several predictors of mutual fund performance"

So it looks like we are talking future here.

September 13, 2005 - 11:46 am UTC

using the average row size in this case would be a very poor predictor of how much space your table is currently consuming on disk

yes or no?

Yes, it would be. There might be zero rows in the table, but the table is still consuming 100mb.

look, you like using the row size, cool -- great. When someone asks me "how big is my table", I'm not going to count rows and average row sizes. I'm just going to query the dictionary and ask it "how much space is allocated"

### ?

Ranko, September 13, 2005 - 4:58 pm UTC

Yes, this is the way it is normally done & said.
But in the case where you are doing capacity planning, as I am now, allocated space is not precise enough -- even my PM, with just a little bit of database knowledge, knows it.
Why don't you take into account system, temp and other dead weight too?
I find it comical that Oracle doesn't have a view to quickly expose real data size, but then there are countless other "pearls" like that.

September 13, 2005 - 5:11 pm UTC

to do future capacity estimates, I would look to historical growth (how often does that new extent get added)

to guesstimate how big a table of 10,000,000 rows will be - I'd likely load 1 to 10% of it, analyze it and see (using GOOD representative data).

Counting row sizes does not take "Oracle's space" into consideration.

So, I'm still at the aggregate level.

### New Project Capacity Planning

Ranko, September 13, 2005 - 9:50 pm UTC

History, of course.
But I am dealing with newborn with no history here - so no new extents are added.
So yes, I have fully migrated data (actually the "DA" does it, believe it or not); I analyzed the data to get avg_row_len and added 2 bytes of overhead per row for the row directory; then, since we know the approximate number of transactions that will happen and the tables (indexes) / number of rows per table affected, we can guesstimate future growth.
There is no way to simulate real life load and there is no production database, but we still have to cough up figures.

September 14, 2005 - 7:55 am UTC

so, then you

a) create new empty table
b) load representative amount of data with REPRESENTATIVE values
c) analyze

pctfree, block overhead, Oracle's decision to put 7 rows where you thought 8 could have fit -- etc, you don't need to think about them.

### Modeling exercise

ranko, September 14, 2005 - 2:47 pm UTC

What I am trying to explain here is:
THERE IS NO REPRESENTATIVE LOAD (except for the initial migrated data).

THIS IS A MODELING EXERCISE. WE ARE TRYING TO PREDICT THE FUTURE.

rm

September 14, 2005 - 3:29 pm UTC

tell me how you size a row then, please,

and accommodate for the block overhead.

During a modeling exercise, I would certainly create the tables, and insert representative data

eg: this is a name field, we anticipate it to be 80 characters max, with most values being 30 characters (fill in random data)

this is date...

this is a number field with numbers beween 1 and 10000, evenly distributed.

Why? because the number
999999999999
takes lots more storage than
100000000000000000000000

does - it is all (except for the dates) varying width data.

So, to size it out, to get a good guesstimate -- I load representative data, some percentage of it. Analyze.

that is my method.
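That method can be sketched with DBMS_RANDOM to fill in the kind of representative values described above (the table and column names are made up for illustration):

```sql
-- Sketch: generate representative data -- names ~30 chars wide,
-- dates from the last year, numbers evenly distributed 1..10000.
create table t_model
( name varchar2(80),
  dt   date,
  n    number );

insert into t_model
  select dbms_random.string( 'a', 30 ),          -- ~30-char random names
         sysdate - dbms_random.value( 0, 365 ),  -- dates in the last year
         trunc( dbms_random.value( 1, 10000 ) )  -- evenly distributed numbers
    from all_objects
   where rownum <= 10000;

exec dbms_stats.gather_table_stats( user, 'T_MODEL' );
```

Then read blocks and avg_row_len from user_tables and multiply up, as in the examples above.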

A reader, September 14, 2005 - 10:10 pm UTC

To get row size:
- fully migrate data from mainframe
- analyze tables to get num_rows and avg_row_len
- this is baseline
- inputs into model are: number of transactions / year ( sales etc)
- this is the weakest part of the model: now I am trying to see which tables will have rows inserted and how many; some limited simulation comes into play here
- <number of transactions per year( sales )> * num_Rows * avg_row_len + row overhead
- summarize and get projected table size and % growth

Ranko.

September 15, 2005 - 7:31 am UTC

```
how about this

- fully migrate data from mainframe
- analyze tables
- this is baseline
- compute the %increase in rows per year

If you know the number of rows, you hence know the percentage increase, it is 6 one way (my way) 1/2 dozen the other (your way)

Actually, your way misses the block overhead mostly -- it is more than 2 bytes/row.

As an example, suppose your base table starts with 50k rows (about) and you estimate that you'll add 50k rows/year.

ops$tkyte@ORA10G> drop table t;

Table dropped.

ops$tkyte@ORA10G> create table t
  2  as
  3  select * from all_objects;

Table created.

ops$tkyte@ORA10G> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10G> select num_rows, blocks, empty_blocks, avg_row_len
  2    from user_tables
  3   where table_name = 'T';

  NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
---------- ---------- ------------ -----------
     48681        691            0          93

<b>so, we start with about 50k rows -- 691 blocks is what I need to have allocated.  Now, we use our respective guesses:</b>

ops$tkyte@ORA10G> select ceil((num_rows * (avg_row_len+2))/8192), blocks,
  2         ceil((num_rows * (avg_row_len+2))/8192)+blocks your_guess,
  3         2* blocks my_guess
  4    from user_tables where table_name = 'T';

CEIL((NUM_ROWS*(AVG_ROW_LEN+2))/8192)     BLOCKS YOUR_GUESS   MY_GUESS
------------------------------------- ---------- ---------- ----------
                                  565        691       1256       1382

<b>I'm guessing -- well, 691 blocks (100% growth that year).  You are guessing 565 blocks -- the total sizes of the table using:

a) blocks = 1382
b) avg row lengths = 1256
</b>

ops$tkyte@ORA10G> insert into t select * from t;

48681 rows created.

ops$tkyte@ORA10G> commit;

Commit complete.

ops$tkyte@ORA10G> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10G> select num_rows, blocks, empty_blocks, avg_row_len
  2    from user_tables
  3   where table_name = 'T';

  NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
---------- ---------- ------------ -----------
     97362       1398            0          93

<b>we are both short, me by 16 blocks, you by 142 -- if I've gotten your approach right?</b>

When forecasting future growth, I'll stick with extents and blocks like this.  Average row length doesn't capture it all.
```

A reader, September 15, 2005 - 10:21 pm UTC

In "my way, or no way", PCTFREE is missing, and something else -- I guess the block overhead is higher; there is stuff added per column of this and that type, etc.

In "your way, or no way", I find it quite amazing that 2+2=5. How come the same data has a length of 691 blocks, then 707 blocks? Please do tell. Or maybe the explanation is on the first page of each and every Oracle manual:

'The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently
dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup,
redundancy and other measures to ensure the safe use of such applications if the Programs are used for such
purposes, and we disclaim liability for any damages caused by such use of the Programs. '

rm

September 16, 2005 - 7:59 am UTC

2+2=5 likely because of ASSM and the space it uses to manage space itself.

I was using an ASSM tablespace (automatic segment space management), which throws off the row approach even more.

### Confusion output

Suvendu, September 20, 2005 - 2:37 pm UTC

```
Hi Tom,

Here is confusion on output I'm getting from data dictionary tables.
Our database with db_block_size=16k on Oracle 9iR2. When I'm calculating a tableÂ’s size, IÂ’m getting one result from
USER_SEGMENTS and USER_EXTENTS where they saying there are 12 BLOCKS allocated to the table, but getting another result
from USER_TABLES saying 10 BLOCKS used, but ZERO in EMPTY_BLOCKS column.

Are you going to say these 2 missing BLOCKS are used by the system, as you said in the first response to this thread?

I would appreciate, if you could focus on it.

Thanking you.

Regards,
Suvendu

SQL> exec dbms_stats.gather_table_stats(user,'BRAND_MASTER');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> select table_name, num_rows,blocks, empty_blocks,avg_row_len, last_analyzed,num_freelist_blocks
2   from user_tables
3  where table_name='BRAND_MASTER';

TABLE_NAME        NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN LAST_ANAL NUM_FREELIST_BLOCKS
--------------- ---------- ---------- ------------ ----------- --------- -------------------
BRAND_MASTER          4662         10            0          26 20-SEP-05                   0

Elapsed: 00:00:00.00
SQL>  select segment_name,segment_type, bytes, bytes/16384 BLOCKS from user_segments
2   where segment_name='BRAND_MASTER';

SEGMENT_NAME    SEGMENT_TYPE            BYTES     BLOCKS
--------------- ------------------ ---------- ----------
BRAND_MASTER    TABLE                  196608         12

Elapsed: 00:00:00.04
SQL> select segment_name, extent_id, blocks from user_extents
2   where segment_name='BRAND_MASTER';

SEGMENT_NAME     EXTENT_ID     BLOCKS
--------------- ---------- ----------
BRAND_MASTER             0          4
BRAND_MASTER             1          4
BRAND_MASTER             2          4

Elapsed: 00:00:00.15
SQL>

```

September 21, 2005 - 1:26 pm UTC

are you using ASSM or not?

### No, it's in MANUAL mode...

Suvendu, September 21, 2005 - 2:25 pm UTC

No, it's in MANUAL mode... Could you please elaborate on how AUTO differs from MANUAL mode?

September 21, 2005 - 8:03 pm UTC

in automatic segment space management, oracle uses blocks all over the place to manage space.

here you are just seeing the normal block overhead: extent and freelist space management blocks.

Ranko, September 23, 2005 - 1:34 pm UTC

Yes, if the tablespace is dictionary managed then your way is ok
(but obsolete now with LMTs).
In a locally managed tablespace one block is allocated for the segment header, another for the first-level bitmap block.
On my database your way was only 2 blocks short of the linear prediction in an LMT. I don't know what is in the 16 blocks you had as a difference.
The average overhead per row in this test case is 13 bytes, not the 2 I started my calculation with.

### How can i calculate growth of my database on daily basis

Amit, November 05, 2005 - 6:47 am UTC

hi tom,
How can I calculate the growth of my database on a daily basis?
Also major table growth on a daily basis?

November 05, 2005 - 5:53 pm UTC

likely by watching the history and extrapolating.

Every day, or every week, or every (UNIT OF TIME HERE) - snapshot the relevant tables (maybe dba_segments) so you have a history of how big things are - you could get fancy and just add records to this history when the size of a segment actually changes.

Or, use the packs that are available with enterprise manager (the capacity planner for example), it automates what I just described.

### Size of Disk Space for a Given Tables?

Bill, November 08, 2005 - 4:49 pm UTC

Thanks for the insights on table sizing. How does one translate from Oracle allocation bytes to Unix disk space bytes?

Thanks.

November 08, 2005 - 10:42 pm UTC

not sure what you mean?

### 2nd Try for Disk Space

Bill Carpenter, November 09, 2005 - 8:28 am UTC

Sorry for the poorly asked question.

I have a data warehouse that is empty. I know what the structure of the environment is, and I know how many rows of data will be in each table.

Other than re-typing all of the column definitions into Excel and adding the bytes together, is there a query method for determining how much Unix disk space is required for a given row count, for a given table?

With this information I will then order my disk.

November 11, 2005 - 9:58 am UTC

given that different values take different space (10000000 takes less room than 9999 does for example), my suggestion is always to

load a representative set of data - anywhere from 0.1 to 10% of the data (based on the size of the table in rows - more rows - smaller percent)

analyze them

multiply up....

things like ASSM vs manual segment space management, compression, different values - all affect the ultimate answer. by loading a bit, you'll get a good idea by "how much" they will affect it.

### How much space consumed

Yoav, January 06, 2006 - 12:47 pm UTC

Hi Mr. Kyte

In our production database (9iR2) each table has a column named INSERT_DATE.
Since we are short on disk space, I tried to figure out how much space we consumed last year.
Using this column, I counted the number of rows we had in each table at the beginning of the year 2005 and at the end of that year.

I am trying to calculate how much space we consumed for each table during the year 2005.

using the following select statment:
select blocks, empty_blocks,
avg_space, num_freelist_blocks
from user_tables
where table_name = 'The_table_name'

will help me know how much space is currently used for our data.
But how can I know how much space we consumed during the year 2005?

Regards.

January 06, 2006 - 4:43 pm UTC

you cannot - not unless you looked at it last year and remember what the values were.

you cannot "reliably" know, not unless you were tracking it.

It could be that (due to deletions or just having the space allocated) the tables consume no more disk space today than they did a year ago - even if they doubled the number of rows!

### A suggestion, perhaps

A reader, January 06, 2006 - 4:54 pm UTC

How about if he created a new table ..where the_date_column < trunc( sysdate, 'YY' ).. and counted from it? Of course, that would not account for deletions, etc., as Tom pointed out.

January 06, 2006 - 4:59 pm UTC

if they want to know "how many more extents did this table add in the last year", the only way to get that is to know how many it had at the beginning of the year.

Unless every table was 100% "full" at the beginning of last year - you cannot look at row counts and really tell.

### Reason for larger extent size?

Tim, January 11, 2006 - 6:15 pm UTC

Tom,
We have a table with ~ 5.1 million rows which uses ~ 16,900 extents uniformly sized at 64KB per extent in LMT. User_tables reports average row size of 188 after analyzing. We get a fairly consistent growth on this table of ~ 5,000 rows per weekday - which (I hazard a guess) equates to ~ 17 extents growth per weekday. This is on an Oracle 9.2.0.4 database used in an OLTP application (with some batch processing) by about 2,000 internal customers. SQL operations on this table are sluggish - so performance improvements would be welcome here.

Based on what I have seen you say here - it sounds like the extent size for this table is too small. However (as I would have to present a reasonable argument for change - even in the DEV/TEST environments) - I am not clear on the reason for a larger extent size. Where would we expect to see a performance gain or other improvement?

Thanks.

January 12, 2006 - 10:55 am UTC

what sql operations are "sluggish"? can you quantify that?

I would say the extent size is obviously wrong for this table - but I cannot say it is the cause of your "sluggish" response times.

### Reason for larger extent size

Tim, January 11, 2006 - 6:19 pm UTC

Sorry. I meant to ask this in the thread which was discussing LMT extent sizes - placed it in the wrong place.
:(

### datafiles of a table

Asim Naveed, February 14, 2006 - 2:24 am UTC

A datafile may contain data of different tables, and
a table may have its data in different datafiles.

A)
Is there a way to find out which datafiles are
storing (some or all) data of a given table.

B)
Is there a way to find out list of tables that are
(completely or partially ) stored in a given datafile.

February 14, 2006 - 7:46 am UTC

a) dba_extents has file numbers in there, yes, you can query that to see what extents of a segment are in what files.

b) same view, dba_extents

### how we relate file nos. and filenames

Asim Naveed, February 14, 2006 - 8:04 am UTC

Which column of dba_extents has the file number -
is it FILE_ID or RELATIVE_FNO?

How can I get the filename from this file number?

here is desc dba_extents output

OWNER
SEGMENT_NAME
PARTITION_NAME
SEGMENT_TYPE
TABLESPACE_NAME
EXTENT_ID
FILE_ID
BLOCK_ID
BYTES
BLOCKS
RELATIVE_FNO

February 14, 2006 - 8:24 am UTC

what do you want to do with this file number? file_id is the file number; relative_fno was in support of having lots of files in a database.

### filename

Asim Naveed, February 14, 2006 - 8:32 am UTC

I want to see how much a given table
is distributed across the physical disks.

For this I first want to get the data filenames
containing data of that table, and then I will
see how many physical disks the data resides on.

How can I get the FILENAME from file_id?

Thanks

February 14, 2006 - 9:10 am UTC

dba_extents -> dba_data_files joined by (tablespace_name, relative_fno) or by (file_id)

You'll actually get the "extents" of the segment and then pick up the filenames.

Asim Naveed, February 14, 2006 - 10:18 am UTC

Ok, now I made the following query to get data filenames
of a given table.

A) Is it correct?
B) Is there any need to OUTER join dba_extents and dba_data_files?

select distinct b.file_name
FROM dba_extents a, dba_data_files b
where a.file_id = b.file_id
and a.segment_name = 'MY_TABLE_NAME'
AND A.SEGMENT_TYPE = 'TABLE'

February 14, 2006 - 12:55 pm UTC

except for missing the "owner" constraint - dba_extents has information about ALL extents, not just your extents.

if you had two tables with the same name - it would be misleading.

Maybe something like this:

big_table@ORA10GR2> select file_name, round( bytes/1024/1024 ) mbytes,
2 round( ratio_to_report(bytes) over () * 100, 2 ) pct
3 from (
4 select sum(a.bytes) bytes, b.file_name
5 from dba_extents a,
6 dba_data_files b
7 where a.owner = USER
8 and a.segment_name = 'BIG_TABLE'
9 and a.segment_type = 'TABLE'
10 and a.file_id = b.file_id
11 group by b.file_name
12 )
13 order by file_name
14 /

FILE_NAME MBYTES PCT
------------------------------ ---------- ----------
2/ORA10GR2/datafile/o1_mf_big_
tabl_1z3zzj1k_.dbf

2/ORA10GR2/datafile/o1_mf_big_
tabl_1z3zzj3z_.dbf

2/ORA10GR2/datafile/o1_mf_big_
tabl_1z3zzj6w_.dbf

2/ORA10GR2/datafile/o1_mf_big_
tabl_1z3zzj8r_.dbf

2/ORA10GR2/datafile/o1_mf_big_
tabl_1z3zzjbj_.dbf

Lahrash, February 15, 2006 - 12:09 pm UTC

Randy, February 28, 2006 - 6:27 pm UTC

Tom,

Can you explain how I would calculate the size of an index, like you are doing for tables? Pardon me if my question is a bit out of context.

March 01, 2006 - 8:06 am UTC

an index is a segment, a table is a segment, a segment is a segment, same methods work.

### How to find the size of Index

Randy, March 02, 2006 - 2:05 pm UTC

Your solution says to select from user_tables....
select blocks, empty_blocks,
avg_space, num_freelist_blocks
from user_tables
where table_name = 'T'
/

How will that work for an index?

March 02, 2006 - 2:29 pm UTC

use user_indexes and query relevant columns

alternatively, analyze index I validate structure; and query index_stats (but be aware that the analyze command that populates index_stats LOCKS the index)
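As a sketch of that second route (T_IDX is a placeholder index name, and the locking caveat above applies):

```sql
-- sketch: sizing an index via index_stats (T_IDX is a placeholder name).
-- NOTE: this ANALYZE locks the index for the duration of the validation.
analyze index t_idx validate structure;

select name, height, blocks,
       lf_blks      leaf_blocks,
       br_blks      branch_blocks,
       btree_space, used_space, pct_used
  from index_stats;   -- holds a single row: the last index analyzed in this session
```

index_stats is session-private and overwritten by each ANALYZE ... VALIDATE STRUCTURE, so query it immediately after the analyze.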

### Index size

Randy, March 02, 2006 - 5:01 pm UTC

```This what I get in my user_index view
SQL> desc user_indexes
Name                                  Null?    Type
----------------------------------- -------- ------------
INDEX_NAME                          NOT NULL VARCHAR2(30)
INDEX_TYPE                         VARCHAR2(27)
TABLE_OWNER                        NOT NULL VARCHAR2(30)
TABLE_NAME                         NOT NULL VARCHAR2(30)
TABLE_TYPE                         VARCHAR2(11)
UNIQUENESS                         VARCHAR2(9)
COMPRESSION                        VARCHAR2(8)
PREFIX_LENGTH                      NUMBER
TABLESPACE_NAME                    VARCHAR2(30)
INI_TRANS                                          NUMBER
MAX_TRANS                                          NUMBER
INITIAL_EXTENT                                     NUMBER
NEXT_EXTENT                                        NUMBER
MIN_EXTENTS                                        NUMBER
MAX_EXTENTS                                        NUMBER
PCT_INCREASE                                       NUMBER
PCT_THRESHOLD                                      NUMBER
INCLUDE_COLUMN                                     NUMBER
FREELISTS                                          NUMBER
FREELIST_GROUPS                                    NUMBER
PCT_FREE                                           NUMBER
LOGGING                                         VARCHAR2(3)
BLEVEL                                             NUMBER
LEAF_BLOCKS                                        NUMBER
DISTINCT_KEYS                                      NUMBER
AVG_LEAF_BLOCKS_PER_KEY                            NUMBER
AVG_DATA_BLOCKS_PER_KEY                            NUMBER
CLUSTERING_FACTOR                                  NUMBER
STATUS                                         VARCHAR2(8)
NUM_ROWS                                           NUMBER
SAMPLE_SIZE                                        NUMBER
LAST_ANALYZED                                      DATE
DEGREE                                        VARCHAR2(40)
INSTANCES                                    VARCHAR2(40)
PARTITIONED                                  VARCHAR2(3)
TEMPORARY                                    VARCHAR2(1)
GENERATED                                    VARCHAR2(1)
SECONDARY                                    VARCHAR2(1)
BUFFER_POOL                                  VARCHAR2(7)
USER_STATS                                   VARCHAR2(3)
DURATION                                     VARCHAR2(15)
PCT_DIRECT_ACCESS                            NUMBER
ITYP_OWNER                                   VARCHAR2(30)
ITYP_NAME                                    VARCHAR2(30)
PARAMETERS                                   VARCHAR2(1000)
GLOBAL_STATS                                 VARCHAR2(3)
DOMIDX_STATUS                                VARCHAR2(12)
DOMIDX_OPSTATUS                              VARCHAR2(6)
FUNCIDX_STATUS                               VARCHAR2(8)
JOIN_INDEX                                   VARCHAR2(3)

Tell me I am blind, but I don't see any column index_stats. Also, what are the equivalent fields for an index?

```

March 03, 2006 - 7:57 am UTC

index_stats is a view, populated after an analyze.

I said use the relevant columns - you wanted to see how big the index was, right? leaf_blocks, avg_leaf_blocks_per_key, avg_data_blocks_per_key, num_rows, distinct_keys.

Whatever you might be interested in seeing - retrieve that.

### INDEX_STATS is name of view/table

Robert, March 02, 2006 - 5:34 pm UTC

Randy...Could you be looking for the INDEX_STATS view?
e.g.: desc index_stats

### Object growth rate

ABC, March 03, 2006 - 1:17 pm UTC

I am trying to set up a weekly growth report for my database,
table-wise, like

create table space_growth (owner,tablespace , Name ,GB)
as
SELECT owner,tablespace_name,SEGMENT_NAME,sum(bytes)/1024/1024/1024 FROM DBA_SEGMENTS where BYTES > (1024*1024*1024)
GROUP BY owner,tablespace_name,SEGMENT_NAME order by owner;

But it is not as per my requirement.
Appreciate if you provide help

March 03, 2006 - 2:16 pm UTC

sorry it is not as per your requirement.

Guess I don't know why it is not as per your requirement, since you wrote it?

I would think you would need a table with

owner
table_name
table_space
size
DATE_OF_SOME_SORT <<<====

and you would insert into this table the results of a query every week (dbms_job could be useful there to automate that) and the DATE_OF_SOME_SORT would be populated by something like "SYSDATE"

### Could even serve up a nice graph via Application Express

jim, March 03, 2006 - 2:40 pm UTC

I currently track growth this way (using dbms_job to schedule) and use analytics to give me a view as to when the tablespace will run out of room (projected).

I've been fooling around with Application Express (formerly known as HTML DB). I could put all of the output on a nice graph on a page or two.

### OBJECT GROWTH

abc, March 03, 2006 - 4:38 pm UTC

I have this now
create table space_growth (owner,tablespace , Name ,GB,week)
as
SELECT owner,tablespace_name,SEGMENT_NAME,sum(bytes)/1024/1024/1024,(select sysdate from dual) FROM DBA_SEGMENTS where BYTES > (1024*1024*1024)
GROUP BY owner,tablespace_name,SEGMENT_NAME order by segment_name;

Schedule following run for every weekend.

insert into space_growth SELECT owner,tablespace_name,SEGMENT_NAME,sum(bytes)/1024/1024/1024,(select sysdate from dual)
FROM DBA_SEGMENTS where BYTES > (1024*1024*1024)
GROUP BY owner,tablespace_name,SEGMENT_NAME order by segment_name;
commit;

But now how can I get a report with the difference between sysdate and sysdate-7?

Does it make sense?
create OR REPLACE view weekly_growth
as
select S.* FROM SPACE_GROWTH S WHERE S.GB IN
( select GB from space_growth where to_char(week,'dd-mm')=to_char(sysdate,'dd-mm')
minus
select GB from space_growth where to_char(week,'dd-mm')=to_char(sysdate-7,'dd-mm') );

March 03, 2006 - 5:57 pm UTC

why the "(select sysdate from dual)" - just use "sysdate", it will more than suffice

to get an observation by observation report you would:

select owner,
tablespace,
name,
gb,
lag(gb) over (partition by owner, name order by week) last_gb,
week,
week-lag(week) over (partition by owner, name order by week) num_days
from space_growth;

### Object growth

abc, March 03, 2006 - 6:02 pm UTC

I published following srt up to my team. Please validate and do let me know if anything is wrong

Create table space_growth (owner, tablespace, Name, GB, week)

as

SELECT owner,tablespace_name,SEGMENT_NAME,sum(bytes)/1024/1024/1024,(select sysdate from dual) FROM DBA_SEGMENTS where BYTES > (1024*1024*1024)

GROUP BY owner,tablespace_name,SEGMENT_NAME order by segment_name;

Schedule following run for every weekend.

Insert into space_growth SELECT owner,tablespace_name,SEGMENT_NAME,sum(bytes)/1024/1024/1024,(select sysdate from dual)

FROM DBA_SEGMENTS where BYTES > (1024*1024*1024) GROUP BY owner,tablespace_name,SEGMENT_NAME order by segment_name;

Commit;

Create OR REPLACE view weekly_growth

As

Select S.* FROM SPACE_GROWTH S WHERE S.GB IN

(Select GB from space_growth where to_char(week,'dd-mm')=to_char(sysdate,'dd-mm')

Minus

Select GB from space_growth where to_char(week,'dd-mm')=to_char(sysdate-7,'dd-mm') );

Select * from weekly_growth;

March 03, 2006 - 8:12 pm UTC

"srt" - I have no clue what "srt" is.

when you tested your procedure, did it do what you designed it to do?

I sure wouldn't use minus, I'd be using lag() to do week over week analysis.
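The MINUS view could be rewritten with lag() along the lines of the earlier query - a sketch only, reusing the space_growth table and columns defined in the post above:

```sql
-- sketch: week-over-week growth via lag(), built on the space_growth
-- table from the post above (owner, tablespace, name, gb, week)
create or replace view weekly_growth
as
select owner, tablespace, name, gb, week,
       gb   - lag(gb)   over (partition by owner, name order by week) gb_growth,
       week - lag(week) over (partition by owner, name order by week) num_days
  from space_growth;
```

Unlike the MINUS approach, this pairs each observation with the previous one per segment, so equal sizes in different weeks are not lost.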

### Table Size No change after insert

Girish, March 09, 2006 - 7:16 am UTC

After I did all the steps, there is no change:
1 create table t ( x int,
2 y char(2000) default '*' )
3 storage ( initial 40k next 40k minextents 5 )
4* tablespace system
girish@PIDEV03> /

Table created.

girish@PIDEV03> insert into t (x) values ( 1 );

1 row created.

girish@PIDEV03> commit;

Commit complete.

girish@PIDEV03> analyze table t compute statistics;

Table analyzed.

girish@PIDEV03> compute sum of blocks on report
girish@PIDEV03> break on report
girish@PIDEV03> select extent_id,bytes,blocks
2 from user_extents
3 where segment_name='T'
4 and segment_type='TABLE';

EXTENT_ID BYTES BLOCKS
---------- ---------- ----------
0 40960 5
1 40960 5
2 65536 8
3 98304 12
4 147456 18
----------
sum 48

girish@PIDEV03> clear breaks
breaks cleared
girish@PIDEV03> select blocks,empty_blocks,avg_space,num_freelist_blocks
2 from user_tables
3 where table_name='T';

BLOCKS EMPTY_BLOCKS AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
1 46 6067 1

girish@PIDEV03> insert into t(x) select rownum from all_objects where rownum < 100;

99 rows created.

girish@PIDEV03> commit;

Commit complete.

girish@PIDEV03> compute sum of blocks on report
girish@PIDEV03> break on report
girish@PIDEV03> select extent_id,
2 bytes,blocks
3 from user_extents
4 where segment_name='T'
5 and segment_type='TABLE';

EXTENT_ID BYTES BLOCKS
---------- ---------- ----------
0 40960 5
1 40960 5
2 65536 8
3 98304 12
4 147456 18
----------
sum 48

girish@PIDEV03> clear breaks
breaks cleared
girish@PIDEV03> select blocks,empty_blocks,avg_space,num_freelist_blocks
2 from user_tables where table_name='T';

BLOCKS EMPTY_BLOCKS AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
1 46 6067 1

There is no change after insert

March 09, 2006 - 2:30 pm UTC

tweak the example - do you see your initial allocations were different from the get go?

do you see what I was trying to show? You'll need to play with the numbers on your system to see the same thing since your extent sizes are obviously different.

### VSIZE on BLOB

Robert Hanrahan, March 24, 2006 - 9:44 am UTC

hi Tom,

I was searching for a script which shows me the average row length in bytes of a table, and I found this:

http://www.allenhayden.com/cgi-bin/getdoc.pl?file=ascr83.htm

It works, but it has a BUG: you can't use VSIZE on a BLOB.

It should be using dbms_lob.getlength(BLOB_COLUMN), right?

To find the actual size of a row I did this:

/* TABLE */

select
3 + avg(nvl(dbms_lob.getlength(CASE_DATA),0)+1 +
nvl(vsize(CASE_NUMBER   ),0)+1 +
nvl(vsize(CASE_DATA_NAME),0)+1 +
nvl(vsize(LASTMOD_TIME_T),0)+1
) "Total bytes per row"
from t
where case_number = 301;

Total bytes per row
--------------------
3424

/* INDEX */

select sum(COLUMN_LENGTH)
from dba_ind_columns
where  TABLE_NAME = 't';

SUM(COLUMN_LENGTH)
------------------------
22

So, the total (avg) bytes used is 3424 + 22 = 3446 bytes.

Robert James Hanrahan
No Dramas Productions Publishing
http://rjh.keybit.net

March 24, 2006 - 10:02 am UTC

the average row length - of what use is it to you? dbms_stats will get it for you.

blobs over 4000 bytes are stored out of line, it would be wrong to add their entire length in, they are not part of the row.

### NUM_FREELIST_BLOCKS is 0 and high water mark

Jade, April 26, 2006 - 5:06 pm UTC

Tom, I have 2 questions:
1. I have used dbms_stats.gather_schema_stats to gather statistics, but why are the values for the column NUM_FREELIST_BLOCKS in user_tables all 0 for the schema?
2. We have a table named tableA. This is a staging table into which new transactions are inserted and from which old ones are deleted within minutes. This table has never held more than 2000 rows; avg_row_len is 132; block_size is 4k (I know it is small!). So the total size of the table is actually around 64 blocks, but BLOCKS in user_tables for this table is 6726! The table balloons! Is there a way to keep the high water mark lower without manually adjusting it by using truncate table?

Thanks.

April 27, 2006 - 2:51 pm UTC

1) dbms_stats only gathers that which the optimizer uses; things like chain_cnt, for example, are not gathered by it.

dbms_space.free_blocks can be used (if in a manual segment space managed tablespace).

dbms_space.space_usage can be used with automatic segment space management (ASSM).

2) that means at some point you had more than "a couple of rows". In 10g, you can fix this online using alter table t shrink space compact and alter table t shrink space - in 9i, you can use dbms_redefinition if you need this online. Or, if you can do it offline, you can do alter table t MOVE - instead of truncate (but you have to rebuild indexes as well after that).
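A sketch of the 10g online route (table name t is from the example above; enabling row movement is a prerequisite step):

```sql
-- sketch: lowering the HWM online in 10g (table name t is a placeholder)
alter table t enable row movement;    -- required before any shrink

alter table t shrink space compact;   -- compacts rows; does not move the HWM
alter table t shrink space;           -- second pass: actually lowers the HWM
```

The two-step form lets you run the long compaction first and take the brief HWM adjustment (which needs a short exclusive lock) separately.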

### NUM_FREELIST_BLOCKS still 0

jade, April 28, 2006 - 9:46 am UTC

Tom, thanks for the answers which are really helpful... but I tried analyze table table_name compute statistics. NUM_FREELIST_BLOCKS is still 0. I use 9ir2.

April 28, 2006 - 12:40 pm UTC

did you see my response above???

You are likely using ASSM which doesn't even use freelists. use the dbms_ packages I described.
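For an ASSM segment, a minimal sketch of the dbms_space.space_usage call mentioned above (the table name T is an assumption):

```sql
-- sketch: used-space breakdown of an ASSM table segment (T is a placeholder)
declare
  l_unf  number; l_unf_b  number;
  l_fs1  number; l_fs1_b  number;   -- blocks 0-25%  free
  l_fs2  number; l_fs2_b  number;   -- blocks 25-50% free
  l_fs3  number; l_fs3_b  number;   -- blocks 50-75% free
  l_fs4  number; l_fs4_b  number;   -- blocks 75-100% free
  l_full number; l_full_b number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'T',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1_b,
    fs2_blocks => l_fs2, fs2_bytes => l_fs2_b,
    fs3_blocks => l_fs3, fs3_bytes => l_fs3_b,
    fs4_blocks => l_fs4, fs4_bytes => l_fs4_b,
    full_blocks => l_full, full_bytes => l_full_b );
  dbms_output.put_line( 'full blocks:          ' || l_full );
  dbms_output.put_line( 'mostly empty (fs4):   ' || l_fs4 );
  dbms_output.put_line( 'unformatted blocks:   ' || l_unf );
end;
/
```

This replaces the freelist-based numbers: in ASSM the free-space state is tracked in bitmap blocks, reported here in the four "fsN" buckets.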

### ASSM does not use FREELIST

jade, May 01, 2006 - 3:35 pm UTC

Yes, I use ASSM. I didn't know ASSM does not use freelists. I just checked the Oracle doc; it does mention that ASSM eliminates any need to specify and tune the PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters, while manual segment space management is the opposite.

thanks.

### Analyze or not

Darvishzadeh, May 03, 2006 - 4:27 am UTC

Hello Tom,
We are discussing here the calculation of table size. I say an analyze IS necessary before querying the dba_extents or dba_segments views for calculating the size of data in the database. My colleagues do not agree. Could you please advise us?

Thanx

May 03, 2006 - 7:00 am UTC

dba_extents and dba_segments are fully populated without gathering statistics.

### But...

Darvishzadeh, May 03, 2006 - 8:36 am UTC

You analyze everywhere, and you always say "analyze" as one of your steps before calculating. Could you please kindly explain this discrepancy.

Thank you very much.

May 03, 2006 - 12:51 pm UTC

what "discrepancy"

You wrote:

... I say an analyze
IS necessary before querying the dba_extents or dba_segments ....

I said

"no, that is not true"

However, I would not necessarily use those views for sizing - load, gather stats and use the information from others views - sure, but dba_extents and dba_segments reports on ALLOCATED space - not USED space - for sizing, I'd be more interested in the latter.
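To see the two numbers side by side, a hedged sketch (table name T is a placeholder; assumes a non-partitioned table and freshly gathered statistics):

```sql
-- sketch: allocated space (segments) vs. approximate used space (statistics)
select s.bytes                       allocated_bytes,  -- everything ever allocated
       t.num_rows * t.avg_row_len   approx_row_bytes  -- excludes block overhead
  from user_segments s, user_tables t
 where s.segment_name = t.table_name
   and t.table_name   = 'T';
```

The gap between the two columns is the space above the HWM plus per-block overhead, pctfree and fragmentation - which is why sizing from statistics and sizing from dba_segments answer different questions.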

### why there is a difference ?

Parag Jayant Patankar, October 10, 2006 - 8:42 am UTC

Hi Tom,

I am using method given by you for calculating table size on 9.2 on dictionary managed tablespace after doing dbms_stats on a particular table

from user_extents

SUM(BLOCKS)
-----------
171924

from user_tables

BLOCKS EMPTY_BLOCKS AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
171909 0 0 0

my block size is 4k. I have following questions

1/ Why is there a difference between the total number of blocks allocated and the number of blocks used + empty_blocks + num_freelist_blocks + the header block?

2/ In this situation, can I simply calculate the actual table size as 171909 * 4k?

thanks & regards
PJP

October 10, 2006 - 12:09 pm UTC

are you using automatic segment space management, or what? I need details of your underlying "storage" there.

guessing you are using ASSM (automatic segment space management) - that would account for the difference.

sure, *4k would likely be more than sufficient for your guesstimate.

### why the difference ?

Parag J Patankar, October 11, 2006 - 12:42 am UTC

```Hi Tom,

I am giving you details from user_tablespaces

SQL> select tablespace_name, block_size, extent_management, ALLOCATION_TYPE, SEGMENT_SPACE_MANAGEMENT
2  from   user_tablespaces
3  /

TABLESPACE_NAME                BLOCK_SIZE EXTENT_MAN ALLOCATIO SEGMEN
------------------------------ ---------- ---------- --------- ------
SYSTEM                               4096 DICTIONARY USER      MANUAL
TEMP                                 4096 DICTIONARY USER      MANUAL
USERS                                4096 DICTIONARY USER      MANUAL
USER_D01                            4096 DICTIONARY USER      MANUAL
USER_I01                            4096 DICTIONARY USER      MANUAL
UNDOTBS1                             4096 LOCAL      SYSTEM    MANUAL
DB_AUDIT_D01                         4096 LOCAL      SYSTEM    MANUAL

My table is located in USER_D01.

thanks & regards
PJP
```

A reader, October 18, 2007 - 4:55 pm UTC

user_tables shows us blocks, empty_blocks, avg_space. Is the same level of info available for LOB segments and LOB indexes?

### Space used by index

Jeet, October 19, 2007 - 3:35 pm UTC

Tom

To estimate data space I am using your formula of using dbms_stats and then multiplying expected rows by blocks per row. But I am confused over the index calculation:

If I do ANALYZE INDEX .. VALIDATE STRUCTURE;

I can get num_rows from user_indexes
and blocks from index_stats

so would this be the correct way to estimate, or do I have to look at some other columns as well?

Thanks

### How to calculate the actual size of a table?

Gs, December 03, 2007 - 5:45 pm UTC

```Tom,
I read your explanation about the difference between calculating VSIZE and AVG_ROW_LEN. I have done a small test and the following are the details....

SQL> create table emp ( ename varchar2(36) );

Table created.

SQL> insert into emp values('MARK ANTHONY     ');

1 row created.

SQL> commit;

Commit complete.

SQL> ANALYZE TABLE EMP COMPUTE STATISTICS;

Table analyzed.

SQL> SELECT VSIZE(ENAME) FROM EMP;

VSIZE(ENAME)
------------
17

SQL> select NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, AVG_ROW_LEN, NUM_ROWS, NUM_FREELIST_BLOCKS
fr  2  om all_tables where owner = 'SYS' and table_name = 'EMP';

NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE AVG_ROW_LEN   NUM_ROWS NUM_FREELIST_BLOCKS
---------- ---------- ------------ ---------- ----------- ---------- -------------------
1          1            0       8055          21          1                   1

The above results were consistent with your explanation. Now I inserted a new value into table emp, and in the following it is shown that the VSIZE is much larger than the AVG_ROW_LEN (even though avg_row_len is supposed to take into consideration any leading bytes, etc.):

SQL> INSERT INTO EMP VALUES('12345678901234567890123456789012345');

1 row created.

SQL> commit;

Commit complete.

SQL> ANALYZE TABLE EMP COMPUTE STATISTICS;

Table analyzed.

SQL> SET LINESIZE 125
SQL>
SQL> select NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, AVG_ROW_LEN, NUM_ROWS, NUM_FREELIST_BLOCKS
from  2   all_tables where owner = 'SYS' and table_name = 'EMP';

NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE AVG_ROW_LEN   NUM_ROWS NUM_FREELIST_BLOCKS
---------- ---------- ------------ ---------- ----------- ---------- -------------------
2          1            0       8014          30          2                   1

SQL> SELECT VSIZE(ENAME) FROM EMP;

VSIZE(ENAME)
------------
17
35

```

Alexander the ok, May 05, 2008 - 4:59 pm UTC

Hey Tom,

Could you suggest the best way to get the size of a column? In my case, I'm looking at clobs.

I'm starting to think I might need to write code, so I can

select f_size(myColum) from....

I'm doing this because we are replicating clobs, and our latency often falls behind. I want to know if we are asking too much of the network (so I can prove it with numbers, e.g., the network is 100 mb, the clobs are 30 MB, we store two per transaction, etc.)

Thanks as always.
May 06, 2008 - 12:56 am UTC

ummm

select dbms_lob.getlength( col ) from t;

??

a 100mb connection says nothing about latency - it says something about bandwidth though...

### What's the difference between these two queries ?

Artur Costa, July 03, 2008 - 1:27 pm UTC

Hi Tom,

I'm trying to calculate the size occupied by a LOB column in a table. And I'm using two distinct aproaches.

The first is:

select sum(dbms_lob.getlength( c ))/1024/1024 from t

This should calculate the size in Mb of my BLOB column.

And in my case the value is approximately 121 MB.

Then I used a different approach and tryed the query:

select bytes/1024/1024 from user_segments where segment_name in (select segment_name from user_lobs where table_name = 'T' and column_name = 'C')

This, I think, should also give me the size in Mb of my BLOB column.

But in this case I got exactly 80 MB, which is different from the first value, and strangely less than it. And it's an exact number.

Why do I have this difference ?

I'm using Oracle 10gR2 on the HP-UX Itanium 64-bit platform.

Regards,

Artur Costa
July 07, 2008 - 10:17 am UTC

Say your chunk size is 16k

say your average lob is 33k.

You will have 3 chunks allocated per lob (48k). A chunk is not shared across lobs....

A lob is a complex data structure stored in blocks - allocated in chunks - segments allocate extents.

The query against user_segments shows the amount of space allocated - it does not show the used space (dbms_space can be used to see that)
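The chunk arithmetic above can be turned into a query against your own data - a sketch only, where T and C are the placeholder names from this thread, and the inline-storage threshold is an assumption (lobs small enough to be stored in the row consume no chunks in the lob segment):

```sql
-- sketch: approximate lob-segment space as whole chunks per out-of-line lob,
-- compared with the sum of actual lob lengths. T and C are placeholders.
select l.chunk,
       sum( dbms_lob.getlength(t.c) ) / 1024 / 1024          used_mb,
       sum( ceil( dbms_lob.getlength(t.c) / l.chunk ) * l.chunk )
         / 1024 / 1024                                       approx_chunked_mb
  from t, user_lobs l
 where l.table_name  = 'T'
   and l.column_name = 'C'
   and dbms_lob.getlength(t.c) > 3964  -- rough inline threshold, an assumption
 group by l.chunk;
```

Even approx_chunked_mb will usually undercount the segment size reported by user_segments, since extents are allocated ahead of use and freed chunks are not returned below the HWM.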

### VSIZE

karthick pattabiraman, November 18, 2008 - 3:37 am UTC

```SQL> create table size_test(val varchar2(10))
2  /

Table created.

SQL> edit
Wrote file afiedt.buf

1  insert into size_test
3    from dual
4* connect by level <= 1000000
SQL> /

1000000 rows created.

SQL> commit
2  /

Commit complete.

SQL> analyze table size_test compute statistics
2  /

Table analyzed.

SQL> select total_size_mb,
2         free_size_mb,
3         total_size_mb - free_size_mb used_size_mb
4    from (select (blocks*value)/1024/1024 total_size_mb,
5            (blocks*avg_space)/1024/1024 free_size_mb,
6            num_freelist_blocks
7       from user_tables,
8            (select value from v\$parameter where name = 'db_block_size')
9      where table_name = 'SIZE_TEST')
10  /

TOTAL_SIZE_MB FREE_SIZE_MB USED_SIZE_MB
------------- ------------ ------------
17.2890625   1.77280426   15.5162582

SQL> select sum(vsize(val))/1024/1024 used_size_mb
2    from size_test
3  /

USED_SIZE_MB
------------
9.53674316```

The query given by you and the query using vsize differ by 6 MB.

You can see I have not used the NUMBER or CHAR data type in my table. Then why the big difference?

Does it have anything to do with this statement of yours:

"vsize is not taking into consideration the null/not null flags"

Thanks,
Karthick.
November 18, 2008 - 7:42 pm UTC

vsize simply tells you how much a single instance of that column takes

the other query tells you how much a table is consuming.

a block is a complex data structure.

that null/not null flags is part of it, but just the tip of the iceberg.

### Determin the size of data to be deleted

karthick pattabiraman, November 19, 2008 - 1:30 am UTC

Why I asked this question in the first place is that I have a requirement as follows.

I am building an application which will shrink a database based on employee detail.

Suppose the production database has 1000 employees and their details. If the user wants to shrink it to, say, 100 employees and give it to the development team, this application will do it.

In this I am supposed to give an estimate of the data size reduction. Let's say the data size is 1 GB for 1000 employees; then, if you apply the following criteria, the database size will be reduced by 60%.

For such an application, do you think I can use VSIZE to determine the actual size and estimated size?

I hope I am not confusing you too much :)
November 21, 2008 - 5:46 pm UTC

this is much easier than you are making it out to be.

1000 employees = 1gb of data.

100 employees = 10% of 1000.

Therefore, it will be ABOUT 10% of 1gb.

Or it will take 90% less space, about.

### Tests in 11g Gives different results ???

sriram, January 20, 2009 - 10:42 am UTC

```Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t ( x int,
2                  y char(2000) default '*' )
3    storage ( initial 40k next 40k minextents 5 )
4    tablespace users;

Table created.

SQL> insert into t (x) values ( 1 );

1 row created.

SQL> commit;

Commit complete.

SQL> analyze table t compute statistics;

Table analyzed.

SQL> compute sum of blocks on report
SQL> break on report
SQL> select extent_id, bytes, blocks
2  from user_extents
3  where segment_name = 'T'
4  and segment_type = 'TABLE'
5  /

EXTENT_ID      BYTES     BLOCKS
---------- ---------- ----------
0      65536          8
1      65536          8
2      65536          8
3      65536          8
----------
sum                           32

SQL> select 65536/1024 from dual;

65536/1024
----------
64

SQL> select blocks, empty_blocks,
2  avg_space, num_freelist_blocks
3  from user_tables
4  where table_name = 'T'
5  /

BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
5           27       7671                   0
----------
5

SQL> show parameter db_block

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_buffers                     integer     0
db_block_checking                    string      FALSE
db_block_checksum                    string      TYPICAL
db_block_size                        integer     8192
SQL> select 8192-7671 from dual;

8192-7671
----------
521

I understand that we can use ANALYZE for computing space, so we really don't need to use DBMS_STATS - is this correct?

Why is it that we are getting different results than what you arrived at.

Thanks```

January 20, 2009 - 11:17 am UTC

look at your tablespace, it looks like

a) locally managed
b) with system allocated extents

which would be very different from 8i which had only

a) dictionary managed
b) manual extent sizing

do not use analyze, use dbms_stats to gather stats

and dbms_space to do a deep dive on allocated segment space
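The "dbms_stats, not analyze" advice looks like this in practice - a minimal sketch, with the table name T taken from the example above:

```sql
-- sketch: gather optimizer statistics with dbms_stats (the supported route)
begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'T',
    estimate_percent => dbms_stats.auto_sample_size );
end;
/

-- dbms_stats fills only what the optimizer needs:
select num_rows, blocks, avg_row_len
  from user_tables
 where table_name = 'T';
-- empty_blocks / avg_space are NOT gathered by dbms_stats;
-- use dbms_space for the deep dive on allocated vs. used space.
```

This is also why the space-oriented columns go stale (or stay 0) once you switch away from ANALYZE: they were never part of what dbms_stats collects.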

### table size

A reader, January 20, 2009 - 6:12 pm UTC

### BLOB size

Florin, February 16, 2009 - 1:02 pm UTC

```Dear Tom,

I have a strange situation in one of my tables TB_XML.
The table has 300k records and contains a BLOB column.
When I calculate the average BLOB size I get 11.

SQL> select avg(dbms_lob.getlength ('XML_CONTENT')) from user_lobs where table_name='TB_XML'

AVG(DBMS_LOB.GETLENGTH('XML_CONTENT'))
--------------------------------------
11
The max is the same:

SQL> select max(dbms_lob.getlength ('XML_CONTENT')) from user_lobs where table_name='TB_XML';

MAX(DBMS_LOB.GETLENGTH('XML_CONTENT'))
--------------------------------------
11

The BLOB is created as "enable storage in row" with chunk 8k, pctversion 10 and CACHE.

In user_segments the table is 60 MB while its SYS_LOB% is 14 GB !!!

Would you please explain this phenomenon to me.

Florin

```

February 16, 2009 - 1:07 pm UTC

ummm

you are taking a constant string 'XML_CONTENT' and converting it on the fly into a lob for each row in the view user_lobs and asking us "how long is this piece of string"

to which we respond "it is 11"

x,m,l,_,c,o,n,t,e,n,t - 11 characters.

you would sort of, well, you know - query the data you wanted to measure.

Say you have a table T with a column X and X is a blob.

select avg( dbms_lob.getlength(x) ) from T;

would be appropriate to measure the average length of the blob x contained in the table T.

### Estimate the Space required for a table.

Samy, June 01, 2009 - 6:06 am UTC

Hi Tom,

How to estimate the Space required for a table.

example: if we take the SCOTT.EMP table and there will be 6000 records inserted into it monthly, and we will keep 3 years of data, i.e. 36 * 6000 = 216000 records. Now I have to calculate the space required for this table so that the admin can allocate it.

Can you tell me how you would calculate the space required by this table with 3 years of data?

June 01, 2009 - 8:12 pm UTC

did you search for anything related on this site at all????
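For what it is worth, a rough back-of-the-envelope sketch of one common estimation approach (every number here is an assumption to be replaced with your own: an avg_row_len of 40 bytes from user_tables after gathering stats, an 8k block, roughly 100 bytes of per-block overhead, pctfree 10, and ~11 bytes of per-row overhead):

```
select ceil( 216000                      /* rows to keep over 3 years          */
           / floor( (8192 - 100)        /* usable bytes per block (assumed)    */
                    * (1 - 10/100)      /* less pctfree                        */
                    / (40 + 11) ) )     /* avg_row_len + row overhead (assumed)*/
       as estimated_blocks
  from dual;
```

With these assumed inputs that works out to roughly 1,500 blocks, about 12 MB; the honest method is still to load a representative sample, gather stats, measure, and extrapolate.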

### size of columns

Anil, July 07, 2009 - 11:56 am UTC

Hello Tom,
Is this the right query to calculate the size of each column?

All the columns are non-CLOB columns and we are using US7ASCII as character set in Oracle 9i.

SELECT sum(VSIZE(COL1)), sum(VSIZE(COL2)), sum(VSIZE(COL3))
FROM T1

If so, the sum of the values from the query above is 2GB, while the query below returns 15GB, which does not match.

SELECT SUM(BYTES)/1024/1024
FROM USER_SEGMENTS
WHERE SEGMENT_NAME = 'T1'

Am I making any mistake?
July 07, 2009 - 6:44 pm UTC

ummm, there is so little data here as to make this pretty impossible to answer.

The space allocated to a segment is a function of many things.

It could be that when you created the table (for all I know) you asked for 15gb to be allocated. And you have used only 2gb of it.

You have to remember that 2GB of raw data will consume more than 2GB of space when loaded - there is the block overhead (not all of the data block is YOURS, some is ours), and there is pctfree.

It could be that the table contained 15gb of data one day, and you deleted 13/15ths of the data (leaving just 2gb behind). The storage would still be allocated to the table (user_segments would say "15gb"), but the table would be mostly empty.

Probably, the last bit is the correct answer, the table was bigger and you removed much of it in a purge or deletes over time.
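One way to confirm that theory (a hedged sketch: assumes the segment lives in an ASSM tablespace, which is what dbms_space.space_usage requires, and T1 stands in for the real table name) is to ask how full the blocks below the high water mark actually are:

```
declare
    l_unformatted_blocks number;  l_unformatted_bytes number;
    l_fs1_blocks number;  l_fs1_bytes number;   -- 0-25%  free space
    l_fs2_blocks number;  l_fs2_bytes number;   -- 25-50% free space
    l_fs3_blocks number;  l_fs3_bytes number;   -- 50-75% free space
    l_fs4_blocks number;  l_fs4_bytes number;   -- 75-100% free space
    l_full_blocks number; l_full_bytes number;
begin
    dbms_space.space_usage
    ( segment_owner      => user,
      segment_name       => 'T1',
      segment_type       => 'TABLE',
      unformatted_blocks => l_unformatted_blocks,
      unformatted_bytes  => l_unformatted_bytes,
      fs1_blocks => l_fs1_blocks, fs1_bytes => l_fs1_bytes,
      fs2_blocks => l_fs2_blocks, fs2_bytes => l_fs2_bytes,
      fs3_blocks => l_fs3_blocks, fs3_bytes => l_fs3_bytes,
      fs4_blocks => l_fs4_blocks, fs4_bytes => l_fs4_bytes,
      full_blocks => l_full_blocks, full_bytes => l_full_bytes );

    dbms_output.put_line( 'full blocks        = ' || l_full_blocks );
    dbms_output.put_line( 'mostly empty (fs4) = ' || l_fs4_blocks );
end;
/
```

A large fs4 count relative to full blocks would be the signature of a big purge: lots of nearly empty blocks still allocated to the segment.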

### How to find no of rows in each table

A reader, November 16, 2009 - 6:26 am UTC

Tom.

My client doesn't want to run ANALYZE TABLE to collect statistics. Hence, I would not be able to use user_tables or all_all_tables to retrieve the approximate number of rows in each table in the database.

I don't want to create another table to hold updated row counts. Can I somehow do this in one query?

November 23, 2009 - 9:37 am UTC

select count(*) from t1 union all
select count(*) from t2 union all
select count(*) from t3 union all
.....

and they shouldn't use analyze table, they should use dbms_stats.

Daniel Blondowski, December 23, 2009 - 11:34 am UTC

```Tom...any ideas why I would get different total numbers:

select sum(bytes)/1024/1024/1024 from dba_segments seg,
dba_tables ind
where seg.segment_name = table_name
and seg.owner = 'TEST'
2    3    4    5
SQL> /

SUM(BYTES)/1024/1024/1024
-------------------------
43.8671875

SQL> select sum(bytes)/1024/1024/1024 from dba_segments where owner = 'TEST' and segment_type = 'TABLE'
2  /

SUM(BYTES)/1024/1024/1024
-------------------------
55.4130859```

December 31, 2009 - 12:14 pm UTC

dba_segments has an owner column
dba_tables has an owner column

you don't have a predicate on dba_tables.owner at all.

The first query will

o find all segments owned by the owner 'TEST'.
o find all tables - all of them - for everyone.
o join those two sets together by segment_name to table_name

So, you'll find all of 'TEST's segments (say they are named A-table, B-index, C-lobsegment, D-lobindex). You will find then all tables named A (there might be two), and all tables named B and so on. And then report on them all as if there were table segments.

The second query sums up all of TEST's tables.

The first query - adds up a mess of junk.

### recyclebin

Daniel Blondowski, December 23, 2009 - 11:38 am UTC

Tables in recyclebin.
December 31, 2009 - 12:17 pm UTC

no, the first query is junk, wrong, incorrect.

select sum(bytes)/1024/1024/1024
from dba_segments seg, dba_tables ind
where seg.segment_name = table_name
and seg.owner = 'TEST'

is just like:

select ...
from (select * from dba_segments where owner='TEST') a,
(select * from dba_tables) b
where b.table_name = a.segment_name
/

you need A LOT more in there.

You need to get segment types of just TABLES.
You need to join by owner.

Else, it is just garbage.
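Putting those two fixes together, a corrected version of the first query would look something like:

```
select sum(seg.bytes)/1024/1024/1024 gbytes
  from dba_segments seg, dba_tables tab
 where seg.segment_name = tab.table_name
   and seg.owner        = tab.owner        -- join by owner too
   and seg.segment_type = 'TABLE'          -- table segments only
   and seg.owner        = 'TEST'
/
```

With the owner join and the segment_type predicate in place, it should agree with the second query, apart from any dropped tables sitting in the recyclebin (which appear in dba_segments but not dba_tables).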

### Mismatch of Block Counts ?

Yogesh Purabiya, May 19, 2011 - 6:21 am UTC

Is this possible? When?

```16:47:05 stores1@ >select ut.Table_Name, ut.Empty_Blocks teb, ut.Blocks tub, ut.Empty_Blocks + ut.Blocks ttb, us.blocks sb
16:47:06   2    from tabs ut,
16:47:06   3       ( select Segment_Name, sum(Blocks) Blocks from user_segments
16:47:06   4          group by Segment_Name
16:47:06   5       ) us
16:47:06   6   where ut.table_Name = us.Segment_Name
16:47:06   7     and us.Blocks < ut.Empty_Blocks + ut.Blocks  -- 1 / 280
16:47:06   8   order by ut.Table_Name
16:47:06   9  /

TABLE_NAME                            TEB        TUB        TTB         SB
------------------------------ ---------- ---------- ---------- ----------
PLGROUP                                10         25         35         32

real: 297
16:47:08 stores1@ >```

If I remove the last predicate (and us.Blocks < ut.Empty_Blocks + ut.Blocks ), then I get 280 rows.

Table PLGROUP contains 7996 rows.
May 19, 2011 - 8:58 am UTC

is what possible - why would I reverse engineer your query to try and figure out what you MIGHT have possibly meant to retrieve?

why wouldn't you describe what you are trying to get, describe what you got, and describe why you think it is wrong?

I can say that comparing user_segments - which has real time information on the segment size with tabs - which has information AS OF last statistics gathering - would be problematic in general...

```ops\$tkyte%ORA11GR2> create table t as select * from stage;

Table created.

ops\$tkyte%ORA11GR2> select table_name, blocks, empty_blocks from tabs where table_name = 'T';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T

ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2> select table_name, blocks, empty_blocks from tabs where table_name = 'T';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T                                    1054            0

ops\$tkyte%ORA11GR2> truncate table t;

Table truncated.

ops\$tkyte%ORA11GR2> select table_name, blocks, empty_blocks from tabs where table_name = 'T';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T                                    1054            0

ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2> select table_name, blocks, empty_blocks from tabs where table_name = 'T';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T                                       0            0

```

### Blocks count

Yogesh Purabiya, May 21, 2011 - 1:15 am UTC

My question is whether the sum (Empty_Blocks + Blocks) from User_Tables can be greater than Blocks from User_Segments?

```USER                                  UID TERMINAL            ENTRYID
------------------------------ ---------- ---------------- ----------
SESSIONID DATE_AND_TIME
------------- -----------------
STORES1                                24 WVMDPC8118T               0
1,16,17,517 21/05/11 11:37:34

11:37:34 stores1@ >
11:37:40 stores1@ >begin
11:37:45   2  dbms_stats.gather_table_stats ( user, 'PlGroup' );
11:37:45   3  end;
11:37:45   4  /

PL/SQL procedure successfully completed.

real: 281
11:37:49 stores1@ >select ut.Table_Name, ut.Empty_Blocks teb, ut.Blocks tub, ut.Empty_Blocks + ut.Blocks ttb, us.blocks sb
11:37:57   2    from tabs ut,
11:37:57   3       ( select Segment_Name, sum(Blocks) Blocks from user_segments
11:37:57   4          group by Segment_Name
11:37:57   5       ) us
11:37:57   6   where ut.table_Name = us.Segment_Name
11:37:57   7     and us.Blocks < ut.Empty_Blocks + ut.Blocks  -- 1 / 280
11:37:57   8   order by ut.Table_Name
11:37:57   9  /

TABLE_NAME                            TEB        TUB        TTB         SB
------------------------------ ---------- ---------- ---------- ----------
PLGROUP                                10         25         35         32

real: 297
11:37:59 stores1@ >spo off
```

We are using Oracle 8.1.7 on Windows/2003

May 23, 2011 - 11:49 am UTC

re-read my example, I sort of demonstrated that didn't I?

I truncated the table (it is now "empty" and sort of "block free") but the information in user_tables isn't updated UNTIL I gather statistics.

### one question

kuna, May 23, 2011 - 3:35 pm UTC

Hi,

one question - I have six schemas; suggest a query which provides the top ten most occupied tables in each schema

Thanks

May 23, 2011 - 5:20 pm UTC

define for me "most occupied", do you mean the ones with the most blocks allocated? used?

and what if one of the schemas has 100 tables - and every table is the exact same size. Which ten did you want?

did you want the set of tables with the 10 largest sizes (could be way more than 10)

did you want a random set of 10 tables - from the above set?

In any case - assuming you know how to report on tables in the six schemas and sort by schema/table size (you should, pretty easy - just query dba_tables), you can use the techniques discussed here:

to easily get the top ten by schema - regardless of how you define "top ten".
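As a sketch of one such technique (the schema names are placeholders; note that rank() returns all ties, matching the "could be way more than 10" case above - swap in row_number() if you want exactly ten arbitrary rows instead):

```
select owner, table_name, blocks
  from ( select owner, table_name, blocks,
                rank() over (partition by owner order by blocks desc) rnk
           from dba_tables
          where owner in ('SCHEMA1','SCHEMA2','SCHEMA3')  -- your six schemas here
       )
 where rnk <= 10
 order by owner, rnk
/
```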

### Block Counts

Yogesh Purabiya, May 24, 2011 - 1:12 am UTC

The table I referred to (PLGROUP) is part of our legacy application, now migrated to some ERP. Therefore, the tablespace was made READ ONLY. I gathered statistics. Even then, the count mismatch is seen.

Please see my last query, where I have gathered fresh statistics.

The tablespace has been READ ONLY for the last few years. So, there is no chance of User_Tables getting updated.

Still, I see the mismatch.
May 24, 2011 - 7:49 am UTC

user_tables gets updated regardless of what mode the tablespace is in. Heck, I can drop that table in the read only tablespace and user_tables will reflect that.

```ops\$tkyte%ORA11GR2> create table t ( x int ) tablespace p1;

Table created.

ops\$tkyte%ORA11GR2> insert into t values ( 1 );

1 row created.

ops\$tkyte%ORA11GR2> commit;

Commit complete.

ops\$tkyte%ORA11GR2> alter tablespace p1 read only;

Tablespace altered.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> select table_name from user_tables where table_name = 'T';

TABLE_NAME
------------------------------
T

ops\$tkyte%ORA11GR2> drop table t;

Table dropped.

ops\$tkyte%ORA11GR2> select table_name from user_tables where table_name = 'T';

no rows selected

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> alter tablespace p1 read write;

Tablespace altered.

```

so, there is not merely a chance, but absolutely the ability for user_tables to be modified with respect to the data you see for a table in a read only tablespace - no doubt.

If the data is 100% up to date everywhere, they should be showing "the same" - the mismatch shouldn't be there.

But then, 8.1.7 (not even supported on your really archaic operating system, not really even supported at all really, software from last century)??

This table is so tiny, it falls into the category of "so what"; there could have been an issue (that I'm not personally aware of) with user_tables/user_segments from way in the past.

### Size of table and its related index

A reader, June 23, 2011 - 8:55 am UTC

Hi Tom,

What would be an ideal query for estimating the size of a table and its related indexes?

I was using this query

select SEGMENT_NAME, sum(bytes)/1024 as "KB", sum(bytes)/1024/1024 as "MB" from USER_SEGMENTS where segment_type = 'TABLE';

But this doesn't give me an exact idea of the size of the indexes related to a table. Do we need to join user_ind_columns with it?
June 23, 2011 - 9:39 am UTC

if we make the simple assumption that all indexes on a table are owned by the owner of the table and you have no indexes on tables outside of your schema, then something like:

```ops\$tkyte%ORA11GR2> select segment_name, table_bytes, index_bytes
2    from (
3  select SEGMENT_NAME, bytes table_bytes
4    from USER_SEGMENTS
5   where segment_type = 'TABLE'
6         ) t,
7             (
8  select ui.table_name, sum(us.bytes) index_bytes
9    from user_indexes ui, user_segments us
10   where ui.index_name = us.segment_name
11     and us.segment_type = 'INDEX'
12   group by ui.table_name
13         ) i
14   where t.segment_name = i.table_name(+)
15  /

SEGMENT_NAME                   TABLE_BYTES INDEX_BYTES
------------------------------ ----------- -----------
A1                                   65536       65536
EMP                                  65536       65536
T                                    65536
TEST_DUPLICATE                       65536
DEPARTMENTS                          65536
T2                                   65536
LOCATIONS                            65536
T_HIERARCHY                          65536
CONF                                 65536
B                                    65536
A                                    65536
USR                                  65536

12 rows selected.
```

will get it - for NORMAL indexes anyway; it won't get lob segments, lob indexes and so on - just table and regular index information.

Alexander, June 23, 2011 - 3:52 pm UTC

Tom,

This query gave me an idea for something we could use around here. I'm going to modify what you did to show a table, its size, its indexes and their sizes, and the columns in each index.

This is going to be for a specific table a person supplies, so I want one column listing for the table and its size. Also, I'd like the column_name column to contain the list of columns, not one per row.

This is what I have:

```SQL> select decode( grouping(rownum), 1, to_char(NULL),segment_name) TABLE_NAME,
2         table_size,
3         dc.column_name,
4         i.index_name,
5         index_size
6  from (
7      select segment_name, bytes/1024/1024 TABLE_SIZE
8       from dba_segments
9      where segment_type = 'TABLE'
10        and segment_name not like '%BIN%'
11        and segment_name  = 'CCC_LS_SNAPSHOT_01_T') t,
12       (
13      select ui.table_name, ui.index_name, sum(us.bytes/1024/1024) INDEX_SIZE
14        from dba_indexes ui, dba_segments us
15      where ui.index_name = us.segment_name
16        and us.segment_type = 'INDEX'
17      group by ui.table_name, ui.index_name) i,
18      dba_tab_cols dc
19  where t.segment_name = i.table_name(+)
20    and dc.table_name = i.table_name
21    and column_name = 'LS_AMT'
22  group by rollup (segment_name, table_size, dc.column_name, i.index_name, index_size, rownum)
23  /

TABLE_NAME                TABLE_SIZE COLUMN_NAME                    INDEX_NAME                     INDEX_SIZE
------------------------- ---------- ------------------------------ ------------------------------ ----------
CCC_LS_SNAPSHOT_01_T      127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_PK           154.21875
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_PK           154.21875
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_PK
CCC_LS_SNAPSHOT_01_T      127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X1          48.0078125
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X1          48.0078125
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X1
CCC_LS_SNAPSHOT_01_T      127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X2          90.6796875
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X2          90.6796875
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X2
CCC_LS_SNAPSHOT_01_T      127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X3           59.203125
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X3           59.203125
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X3
CCC_LS_SNAPSHOT_01_T      127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X4          104.859375
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X4          104.859375
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X4
CCC_LS_SNAPSHOT_01_T      127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X5          62.2734375
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X5          62.2734375
127.320313 LS_AMT                         CCC_LS_SNAPSHOT_01_X5
127.320313 LS_AMT
127.320313

22 rows selected.```

There's something not right about this because some of these are composite indexes and it's only listing 1 column. But this is what I'm looking for:

```TABLE_NAME                TABLE_SIZE COLUMN_NAME                    INDEX_NAME                     INDEX_SIZE
------------------------- ---------- ------------------------------ ------------------------------ ----------
CCC_LS_SNAPSHOT_01_T      127.320313 LS_AMT, COL2, COL3             CCC_LS_SNAPSHOT_01_PK           154.21875
LS_AMT, COL2, COL3             CCC_LS_SNAPSHOT_01_X1          48.0078125
LS_AMT, COL2, COL3             CCC_LS_SNAPSHOT_01_X2          90.6796875
LS_AMT, COL2, COL3             CCC_LS_SNAPSHOT_01_X3           59.203125
LS_AMT, COL2, COL3             CCC_LS_SNAPSHOT_01_X4          104.859375
LS_AMT, COL2, COL3             CCC_LS_SNAPSHOT_01_X5          62.2734375```

I'm probably going to have an option to do all tables too, so that's why I'd like to have it broken up by table. Hope that's as clear as mud.
June 23, 2011 - 4:36 pm UTC

```ops\$tkyte%ORA11GR2> with tables
2  as
3  (
4  select segment_name tname, to_char(bytes/1024/1024,'999,999.99') table_size
5    from user_segments
6   where segment_type = 'TABLE'
7     and segment_name not like 'BIN%'
8  ),
9  indexes
10  as
11  (
12  select table_name, index_name, scbp, rn,
13         (select to_char(bytes/1024/1024,'999,999.99') from user_segments where segment_name = INDEX_NAME and segment_type = 'INDEX') index_size
14    from (
15  select table_name, index_name,
16         substr( max(sys_connect_by_path( column_name, ', ' )), 3) scbp,
17         row_number() over (partition by table_name order by index_name) rn
18    from user_ind_columns
20  connect by prior table_name = table_name
21     and prior index_name = index_name
22     and prior column_position+1 = column_position
23   group by table_name, index_name
24         )
25  )
26  select decode( nvl(rn,1), 1, tables.tname ) tname,
27         decode( nvl(rn,1), 1, tables.table_size ) table_size,
28             rn "INDEX#",
29             indexes.scbp,
30             indexes.index_name,
31             indexes.index_size
32    from tables, indexes
33   where tables.tname = indexes.table_name(+)
34   order by tables.tname, indexes.rn
35  /

TNAME           TABLE_SIZE      INDEX# SCBP                                                INDEX_NAME      INDEX_SIZE
--------------- ----------- ---------- --------------------------------------------------- --------------- -----------
A1                      .06          1 ISSUEDESC                                           SYS_C0019422            .06
B                       .06
CONF                    .06
DEPARTMENTS             .06
EMP                     .06          1 ENAME                                               EMP_ENAME_IDX           .06
LOCATIONS               .06
T                      9.00          1 OWNER, OBJECT_TYPE, OBJECT_NAME                     T_IDX1                 5.00
2 LAST_DDL_TIME, TIMESTAMP, OWNER, OBJECT_ID, CREATED T_IDX2                 5.00
3 OBJECT_ID                                           T_PK                   2.00
T2                      .06
TEST_DUPLICATE          .06
T_HIERARCHY             .06
USR                     .06

13 rows selected.

ops\$tkyte%ORA11GR2> with tables
2  as
3  (
4  select segment_name tname, to_char(bytes/1024/1024,'999,999.99') table_size
5    from user_segments
6   where segment_type = 'TABLE'
7     and segment_name not like 'BIN%'
8  ),
9  indexes
10  as
11  (
12  select table_name, index_name, scbp, rn,
13         (select to_char(bytes/1024/1024,'999,999.99') from user_segments where segment_name = INDEX_NAME and segment_type = 'INDEX') index_size
14    from (
15  select table_name, index_name,
16         substr( max(sys_connect_by_path( column_name, ', ' )), 3) scbp,
17         row_number() over (partition by table_name order by index_name) rn
18    from user_ind_columns
20  connect by prior table_name = table_name
21     and prior index_name = index_name
22     and prior column_position+1 = column_position
23   group by table_name, index_name
24         )
25  )
26  select decode( nvl(rn,1), 1, tables.tname ) tname,
27         decode( nvl(rn,1), 1, tables.table_size ) table_size,
28             rn "INDEX#",
29             indexes.scbp,
30             indexes.index_name,
31             indexes.index_size
32    from tables, indexes
33   where tables.tname = indexes.table_name(+)
34     and tables.tname = 'T'
35   order by tables.tname, indexes.rn
36  /

TNAME           TABLE_SIZE      INDEX# SCBP                                                INDEX_NAME      INDEX_SIZE
--------------- ----------- ---------- --------------------------------------------------- --------------- -----------
T                      9.00          1 OWNER, OBJECT_TYPE, OBJECT_NAME                     T_IDX1                 5.00
2 LAST_DDL_TIME, TIMESTAMP, OWNER, OBJECT_ID, CREATED T_IDX2                 5.00
3 OBJECT_ID                                           T_PK                   2.00

```

Alexander, June 24, 2011 - 10:26 am UTC

Thanks a lot, this is great.

Because we DBAs would be running this with our accounts, I am hitting the DBA_* views, and it tends to be a bit slow. Do you have any suggestions on how I can speed it up?

```  1  with tables
2      as
3      (
4      select segment_name table_name, to_char(bytes/1024/1024,'999,999.99') table_size
5        from dba_segments
6       where segment_type = 'TABLE'
7         and segment_name not like 'BIN%'
8      ),
9      indexes
10      as
11      (
12      select table_name, index_name, columns, rn,
13             (select to_char(bytes/1024/1024,'999,999.99') from dba_segments where segment_name =
14  INDEX_NAME and segment_type = 'INDEX' and rownum = 1) index_size
15        from (
16      select table_name, index_name,
17             substr( max(sys_connect_by_path( column_name, ', ' )), 3) COLUMNS,
18             row_number() over (partition by table_name order by index_name) rn
19        from dba_ind_columns
21      connect by prior table_name = table_name
22         and prior index_name = index_name
23         and prior column_position+1 = column_position
24       group by table_name, index_name
25             )
26      )
27      select decode( nvl(rn,1), 1, tables.table_name ) table_name,
28             decode( nvl(rn,1), 1, tables.table_size ) table_size,
29                 rn "INDEX#",
30                 indexes.columns,
31                 indexes.index_name,
32                 indexes.index_size
33        from tables, indexes
34       where tables.table_name = indexes.table_name(+)
35         and tables.table_name = 'CCC_LS_SNAPSHOT_01_T'
36*      order by tables.table_name, indexes.rn
SQL> /

6 rows selected.

Elapsed: 00:00:17.55

Execution Plan
----------------------------------------------------------
Plan hash value: 3780812850

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                           |     3 |  6675 |   231   (2)| 00:00:03 |
|   1 |  SORT ORDER BY                             |                           |     3 |  6675 |   231   (2)| 00:00:03 |
|*  2 |   HASH JOIN OUTER                          |                           |     3 |  6675 |   230   (1)| 00:00:03 |
|   3 |    VIEW                                    | SYS_DBA_SEGS              |     3 |   510 |   156   (1)| 00:00:02 |
|   4 |     UNION-ALL                              |                           |       |       |            |          |
|   5 |      NESTED LOOPS OUTER                    |                           |     1 |   113 |    69   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                         |                           |     1 |   109 |    68   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                        |                           |     1 |   102 |    67   (0)| 00:00:01 |
|   8 |         NESTED LOOPS                       |                           |     1 |    96 |    67   (0)| 00:00:01 |
|   9 |          NESTED LOOPS                      |                           |     1 |    70 |    66   (0)| 00:00:01 |
|  10 |           TABLE ACCESS BY INDEX ROWID      | OBJ\$                      |     2 |    72 |    65   (0)| 00:00:01 |
|* 11 |            INDEX SKIP SCAN                 | I_OBJ2                    |     2 |       |    63   (0)| 00:00:01 |
|* 12 |           VIEW                             | SYS_OBJECTS               |     1 |    34 |     1   (0)| 00:00:01 |
|  13 |            UNION-ALL PARTITION             |                           |       |       |            |          |
|* 14 |             TABLE ACCESS CLUSTER           | TAB\$                      |     1 |    23 |     2   (0)| 00:00:01 |
|* 15 |              INDEX UNIQUE SCAN             | I_OBJ#                    |     1 |       |     1   (0)| 00:00:01 |
|* 16 |             FILTER                         |                           |       |       |            |          |
|  17 |              TABLE ACCESS BY INDEX ROWID   | TABPART\$                  |     1 |    16 |     1   (0)| 00:00:01 |
|* 18 |               INDEX UNIQUE SCAN            | I_TABPART_OBJ\$            |     1 |       |     0   (0)| 00:00:01 |
|* 19 |             FILTER                         |                           |       |       |            |          |
|  20 |              TABLE ACCESS CLUSTER          | CLU\$                      |     1 |    13 |     2   (0)| 00:00:01 |
|* 21 |               INDEX UNIQUE SCAN            | I_OBJ#                    |     1 |       |     1   (0)| 00:00:01 |
|* 22 |             TABLE ACCESS BY INDEX ROWID    | IND\$                      |     1 |    20 |     2   (0)| 00:00:01 |
|* 23 |              INDEX UNIQUE SCAN             | I_IND1                    |     1 |       |     1   (0)| 00:00:01 |
|* 24 |             FILTER                         |                           |       |       |            |          |
|  25 |              TABLE ACCESS BY INDEX ROWID   | INDPART\$                  |     1 |    16 |     1   (0)| 00:00:01 |
|* 26 |               INDEX UNIQUE SCAN            | I_INDPART_OBJ\$            |     1 |       |     0   (0)| 00:00:01 |
|* 27 |             FILTER                         |                           |       |       |            |          |
|* 28 |              TABLE ACCESS BY INDEX ROWID   | LOB\$                      |     1 |    20 |     2   (0)| 00:00:01 |
|* 29 |               INDEX UNIQUE SCAN            | I_LOB2                    |     1 |       |     1   (0)| 00:00:01 |
|* 30 |             FILTER                         |                           |       |       |            |          |
|* 31 |              TABLE ACCESS BY INDEX ROWID   | TABSUBPART\$               |     1 |    52 |     0   (0)| 00:00:01 |
|  32 |               INDEX FULL SCAN              | I_TABSUBPART_POBJSUBPART\$ |     1 |       |     0   (0)| 00:00:01 |
|* 33 |             FILTER                         |                           |       |       |            |          |
|* 34 |              TABLE ACCESS BY INDEX ROWID   | INDSUBPART\$               |     1 |    52 |     0   (0)| 00:00:01 |
|  35 |               INDEX FULL SCAN              | I_INDSUBPART_POBJSUBPART\$ |     1 |       |     0   (0)| 00:00:01 |
|* 36 |             TABLE ACCESS BY INDEX ROWID    | LOBFRAG\$                  |     1 |    17 |     1   (0)| 00:00:01 |
|* 37 |              INDEX UNIQUE SCAN             | I_LOBFRAG\$_FRAGOBJ\$       |     1 |       |     0   (0)| 00:00:01 |
|* 38 |          TABLE ACCESS CLUSTER              | SEG\$                      |     1 |    26 |     1   (0)| 00:00:01 |
|* 39 |           INDEX UNIQUE SCAN                | I_FILE#_BLOCK#            |     1 |       |     0   (0)| 00:00:01 |
|* 40 |         INDEX UNIQUE SCAN                  | I_FILE2                   |     1 |     6 |     0   (0)| 00:00:01 |
|  41 |        TABLE ACCESS CLUSTER                | TS\$                       |     1 |     7 |     1   (0)| 00:00:01 |
|* 42 |         INDEX UNIQUE SCAN                  | I_TS#                     |     1 |       |     0   (0)| 00:00:01 |
|  43 |       TABLE ACCESS CLUSTER                 | USER\$                     |     1 |     4 |     1   (0)| 00:00:01 |
|* 44 |        INDEX UNIQUE SCAN                   | I_USER#                   |     1 |       |     0   (0)| 00:00:01 |
|  45 |      NESTED LOOPS                          |                           |     1 |    75 |     5   (0)| 00:00:01 |
|  46 |       NESTED LOOPS OUTER                   |                           |     1 |    68 |     4   (0)| 00:00:01 |
|  47 |        NESTED LOOPS                        |                           |     1 |    64 |     3   (0)| 00:00:01 |
|  48 |         NESTED LOOPS                       |                           |     1 |    34 |     2   (0)| 00:00:01 |
|* 49 |          TABLE ACCESS BY INDEX ROWID       | UNDO\$                     |     1 |    28 |     2   (0)| 00:00:01 |
|* 50 |           INDEX RANGE SCAN                 | I_UNDO2                   |     1 |       |     1   (0)| 00:00:01 |
|* 51 |          INDEX UNIQUE SCAN                 | I_FILE2                   |     1 |     6 |     0   (0)| 00:00:01 |
|* 52 |         TABLE ACCESS CLUSTER               | SEG\$                      |     1 |    30 |     1   (0)| 00:00:01 |
|* 53 |          INDEX UNIQUE SCAN                 | I_FILE#_BLOCK#            |     1 |       |     0   (0)| 00:00:01 |
|  54 |        TABLE ACCESS CLUSTER                | USER\$                     |     1 |     4 |     1   (0)| 00:00:01 |
|* 55 |         INDEX UNIQUE SCAN                  | I_USER#                   |     1 |       |     0   (0)| 00:00:01 |
|  56 |       TABLE ACCESS CLUSTER                 | TS\$                       |     1 |     7 |     1   (0)| 00:00:01 |
|* 57 |        INDEX UNIQUE SCAN                   | I_TS#                     |     1 |       |     0   (0)| 00:00:01 |
|  58 |      NESTED LOOPS OUTER                    |                           |     1 |    56 |    82   (2)| 00:00:01 |
|  59 |       NESTED LOOPS                         |                           |     1 |    52 |    81   (2)| 00:00:01 |
|* 60 |        HASH JOIN                           |                           |     1 |    45 |    80   (2)| 00:00:01 |
|* 61 |         TABLE ACCESS FULL                  | SEG\$                      |    26 |   910 |    78   (2)| 00:00:01 |
|  62 |         TABLE ACCESS BY INDEX ROWID        | FILE\$                     |    91 |   910 |     2   (0)| 00:00:01 |
|* 63 |          INDEX FULL SCAN                   | I_FILE2                   |     1 |       |     1   (0)| 00:00:01 |
|  64 |        TABLE ACCESS CLUSTER                | TS\$                       |     1 |     7 |     1   (0)| 00:00:01 |
|* 65 |         INDEX UNIQUE SCAN                  | I_TS#                     |     1 |       |     0   (0)| 00:00:01 |
|  66 |       TABLE ACCESS CLUSTER                 | USER\$                     |     1 |     4 |     1   (0)| 00:00:01 |
|* 67 |        INDEX UNIQUE SCAN                   | I_USER#                   |     1 |       |     0   (0)| 00:00:01 |
|  68 |    VIEW                                    |                           |     1 |  2055 |    73   (0)| 00:00:01 |
|* 69 |     VIEW                                   |                           |     1 |  2048 |    73   (0)| 00:00:01 |
|  70 |      WINDOW NOSORT                         |                           |     1 |   173 |    73   (0)| 00:00:01 |
|  71 |       SORT GROUP BY                        |                           |     1 |   173 |    73   (0)| 00:00:01 |
|* 72 |        CONNECT BY WITH FILTERING           |                           |       |       |            |          |
|  73 |         NESTED LOOPS OUTER                 |                           |   305 |    99K|  1288   (1)| 00:00:16 |
|  74 |          NESTED LOOPS                      |                           |   305 | 86925 |  1286   (1)| 00:00:16 |
|* 75 |           HASH JOIN                        |                           |   305 | 68930 |   981   (2)| 00:00:12 |
|  76 |            TABLE ACCESS FULL               | USER\$                     |   427 |  1708 |     6   (0)| 00:00:01 |
|* 77 |            HASH JOIN                       |                           |   305 | 67710 |   974   (2)| 00:00:12 |
|* 78 |             HASH JOIN                      |                           |   305 | 43310 |   790   (2)| 00:00:10 |
|  79 |              TABLE ACCESS FULL             | USER\$                     |   427 |  1708 |     6   (0)| 00:00:01 |
|* 80 |              HASH JOIN                     |                           |   305 | 42090 |   783   (1)| 00:00:10 |
|* 81 |               TABLE ACCESS FULL            | ICOL\$                     |  4272 |   120K|   293   (1)| 00:00:04 |
|* 82 |               HASH JOIN                    |                           |  4308 |   458K|   489   (2)| 00:00:06 |
|* 83 |                TABLE ACCESS FULL           | IND\$                      |  4308 |   122K|   305   (1)| 00:00:04 |
|  84 |                TABLE ACCESS FULL           | OBJ\$                      | 60446 |  4722K|   183   (2)| 00:00:03 |
|  85 |             TABLE ACCESS FULL              | OBJ\$                      | 60446 |  4722K|   183   (2)| 00:00:03 |
|* 86 |           TABLE ACCESS CLUSTER             | COL\$                      |     1 |    59 |     1   (0)| 00:00:01 |
|  87 |          TABLE ACCESS BY INDEX ROWID       | ATTRCOL\$                  |     1 |    48 |     1   (0)| 00:00:01 |
|* 88 |           INDEX UNIQUE SCAN                | I_ATTRCOL1                |     1 |       |     0   (0)| 00:00:01 |
|  89 |         NESTED LOOPS                       |                           |     1 |   173 |    73   (0)| 00:00:01 |
|  90 |          NESTED LOOPS                      |                           |     1 |   169 |    72   (0)| 00:00:01 |
|  91 |           NESTED LOOPS OUTER               |                           |     1 |   165 |    71   (0)| 00:00:01 |
|  92 |            NESTED LOOPS                    |                           |     1 |   117 |    70   (0)| 00:00:01 |
|  93 |             NESTED LOOPS                   |                           |     1 |    92 |    69   (0)| 00:00:01 |
|  94 |              NESTED LOOPS                  |                           |     1 |    81 |    68   (0)| 00:00:01 |
|  95 |               NESTED LOOPS                 |                           |     2 |   100 |    66   (0)| 00:00:01 |
|  96 |                NESTED LOOPS                |                           |       |       |            |          |
|  97 |                 BUFFER SORT                |                           |       |       |            |          |
|  98 |                  CONNECT BY PUMP           |                           |       |       |            |          |
|  99 |                 TABLE ACCESS BY INDEX ROWID| OBJ\$                      |     2 |    62 |    65   (0)| 00:00:01 |
|*100 |                  INDEX SKIP SCAN           | I_OBJ2                    |     2 |       |    63   (0)| 00:00:01 |
| 101 |                TABLE ACCESS CLUSTER        | ICOL\$                     |     1 |    19 |     1   (0)| 00:00:01 |
|*102 |                 INDEX UNIQUE SCAN          | I_OBJ#                    |     1 |       |     0   (0)| 00:00:01 |
|*103 |               TABLE ACCESS BY INDEX ROWID  | OBJ\$                      |     1 |    31 |     1   (0)| 00:00:01 |
|*104 |                INDEX UNIQUE SCAN           | I_OBJ1                    |     1 |       |     0   (0)| 00:00:01 |
|*105 |              TABLE ACCESS BY INDEX ROWID   | IND\$                      |     1 |    11 |     1   (0)| 00:00:01 |
|*106 |               INDEX UNIQUE SCAN            | I_IND1                    |     1 |       |     0   (0)| 00:00:01 |
|*107 |             TABLE ACCESS CLUSTER           | COL\$                      |     1 |    25 |     1   (0)| 00:00:01 |
|*108 |            TABLE ACCESS CLUSTER            | ATTRCOL\$                  |     1 |    48 |     1   (0)| 00:00:01 |
| 109 |           TABLE ACCESS CLUSTER             | USER\$                     |     1 |     4 |     1   (0)| 00:00:01 |
|*110 |            INDEX UNIQUE SCAN               | I_USER#                   |     1 |       |     0   (0)| 00:00:01 |
| 111 |          TABLE ACCESS CLUSTER              | USER\$                     |     1 |     4 |     1   (0)| 00:00:01 |
|*112 |           INDEX UNIQUE SCAN                | I_USER#                   |     1 |       |     0   (0)| 00:00:01 |
| 113 |         NESTED LOOPS OUTER                 |                           |   531 |   171K|  1516   (1)| 00:00:19 |
| 114 |          NESTED LOOPS                      |                           |   531 |   146K|  1512   (1)| 00:00:19 |
|*115 |           HASH JOIN                        |                           |   531 |   116K|   981   (2)| 00:00:12 |
| 116 |            TABLE ACCESS FULL               | USER\$                     |   427 |  1708 |     6   (0)| 00:00:01 |
|*117 |            HASH JOIN                       |                           |   531 |   114K|   974   (2)| 00:00:12 |
|*118 |             HASH JOIN                      |                           |   531 | 74340 |   790   (2)| 00:00:10 |
| 119 |              TABLE ACCESS FULL             | USER\$                     |   427 |  1708 |     6   (0)| 00:00:01 |
|*120 |              HASH JOIN                     |                           |   531 | 72216 |   783   (1)| 00:00:10 |
| 121 |               TABLE ACCESS FULL            | ICOL\$                     |  7435 |   196K|   293   (1)| 00:00:04 |
|*122 |               HASH JOIN                    |                           |  4308 |   458K|   489   (2)| 00:00:06 |
|*123 |                TABLE ACCESS FULL           | IND\$                      |  4308 |   122K|   305   (1)| 00:00:04 |
| 124 |                TABLE ACCESS FULL           | OBJ\$                      | 60446 |  4722K|   183   (2)| 00:00:03 |
| 125 |             TABLE ACCESS FULL              | OBJ\$                      | 60446 |  4722K|   183   (2)| 00:00:03 |
|*126 |           TABLE ACCESS CLUSTER             | COL\$                      |     1 |    59 |     1   (0)| 00:00:01 |
| 127 |          TABLE ACCESS BY INDEX ROWID       | ATTRCOL\$                  |     1 |    48 |     1   (0)| 00:00:01 |
|*128 |           INDEX UNIQUE SCAN                | I_ATTRCOL1                |     1 |       |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("SEGMENT_NAME"="INDEXES"."TABLE_NAME"(+))
11 - access("O"."NAME"='CCC_LS_SNAPSHOT_01_T')
filter("O"."NAME"='CCC_LS_SNAPSHOT_01_T' AND "O"."NAME" NOT LIKE 'BIN%')
12 - filter("O"."TYPE#"="SO"."OBJECT_TYPE_ID")
14 - filter(DECODE(BITAND("T"."PROPERTY",8192),8192,'NESTED TABLE','TABLE')='TABLE' AND
BITAND("T"."PROPERTY",1024)=0)
15 - access("T"."OBJ#"="O"."OBJ#")
16 - filter(NULL IS NOT NULL)
18 - access("TP"."OBJ#"="O"."OBJ#")
19 - filter(NULL IS NOT NULL)
21 - access("C"."OBJ#"="O"."OBJ#")
22 - filter(DECODE("I"."TYPE#",8,'LOBINDEX','INDEX')='TABLE' AND ("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR
"I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR "I"."TYPE#"=7 OR "I"."TYPE#"=8 OR "I"."TYPE#"=9))
23 - access("I"."OBJ#"="O"."OBJ#")
24 - filter(NULL IS NOT NULL)
26 - access("IP"."OBJ#"="O"."OBJ#")
27 - filter(NULL IS NOT NULL)
28 - filter(BITAND("L"."PROPERTY",64)=0 OR BITAND("L"."PROPERTY",128)=128)
29 - access("L"."LOBJ#"="O"."OBJ#")
30 - filter(NULL IS NOT NULL)
31 - filter("TSP"."OBJ#"="O"."OBJ#")
33 - filter(NULL IS NOT NULL)
34 - filter("ISP"."OBJ#"="O"."OBJ#")
36 - filter(DECODE("LF"."FRAGTYPE\$",'P','LOB PARTITION','LOB SUBPARTITION')='TABLE')
37 - access("LF"."FRAGOBJ#"="O"."OBJ#")
38 - filter("S"."TYPE#"="SO"."SEGMENT_TYPE_ID")
39 - access("S"."TS#"="SO"."TS_NUMBER" AND "S"."FILE#"="SO"."HEADER_FILE" AND
40 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
filter("F"."TS#" IS NOT NULL AND "F"."RELFILE#" IS NOT NULL)
42 - access("S"."TS#"="TS"."TS#")
44 - access("O"."OWNER#"="U"."USER#"(+))
49 - filter("UN"."STATUS\$"<>1)
50 - access("UN"."NAME"='CCC_LS_SNAPSHOT_01_T')
filter("UN"."NAME" NOT LIKE 'BIN%')
51 - access("UN"."TS#"="F"."TS#" AND "UN"."FILE#"="F"."RELFILE#")
filter("F"."TS#" IS NOT NULL AND "F"."RELFILE#" IS NOT NULL)
52 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10) AND DECODE("S"."TYPE#",1,'ROLLBACK',10,'TYPE2 UNDO')='TABLE')
53 - access("S"."TS#"="UN"."TS#" AND "S"."FILE#"="UN"."FILE#" AND "S"."BLOCK#"="UN"."BLOCK#")
55 - access("S"."USER#"="U"."USER#"(+))
57 - access("S"."TS#"="TS"."TS#")
60 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
filter(TO_CHAR("F"."FILE#")||'.'||TO_CHAR("S"."BLOCK#")='CCC_LS_SNAPSHOT_01_T' AND
TO_CHAR("F"."FILE#")||'.'||TO_CHAR("S"."BLOCK#") NOT LIKE 'BIN%')
61 - filter("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND DECODE("S"."TYPE#",2,'DEFERRED
ROLLBACK',3,'TEMPORARY',4,'CACHE',9,'SPACE HEADER','UNDEFINED')='TABLE' AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10
AND "S"."TYPE#"<>1)
63 - filter("F"."TS#" IS NOT NULL AND "F"."RELFILE#" IS NOT NULL)
65 - access("S"."TS#"="TS"."TS#")
67 - access("S"."USER#"="U"."USER#"(+))
69 - filter("TABLE_NAME"='CCC_LS_SNAPSHOT_01_T')
72 - access("BASE"."NAME"=PRIOR "BASE"."NAME" AND "IDX"."NAME"=PRIOR "IDX"."NAME")
filter("IC"."POS#"=PRIOR "IC"."POS#"+1)
75 - access("BO"."USER#"="BASE"."OWNER#")
77 - access("IC"."BO#"="BASE"."OBJ#")
78 - access("IO"."USER#"="IDX"."OWNER#")
80 - access("IC"."OBJ#"="IDX"."OBJ#")
81 - filter("IC"."POS#"=1)
82 - access("IDX"."OBJ#"="I"."OBJ#")
83 - filter("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR
"I"."TYPE#"=7 OR "I"."TYPE#"=9)
86 - filter("IC"."BO#"="C"."OBJ#" AND "C"."INTCOL#"=DECODE(BITAND("I"."PROPERTY",1024),0,"IC"."INTCOL#","IC"."
SPARE2"))
88 - access("C"."OBJ#"="AC"."OBJ#"(+) AND "C"."INTCOL#"="AC"."INTCOL#"(+))
100 - access("BASE"."NAME"=PRIOR "BASE"."NAME")
filter("BASE"."NAME"=PRIOR "BASE"."NAME")
102 - access("IC"."BO#"="BASE"."OBJ#")
103 - filter("IDX"."NAME"=PRIOR "IDX"."NAME")
104 - access("IC"."OBJ#"="IDX"."OBJ#")
105 - filter("I"."TYPE#"=1 OR "I"."TYPE#"=4 OR "I"."TYPE#"=9 OR "I"."TYPE#"=3 OR "I"."TYPE#"=2 OR
"I"."TYPE#"=6 OR "I"."TYPE#"=7)
106 - access("IDX"."OBJ#"="I"."OBJ#")
107 - filter("IC"."BO#"="C"."OBJ#" AND "C"."INTCOL#"=DECODE(BITAND("I"."PROPERTY",1024),0,"IC"."INTCOL#","IC"."
SPARE2"))
108 - filter("C"."OBJ#"="AC"."OBJ#"(+) AND "C"."INTCOL#"="AC"."INTCOL#"(+))
110 - access("IO"."USER#"="IDX"."OWNER#")
112 - access("BO"."USER#"="BASE"."OWNER#")
115 - access("BO"."USER#"="BASE"."OWNER#")
117 - access("IC"."BO#"="BASE"."OBJ#")
118 - access("IO"."USER#"="IDX"."OWNER#")
120 - access("IC"."OBJ#"="IDX"."OBJ#")
122 - access("IDX"."OBJ#"="I"."OBJ#")
123 - filter("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR
"I"."TYPE#"=7 OR "I"."TYPE#"=9)
126 - filter("IC"."BO#"="C"."OBJ#" AND "C"."INTCOL#"=DECODE(BITAND("I"."PROPERTY",1024),0,"IC"."INTCOL#","IC"."
SPARE2"))
128 - access("C"."OBJ#"="AC"."OBJ#"(+) AND "C"."INTCOL#"="AC"."INTCOL#"(+))

Statistics
----------------------------------------------------------
9  recursive calls
4  db block gets
909482  consistent gets
0  redo size
1324  bytes sent via SQL*Net to client
465  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
9  sorts (memory)
1  sorts (disk)
6  rows processed
```

June 24, 2011 - 10:52 am UTC

you need to completely rewrite it - you are missing join conditions and group by columns all over the place.

You need to integrate the OWNER column into everything, everywhere.

Do that and see how it performs (no need for a huge plan to be posted, I can generate my own plan from the query)

Alexander, June 24, 2011 - 12:06 pm UTC

How close is this? I'll bet I'm still missing something; it didn't help much.

```
with tables
as
(
select owner, segment_name table_name, to_char(bytes/1024/1024,'999,999.99') table_size
from dba_segments
where segment_type = 'TABLE'
and segment_name not like 'BIN%'
),
indexes
as
(
select index_owner,table_name, index_name, columns, rn,
(select to_char(bytes/1024/1024,'999,999.99') from dba_segments where segment_name =
INDEX_NAME and segment_type = 'INDEX' and owner = sub.index_owner ) index_size
from (
select index_owner, table_name, index_name,
substr( max(sys_connect_by_path( column_name, ', ' )), 3) COLUMNS,
row_number() over (partition by table_name order by index_name) rn
from dba_ind_columns
connect by prior table_name = table_name
and prior index_name = index_name
and prior column_position+1 = column_position
group by index_owner,table_name, index_name
) sub
)
select decode( nvl(rn,1), 1, tables.table_name ) table_name,
decode( nvl(rn,1), 1, tables.table_size ) table_size,
rn "INDEX#",
indexes.columns,
indexes.index_name,
indexes.index_size
from tables, indexes
where tables.table_name = indexes.table_name(+)
and tables.owner = indexes.index_owner
and tables.table_name = 'T'
order by tables.table_name, indexes.rn
/
```

June 24, 2011 - 1:09 pm UTC

push the where clause into the with subqueries.

where segment_type = 'TABLE' and segment_name = 'T'

Especially push that into the connect by - I don't believe we can automatically do that one.

where tables.table_name = indexes.table_name(+)
and tables.owner = indexes.index_owner

you are missing an outer join there.

A reader, July 06, 2011 - 11:54 am UTC

### get size of xml data type record

Alf Baez, September 24, 2011 - 7:51 pm UTC

Hi Tom,
Very useful topic. I've used the solution(s) to find the actual size used for all the tables in a user schema. Now I'm faced with a new request; this one is similar to Andersen's issue with getting the size for XML data types.

I need to get the size of an XML block/record from a column with a LONG data type. I've been trying to get this using the LENGTH function on the xml_data_txt column, but I'm not getting very far... Can you share some insight on how you would accomplish this?

Set long 10000000
select pi.process_cd pi_prcd,
length(pix.xml_data_txt) xmllodsze_kb
from tv.process_instance_xml pix,
tv.process_instance pi
where pix.request_id = pi.request_id
group by pi.process_cd

Regards
Alf
September 25, 2011 - 11:41 am UTC

Alf Baez, September 24, 2011 - 10:29 pm UTC

Hey Tom,

Breaking this down to make things easier (I think), I made some progress by creating a SATs table and converting the LONG column to CLOB, based on the new table.
1) I'm getting the result set using the LENGTH function, but I'm not able to round to KB nor to 2 decimals. Any insight on how I can round this to KB with 2 decimals?

2) Then I'll need to group this by pi.process_cd

select distinct pi.process_cd pi_prcd,
round(length(pix.xml_data_txt), 3)/1024 xmllodsze_kb
from tv.process_instance_xml_clob pix,
tv.process_instance pi
where pix.request_id = pi.request_id and
rownum < 156
order by pi_prcd

Cheers
Alf
September 25, 2011 - 11:44 am UTC

you need to round AFTER you divide.

round( length(x)/1024, 2 )
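The difference is just operator order: convert to the target unit first, then round. A quick sketch in plain arithmetic (the byte count here is made up for illustration; in the database this is `ROUND(LENGTH(x)/1024, 2)`):

```python
# Divide to KB first, THEN round -- rounding the raw byte count first does nothing useful.
length_bytes = 3500  # hypothetical LOB length in bytes

wrong = round(length_bytes, 3) / 1024   # rounds bytes (a no-op), then divides
right = round(length_bytes / 1024, 2)   # divides to KB, then rounds to 2 decimals

print(wrong)  # 3.41796875 -- still an unrounded KB figure
print(right)  # 3.42
```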

### get size of xml data type record

Alf Baez, September 26, 2011 - 1:17 pm UTC

Thx Tom,

The next step is to sum and group by pi.process_cd

select distinct pi.process_cd pi_prcd,
round(length(pix.xml_data_txt)/1024, 2) xml_blKBs
from tv.process_instance_xml_clob pix,
tv.process_instance pi
where pix.request_id = pi.request_id and
rownum < 156
group by pi.process_cd
order by pi.process_cd

when I tried I got:
ERROR at line 2:
ORA-00979: not a GROUP BY expression

Then I tried to list at a subselect level and then group by at the top select, but I'm still faced with "not a GROUP BY expression". Can you please point out what I'm missing or misusing here? Thanks in advance.

select a.pi_prcd, a.xml_blKBs
from
(select distinct pi.process_cd pi_prcd,
round(length(pix.xml_data_txt)/1024, 2) xml_blKBs
from tv.process_instance_xml_clob pix,
tv.process_instance pi
where pix.request_id = pi.request_id
order by pi.process_cd) a
where rownum <= 155
group by a.pi_prcd

-
Alf
September 26, 2011 - 6:55 pm UTC

ummm, you sort of need a sum in there - lose the distinct, sum the round(length())...

do you have anyone you work with that knows SQL to help you out? This is fairly basic. What is the rownum bit in there for??? that just gets a random 155 records to work with - before sorting, before grouping.

The distinct is just wrong.

### get size of xml data record

Alf Baez, September 26, 2011 - 9:32 pm UTC

Thanks Tom,

With a few pointers you always help me find my way around:
The rownum was used to limit the output.

select pi.process_cd pi_prcd,
round(sum(length(pix.xml_data_txt)/1024/1024), 3) xml_blMBs
from tv.process_instance_xml_clob pix,
tv.process_instance pi
where pix.request_id = pi.request_id
group by pi.process_cd
order by pi.process_cd

PI_PRCD XML_BLMBS
------------------ ----------
xxxxxxxxx_RTQ 1021.817
xxxxxxxxx_USER .008
xxxxxxxxxxxxx 4.724
xxxxxxxxxxxxx .31
xxxxxxREQ 223.105
xxxxxxxxxxxxBASIS 6.763
xxxxxxxxxx 338.491
xxxxx 91.49
xxxxxxxxxxx .005
......

Cheers
Alf

Alexander, November 04, 2011 - 2:48 pm UTC

Tom,

We thought it would be useful to have a query that will tell how much space we need available for a table for a specific number of rows being added.

I find a lot of times people tell us, "hey we need to add a million rows to table X can you tell us if we have the space?"

Is this correct (for MB), or how would you do this?

`select sum(AVG_COL_LEN) * num_of_rows /1024/1024 from dba_tab_columns where TABLE_NAME = 'T';`

November 07, 2011 - 11:02 am UTC

that tells you the raw number of bytes to be loaded.

If the table is "well packed", you can get an estimate by looking simply at the number of rows in the table and the number of blocks the table consumes - then just multiply/divide to figure it out.

Eg: if you have 100,000 rows in the table and it is consuming 1,000 blocks - and it is considered to be fairly well 'packed' (that is, the table didn't at one point have 1,000,000 rows of which you have deleted 900,000) - then you can just multiply by 10 and say "you need about 10,000 blocks, which are Nk apiece"
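The multiply/divide rule of thumb above can be sketched as plain arithmetic. This is a hypothetical helper, not an Oracle API; the row and block figures would come from `user_tables` after gathering statistics, and an 8k block size is assumed:

```python
def estimate_blocks(current_rows, current_blocks, rows_to_add, block_size=8192):
    """Extrapolate the space needed for new rows from a well-packed table's
    current rows-per-block density. A rough estimate, not an exact figure."""
    rows_per_block = current_rows / current_blocks
    blocks_needed = rows_to_add / rows_per_block
    mb_needed = blocks_needed * block_size / 1024 / 1024
    return blocks_needed, mb_needed

# 100,000 rows in 1,000 blocks; adding 1,000,000 rows -> ~10,000 blocks
blocks, mb = estimate_blocks(100_000, 1_000, 1_000_000)
print(blocks)  # 10000.0
print(mb)      # 78.125 (MB, at an 8k block size)
```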

Alexander, November 08, 2011 - 10:10 am UTC

What is block and row overhead? Sounds fairly negligible. This wouldn't have to be precise.
November 08, 2011 - 11:00 am UTC

well, pctfree = 10% right off the bat by default.

and the block overhead - we use the tail and the head of the block both - can be another 10%.

and to each column - we'll typically be adding a length byte at the very least - so, 80 one character columns can have more than 100% overhead at the row level.
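That length-byte point can be sanity-checked with back-of-the-envelope arithmetic. This is a deliberately simplified model that counts only one length byte per column and ignores row headers and block overhead:

```python
# Each stored column value is typically preceded by at least one length byte,
# so very narrow columns carry a large relative overhead.
n_cols = 80
data_bytes = n_cols * 1          # 80 one-character columns of actual data
stored_bytes = n_cols * (1 + 1)  # one length byte + one data byte per column

overhead = (stored_bytes - data_bytes) / data_bytes
print(f"{overhead:.0%}")  # 100% overhead, before any block-level costs
```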

```
ops\$tkyte%ORA11GR2> create table t as select * from all_objects;

Table created.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> select avg_row_len * num_rows /1024/1024 from user_tables where table_name = 'T';

AVG_ROW_LEN*NUM_ROWS/1024/1024
------------------------------
6.7534256

ops\$tkyte%ORA11GR2> exec show_space( 'T' )
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................           1,030
Total Blocks............................           1,152
Total Bytes.............................       9,437,184
Total MBytes............................               9
Unused Blocks...........................              96
Unused Bytes............................         786,432
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          17,536
Last Used Block.........................              32

PL/SQL procedure successfully completed.
```

simple example (that you could have done ;) ) shows that in this case it was off by almost 35%.

I'll reiterate the very simple process outlined above again:

If the table is "well packed", you can get an estimate by looking simply at the number of rows in the table and the number of blocks the table consumes - then just multiply/divide to figure it out.

If the table isn't well packed - make a small copy of the table using create table as select and then do this measurement.
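Plugging the figures from the example output above into a little arithmetic shows where the "almost 35%" comes from (the numbers are taken straight from the avg_row_len query and show_space output; 8k blocks assumed):

```python
raw_mb = 6.7534256           # avg_row_len * num_rows / 1024 / 1024, from user_tables
actual_bytes = 9_437_184     # "Total Bytes" from show_space
actual_mb = actual_bytes / 1024 / 1024   # 9.0 MB actually allocated

# How much more space the segment consumes than the raw byte count suggests.
overhead = (actual_mb - raw_mb) / raw_mb
print(f"{overhead:.0%}")  # ~33% -- pctfree, block headers, and row overhead add up
```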

### Table size with clob

A reader, February 27, 2012 - 11:42 am UTC

Hi Tom,
We are purging the data older than 6 months from very large tables (270 million rows). Almost 100m rows were deleted. These tables have clob as well as regular datatypes. Database is 10gR2. I am not sure how to calculate the space that would be released after reorgs. I have gone through metalink (Note# 386341.1 How to determine the actual size of the LOB segments and how to free the deleted/unused space above/below the HWM) but somehow I couldn't find it helpful. It says "The deleted space from inside the lob segment is not even shown by the procedures above. This is the expected behaviour and, unfortunately, currently there is no procedure/view to show the deleted space. Having such an option is the current subject of an enhancement request." Also, dbms_lob.getlength includes undo as well and couldn't give us the actual sizing.

1) Since clob data less than 3096 bytes will be stored in-line (we have the default storage in row) and some of the data greater than 3096 will be stored out-of-line, does avg_row_len of dba_tables include the in-line clob as well? I am assuming yes.
2) How can we calculate the actual size of the table?

-Venkat
February 28, 2012 - 7:20 am UTC

how are you doing this purge? Please don't say "delete"

the actual size of the table is simply the amount of data allocated to the segment.

Tell me how you do this purge process.

### Table size with clob

A reader, February 27, 2012 - 1:26 pm UTC

Sorry, I forgot to provide some information. Tablespaces are in LMT. SEGMENT_SPACE_MANAGEMENT=AUTO.

select l.table_name,l.column_name,l.segment_name,chunk,retention,cache,logging,in_row,s.bytes/1048576 "MB" from dba_lobs l,dba_segments s where l.segment_name=s.segment_name and l.table_name='TAB_LOB';

TABLE_NAME COLUMN_NAME SEGMENT_NAME CHUNK_SIZE RETENTION CACHE LOGGING IN_ROW MB
TAB_LOB COL_XML SYS_LOB0000241357C00201\$\$ 8192 18000 NO YES YES 0.0625
TAB_LOB COL_XML2 SYS_LOB0000241357C00154\$\$ 8192 18000 NO YES YES 0.0625
TAB_LOB COL_XML3 SYS_LOB0000241357C00252\$\$ 8192 18000 NO YES YES 995
TAB_LOB COL_XML4 SYS_LOB0000241357C00221\$\$ 8192 NO YES YES 0.0625

select s.bytes/1048576 "MB" from dba_segments s where s.segment_name like 'TAB_LOB';
MB
390390

-Venkat

### Table size with clob

A reader, February 28, 2012 - 10:49 am UTC

We delete using rowids. Every 30,000 records we commit the data. Also we haven't deleted 100M rows at a time. Every week we delete around 10M rows.

Sorry, I will rephrase my question. How can we calculate the table size after deletes? Since it has clob data I am not sure how to calculate this. Also, can you please confirm whether the avg_row_len of dba_tables includes the in-line clob as well?

February 28, 2012 - 11:03 am UTC

wow, that is just about the most inefficient approach I've heard of. You probably spend half of your processing power just deleting rows.

The table size is the same size after the delete as it is before. Tables do not change in size due to deletes.

If you are asking "how much space might be available to support future inserts", look at dbms_space.space_usage/dbms_space.free_blocks for ASSM (automatic segment space management) and dbms_space.unused_space.

Remember also that lobs manage their undo in the lob segment itself - not in the undo tablespace. So, if you delete 10m rows and have the clob managed via undo retention times - the space will NOT be free until the undo retention period has passed. If you are using pctversion - then X% of that space (based on the size of the lob segment itself, not the deleted space) will remain "out of use". I'd recommend using undo retention for lobs.

```
ops\$tkyte%ORA11GR2> create table t1 ( x int, y varchar2(30), z clob );

Table created.

ops\$tkyte%ORA11GR2> create table t2 ( x int, y varchar2(30), z clob );

Table created.

ops\$tkyte%ORA11GR2> create table t3 ( x int, y varchar2(30), z clob );

Table created.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> declare
2          l_short varchar2(3000) := rpad( 'x', 3000, 'x' );
3          l_long varchar2(6000) := rpad( 'x', 6000, 'x' );
4  begin
5          for i in 1 .. 1000
6          loop
7                  insert into t1 values ( i, rpad('x',30,'x'), null );
8                  insert into t2 values ( i, rpad('x',30,'x'), l_short );
9                  insert into t3 values ( i, rpad('x',30,'x'), l_long );
10          end loop;
11          commit;
12  end;
13  /

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T1' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T2' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T3' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> select table_name, avg_row_len from user_tables where table_name in ( 'T1', 'T2', 'T3' );

TABLE_NAME                     AVG_ROW_LEN
------------------------------ -----------
T1                                      35
T2                                    3138
T3                                     142

3 rows selected.

```

for table T1 - average row length is 35 - just the string and the integer, nothing for the NULL.

for table T2 - inline storage - we can see the entire clob is part of the row length.

for table T3 - the out of line storage - we can see the lob locator is taking a bit of space in the row and is added to the average row length.

### Table size with clob

A reader, February 28, 2012 - 2:11 pm UTC

Thank you very much Tom.

Example looks great.

And yes, my question is "Space available for future inserts" or more specifically "Possible space reclamation after reorgs".

I will check the dbms_space package.

### Table size with clob

A reader, February 29, 2012 - 4:15 pm UTC

```
Hi Tom,
Somehow I couldn't reproduce the above example in 11.2.0.2. Can you please let me know what I am missing? I even specified "enable storage in row" in the table definition.

SQL> select version from v\$instance;
VERSION
-----------------
11.2.0.2.0
SQL> create table t1 ( x int, y varchar2(30), z clob );
Table created.
SQL> create table t2 (x int, y varchar2(30), z clob);
Table created.
SQL> create table t3 (x int, y varchar2(30),z clob);
Table created.
SQL> declare
4  begin
5  for i in 1..1000
6  loop
10  end loop;
11  commit;
12  end;
13  /
PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('APPS','T1');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats('APPS','T2');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats('APPS','T3');
PL/SQL procedure successfully completed.

SQL> select table_name,avg_row_len from user_tables where table_name in ('T1','T2','T3');
TABLE_NAME                     AVG_ROW_LEN
------------------------------ -----------
T1                                      35
T2                                     142
T3                                     146

SQL> drop table t1;
Table dropped.
SQL> drop table t2;
Table dropped.
SQL> drop table t3;
Table dropped.
SQL> create table t1 ( x int, y varchar2(30), z clob );
Table created.
SQL> create table t2 (x int, y varchar2(30), z clob) LOB(z) store as(enable storage in row chunk 8192 retention nocache);
Table created.
SQL> create table t3 (x int, y varchar2(30),z clob) LOB(z) store as(enable storage in row chunk 8192 retention nocache);
Table created.

SQL> declare
4  begin
5  for i in 1..1000
6  loop
10  end loop;
11  commit;
12  end;
13  /

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('APPS','T1');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats('APPS','T2');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats('APPS','T3');
PL/SQL procedure successfully completed.

SQL> select table_name,avg_row_len from user_tables where table_name in ('T1','T2','T3');

TABLE_NAME                     AVG_ROW_LEN
------------------------------ -----------
T1                                      35
T2                                     142
T3                                     146

Thank you
```

March 01, 2012 - 7:31 am UTC

must be an issue in 11.2.0.2 that was corrected.

I get the right avg row lengths in 10.2.0.5 and 11.2.0.3

### Table size with clob

A reader, March 02, 2012 - 12:40 am UTC

```
Thank you very much for spending your valuable time in responding to our questions.

I used dbms_space.space_usage but somehow I couldn't relate the output to the required information. I am actually looking for how much space can be released after doing the reorg - something along the lines of the segment advisor output (provided below).

SQL>   declare
2     -- IN vars
3     v_segment_owner VARCHAR2(100):='TEST';
4     v_segment_name VARCHAR2(100) :='TAB_LOB';
5     v_segment_type VARCHAR2(100) :='TABLE';
6     v_partition_name VARCHAR2(100) :=null;
7     -- OUT vars
8     v_unformatted_blocks   NUMBER;
9     v_unformatted_bytes   NUMBER;
10     v_fs1_blocks   NUMBER;
11     v_fs1_bytes   NUMBER;
12     v_fs2_blocks   NUMBER;
13     v_fs2_bytes   NUMBER;
14     v_fs3_blocks   NUMBER;
15     v_fs3_bytes   NUMBER;
16     v_fs4_blocks   NUMBER;
17     v_fs4_bytes   NUMBER;
18     v_full_blocks   NUMBER;
19     v_full_bytes   NUMBER;
20     v_segment_size_blocks NUMBER;
21     v_segment_size_bytes NUMBER;
22     v_used_blocks NUMBER;
23     v_used_bytes NUMBER;
24     v_expired_blocks NUMBER;
25     v_expired_bytes NUMBER;
26     v_unexpired_blocks NUMBER;
27     v_unexpired_bytes NUMBER;
28    begin
29    DBMS_SPACE.SPACE_USAGE(
30     segment_owner => v_segment_owner ,
31     segment_name => v_segment_name ,
32     segment_type => v_segment_type ,
33     unformatted_blocks => v_unformatted_blocks ,
34     unformatted_bytes => v_unformatted_bytes ,
35     fs1_blocks => v_fs1_blocks  ,
36     fs1_bytes => v_fs1_bytes ,
37     fs2_blocks => v_fs2_blocks ,
38     fs2_bytes => v_fs2_bytes ,
39     fs3_blocks => v_fs3_blocks ,
40     fs3_bytes => v_fs3_bytes ,
41     fs4_blocks => v_fs4_blocks ,
42     fs4_bytes => v_fs4_bytes ,
43     full_blocks => v_full_blocks ,
44     full_bytes => v_full_bytes ,
45     partition_name => v_partition_name  );
46     dbms_output.put_line('Unformatted Blocks = '||v_unformatted_blocks);
47     dbms_output.put_line('FS1 Blocks = '||v_fs1_blocks);
48     dbms_output.put_line('FS2 Blocks = '||v_fs2_blocks);
49     dbms_output.put_line('FS3 Blocks = '||v_fs3_blocks);
50     dbms_output.put_line('FS4 Blocks = '||v_fs4_blocks);
51     dbms_output.put_line('FS1 Bytes = '||v_fs1_bytes);
52     dbms_output.put_line('FS2 Bytes = '||v_fs2_bytes);
53     dbms_output.put_line('FS3 Bytes = '||v_fs3_bytes);
54     dbms_output.put_line('FS4 Bytes = '||v_fs4_bytes);
55     dbms_output.put_line('Full Blocks = '||v_full_blocks);
56    end;
57  /
Unformatted Blocks = 3064
FS1 Blocks = 0
FS2 Blocks = 1
FS3 Blocks = 0
FS4 Blocks = 113
FS1 Bytes = 0
FS2 Bytes = 8192
FS3 Bytes = 0
FS4 Bytes = 925696
Full Blocks = 123026457

PL/SQL procedure successfully completed.

Elapsed: 00:05:05.88
SQL> @show_space

Procedure created.

Elapsed: 00:00:00.84
SQL> exec show_space('TAB_LOB','TEST','TABLE');
Unformatted Blocks .....................           3,064
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               1
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................             113
Full Blocks        .....................     123,026,457
Total Blocks............................     123,150,592
Total Bytes.............................################
Total MBytes............................         962,114
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................             116
Last Used Ext BlockId...................         721,024
Last Used Block.........................           8,192

PL/SQL procedure successfully completed.

DECLARE
l_object_id  NUMBER;
BEGIN
dbms_advisor.create_task (
advisor_name => 'Segment Advisor',
task_name    => 'TEST1_SEGMENT_ADVISOR');
dbms_advisor.create_object (
task_name   => 'TEST1_SEGMENT_ADVISOR',
object_type => 'TABLE',
attr1       => 'TEST',
attr2       => 'TAB_LOB',
attr3       => NULL,
attr4       => 'null',
attr5       => NULL,
object_id   => l_object_id);
dbms_advisor.set_task_parameter (
task_name => 'TEST1_SEGMENT_ADVISOR',
parameter => 'RECOMMEND_ALL',
value     => 'TRUE');
dbms_advisor.execute_task (task_name => 'TEST1_SEGMENT_ADVISOR');
END;
/

PL/SQL procedure successfully completed.

SQL> col object_type for a15
SQL> col schema for a10
SQL> col object_name for a15
SQL> set lines 280 pages 300
SQL> col message for a50
SQL> col more_info for a50
SQL> select f.task_name,
  2         o.type  AS object_type,
  3         o.attr1 AS schema,
  4         o.attr2 AS object_name,
  5         f.message
  6    from dba_advisor_findings f, dba_advisor_objects o
  7   where f.task_id = o.task_id
  8     and f.object_id = o.object_id;

TASK_NAME               OBJECT_TYPE     SCHEMA     OBJECT_NAME     MESSAGE
---------------------   --------------- ---------- --------------- --------------------------------------------------
TEST1_SEGMENT_ADVISOR   TABLE           TEST       TAB_LOB         Perform re-org on the object TAB_LOB, estimated savings is 46851740983 bytes.

Thanks once again Tom
```

March 02, 2012 - 5:37 am UTC

You are looking at just the table. The segment advisor is looking at the table, the lob, everything.

But in any case, since you already have the segment advisor output - why are you even looking elsewhere? It has given you what you are asking for. I'm very confused here....

In any case, look at the information you have regarding the *table* (but not the lob; you displayed only the table):

```FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               1
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................             113
Full Blocks        .....................     123,026,457```

you have 123,026,457 blocks that are "full", they will not be released by a reorg.

You have 113 blocks that are between 75-100% full, SOME of that space might be reclaimable during a reorg. Assume they are all 75% full - about 15% of 113 blocks might be reclaimed from just the table (10% free assumed)

### Table size with clob

A reader, March 02, 2012 - 11:22 am UTC

```Thank you very much Tom.

I am so sorry for not providing enough data.

As per metalink, "SEGMENT ADVISOR not Working as Expected for LOB or SYS_LOB SEGMENT [ID 988744.1]". So I am trying to calculate it manually and compare the results with segment advisor output.

select l.table_name,l.column_name,l.segment_name,chunk,retention,cache,logging,in_row,s.bytes/1048576 "MB" from dba_lobs l,dba_segments s where l.segment_name=s.segment_name and l.table_name='TAB_LOB';

TABLE_NAME  COLUMN_NAME  SEGMENT_NAME               CHUNK  RETENTION  CACHE  LOGGING  IN_ROW      MB
TAB_LOB     COL_XML1     SYS_LOB0000241357C00201$$   8192      18000  NO     YES      YES     0.0625
TAB_LOB     COL_XML2     SYS_LOB0000241357C00154$$   8192      18000  NO     YES      YES     0.0625
TAB_LOB     COL_XML3     SYS_LOB0000241357C00221$$   8192      18000  NO     YES      YES     0.0625
TAB_LOB     COL_XML4     SYS_LOB0000241357C00252$$   8192      18000  NO     YES      YES       3506

select bytes/1048576 "MB" from dba_segments where segment_name like 'TAB_LOB';
MB
962114

SQL>   declare
2     -- IN vars
3     v_segment_owner VARCHAR2(100):='TEST';
4     v_segment_name VARCHAR2(100) :='SYS_LOB0000241357C00252$$';
5     v_segment_type VARCHAR2(100) :='LOB';
6     v_partition_name VARCHAR2(100) :=null;
7     -- OUT vars
8     v_unformatted_blocks   NUMBER;
9     v_unformatted_bytes   NUMBER;
10     v_fs1_blocks   NUMBER;
11     v_fs1_bytes   NUMBER;
12     v_fs2_blocks   NUMBER;
13     v_fs2_bytes   NUMBER;
14     v_fs3_blocks   NUMBER;
15     v_fs3_bytes   NUMBER;
16     v_fs4_blocks   NUMBER;
17     v_fs4_bytes   NUMBER;
18     v_full_blocks   NUMBER;
19     v_full_bytes   NUMBER;
20     v_segment_size_blocks NUMBER;
21     v_segment_size_bytes NUMBER;
22     v_used_blocks NUMBER;
23     v_used_bytes NUMBER;
24     v_expired_blocks NUMBER;
25     v_expired_bytes NUMBER;
26     v_unexpired_blocks NUMBER;
27     v_unexpired_bytes NUMBER;
28    begin
29    DBMS_SPACE.SPACE_USAGE(
30     segment_owner => v_segment_owner ,
31     segment_name => v_segment_name ,
32     segment_type => v_segment_type ,
33     unformatted_blocks => v_unformatted_blocks ,
34     unformatted_bytes => v_unformatted_bytes ,
35     fs1_blocks => v_fs1_blocks  ,
36     fs1_bytes => v_fs1_bytes ,
37     fs2_blocks => v_fs2_blocks ,
38     fs2_bytes => v_fs2_bytes ,
39     fs3_blocks => v_fs3_blocks ,
40     fs3_bytes => v_fs3_bytes ,
41     fs4_blocks => v_fs4_blocks ,
42     fs4_bytes => v_fs4_bytes ,
43     full_blocks => v_full_blocks ,
44     full_bytes => v_full_bytes ,
45     partition_name => v_partition_name  );
46     dbms_output.put_line('Unformatted Blocks = '||v_unformatted_blocks);
47     dbms_output.put_line('FS1 Blocks = '||v_fs1_blocks);
48     dbms_output.put_line('FS2 Blocks = '||v_fs2_blocks);
49     dbms_output.put_line('FS3 Blocks = '||v_fs3_blocks);
50     dbms_output.put_line('FS4 Blocks = '||v_fs4_blocks);
51     dbms_output.put_line('FS1 Bytes = '||v_fs1_bytes);
52     dbms_output.put_line('FS2 Bytes = '||v_fs2_bytes);
53     dbms_output.put_line('FS3 Bytes = '||v_fs3_bytes);
54     dbms_output.put_line('FS4 Bytes = '||v_fs4_bytes);
55     dbms_output.put_line('Full Blocks = '||v_full_blocks);
56    end;
57  /
Unformatted Blocks = 3207
FS1 Blocks = 0
FS2 Blocks = 0
FS3 Blocks = 0
FS4 Blocks = 0
FS1 Bytes = 0
FS2 Bytes = 0
FS3 Bytes = 0
FS4 Bytes = 0
Full Blocks = 444628

SQL>   declare
2     -- IN vars
3     v_segment_owner VARCHAR2(100):='TEST';
4     v_segment_name VARCHAR2(100) :='SYS_LOB0000241357C00201$$';
5     v_segment_type VARCHAR2(100) :='LOB';
6     v_partition_name VARCHAR2(100) :=null;
7     -- OUT vars
8     v_unformatted_blocks   NUMBER;
9     v_unformatted_bytes   NUMBER;
10     v_fs1_blocks   NUMBER;
11     v_fs1_bytes   NUMBER;
12     v_fs2_blocks   NUMBER;
13     v_fs2_bytes   NUMBER;
14     v_fs3_blocks   NUMBER;
15     v_fs3_bytes   NUMBER;
16     v_fs4_blocks   NUMBER;
17     v_fs4_bytes   NUMBER;
18     v_full_blocks   NUMBER;
19     v_full_bytes   NUMBER;
20     v_segment_size_blocks NUMBER;
21     v_segment_size_bytes NUMBER;
22     v_used_blocks NUMBER;
23     v_used_bytes NUMBER;
24     v_expired_blocks NUMBER;
25     v_expired_bytes NUMBER;
26     v_unexpired_blocks NUMBER;
27     v_unexpired_bytes NUMBER;
28    begin
29    DBMS_SPACE.SPACE_USAGE(
30     segment_owner => v_segment_owner ,
31     segment_name => v_segment_name ,
32     segment_type => v_segment_type ,
33     unformatted_blocks => v_unformatted_blocks ,
34     unformatted_bytes => v_unformatted_bytes ,
35     fs1_blocks => v_fs1_blocks  ,
36     fs1_bytes => v_fs1_bytes ,
37     fs2_blocks => v_fs2_blocks ,
38     fs2_bytes => v_fs2_bytes ,
39     fs3_blocks => v_fs3_blocks ,
40     fs3_bytes => v_fs3_bytes ,
41     fs4_blocks => v_fs4_blocks ,
42     fs4_bytes => v_fs4_bytes ,
43     full_blocks => v_full_blocks ,
44     full_bytes => v_full_bytes ,
45     partition_name => v_partition_name  );
46     dbms_output.put_line('Unformatted Blocks = '||v_unformatted_blocks);
47     dbms_output.put_line('FS1 Blocks = '||v_fs1_blocks);
48     dbms_output.put_line('FS2 Blocks = '||v_fs2_blocks);
49     dbms_output.put_line('FS3 Blocks = '||v_fs3_blocks);
50     dbms_output.put_line('FS4 Blocks = '||v_fs4_blocks);
51     dbms_output.put_line('FS1 Bytes = '||v_fs1_bytes);
52     dbms_output.put_line('FS2 Bytes = '||v_fs2_bytes);
53     dbms_output.put_line('FS3 Bytes = '||v_fs3_bytes);
54     dbms_output.put_line('FS4 Bytes = '||v_fs4_bytes);
55     dbms_output.put_line('Full Blocks = '||v_full_blocks);
56    end;
57  /
Unformatted Blocks = 0
FS1 Blocks = 0
FS2 Blocks = 0
FS3 Blocks = 0
FS4 Blocks = 0
FS1 Bytes = 0
FS2 Bytes = 0
FS3 Bytes = 0
FS4 Bytes = 0
Full Blocks = 0

PL/SQL procedure successfully completed.

When I query dba_segments and dba_lobs, they show the sizes of the lob segments as 0.06MB and 3.5GB. Since the default storage is "IN ROW", only data > 3964 bytes is stored in the out-of-line lob segments (your example shows the same). Which means most of the lob data is stored in-line, even for the COL_XML4 lob segment; for the remaining lob segments, all of the data is stored in-line. I ran dbms_space.space_usage on the COL_XML4 and COL_XML1 lob segments for reference. Nowhere does it show the free space available.

Since the total size of the table segment is 962GB, if we run dbms_space.space_usage on the table it should show the overall data, including in-line lobs as well? Please correct me if I am wrong.

I am trying to relate the segment advisor output to dbms_space.
```

March 02, 2012 - 11:42 am UTC

it would include inline lobs - yes.

you could just query up the length of each lob - round it up to the next multiple of your chunk size and add them up.

```ops\$tkyte%ORA11GR2> create table t
2  as
3  select owner, view_name, to_lob(text) text
4    from dba_views
5  /

Table created.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> select sum( ceil( dbms_lob.getlength(text) / 8192 ) * 8192 )/1024/1024 mbytes
2    from t
3   where dbms_lob.getlength(text) > 3900
4  /

MBYTES
----------
2.2265625
```

8k chunksize in this case. It will underestimate the size of the lob segment (there are overheads), but it will give you an idea.
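The chunk-rounding arithmetic above can be sketched outside the database. This is a minimal, hypothetical Python illustration of the same estimate (the 8192-byte chunk size and the ~3964-byte in-row limit come from this thread; the sample lengths are made up):

```python
# Estimate the space an out-of-line LOB segment needs by rounding each
# LOB's length up to the next multiple of the chunk size -- the same
# arithmetic as ceil(getlength/8192)*8192 in the SQL above.
CHUNK = 8192          # chunk size in bytes (assumed, as in the thread)
INLINE_LIMIT = 3964   # approx. max bytes stored in-row with ENABLE STORAGE IN ROW

def lob_segment_estimate(lengths, chunk=CHUNK, inline_limit=INLINE_LIMIT):
    """Sum chunk-rounded sizes for LOBs too big to stay in-row."""
    total = 0
    for n in lengths:
        if n > inline_limit:                 # only out-of-line LOBs consume segment space
            total += -(-n // chunk) * chunk  # ceiling division, scaled back to bytes
    return total

# hypothetical LOB lengths in bytes: one stays in-row, three go out-of-line
print(lob_segment_estimate([100, 4000, 9000, 16384]))  # -> 40960 (8192 + 16384 + 16384)
```

As Tom notes, this underestimates the real segment size because it ignores per-chunk and segment-level overhead, and it says nothing about undo retained inside the lob segment.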

### free space

A reader, March 02, 2012 - 11:22 am UTC

Tom,
How do I calculate the free space within a table or index segment?

Thanks
March 02, 2012 - 11:42 am UTC

just as demonstrated right above, using dbms_space

### Table size with clob

A reader, March 02, 2012 - 1:39 pm UTC

Thank you very much Tom.

Could you please confirm how undo space is managed in case of in-line lobs? Since lobs manage undo space in the lob segment itself, and in-line lobs doesn't store the data in lob segment where does the undo data stored? I believe it's stored in UNDO tablespace? Because, as per metalink note(How to determine the actual size of the LOB segments and how to free the deleted/unused space above/below the HWM [ID 386341.1]) dbms_lob.get_length output includes the undo space as well. Does it mean just for the out-of-line lob segment?

Also, what is the difference between storing undo in lob segment vs storing it in undo tablespace? Is it for performance?

Thanks once again for your time
March 02, 2012 - 1:54 pm UTC

undo for in-line lobs is done in the undo segment along with the rest of the undo for the table.

dbms_lob.get_length
output includes the undo space as well

no it does not, where did you see that in that note? that note said:

select sum(dbms_lob.getlength (<lob column name>)) from <table_name>;

Please note that the UNDO data for a LOB segment is kept within the LOB segment space. The above query result is merely the still active LOB data.

meaning that the query only shows the length of the still-active lob data. It doesn't see any "undo".

### Table size with clob

A reader, March 02, 2012 - 2:56 pm UTC

Thank you very much Tom.

Sorry, I read that statement again. It is very clear now.

### fs - free space

A reader, March 02, 2012 - 3:21 pm UTC

Hi Tom,

From the above statement,
"You have 113 blocks that are between 75-100% full, SOME of that space might be reclaimable during a reorg. Assume they are all 75% full - about 15% of 113 blocks might be reclaimed from just the table (10% free assumed) "

Is it not the other way around - 75-100% *free* space? Because
fs4_blocks => number of blocks that have 75 to 100% free space

March 03, 2012 - 8:32 am UTC

doh, you are 100% correct, I got that backwards (again... I need to come up with a mental trick to remember that - like with db file sequential/scattered reads which are seemingly labeled backwards too)...

A reorg of that segment might be able to remove as many as 113 blocks from that segment since they are at least 75% empty.

thanks for pointing that out!
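The FSn bucket arithmetic can be sketched as follows. DBMS_SPACE only reports 25% ranges of *free* space (FS1 = 0-25% free through FS4 = 75-100% free), so any estimate from the bucket counts is a range, not an exact figure; this hypothetical Python sketch assumes an 8K block size:

```python
# Rough bounds on bytes a reorg could reclaim, derived from
# DBMS_SPACE.SPACE_USAGE bucket counts. The FSn buckets are *free space*
# ranges: FS1 = 0-25% free, FS2 = 25-50%, FS3 = 50-75%, FS4 = 75-100% free.
# Each bucket's minimum free fraction gives a conservative estimate;
# its maximum gives an optimistic upper bound.
BLOCK_SIZE = 8192  # assumed block size in bytes

# (min_free_fraction, max_free_fraction) per bucket
BUCKETS = {"fs1": (0.00, 0.25), "fs2": (0.25, 0.50),
           "fs3": (0.50, 0.75), "fs4": (0.75, 1.00)}

def reclaimable_bytes(counts, optimistic=False, block_size=BLOCK_SIZE):
    """counts: dict like {'fs1': 0, 'fs2': 1, 'fs3': 0, 'fs4': 113}."""
    idx = 1 if optimistic else 0
    return sum(counts.get(name, 0) * BUCKETS[name][idx] * block_size
               for name in BUCKETS)

# the bucket counts reported for the table earlier in this thread
counts = {"fs1": 0, "fs2": 1, "fs3": 0, "fs4": 113}
print(int(reclaimable_bytes(counts)))                   # at-least estimate
print(int(reclaimable_bytes(counts, optimistic=True)))  # upper bound
```

Note the upper bound (113 blocks * 8192 bytes = 925,696 bytes for FS4) matches the FS4 Bytes figure shown earlier, since those blocks are at least 75% empty and at most entirely empty.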

### Table and datasize

Ajit Kumar Shreevastava, March 23, 2012 - 5:24 am UTC

Hi Tom,

select extent_id, bytes, blocks
  from user_extents
 where segment_name = 'VIN_VMD_SUB_PROJ_HIS_DTL'
   and segment_type = 'TABLE';

 EXTENT_ID      BYTES     BLOCKS
---------- ---------- ----------
         0      65536          8
         1      65536          8
         2      65536          8
         3      65536          8
         4      65536          8
         5      65536          8
         6      65536          8
         7      65536          8
         8      65536          8
         9      65536          8
        10      65536          8
        11      65536          8
        12      65536          8
        13      65536          8
                      ----------
sum                          112

select blocks, empty_blocks,
       avg_space, num_freelist_blocks, table_name
  from user_tables
 where table_name = 'VIN_VMD_SUB_PROJ_HIS_DTL'
/

    BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
       166            0          0                   0

Then what is the actual size of the table, and what is the data size of the table?

Thanx and regards,
Ajit
March 23, 2012 - 8:30 am UTC

user_segments will give you the actual size of the space allocated to the table (or sum up the blocks in the extents).

The information from user_tables that you queried is maintained by gathering statistics; it is not 100% reliable.

user_extents shows what is truly, currently allocated.

```ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> create table t as select * from all_objects;

Table created.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> truncate table t;

Table truncated.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> select extent_id, bytes, blocks
2    from user_extents
3   where segment_name = 'T'
4     and segment_type = 'TABLE';

EXTENT_ID      BYTES     BLOCKS
---------- ---------- ----------
0      65536          8

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> select blocks, empty_blocks,
2         avg_space, num_freelist_blocks,table_name
3    from user_tables
4   where table_name = 'T'
5  /

BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS TABLE_NAME
---------- ------------ ---------- ------------------- ------------------------------
1055            0          0                   0 T

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops\$tkyte%ORA11GR2>
ops\$tkyte%ORA11GR2> /

BLOCKS EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS TABLE_NAME
---------- ------------ ---------- ------------------- ------------------------------
0            0          0                   0 T

```

### Average row length-LOB

pranav, March 23, 2012 - 11:08 am UTC

Hi Tom,

Can you please tell me if there is a way to calculate the average row length of lob data in a table, if the lob is out-of-line? For in-line lobs it will be included in avg_row_len (for some versions of Oracle).

This is to calculate amount of space released after reorgs. I would think only dbms_space would be helpful here?
March 24, 2012 - 10:13 am UTC

dbms_lob.getlength - applied to the lob, and run through "avg"

select avg( dbms_lob.getlength( lob_col ) ) from t;

have patience, it's going to take a while

### Calculate deleted lob space

pranav, August 20, 2012 - 6:23 pm UTC

Hi Tom,

I have this big question on lobs. How can we calculate the amount of space reclaimable after deleting data from the lobs? We have a table with 50M records, and one of the columns is a clob (in-line lob). The table size is 50GB; the lob segment size is around 700GB (it looks like most of the data is stored out-of-line, as the data might be > 3964 bytes). We deleted around 10M records from this table. I was able to use the dbms_space.space_usage procedure on the table and could see that it has around 9GB of free space. However, when I use the same procedure on the lob segment it still shows 700GB (all of it listed in full_blocks).

I understand lobs will have the undo maintained inside of the lob segment itself.

"the deleted space from inside the the lob segment is not even shown by the procedures above. This is the expected behaviour and, unfortunately, currently there is no procedure/view to show the deleted space. Having such an option is the current subject of an enhancement request."

We still don't have a way to find it?

Looking forward to hear from you.

Thanks.

### Calculate deleted lob space

A reader, August 21, 2012 - 10:49 am UTC

Hi Tom,

When I check the "View your questions" page, I can see the status of the above question as Answered/Published. Not sure why I couldn't see your comment though. Please let me know if there is some problem.

### lob deleted space

A reader, August 30, 2012 - 10:41 am UTC

Hi Tom,

Can you please provide some inputs on the above question.

Thanks
September 10, 2012 - 6:24 pm UTC

there is no answer really - the space isn't reclaimable immediately (so the answer is really "0"). Over time it can be reclaimed (after the pctversion/retention settings have been satisfied). But there is currently no direct way to look at a lob segment and see how much is "free", as far as I know.

# More to Explore

##### DBMS_SPACE

More on PL/SQL routine DBMS_SPACE here