Space difference between dba_segments, dba_tables, show_space

Question and Answer

Tom Kyte

Thanks for the question, Doug.

Asked: September 13, 2002 - 7:17 pm UTC

Last updated: June 28, 2012 - 9:49 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Tom - I don't understand this huge space report discrepancy.

I have a tablespace that is 5.5 GB in size:

SQL> select sum(bytes)/1024/1024 from dba_data_files where tablespace_name
2 = 'PDATA';

SUM(BYTES)/1024/1024
------------------
5500


It is mostly full as seen by this query:
SQL> select sum(bytes)/1024/1024 from dba_free_space
2 where tablespace_name = 'PDATA';

SUM(BYTES)/1024/1024
--------------------
449.375

for a total of about 5 GB used. Nothing has ever been dropped or deleted (afaik - maybe I'm wrong), i.e., I don't think this reflects any high water mark problems.

It contains 80 primary key indexes, but mostly tables. Indexes account for the remaining space. However...

1* select sum(num_rows*avg_row_len)/1024/1024 from dba_tables where tablespace_name = 'PDATA' order by table_name
SQL> /

SUM(NUM_ROWS*AVG_ROW_LEN)/1024/1024
-----------------------------------
2822.27288

1 select sum(bytes)/1024/1024 from dba_segments where tablespace_name = 'PDATA'
2* and segment_type = 'TABLE'
SQL> /

SUM(BYTES)/1024/1024
--------------------
3962.875


PCT_FREE is basically 10:
SQL> select pct_free, count(1) from dba_tables
2 where tablespace_name = 'PRDDATA'
3 group by pct_free;

  PCT_FREE   COUNT(1)
---------- ----------
        10         95
        15          2

but let's assume 15. That means (here is the question): shouldn't (3962-2822)/3962 = .15? It works out to .28, almost double what I would figure even factoring in block headers and so forth.

What could explain this space discrepancy, other than the pctfree reserved space?

I took show_space from your site and modified it slightly so I could use awk to sum the results for the entire tablespace.

Using show_space on all the tables in the tablespace and summing up the total blocks, unused blocks, and free list blocks for all of them, I get:
507,488 total blocks,
878 unused blocks (HWM blocks), and
1,321 free list blocks. So (507488-878-1321)*8K = 4,139,327,488, which is more than dba_segments OR dba_tables.


Why do these numbers differ so drastically?
show_space = 4.1GB
dba_segments = 3.9GB
dba_tables = 2.8GB

Have I done something wrong?

and Tom said...

I doubt you've done anything wrong -- it is just that there may be lots more than 10% (or 15%) free.

Consider the following example:


SQL> create table t ( x varchar2(4000) ) pctfree 10;

Table created.

SQL>
SQL> REM declare
SQL> REM l_cnt number;
SQL> REM begin
SQL> REM for i in 3000 .. 4000
SQL> REM loop
SQL> REM execute immediate 'truncate table t';
SQL> REM insert into t values ( rpad('*',4000,'*') );
SQL> REM insert into t values ( rpad('*',i,'*') );
SQL> REM commit;
SQL> REM select count(distinct dbms_rowid.ROWID_BLOCK_NUMBER(rowid)) into l_cnt
SQL> REM from t;
SQL> REM if ( l_cnt = 2 )
SQL> REM then
SQL> REM dbms_output.put_line( 'i = ' || i || ' put us over the edge' );
SQL> REM exit;
SQL> REM end if;
SQL> REM end loop;
SQL> REM end;
SQL> REM /
SQL> REM
SQL> REM i = 3277 put us over the edge
SQL> REM
SQL> REM
SQL>
SQL> truncate table t;

Table truncated.

SQL> insert into t values ( rpad( '*', 4000, '*' ) );

1 row created.

SQL> insert into t values ( rpad( '*', 3276, '*' ) );

1 row created.

SQL> select dbms_rowid.ROWID_BLOCK_NUMBER(rowid) from t;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
74
74

SQL>
SQL>
SQL> truncate table t;

Table truncated.

SQL> insert into t values ( rpad( '*', 4000, '*' ) );

1 row created.

SQL> insert into t values ( rpad( '*', 3277, '*' ) );

1 row created.

SQL> select dbms_rowid.ROWID_BLOCK_NUMBER(rowid) from t;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
74
75

SQL>


Now, these two cases differ by ONE BYTE. Yet the first case virtually fills block one -- the second case leaves block one half empty.


A row will be inserted into a block on the freelist IF the insertion of that row will not push the block's free space below the 10% reserved by pctfree. That is part of the reasoning behind the 2.8 vs 3.8 gig differential (the other part is that avg_row_len * number of rows gives you the raw input data length: no overhead for the null/not null flag, the length field on numbers and strings, and other "because it is a data structure" overhead).
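To see which tables are affected the most, here is a rough sketch of a comparison query (assuming statistics are current; the 'PDATA' literal is just the tablespace from this example, and the overhead_ratio column name is made up):

select t.table_name,
       round( t.num_rows*t.avg_row_len/1024/1024, 1 ) raw_mb,
       round( s.bytes/1024/1024, 1 ) seg_mb,
       round( 1 - (t.num_rows*t.avg_row_len)/s.bytes, 2 ) overhead_ratio
  from user_tables t, user_segments s
 where t.table_name = s.segment_name
   and s.segment_type = 'TABLE'
   and t.tablespace_name = 'PDATA'
 order by 4 desc;

The tables at the top of that list are the ones losing the most space to pctfree, block overhead, and row-fit effects.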


Now, I'm not sure where you are going with this math:

(507488-878-1,321)*8K = 4,139,327,488

Although it is 3.85504913 gig.... (why sub out the freelist blocks?)


When looking at show_space, if you want to get the same answer as dba_segments -- you look at total blocks. And when I look at total blocks:

ops$tkyte@ORA920.US.ORACLE.COM> select (507488*8)/1024/1024 from dual;

(507488*8)/1024/1024
--------------------
3.87182617

I get 3.8 gig. If you use GIG in your dba_segments query (remember, divide by 1024, not 1000!!!) you get 3.8 gig. They are the same.




Comments

Clears that up / reorgs

A reader, September 14, 2002 - 6:52 pm UTC

Ah, a math mistake... so really I got 2 different answers, not 3. Is this kind of extra space typical? 28%? (seems ok). Also, some follow-ups: What if I went through the trouble of using avg(vsize(...)) on every column -- wouldn't I end up with something closer to the lower number? I.e., a lot of the extra space is just the inability to pack rows tightly into a block, so any raw data figure would be wrong, right? Ok, not "wrong", but not reflective of the actual space used by the various tables on disk, right?

Also, I have no way of knowing how "empty" a block really is, right? (I think you said this on your site somewhere, noting that it would use an awful lot of resources to find out and give us something not that interesting.) Therefore, I cannot know how much space I would save by an exp/imp reorg until it is done. In fact, depending on the nature of the data, it might not save anything at all. In your rpad example, an exp/imp would end up with exactly the same situation. Correct?

I'm planning to delete a lot of data (not just from this tablespace) and am trying to find out how much I will save. From your answer, it looks like I would take the raw data number and multiply by 1.28 to guess how many blocks would be freed by a big delete. That would assume the "extra" space is roughly uniformly distributed throughout the table. I recognize that just doing an exp/imp without a big delete would be futile, since I will probably need the space again in a week anyhow, but I am curious about your answer on estimating it anyhow.

Thanks again.
- D

Tom Kyte
September 15, 2002 - 11:12 am UTC

No, you got one answer.

It took 3.8 gig of database blocks to hold your 2.75 gig of raw input data. Part of that is block overhead; the other part is your pctfree, and you may well have a "degenerative case" whereby you are getting one less row per block than you would like (like my example with 3276 vs 3277 bytes), leaving extra space on the block (more than 10%).

Using avg(vsize) won't really get you any closer.


An imp/exp MIGHT end up with the same result, might not. If you insert a 4000 byte row, then a 3277 byte row, then a 4000 byte row and then a 3277 byte row -- that might be 4 blocks.

Now, insert 3277, 3277, 4000, 4000 -- well, that might be 3. It depends on how the data falls out.

If this is relatively static data, just set pctfree to 0% and pack the blocks up.

Use ALTER TABLE MOVE, rather than exp/imp, to "compact" the data. Faster, less chance of human error (or software error for that matter), easier.
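For example, a minimal sketch (the table and index names here are hypothetical, and note that a move leaves the table's indexes UNUSABLE, so they must be rebuilt afterwards):

alter table t move pctfree 0;
alter index t_pk rebuild;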



As for sizing, I always just load a representative portion of the data, measure it and multiply. This example shows clearly why all formulas for sizing are inherently FLAWED. That unknown -- how the shape of the data and the order it arrives in will affect things -- always breaks them.


Good suggestion

Doug C, September 15, 2002 - 3:46 pm UTC

Hey Tom - thanks for the suggestion of using "move"..
Thanks again..Are these "degenerative" cases common?
- D

Tom Kyte
September 15, 2002 - 6:49 pm UTC

Degenerative cases are never "common", they are generally the exception.

But, if your dataset is one of those bad ones, you would certainly say the opposite!


Estimating wrong...

Marcio, July 22, 2003 - 6:06 pm UTC

Tom, in this case I'm trying to find out where the bytes are being lost...

Table t has a column y of varchar2(4000), and 6 rows are enough to fill up an extent. I don't know how to account for the rest when estimating the (approximate) size of the table.

I analyzed the table and it shows an avg_row_len of 4006, and my count was:

  NUM_ROWS  AVG_SPACE AVG_ROW_LEN     BLOCKS EMPTY_BLOCKS   PCT_FREE
---------- ---------- ----------- ---------- ------------ ----------
         6       4643        4006          7            0         10

avg_row_len * num_rows = bytes_occupied

4006*6
----------
24036

I'm messing this up... I'm not using avg_space in the computation, nor the overhead. I don't know how it is computed. Could you give some tips? I would like to estimate the size of the table.

Here is my script.

set echo on

drop table t
/

create table t (y varchar2(4000) )
/
declare
type t_array is table of t%rowtype index by binary_integer;
tb t_array;
begin
for i in 1..6
loop
tb(i).y := rpad('*',4000,'*');
end loop;
forall i in 1 .. tb.count
insert into t values tb(i);
end;
/

analyze table t compute statistics
for table
for all indexed columns
for all indexes
/

exec show_space('T')

select NUM_ROWS, avg_space, avg_row_len, blocks, empty_blocks, pct_increase, pct_free
from user_tables
where table_name = 'T'
/
select blocks, bytes from user_segments where segment_name = 'T';

select extent_id, bytes, blocks
from user_extents
where segment_name = 'T';

set echo off


Tom Kyte
July 22, 2003 - 8:12 pm UTC

huh?

Explain

Marcio, July 23, 2003 - 8:38 am UTC

sorry -- too many consistent gets and hard parses yesterday night ;)

Could you kindly give a short example of how to compute the size of:

create table t ( x varchar2(4000) )
storage (initial 64k next 64k) ;

filling x with rpad('*',4000,'*') for the estimate.

1. How many rows fit in a block?
2. Afaik, 1 extent (64K) has 8 blocks (8K each) -- since that is how I created it -- so how many extents will be necessary for 1000 rows?
3. Is it correct to say that if 1000 rows need 4 extents, my estimated size must be 4 * 64K? The way I see it, I can't consider just the bytes used by the table data to estimate the size. Am I correct?
4. How could dbms_stats be used to estimate the size of a table?

Thanks in advance
Marcio

Tom Kyte
July 23, 2003 - 9:57 am UTC

you would just create the table, put some data in it and then you can analyze it.

you'll find user_tables has:

blocks -- how many blocks are used
num_rows -- how many rows you have

then divide.

Like this?

Marcio, July 23, 2003 - 3:51 pm UTC

Something like this?

ops$t_mp00@MRP9I1> create table t (x int primary key, y varchar2(40) )
2 /

Table created.

Put some data in there, about 10% of the target:

ops$t_mp00@MRP9I1> declare
2 type t_array is table of t%rowtype index by binary_integer;
3 tb t_array;
4 begin
5 for i in 1..2000
6 loop
7 tb(i).x := i;
8 tb(i).y := rpad('*',40,'*');
9 end loop;
10 forall i in 1 .. tb.count
11 insert into t values tb(i);
12 end;
13 /

PL/SQL procedure successfully completed.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> analyze table t compute statistics
2 for table
3 for all indexed columns
4 for all indexes
5 /

Table analyzed.

So, I would like to estimate the size for 20K rows.


ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> select num_rows, blocks, empty_blocks, row_block, row_block * 20000 bl_est_20k,
2 row_block * 20000 * 8192 by_est_20k
3 from ( select num_rows, blocks, empty_blocks, blocks / num_rows row_block
4 from user_tables
5 where table_name = 'T')
6 /

NUM_ROWS BLOCKS EMPTY_BLOCKS ROW_BLOCK BL_EST_20K BY_EST_20K
---------- ---------- ------------ ---------- ---------- ----------
2000 15 0 ,0075 150 1228800

1 row selected.


I would expect 150 blocks and approx 1.3 meg, but...

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> drop table t
2 /

Table dropped.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> create table t (x int primary key, y varchar2(40) )
2 /

Table created.

ops$t_mp00@MRP9I1> declare
2 type t_array is table of t%rowtype index by binary_integer;
3 tb t_array;
4 begin
5 for i in 1..20000
6 loop
7 tb(i).x := i;
8 tb(i).y := rpad('*',40,'*');
9 end loop;
10 forall i in 1 .. tb.count
11 insert into t values tb(i);
12 end;
13 /

PL/SQL procedure successfully completed.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> analyze table t compute statistics
2 for table
3 for all indexed columns
4 for all indexes
5 /

Table analyzed.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> select num_rows, blocks, empty_blocks, blocks / num_rows row_block
2 from user_tables
3 where table_name = 'T'
4 /

NUM_ROWS BLOCKS EMPTY_BLOCKS ROW_BLOCK
---------- ---------- ------------ ----------
20000 142 113 ,0071

1 row selected.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> exec show_space('T')
Free Blocks.............................3
Total Blocks............................256
Total Bytes.............................2097152
Unused Blocks...........................113
Unused Bytes............................925696
Last Used Ext FileId....................11
Last Used Ext BlockId...................5257
Last Used Block.........................15

PL/SQL procedure successfully completed.

The table ends up at 256 blocks and about 2 MB.

Is this the right way to compute it?

Thanks,
Marcio

Tom Kyte
July 23, 2003 - 9:27 pm UTC

is that REPRESENTATIVE data - do you always fill up your varchar2???

you need to load REPRESENTATIVE data -- not just maxed out data.

Yet.... Overdose

Marcio, July 24, 2003 - 12:51 pm UTC

Sorry Tom, I still don't understand. When you say REPRESENTATIVE data, I think of all_objects -- isn't all_objects representative? I used number, varchar2 and date columns in the table I created. So I still have some doubts.

ops$t_mp00@MRP9I1> create table t as select object_id, object_name, object_type,
2 created
3 from all_objects
4 where rownum <= 6000
5 /

Table created.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> analyze table t compute statistics
2 for table
3 for all indexed columns
4 for all indexes
5 /

Table analyzed.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> select num_rows, blocks, empty_blocks, row_block, row_block * 30000 bl_est_30k,
2 row_block * 30000 * 8192 by_est_30k
3 from ( select num_rows, blocks, empty_blocks, blocks / num_rows row_block
4 from user_tables
5 where table_name = 'T')
6 /

NUM_ROWS BLOCKS EMPTY_BLOCKS ROW_BLOCK BL_EST_30K BY_EST_30K
---------- ---------- ------------ ---------- ---------- ----------
6000 47 0 ,007833333 235 1925120

1 row selected.

Considering the example above, is it correct to say:
1) I'll need 1880K for 30,000 rows in this table?
2) I'll need 235 blocks for 30,000 rows?

But when I actually fill it up, I get some % more than this... I would like to know if this is the correct way to estimate the size of a table/schema/etc.

ops$t_mp00@MRP9I1> drop table t
2 /

Table dropped.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> create table t as select object_id, object_name, object_type,
2 created
3 from all_objects
4 /

Table created.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> analyze table t compute statistics
2 for table
3 for all indexed columns
4 for all indexes
5 /

Table analyzed.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> select num_rows, blocks, empty_blocks, blocks / num_rows row_block
2 from user_tables
3 where table_name = 'T'
4 /

NUM_ROWS BLOCKS EMPTY_BLOCKS ROW_BLOCK
---------- ---------- ------------ ----------
30560 213 42 ,006969895

1 row selected.

ops$t_mp00@MRP9I1>
ops$t_mp00@MRP9I1> exec show_space('T')
Free Blocks.............................0
Total Blocks............................256
Total Bytes.............................2097152
Unused Blocks...........................42
Unused Bytes............................344064
Last Used Ext FileId....................11
Last Used Ext BlockId...................5257
Last Used Block.........................86

=====================================================
See, it shows me 2 MB and about 256 blocks.

And a last question, if I'm not straying off topic: why does the LMT start allocating 1 MB extents after the first sixteen 64K extents, like this?

ops$t_mp00@MRP9I1> select extent_id, bytes, blocks
2 from user_extents
3 where segment_name = 'T'
4 /

EXTENT_ID BYTES BLOCKS
---------- ---------- ----------
0 65536 8
1 65536 8
2 65536 8
3 65536 8
4 65536 8
5 65536 8
6 65536 8
7 65536 8
8 65536 8
9 65536 8
10 65536 8
11 65536 8
12 65536 8
13 65536 8
14 65536 8
15 65536 8
16 1048576 128 <<-----


Tom Kyte
July 24, 2003 - 4:28 pm UTC

I'm telling you

IF you want to size YOUR tables

YOU will take some % of YOUR data, load it, multiply it out.

I'm not really sure what you are trying to do here.

Trying to get the approx size of a table

Marcio, July 25, 2003 - 1:18 pm UTC

I am trying to simulate sizing. Take a table, who cares which -- 2K/row, something like that. I'm going to pick one of your book examples (x int, y char(2000), z date): about 3+2000+7 bytes, so +/- 2K. I would like to be able to say: hey, if you put 1000 rows in there, it'll take 2 MB. Just that. Which methodology must I use? Take some % of my data, load it, multiply it out --> blocks/num_rows * <rows_estimated>?

Thanks
Marcio

Tom Kyte
July 25, 2003 - 4:54 pm UTC

that is "max", varchar2(2000) -- what is the AVERAGE width - 20 bytes? 200 bytes? 2000 bytes?


load it

analyze it

multiply it



space occupied by a schema

A reader, March 09, 2005 - 11:21 pm UTC

Tom,
How can I find out the space occupied by a particular schema?

Thank you

Tom Kyte
March 10, 2005 - 7:31 am UTC

select sum(bytes) from dba_segments where owner = 'USERNAME';
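A breakdown by segment type, if you want more detail (a sketch; the owner must be given in uppercase, since that is how it is stored in the dictionary):

select segment_type, sum(bytes)/1024/1024 mb
  from dba_segments
 where owner = 'USERNAME'
 group by segment_type;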

difference between user_segments and user_extents bytes column

jianhui, June 21, 2005 - 12:12 pm UTC

Dear Tom,
When I was checking the space usage of each user schema, I used:

select tablespace_name, owner, sum(bytes)/1024/1024
from dba_segments
group by rollup(tablespace_name, owner)

The total usage of one particular tablespace from the above query is much greater than the total tablespace size obtained from the following query:

select tablespace_name, sum(bytes)/1024/1024
from dba_data_files
group by tablespace_name

This looked so weird to me that I drilled down into the details and found that one table's space usage is very different between the xxx_segments and xxx_extents views.

There is a 3GB difference. As you can see in the example below, the table contains only char, varchar2, date, timestamp, and number columns. The 3+GB figure is inconsistent with the total tablespace size: if it were the correct number, the total usage would overflow the total tablespace size. How could this happen? Or am I misunderstanding the meaning of these xxx_segments/xxx_extents views?


jianhui@mydb>select sum(bytes)/1024/1024 from user_segments
jianhui@mydb>where segment_name='TEST';

SUM(BYTES)/1024/1024
--------------------
3157.125

1 row selected.

jianhui@mydb>select sum(bytes)/1024/1024 from user_extents
jianhui@mydb>where segment_name='TEST';

SUM(BYTES)/1024/1024
--------------------
.125

1 row selected.

jianhui@mydb>select count(*) from TEST;

COUNT(*)
----------
0

1 row selected.

jianhui@mydb>truncate table TEST;

Table truncated.

jianhui@mydb>select sum(bytes)/1024/1024 from user_segments
jianhui@mydb>where segment_name='TEST';

SUM(BYTES)/1024/1024
--------------------
3157.125

1 row selected.

jianhui@mydb>select sum(bytes)/1024/1024 from user_extents
jianhui@mydb>where segment_name='TEST';

SUM(BYTES)/1024/1024
--------------------
.125

1 row selected.

jianhui@mydb>select segment_type from user_segments
jianhui@mydb>where segment_name='TEST';

SEGMENT_TYPE
------------------
TABLE

1 row selected.

jianhui@mydb>select segment_type from user_extents
jianhui@mydb>where segment_name='TEST';

SEGMENT_TYPE
------------------
TABLE

1 row selected.

Tom Kyte
June 21, 2005 - 5:15 pm UTC

so, what is the create table? Use dbms_metadata.get_ddl to retrieve it.
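For example, a sketch (TEST is the table from the example above; SET LONG just keeps SQL*Plus from truncating the output):

set long 100000
select dbms_metadata.get_ddl( 'TABLE', 'TEST' ) from dual;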

same problem

Kathrin, February 23, 2006 - 10:04 am UTC

Hi,
I'm encountering the same problem.
Fill a table with LOTS of data; look at dba_extents, dba_segments, dba_free_space, dba_ts_quotas: they all show the tablespace is quite full.
Then use
truncate table xy drop storage;
and dba_extents, dba_free_space and dba_ts_quotas show the space freed, with just one extent left as expected. But dba_segments still shows the whole lot of extents.
And although dba_segments shows the tablespace filled, I am able to shrink the datafiles - so the space is indeed free.

Doing
alter table xy allocate extent;
corrects dba_segments - now there are two extents just as expected.

This seems like quite odd behaviour to me.

I'm running 9i on Solaris.

regards,
Kathrin

Tom Kyte
February 23, 2006 - 10:25 am UTC

give example please. and show versions.

Questions to Kathrin

Michel Cadot, February 23, 2006 - 12:10 pm UTC

Is your tablespace DMT or LMT?
Is it in READ/WRITE or READ ONLY state?
Did this state change between your different statements?

Regards
Michel


Space difference between DBA_SEGMENTS and DBA_DATA_FILES

Hector Gabriel Ulloa Ligarius, May 18, 2007 - 10:54 am UTC

Hi Team Tom...

I have a tablespace EXAMPLEPROD1, but there is a difference between DBA_SEGMENTS and DBA_DATA_FILES.

In DBA_SEGMENTS the size of tablespace EXAMPLEPROD1 is approximately 110GB...

See:
SQL> SELECT tablespace_name      tblspc,
  2         SUM(bytes)/1024/1024 ebytes
  3    FROM dba_segments
  4   GROUP BY tablespace_name;

TBLSPC                             EBYTES
------------------------------ ----------
.
.
EXAMPLEPROD1                   111081.375
.
.


But the datafiles in the tablespace have a total size of:

  1  select bytes , bytes/1024/1024/1024 GB , file_name
  2  from dba_data_files
  3* where tablespace_name like 'EXAMPLEPROD1'
SQL> /

        BYTES         GB FILE_NAME
------------- ---------- ----------------------------------
  10737418240         10 /oracle/.../EXAMPLEPROD1.data1
  10737418240         10 /oracle/.../EXAMPLEPROD1.data2
  10737418240         10 /oracle/.../EXAMPLEPROD1.data3
  10737418240         10 /oracle/.../EXAMPLEPROD1.data4
  10737418240         10 /oracle/.../EXAMPLEPROD1.data5
  10737418240         10 /oracle/.../EXAMPLEPROD1.data6
  10737418240         10 /oracle/.../EXAMPLEPROD1.data7
  10737418240         10 /oracle/.../EXAMPLEPROD1.data8



Total : 80GB

More data


  1  select file_id , sum(bytes) , 
  2         sum(bytes)/1024/1024/1024 GB, 
  3         sum(blocks) 
  4   from dba_free_space 
  5  where tablespace_name like 'EXAMPLEPROD1'
  6* group by file_id
SQL> /

   FILE_ID SUM(BYTES)         GB SUM(BLOCKS)
---------- ---------- ---------- -----------
        11  161480704 .150390625       19712
        32  528416768 .492126465       64504
        33  552468480 .514526367       67440
        34  620691456 .578063965       75768
        35  420413440 .391540527       51320
        36  832569344 .775390625      101632
        50 2569994240 2.39349365      313720
        52 7249788928 6.75189209      884984

8 rows selected.

Why the difference?
Is it a bug?


SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
CORE    9.2.0.6.0       Production
TNS for Solaris: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production

SunOS sscprod7 5.9 Generic_117171-02 sun4u sparc SUNW,Sun-Fire-480R



Regards Tom

Hector Gabriel Ulloa Ligarius
http://ligarius.wordpress.com

Tom Kyte
May 18, 2007 - 4:18 pm UTC

run a single simple query for us like this with cut and paste

ops$tkyte%ORA10GR2> select 'segments', sum(bytes)/1024/1024 mbytes from dba_segments where tablespace_name = 'USERS'
  2  union all
  3  select 'datafile', sum(bytes)/1024/1024 mbytes from dba_data_files where tablespace_name = 'USERS'
  4  /

'SEGMENT     MBYTES
-------- ----------
segments     41.625
datafile         90


count(1)

S Patel, May 21, 2007 - 1:35 am UTC

Surprised you didn't comment on count(1) back in SEP 2002 when the thread was created. count(*) not available in 8.1.7? don't think so. :)

BTW, who the he&^ teaches them to use count(1)??

Is it published in oracle doc somewhere that they are picking it up from?

Hector Gabriel Ulloa Ligarius, May 22, 2007 - 3:41 pm UTC

I ran the query, but the result is very different...

LIGARIUS> r
  1  select 'segments', 
  2         sum(bytes)/1024/1024 mbytes 
  3    from dba_segments 
  4   where tablespace_name in ('EXAMPLEPROD1')
  5   union all
  6  select 'datafile', 
  7         sum(bytes)/1024/1024 mbytes 
  8    from dba_data_files 
  9   where tablespace_name in ('EXAMPLEPROD1')

 SEGMENT     MBYTES
-------- ----------
segments 123182.625
datafile      92160



Why are the segments so much bigger than the datafiles?

Is it a bug?

Regards
Hector Gabriel Ulloa Ligarius
http://ligarius.wordpress.com


Tom Kyte
May 22, 2007 - 7:12 pm UTC

could be, looks like a problem, could it be related to this:

http://jonathanlewis.wordpress.com/2007/05/21/debugging/

char(1) using a lot of bytes ...

A reader, October 28, 2008 - 2:22 am UTC

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> select default_tablespace from user_users;

DEFAULT_TABLESPACE
------------------------------
USERS

SQL> select * from dba_tablespaces
2 where tablespace_name ='USERS';

TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS
------------------------------ ---------- -------------- ----------- -----------
MAX_EXTENTS PCT_INCREASE MIN_EXTLEN STATUS CONTENTS LOGGING FOR EXTENT_MAN
----------- ------------ ---------- --------- --------- --------- --- ----------
ALLOCATIO PLU SEGMEN DEF_TAB_ RETENTION BIG
--------- --- ------ -------- ----------- ---
USERS 8192 65536 1
2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL
SYSTEM NO AUTO DISABLED NOT APPLY NO


SQL> create table t (i char(1));

Table created.

SQL> begin
2 for i in 1..8192
3 loop
4 insert into t values ('x');
5 commit;
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

SQL> select segment_name, bytes from user_segments
2 where segment_name ='T';

SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES
----------
T
131072

(Do we really need 131072 bytes of space, including the block headers? The most I expected to see is 2 blocks.)

SQL> select dbms_rowid.ROWID_BLOCK_NUMBER(rowid) , count(*) from
2 t
3 group by dbms_rowid.rowid_block_number(rowid);

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) COUNT(*)
------------------------------------ ----------
43023 660
43030 660
43032 660
43021 660
43024 660
43029 660
43026 660
43028 660
43020 660
43022 660
43025 272

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) COUNT(*)
------------------------------------ ----------
43027 660
43031 660

13 rows selected.

(Do we really need 13 blocks?)
SQL> set pagesize 30

SQL> show parameter db_block

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_block_buffers integer 50000
db_block_checking string FALSE
db_block_checksum string TRUE
db_block_size integer 8192


Tom Kyte
October 28, 2008 - 7:39 am UTC

yes, you do. You have a byte for the length (a char is stored as a varchar2 - a char is just a blank padded varchar2 - never use char). You have a byte (at least - maybe more with multi-byte character sets) for the data. You have a null indicator. You have the row directory. You have the block overhead.

You could use a pctfree of 0 to get a couple more rows per block, but you'll be inherently limited in the number of rows per block by the blocksize itself.

You could bump your column up to char(5) and still take the same amount of space - you are bumping into the row directory size here, not a full block situation really.
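If you want to see what is actually stored for the column, a quick sketch (DUMP shows the internal type code and stored length; for this table expect Typ=96, the internal CHAR type, with Len=1 in a single-byte character set):

select dump(i) from t where rownum = 1;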

A reader, October 29, 2008 - 12:16 am UTC

fixed header + transaction header + table directory + row directory = block header.

57 + 23 + 4 + 1320




SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> select default_tablespace from user_users;

DEFAULT_TABLESPACE
------------------------------
USERS

SQL> select * from dba_tablespaces
  2  where tablespace_name ='USERS';

TABLESPACE_NAME                BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS
------------------------------ ---------- -------------- ----------- -----------
MAX_EXTENTS PCT_INCREASE MIN_EXTLEN STATUS    CONTENTS  LOGGING   FOR EXTENT_MAN
----------- ------------ ---------- --------- --------- --------- --- ----------
ALLOCATIO PLU SEGMEN DEF_TAB_ RETENTION   BIG
--------- --- ------ -------- ----------- ---
USERS                                8192          65536                       1
 2147483645                   65536 ONLINE    PERMANENT LOGGING   NO  LOCAL
SYSTEM    NO  AUTO   DISABLED NOT APPLY   NO


SQL> create table t (i varchar2(1)) pctfree 0;

Table created.

SQL> ed
Wrote file afiedt.buf

  1  begin
  2  for i in 1..8192
  3  loop
  4  insert into t values ('x');
  5  commit;
  6  end loop;
  7* end;
  8  /

PL/SQL procedure successfully completed.

  1* select table_name, min_extents, next_extent from user_tables
SQL> /

TABLE_NAME                     MIN_EXTENTS NEXT_EXTENT
------------------------------ ----------- -----------
T                                        1

SQL> select segment_name, bytes from user_segments
  2  where segment_name ='T';

SEGMENT_NAME
--------------------------------------------------------------------------------
     BYTES
----------
T
    131072

<< Am I right to say that the reason segment T has 131072 bytes is that table T is using the MIN_EXTLEN of the USERS tablespace? When the table was first created it had 65536; the next extent added another 65536, bringing it to 131072. >>

SQL> select dbms_rowid.ROWID_BLOCK_NUMBER(rowid) , count(*) from
  2  t
  3  group by dbms_rowid.rowid_block_number(rowid);

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  87        733
                                  91        733
                                  84        733
                                  86        733
                                  88        733
                                  90        733
                                  95        733
                                  85        733
                                  94        733
                                  96        129
                                  93        733

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  92        733

12 rows selected.
<< I bumped into the row directory size. Am I right to say that the row directory uses 2 bytes per stored row?
I have 733 rows per block: 733 * 2 = 1466 (row directory size)
plus                              733   (row data)
1466 + 733 = 2199
My data block is 8192 bytes; 8192 - 2199 = 5993.
Where do the 5993 bytes go?
>>

SQL> show parameter db_block

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_buffers                     integer     0
db_block_checking                    string      FALSE
db_block_checksum                    string      TRUE
db_block_size                        integer     8192



Tom Kyte
October 29, 2008 - 7:50 am UTC

you are hitting the row directory size limits here. I don't know how else to say it.

You could bump your column up to char(5) and still take the same amount of space - you are bumping into the row directory size here, not a full block situation really.

answer me this:

why is this an issue? Are you really going to have a single-column table with a single char(1) entry and thousands of rows? If not, if you are going to have rows of at least 5 bytes, this is quite simply "nothing to even look at".


A reader, October 29, 2008 - 9:28 am UTC

Hi Tom,
It's not an issue at all. The reason all this started in the first place is that I want to know how Oracle uses data blocks.

I started with char(1) because it makes the computation easier for me.

I was trying to explain it to my friend, using char(1) as the example with a block size of 8192.

After inserting 8192 rows, both of us expected Oracle to use 2 blocks (max) after factoring in the block header.

But the result is not what we expected.

Thanks
Chee Yong

How to find out how empty/full a table is?

A reader, May 07, 2009 - 8:48 pm UTC

Is there an equivalent of dba_free_space at the table level? Let's say I deleted 90% of the data from a table; is there any way to find out that, with the current extent/block allocation, the table still has 90% of its space left to grow into?

AVG_ROW_LEN Includes Index Size?

Abhijeet Rajukar, May 15, 2009 - 2:12 am UTC

Hi Tom,

Does the AVG_ROW_LEN column in dba_tables include the size of the index entry for that particular row?

Many Thanks
Abhijeet
Tom Kyte
May 15, 2009 - 1:36 pm UTC

I don't even know what that means?

How would an index on a table affect the width of a row in that table?


average row length is the average length of the row in the table.


A simple way to see if indexes would change any table metric would be:

a) create table
b) gather statistics
c) note avg row len
d) add index
e) repeat b and c
f) compare results

Thanks Tom

Abhijeet, June 24, 2009 - 10:40 am UTC

Thanks a lot, Tom. That clarifies that the index has no impact.

Big data file

Jayadevan, June 27, 2012 - 11:45 pm UTC

Hi Tom,
I usually use the SQL you provided to find out whether I can shrink the data files. But in this case, the script generated output which said there is no scope for shrinking. The data file is about 17GB and the segments add up to much less than that.
SQL> select 'segments', sum(bytes)/1024/1024 mbytes from dba_segments where
  2   tablespace_name='USERS'
  3  union all
  4  select 'datafile', sum(bytes)/1024/1024 mbytes from dba_data_files where tablespace_name ='USERS';

'SEGMENT     MBYTES
-------- ----------
segments    11690.5
datafile      17316


What is taking up the other 5-6 GB space?

Tom Kyte
June 28, 2012 - 9:49 am UTC

nothing.

we can only shrink down to where the last extent of something is.

So, if you have a datafile that looks like this:


XXXXXfXXXXfXXXXfffffffffffXffffffffffffffffffXffffffffff
                                             ^


where X=allocated, f=FREE, we can only shrink to the last X.

You have something way out there. You can query dba_extents to see what the last extent allocated in that file is, and perhaps move it (alter table t move, alter index i rebuild) and see if you can then shrink further.
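A sketch of such a query (the tablespace name USERS is just an assumption here; the last row listed for each file is the segment blocking the shrink):

select file_id, owner, segment_name, segment_type, block_id
  from dba_extents
 where tablespace_name = 'USERS'
 order by file_id, block_id;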

