Question and Answer

Tom Kyte

Thanks for the question, Saibabu.

Asked: May 02, 2000 - 1:21 pm UTC

Last updated: December 17, 2009 - 2:13 pm UTC


You Asked

Hi,
Can you clarify the difference between the BLOCKS column in dba_tables and
the high water mark? What I understand is that both are the same, but I
read in Metalink that they can be different. I request you to
explain this.
Thanks
Saibabu



and Tom said...



Well, the BLOCKS column in dba_tables is the number of blocks below the HWM "as of the last analyze". It is not 100% accurate -- since it is only current as of the last analyze. Here is an example with comments embedded:

ops$tkyte@8i> create table emp as select * from scott.emp;
Table created.

ops$tkyte@8i> select blocks, empty_blocks from dba_tables where owner = USER;

BLOCKS EMPTY_BLOCKS
---------- ------------



Initially, BLOCKS is NULL; we have to analyze to fill it in. We can see the number of allocated blocks with:

ops$tkyte@8i> select blocks, bytes from dba_extents where owner = USER;

BLOCKS BYTES
---------- ----------
64 524288

ops$tkyte@8i> analyze table emp compute statistics;
Table analyzed.

ops$tkyte@8i> select blocks, empty_blocks from dba_tables where owner = USER;

BLOCKS EMPTY_BLOCKS
---------- ------------
1 62


Now, we can see that 1 block has data and 62 are empty (one block holds the extent map: of the 64 blocks allocated, 1 has data, 1 has the map, and 62 are empty).


ops$tkyte@8i> insert into emp select * from emp;
14 rows created.

ops$tkyte@8i> insert into emp select * from emp;
28 rows created.

ops$tkyte@8i> insert into emp select * from emp;
56 rows created.

ops$tkyte@8i> insert into emp select * from emp;
112 rows created.


So, now we've made the table larger -- let's see what happens:


ops$tkyte@8i> select blocks, empty_blocks from dba_tables where owner = USER;

BLOCKS EMPTY_BLOCKS
---------- ------------
1 62

ops$tkyte@8i> select blocks, bytes from dba_extents where owner = USER;

BLOCKS BYTES
---------- ----------
64 524288


That data did not change -- it still looks like 1 block under the HWM. But really:


ops$tkyte@8i> analyze table emp compute statistics;
Table analyzed.

ops$tkyte@8i> select blocks, empty_blocks from dba_tables where owner = USER;

BLOCKS EMPTY_BLOCKS
---------- ------------
3 60

There are 3. A full scan would hit 3 blocks now, not 1.
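As a side note (not part of the original answer): if you want to count the blocks that actually contain rows, without relying on statistics at all, the documented DBMS_ROWID package can decode each row's block address. This counts blocks holding data; it does not report the HWM itself:

-- sketch: count distinct blocks that contain at least one EMP row
select count(distinct dbms_rowid.rowid_block_number(rowid)) blocks_with_rows
  from emp;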


--
Thanks,

Thomas Kyte asktom_us@oracle.com
Oracle Service Industries

Oracle tips and papers: https://asktom.oracle.com/magazine-archive.htm





Comments

user_tables and user_segments

pranay, July 16, 2009 - 6:17 pm UTC

Tom,
    Can you please explain the reason behind this mismatch?
Case1)
psingh@PRANAY> create table tab1
  2  (col1 number);

Table created.
psingh@PRANAY> analyze table tab1 compute statistics;

Table analyzed.

Elapsed: 00:00:00.03
psingh@PRANAY> select SEGMENT_NAME,BLOCKS,BYTES,EXTENTS from user_segments where segment_name='TAB1';

SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
TAB1                  8      65536          1

As you see, extents = 1 and blocks = 8.

Case 2) psingh@PRANAY> begin
  2  for i in 1..1000
  3  loop
  4  insert into tab1 values(i);
  5  end loop;
  6  commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.07
psingh@PRANAY> analyze table tab1 compute statistics;

Table analyzed.

Elapsed: 00:00:00.03
psingh@PRANAY> select SEGMENT_NAME,BLOCKS,BYTES,EXTENTS from user_segments where segment_name='TAB1';

SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
TAB1                  8      65536          1


Elapsed: 00:00:00.07
psingh@PRANAY> select count(*) from tab1;

  COUNT(*)
----------
      1000

Elapsed: 00:00:00.00
psingh@PRANAY> begin
  2  for i in 1..10000
  3  loop
  4  insert into tab1 values(i);
  5  end loop;
  6  commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.42
psingh@PRANAY>  select count(*) from tab1;

  COUNT(*)
----------
     11000

Elapsed: 00:00:00.00
psingh@PRANAY> select SEGMENT_NAME,BLOCKS,BYTES,EXTENTS from user_segments where segment_name='TAB1';

SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
TAB1                 24     196608          3


Elapsed: 00:00:00.06 
 That looks OK, as extents = 3 and blocks = 24.

Case 3)psingh@PRANAY> begin
  2  for i in 1..1000000
  3  loop
  4  insert into tab1 values(i);
  5  end loop;
  6  commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:44.73
psingh@PRANAY>
psingh@PRANAY>
psingh@PRANAY> analyze table tab1 compute statistics;

Table analyzed.

Elapsed: 00:00:06.48
psingh@PRANAY> select SEGMENT_NAME,BLOCKS,BYTES,EXTENTS from user_segments where segment_name='TAB1';

SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
TAB1               1664   13631488         28


Elapsed: 00:00:00.03

This is where I get confused. Extents = 28 and blocks = 1664 -- how come? I mean, the extent size is 64K and the block size is 8K, so one extent must have 8 blocks (7 empty and 1 for the bitmap -- am I correct?).

So my question is: why are there so many blocks when extents = 28?
Please explain

Tom Kyte
July 24, 2009 - 7:16 am UTC

You do not show the CREATE TABLESPACE, so we'll assume locally managed with SYSTEM ALLOCATED EXTENTS

meaning: the extents are not controlled by you, they are controlled by the software.

and with system allocated extents - they start small (64k) and when we see that the segment is "getting large", they get bigger.

It would have been more interesting for you to select out the EXTENTS themselves; you would have seen them "start small, get bigger -- as the segment gets bigger"

That is the goal of system allocated extents - keep them small - unless and until the segment gets larger.

The ANALYZE commands -- which you should not use; use dbms_stats to gather statistics -- were not necessary in this example at all.

ops$tkyte%ORA10GR2> create table tab1
  2  (col1 number);

Table created.

ops$tkyte%ORA10GR2> select extent_id, BLOCKS,BYTES from user_extents where segment_name = 'TAB1' order by 1;

 EXTENT_ID     BLOCKS      BYTES
---------- ---------- ----------
         0          8      65536

ops$tkyte%ORA10GR2> insert into tab1 select level from dual connect by level <= 1000;

1000 rows created.

ops$tkyte%ORA10GR2> select extent_id, BLOCKS,BYTES from user_extents where segment_name = 'TAB1' order by 1;

 EXTENT_ID     BLOCKS      BYTES
---------- ---------- ----------
         0          8      65536

ops$tkyte%ORA10GR2> insert into tab1 select level from dual connect by level <= 10000;

10000 rows created.

ops$tkyte%ORA10GR2> select extent_id, BLOCKS,BYTES from user_extents where segment_name = 'TAB1' order by 1;

 EXTENT_ID     BLOCKS      BYTES
---------- ---------- ----------
         0          8      65536
         1          8      65536
         2          8      65536

ops$tkyte%ORA10GR2> insert into tab1 select level from dual connect by level <= 1000000;

1000000 rows created.

ops$tkyte%ORA10GR2> select extent_id, BLOCKS,BYTES from user_extents where segment_name = 'TAB1' order by 1;

 EXTENT_ID     BLOCKS      BYTES
---------- ---------- ----------
         0          8      65536
         1          8      65536
         2          8      65536
         3          8      65536
         4          8      65536
         5          8      65536
         6          8      65536
         7          8      65536
         8          8      65536
         9          8      65536
        10          8      65536
        11          8      65536
        12          8      65536
        13          8      65536
        14          8      65536
        15          8      65536
        16        128    1048576
        17        128    1048576
        18        128    1048576
        19        128    1048576
        20        128    1048576
        21        128    1048576
        22        128    1048576
        23        128    1048576
        24        128    1048576
        25        128    1048576
        26        128    1048576
        27        128    1048576

28 rows selected.



See how at extent 16 it started doing 1MB allocations -- it said to itself "hmmm, this is getting big, time to stop using these tiny 64K extents"
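Since ANALYZE is discouraged above in favor of dbms_stats, here is a minimal sketch of the recommended call for the TAB1 table from the example (parameters are from the documented DBMS_STATS API):

begin
  dbms_stats.gather_table_stats(
    ownname => user,       -- owner of the table
    tabname => 'TAB1' );   -- table to gather statistics for
end;
/

After this, dba_tables.blocks and empty_blocks reflect the segment as of the gather, just as with ANALYZE in the examples above.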

Tom please clear my doubts in my previous post

pranay, July 24, 2009 - 4:15 am UTC

Hi Tom,
I guess my earlier post might have been missed by you. Please clear my doubt in my previous post.

Thanks a ton Tom

pranay, July 24, 2009 - 5:26 pm UTC


Extents not deallocated after rollback

Santosh, December 17, 2009 - 7:50 am UTC

Hi,
      Going through one question on OTN, I am a bit confused. I tried finding the answer but no luck yet. Thought of checking with you.


SQL> create table test  as select * from all_objects where rownum < 1;

Table created.

SQL> select blocks, bytes from user_segments  where segment_name='TEST';

    BLOCKS      BYTES
---------- ----------
         8      65536


SQL> insert into test ( select * from all_objects  where rownum <500);

499 rows created.

SQL> select blocks, bytes from user_segments  where segment_name='TEST';

    BLOCKS      BYTES
---------- ----------
         8      65536

SQL> insert into test ( select * from all_objects  where rownum <1000);

999 rows created.

SQL> select blocks, bytes from user_segments  where segment_name='TEST';

    BLOCKS      BYTES
---------- ----------
        24     196608

SQL> rollback;

Rollback complete.

SQL> select blocks, bytes from user_segments  where segment_name='TEST';

    BLOCKS      BYTES
---------- ----------
        24     196608

I don't understand why the bytes show the increased size although the insert was rolled back.

Regards

Santosh

Tom Kyte
December 17, 2009 - 10:24 am UTC

Because a rollback is a logical operation -- if you insert 1,000,000 rows and then roll back, we just delete the 1,000,000 rows you inserted.

It is NOT a physical, bit-for-bit undo.

Please remember that in general you are NOT the only one in the database. While you were doing your insert, I might have been doing mine; we cannot just deallocate storage -- once allocated to a segment, it stays allocated to the segment.
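An aside not in the original answer: if you genuinely need the space back after a large insert-and-rollback, it must be released explicitly. Two documented options, shown against the TEST table from the example above:

truncate table test;             -- resets the HWM and releases the extents

or, on 10g and later in an ASSM tablespace:

alter table test enable row movement;
alter table test shrink space;   -- compacts the rows and lowers the HWM

TRUNCATE removes all rows; SHRINK SPACE keeps the data and just compacts it.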

Santosh, December 17, 2009 - 12:29 pm UTC

Ok... can we say the same about UPDATE as well, then?

Can you please point me to a link where I can find more detail.

Thanks and regards
Santosh.
Tom Kyte
December 17, 2009 - 2:13 pm UTC

If you have access to my book Expert Oracle Database Architecture, I go into this in detail.

But in short:

<quote src=Expert Oracle Database Architecture>

....
This is to be expected, as a ROLLBACK has to physically undo the work we’ve done. Similar to a COMMIT, a series of operations must be performed. Before we even get to the ROLLBACK, the database has already done a lot of work. To recap, the following would have happened:

* Undo segment records have been generated in the SGA.

* Modified data blocks have been generated in the SGA.

* A buffered redo log for the preceding two items has been generated in the SGA.

* Depending on the size of the preceding three items, and the amount of time spent, some combination of the previous data may be flushed onto disk already.

* All locks have been acquired.

When we ROLLBACK,

* We undo all of the changes made. This is accomplished by reading the data back from the undo segment, and in effect, reversing our operation and then marking the undo entry as applied. If we inserted a row, a ROLLBACK will delete it. If we updated a row, a rollback will reverse the update. If we deleted a row, a rollback will re-insert it again.

* All locks held by our session are released, and everyone who was enqueued waiting on locks we held will be released.


A COMMIT, on the other hand, just flushes any remaining data in the redo log buffers. It does very little work compared to a ROLLBACK. The point here is that you don’t want to roll back unless you have to. It is expensive since you spend a lot of time doing the work, and you’ll also spend a lot of time undoing the work. Don’t do work unless you’re sure you are going to want to COMMIT it. This sounds like common sense—of course I wouldn’t do all of the work unless I wanted to COMMIT it. Many times, however, I’ve seen a situation where a developer will use a “real” table as a temporary table, fill it up with data, report on it, and then roll back to get rid of the temporary data. In the next section, we’ll talk about true temporary tables and how to avoid this issue.
......
</quote>

Santosh, December 17, 2009 - 4:25 pm UTC

Excellent explanation. Thanks a lot.

Santosh