Question and Answer

Tom Kyte

Thanks for the question, Ken.

Asked: November 21, 2000 - 11:18 am UTC

Answered by: Tom Kyte - Last updated: February 14, 2008 - 10:09 am UTC

Category: Database - Version: 8.1.5

Viewed 10K+ times

You Asked

Hi Tom,

I know you are very busy, but it would be much appreciated if you can help me with the following:

I created a schema that contains two tablespaces, abc_data and abc_idx for example, then loaded data into tables within these tablespaces once the program was written. Sometimes we loaded a whole month's worth of data, so these tablespaces got extended. When the data is no longer needed, or the program needs modifications, we truncate the tables using the TRUNCATE command with the DROP STORAGE clause. But when I look at the datafiles for these tablespaces in Oracle Storage Manager, I see, for example:

NAME                                     TABLESPACE  SIZE (M)  USED (M)
/d6/oradata/JIBPP_ORADATA/abc_data01.dbf abc_data         465       149

so I tried to resize the datafile manually with

alter database datafile '/d6/oradata/JIBPP_ORADATA/abc_data01.dbf' resize 200m;

but Oracle returned the error message "file contains used data beyond requested value".

Is there a way that I can resize the datafile without dropping the tablespace and re-creating it?


Thanks very much.

Ken


and we said...

See
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:153612348067
It'll provide a script to resize this file as small as we can -- which may be 465meg.

Think of the file as a collection of extents (it is). Let's say the layout is like this (X = allocated extent, O = free extent):

0meg 465m
XXXX0000000000X000000000000XXXXXX0000000000XX000000000000000000X


So, as you can see here -- that last X out by 465m will prevent us from shrinking this file. If you can determine what object is out there (see the query below for how to do that), you can perhaps move or rebuild that object -- or perhaps just drop it and recreate it later.
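For instance, assuming the trailing extent turned out to belong to an ordinary table or index (the object names below are hypothetical), a move or rebuild would relocate it into freshly allocated extents, hopefully lower in the file:

```sql
-- Hypothetical names; substitute whatever the extent query reports.
-- Moving a table rewrites it into newly allocated extents; note that
-- its indexes become UNUSABLE and must be rebuilt afterwards.
alter table abc.big_table move tablespace abc_data;

-- Rebuilding an index likewise relocates it:
alter index abc.big_table_pk rebuild tablespace abc_idx;
```

After the move, the old extents at the end of the file become free space, and the resize should succeed.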

To see what object is hanging out there, run:

column tablespace_name format a20
column "Name" format a45
break on file_id skip 1
ttitle &1
select file_id, block_id, blocks,
       owner || '.' || segment_name "Name"
  from sys.dba_extents
 where tablespace_name = upper('&1')
UNION
select file_id, block_id, blocks,
       'Free'
  from sys.dba_free_space
 where tablespace_name = upper('&1')
 order by 1, 2, 3
/

and pass in the name of the tablespace. Look for the large block_id values at the end of the output -- they'll show you the object you are looking for.
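As a rough sketch of the arithmetic involved: the smallest size a file can be shrunk to is just past its highest allocated block, i.e. (block_id + blocks) of the last non-free extent times the block size. Assuming an 8K block size and a hypothetical file_id of 123, the minimum resize target could be estimated like this:

```sql
-- Minimum size (in MB) that "alter database datafile ... resize" can
-- reach: one block past the highest allocated extent. The 8192 assumes
-- an 8K db_block_size; adjust for your database. File_id 123 is a
-- placeholder -- use the file_id from the query above.
select ceil( max(block_id + blocks) * 8192 / 1024 / 1024 ) "Min size (M)"
  from dba_extents
 where file_id = 123;
```

A resize to anything below that value raises ORA-03297, which is exactly the error quoted in the question.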




and you rated our response


Reviews

Exactly what I needed to know

December 10, 2002 - 9:14 am UTC

Reviewer: Ron Plank from Raleigh NC USA

Exactly what I needed to know, plus a bonus: the script shows the physical distribution of objects in a tablespace's datafiles.

Recreate Database due to the huge system tablespace.

March 07, 2003 - 10:58 am UTC

Reviewer: Alex from Seoul Korea

Tom's great answer:
======================================================
Ouch -- looks like someone let people create some really big things in system.
There is a chance you can shrink some of the datafiles (see below -- it'll tell
you if you can and will generate the alter statements if they apply). We can
only shrink files back to their high water mark -- if there is an extent way out
at the end of a file -- we'll not be able to shrink it.
If not, the only thing that will undo this is to recreate the database (exp, imp).
====================================================
My question ++++++++++++++

It worked. Yes, I could shrink my system tablespace file a little bit. (Thank you very much anyway.)
According to your answer, I'd better recreate my database.
Unfortunately, I don't know how to recreate the database with the same parameters/configuration (and with the shrunk actual size of the system file, of course; the other datafiles aren't important -- I have all the scripts for creating the other tablespaces, and the data).

I would appreciate your help.

(I'm not good at writing about my problem in English. Sorry.)

Tom Kyte

Followup  

March 07, 2003 - 11:10 am UTC

I didn't say "you'd better".

I said "you would have to if shrinking this file is your only goal in life"


I would most likely just let it be myself.

Very Interesting and Potentially Helpful!

March 07, 2003 - 11:39 am UTC

Reviewer: Robert from Memphis, USA

Tom,

I added this script to my own toolkit of utilities (being careful to give you due acknowledgment, Tom!).

Thanks,

Robert.

highest occupied block doesn't match

February 12, 2008 - 9:48 pm UTC

Reviewer: Doug

Tom - I used this script to report on a 4GB datafile after moving many objects out of it.  The current report shows the highest block used is 5793 if I understand correctly.  These are 8K blocks on a 10.2.0.3 database. 

This is the beginning of the output - 
   FILE_ID   BLOCK_ID     BLOCKS Name
---------- ---------- ---------- ---------------------------------------------
       123       5385        128 PDBAUDIT.AUDIT_HISTORY
                 5513        128 PDBAUDIT.AUDIT_HISTORY
                 5641        128 PDBAUDIT.AUDIT_HISTORY
                 5769          8 PDBAUDIT.PK_AUDIT_HISTORY_LOG
                 5777          8 PDBAUDIT.LOGON_HISTORY
                 5785          8 PDBAUDIT.PK_LOGON_HISTORY
                 5793       4312 Free
                10105          8 Free

Then it goes on and on with nothing but Free until the end like - 

   FILE_ID   BLOCK_ID     BLOCKS Name
---------- ---------- ---------- ---------------------------------------------
       123     435761          8 Free
               435769          8 Free
               435777          8 Free
               435785          8 Free
               435793          8 Free
               435801          8 Free
               435809          8 Free
               435817      72096 Free
       123     507913       3024 Free

Yet, when I try to shrink it by 400MB, it barks at me.  I thought I would be able to get it down to around 6000 blocks. 

SQL> alter database datafile '/u04/oradata/PHRUAT/tools01.dbf'
  2  resize 3100M;
alter database datafile '/u04/oradata/PHRUAT/tools01.dbf'
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

How's that jive? 

Tom Kyte

Followup  

February 13, 2008 - 12:57 pm UTC

did you run maxshrink - from the link?

follow up - yes maxshrink not working

February 14, 2008 - 2:27 am UTC

Reviewer: Doug from Hartford, CT

Yes -- according to the maxshrink script, it provides the following output (abbreviated):
... 
alter database datafile '/u04/oradata/PHRUAT/patrol01.dbf' resize 5m;
alter database datafile '/u04/oradata/PHRUAT/tools01.dbf' resize 46m;
alter database datafile '/u04/oradata/PHRUAT/tuxdata01.dbf' resize 309m;
....
which jibes with what I would think it would shrink to.
Nonetheless, it doesn't work --

SQL> alter database datafile '/u04/oradata/PHRUAT/tools01.dbf' resize 46m;
alter database datafile '/u04/oradata/PHRUAT/tools01.dbf' resize 46m
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value


SQL>


Tom Kyte

Followup  

February 14, 2008 - 10:09 am UTC

duh, recycle bin (the duh is directed to me)

sure, they are 'free' but associated with recycle bin objects.... select segment_name from dba_segments where tablespace_name = 'TOOLS' and you'll see them

refer to the purge command to see how to purge them...
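As a sketch, the relevant forms of the PURGE command (using the TOOLS tablespace from the example above) are:

```sql
-- Purge the current user's own recycle bin:
purge recyclebin;

-- Purge only the dropped objects occupying a particular tablespace,
-- which is enough to free the space blocking the resize:
purge tablespace tools;

-- As a suitably privileged DBA, purge every user's recycle bin:
purge dba_recyclebin;
```

Note that a plain PURGE RECYCLEBIN only clears the current user's bin; dropped objects owned by other users remain until PURGE DBA_RECYCLEBIN (or a per-tablespace purge) is run, which is why the follow-up review below found the file still unshrinkable after the first purge.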

resize datafile

November 20, 2014 - 5:46 pm UTC

Reviewer: mikyung kim from Corona, ca USA

My BLOB testing got out of hand and one of the datafiles became huge: 34GB. I used all the scripts provided by Tom and came to the conclusion that one LOB segment, among 200 other objects in the datafile, had been dropped but was still in the recycle bin. According to the diagnostics, THIS datafile could be resized to 1100M. When I only ran the PURGE RECYCLEBIN command, the object still showed up in DBA_RECYCLEBIN and I still couldn't resize the datafile. After executing PURGE DBA_RECYCLEBIN, it was gone from the data dictionary queries and I was able to resize the datafile. Thanks again, Tom.