Display the tablespaces that can't allocate another extent for a segment


Question and Answer

Tom Kyte

Thanks for the question, Anthony.

Asked: February 01, 2002 - 8:39 pm UTC

Last updated: January 04, 2005 - 7:25 am UTC

Version: 8.1.6

Viewed 1000+ times

You Asked

Hi Tom

1. Can you look at this query and see if this script is useful?

select f.tablespace_name, cant_allocate
  from dba_free_space f,
       ( select ((max(next_extent)/1024/1024)) * 3 cant_allocate, tablespace_name
           from dba_segments t
          group by tablespace_name ) max_aloc
 where f.tablespace_name = max_aloc.tablespace_name
   and f.tablespace_name in ( select tablespace_name
                                from dba_data_files df
                               where df.tablespace_name = max_aloc.tablespace_name
                               group by tablespace_name
                              having sum(maxbytes/1024/1024) < max_aloc.cant_allocate * 2 )
 group by f.tablespace_name, cant_allocate
having sum(f.bytes/1024/1024) < cant_allocate
/


2. How can I determine whether the buffer cache, shared pool, and log buffer
are oversized? Can you give me a script if possible.

3. Which is more efficient: using an 'IN' condition in the 'WHERE' clause, or using 'UNION'?

We have an index on pln_tran (tran_date, tran_type)
- this table has 1.6 million records

select tran_date from pln_tran
where tran_type in ('TEX','RTP','RPF','XXO')

versus

select tran_date from pln_tran
where tran_type = 'TEX'
union all
select tran_date from pln_tran
where tran_type = 'RTP'
union all
select tran_date from pln_tran
where tran_type = 'RPF'
union all
select tran_date from pln_tran
where tran_type = 'XXO'

4. I read on Metalink that when you do a full table scan,
Oracle will read up to the high-water mark. If that is so, then
if we have a table with a large number of deletes but a small number of inserts, would that affect our performance?


Thank you very much for your precious time TOM.....
MORE POWER.....

and Tom said...

1) seems:

ops$tkyte@ORA8I.WORLD> select a.tablespace_name, a.largest_extent,
b.largest_next, a.largest_extent-b.largest_next diff
  2    from (select tablespace_name, max(bytes) largest_extent
  3            from dba_free_space
  4           group by tablespace_name) a,
  5         (select tablespace_name, max(next_extent) largest_next
  6            from dba_segments
  7           group by tablespace_name ) b
  8   where a.tablespace_name = b.tablespace_name
  9  /

TABLESPACE_NAME LARGEST_EXTENT LARGEST_NEXT DIFF
------------------------------ -------------- ------------ ----------
ATCDEMO_DATA 65536 221184 -155648
DES2K_IDX 524288 524288 0
DRSYS 3309568 335872 2973696
FLOWS_DATA 26214400 524288 25690112
GPS_DATA 524288 524288 0
ISOC 524288 524288 0
OEM_REPOSITORY 524288 524288 0
OLR_DATA 1572864 524288 1048576
ORA_SP_DATA 40960 1720320 -1679360
PEOPLE 4194304 524288 3670016
PKT_DATA 524288 524288 0
PPR_DATA 4259840 1146880 3112960
PPR_IDX 598016 262144 335872
RFP_DATA 540672 45342720 -44802048
SARA 524288 524288 0
SYSTEM 114688 13090816 -12976128
TQM_DATA 524288 524288 0
USERS 8388608 262144 8126464
UTILS 524288 524288 0

19 rows selected.

would be easier -- find the biggest free extent by tablespace, find the biggest NEXT_EXTENT by tablespace, and subtract. If negative -- you MIGHT have a problem (I do not; I use autoextend, so I'm OK there)

2) Look at your hit ratios and use Statspack to see how much memory is actually used in the shared pool. The log buffer is generally so small as to not be a concern -- you could make it smaller until you start seeing waits on redo writes and such.
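As a rough starting point, assuming access to the standard V$SYSSTAT and V$SGASTAT views, those checks can be sketched like this (treat the ratio as a hint, not a verdict -- whether a given value is "good" depends entirely on the workload):

```sql
-- classic buffer cache hit ratio -- a hint, not a verdict
select 1 - (phy.value / (cur.value + con.value)) hit_ratio
  from v$sysstat cur, v$sysstat con, v$sysstat phy
 where cur.name = 'db block gets'
   and con.name = 'consistent gets'
   and phy.name = 'physical reads';

-- memory currently sitting unused in the shared pool
select pool, name, bytes
  from v$sgastat
 where pool = 'shared pool'
   and name = 'free memory';
```

A persistently large "free memory" figure after the instance has been running under full load for a while suggests the shared pool could be smaller; a ratio in isolation proves nothing.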

3) They are basically equivalent if tran_type is indexed and that index is used. Look at the plans and you would see this. If there is no index being used, the UNION ALL is really quite inefficient, as it would full scan the table 4 times -- the IN would do it once.

Use IN
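To confirm this on a given system, a hypothetical SQL*Plus session could compare the plans with autotrace (table and column names are taken from the question; the output depends on your data and indexes):

```sql
-- show the plan without fetching rows
set autotrace traceonly explain

-- single pass with IN
select tran_date from pln_tran
 where tran_type in ('TEX','RTP','RPF','XXO');

-- one branch of the UNION ALL form, for comparison --
-- the full statement would repeat this once per literal
select tran_date from pln_tran
 where tran_type = 'TEX';
```

If both plans show an index range scan, the two forms do essentially the same work; if the plans show full table scans, the UNION ALL form multiplies that cost by the number of branches.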

4) If you load up a billion records and then delete them all, a select * from t will take about the same amount of time as if the table were full. It will full scan every block that HAD data. So yes, a table that was big and is now "small" due to a delete (how do you have a very small number of inserts but lots of deletes??? By definition you can only delete a row after it is inserted, so they would be at least the same) would take longer to full scan than you might think.
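The high-water-mark effect can be seen directly. A minimal sketch, assuming a scratch schema where a test table can be created and dropped:

```sql
-- fill some blocks
create table t as select * from all_objects;

-- rows gone, but the high-water mark is unchanged
delete from t;
commit;

analyze table t compute statistics;
select blocks, empty_blocks from user_tables where table_name = 'T';
-- BLOCKS (blocks below the HWM) is still large;
-- a full scan still reads every one of them

-- TRUNCATE resets the high-water mark
truncate table t;
analyze table t compute statistics;
select blocks, empty_blocks from user_tables where table_name = 'T';
```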


Comments

Excellent

Anthony K. Velarde, February 03, 2002 - 1:38 am UTC

Thank you very much TOM.... More power and God bless.

Reg Question #4

Irfan, February 03, 2002 - 6:28 am UTC

Hi Tom,

Can you suggest a better way to improve performance after a large number of rows are deleted from a table? (Normally history tables that accumulate millions of records in a short time.) Is there a way to deallocate blocks/extents? The TRUNCATE command will delete all the rows. We have Oracle 7.3.4 Parallel Server.
Thanks.


Tom Kyte
February 03, 2002 - 9:15 am UTC

Not in that release.

Partitioning is the answer. It allows you to easily "age" data out of the database. You can simply drop a partition -- a slice of the data -- without touching the rest. Sort of like a selective delete.

You see -- the rows you delete might be intermingled with other rows. Sure, you started with 10,000,000 rows and you deleted 5,000,000 of them -- but you might not have EMPTIED a single block (eg: you deleted every other record and are left with 2 records/block). There is quite simply "nothing" to deallocate. You would need to "rebuild".
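In releases that support it (8.0 and later, with the partitioning option), the "age out" approach Tom describes looks roughly like this -- table, column, and partition names are illustrative:

```sql
-- a history table partitioned by month
create table hist
( tran_date  date,
  payload    varchar2(100)
)
partition by range (tran_date)
( partition p_2002_jan values less than (to_date('01-feb-2002','dd-mon-yyyy')),
  partition p_2002_feb values less than (to_date('01-mar-2002','dd-mon-yyyy'))
);

-- aging out a month is a dictionary operation, not a row-by-row
-- delete, and it gives the space back immediately
alter table hist drop partition p_2002_jan;
```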

Free space query

Connor McDonald, February 04, 2002 - 6:11 am UTC

Maybe a "(+)" might be useful for the poster for the (hopefully rare) condition where there are no remaining entries in DBA_FREE_SPACE for a tablespace.

Tom Kyte
February 04, 2002 - 10:14 am UTC

YES, definitely -- good point, it should be:

ops$tkyte@ORA8I.WORLD> select a.tablespace_name, a.largest_extent, 
b.largest_next, a.largest_extent-b.largest_next diff
  2    from (select tablespace_name, max(bytes) largest_extent
  3            from dba_free_space
  4           group by tablespace_name) a,
  5         (select tablespace_name, max(next_extent) largest_next
  6            from dba_segments
  7           group by tablespace_name ) b
  8   where a.tablespace_name(+) = b.tablespace_name
  9  /

 

The disappearing next extent....

Connor, April 05, 2002 - 5:32 am UTC

Just about every query I've ever seen for free space management involves "Can the next extent be allocated without error?", which makes enough sense. Any suggestions on how this can be achieved for an auto-allocate LMT? (Not my choice of use -- but of course it is the default for 9i.)

sys@cust9> create tablespace small datafile '/u02/oracle/oradata/cust9/small.dbf' size 2m;

Tablespace created.

sys@cust9> create table small_tab ( x number ) tablespace small;

Table created.

sys@cust9> select segment_name, bytes, next_extent
  2    from user_segments
  3   where segment_name = 'SMALL_TAB'
  4  /

SEGMENT_NAME              BYTES NEXT_EXTENT
-------------------- ---------- -----------
SMALL_TAB                 65536

Cheers
Connor

Tom Kyte
April 05, 2002 - 9:33 am UTC

If there is free space -- and the free space exceeds the extent size -- there will be enough room.

Me, I use auto-extend datafiles (you either love them or hate them, I know.  I love them).  So to me, those queries are useless.  "df -k" tells me if I have enough space or not.

On my system, this query looks at locally managed, uniform allocation tablespaces:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.tablespace_name, a.initial_extent,
  2  case when b.bytes > a.initial_extent then 'Got space' else 'No space' end got_space
  3  from dba_tablespaces a,
  4       ( select tablespace_name, max( bytes ) bytes from dba_free_space group by tablespace_name ) b
  5  where a.tablespace_name = b.tablespace_name (+)
  6  and a.extent_management = 'LOCAL'
  7  and a.allocation_type = 'UNIFORM'
  8  /

TABLESPACE_NAME                INITIAL_EXTENT GOT_SPACE
------------------------------ -------------- ---------
RBS_TS_01                              524288 No space
RBS_TS_02                              524288 No space
RBS_TS_03                              524288 No space
RBS_TS_04                              524288 No space
RBS_TS_05                              524288 Got space
TEMP                                   524288 No space
USERS                                  524288 Got space
DRSYS                                  524288 No space

8 rows selected.

and shows which have space or not.  Looks like I'm in trouble -- but not really.  If I run my free.sql (search for free.sql) I would find that all of my files can autoextend as needed. 

 

Tuning monitoring scripts

A reader, January 31, 2003 - 5:59 pm UTC

Hi,

We have the following scripts run every 15 minutes to monitor:
1. whether any object (table/index) is close to reaching its max extents

select segment_type,
       segment_name,
       extents,
       max_extents,
       ( extents / max_extents ) * 100
  from user_segments
 where segment_type in ('TABLE', 'INDEX')
   and extents > (90/100) * max_extents

Performs approximately
DISK READS 1000
BUFFER GETS 280205


2. whether the tablespace has contiguous space big enough for the next extent.

select substr(segment_name, 1, 20),
       substr(segment_type, 1, 8),
       substr(tablespace_name, 1, 15),
       bytes,
       next_extent
  from sys.dba_segments a
 where rtrim(a.owner) = 'SCOTT'
   and a.tablespace_name in ( select tablespace_name
                                from sys.dba_data_files
                               group by tablespace_name )
   and next_extent > ( select max(bytes)
                         from sys.dba_free_space b
                        where b.tablespace_name = a.tablespace_name )
 order by next_extent desc

Performs approximately:

DISK READS 18450
BUFFER GETS 2828560


These are very expensive on production boxes. Is there any way to write them efficiently? Can you please help to provide a better sql?


Thanks a lot.

Tom Kyte
January 31, 2003 - 6:20 pm UTC

1) Alter the tables to set MAXEXTENTS UNLIMITED and stop running it -- or move everything to locally managed tablespaces, where the only option is MAXEXTENTS UNLIMITED.

Problem solved -- you don't need to run the query anymore. That query was relevant in 1995 and before.

2) See #1 and move to LMTs, which remove this issue altogether.

(to check every 15 minutes? that seems a tad extreme, maybe once a day -- off peak, but every 15 minutes???)
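Under those suggestions, the one-time fix might look like this sketch (object and file names are illustrative; the locally managed tablespace syntax is per 8i and later):

```sql
-- dictionary-managed quick fix: remove the max-extents ceiling
alter table scott.some_history_table storage (maxextents unlimited);

-- better: a locally managed tablespace, where unlimited extents
-- and uniform sizing are the only behavior
create tablespace new_lmt
  datafile '/u02/oradata/db/new_lmt01.dbf' size 100m
  extent management local uniform size 1m;

-- move the segments into it (indexes must be rebuilt after a move)
alter table scott.some_history_table move tablespace new_lmt;
alter index scott.some_history_idx rebuild tablespace new_lmt;
```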
