Drop or Rebuild Indexes with MANY MANY extents

Question and Answer

Tom Kyte

Thanks for the question, Roy.

Asked: February 01, 2008 - 11:47 am UTC

Last updated: March 31, 2008 - 5:49 pm UTC

Version: 10.2.0

Viewed 1000+ times

You Asked

Hi Tom,

We are upgrading an 8.1.7.4 database to 10.2.0.3 and doing some maintenance beforehand. I discovered a partitioned table with 4 locally partitioned indexes. The table is OK, but the index partitions were created mainly with 16k extents. The largest index was 24 GB with 1.1 million extents. I just completed a drop index command that ran (no kidding) for 6 days. Then I rebuilt the index in parallel in 33 minutes.

My question is this: I have 2 more indexes in that tablespace. Both are in similar shape but not as bad (one has 540,000 extents, the other about 800,000). The tablespace is 64 GB in size, made up of 38 datafiles. I was wondering if a DROP TABLESPACE INCLUDING CONTENTS AND DATAFILES might be a better approach to fixing these last 2 indexes than dropping and recreating each index. Will SMON suffer as much cleaning up temp segments with the drop tablespace command vs the 2 individual drop index commands?

Thanks.
Roy

and Tom said...

are these in locally managed (LMT) or dictionary managed tablespaces (DMT)?
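
For anyone checking their own system first, the extent management mode is recorded in dba_tablespaces (a generic diagnostic query, not part of the original answer):

select tablespace_name, extent_management, allocation_type
  from dba_tablespaces;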

If in DMT, then dropping of any sort will cause uet$ and fet$ to be hammered with recursive SQL (that is why it was six days, I'm sort of surprised it went that fast).

You might consider converting to an LMT first, then dropping; you'll find the extent deallocation to be MUCH faster.

That is, if you need to do this at all. Why not just fix the NEXT EXTENT for the future? The time having lots of dictionary managed extents impacts you is WHEN YOU DROP OR TRUNCATE THE SEGMENT. So, if you do not drop or truncate them, and you've already paid the price of allocating them, just leave them. Indexes are read using single block IO in general - root to branch to leaf, database block address by block address - so the number of extents doesn't really impact the traversal (unless you frequently index fast full scan them...)
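
A sketch of that fix, using made-up index and partition names - MODIFY DEFAULT ATTRIBUTES covers partitions added later, while existing partitions are altered one by one:

-- hypothetical names; pick a NEXT size sensible for a 24 GB index
alter index big_part_idx modify default attributes storage (next 100m);
alter index big_part_idx modify partition p2008q1 storage (next 100m);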


Comments

wouldn't converting hit uet$ and fet$ too?

Kirill, February 03, 2008 - 10:15 pm UTC

> ... uet$ and fet$ to be hammered ...
> ... consider converting to a LMT first, then dropping,
> you'll find the extent deallocation to be MUCH faster.

Just curious, wouldn't converting to lmt hit uet$ and fet$ just the same?


Tom Kyte
February 04, 2008 - 3:59 pm UTC

no, it is quite fast - does things behind the covers, in bulk.

migrate to local (in dbms_space_admin) is speedy. then it'll be a locally managed tablespace... where deallocating extents is pretty fast.

table t - 100,000 extents...


sys%DMT> exec sys.dbms_space_admin.tablespace_migrate_to_local( 'DMT' );
PL/SQL procedure successfully completed.
Elapsed: 00:00:15.08


sys%DMT> select count(*) from dba_extents where tablespace_name = 'DMT';

  COUNT(*)
----------
    100000
Elapsed: 00:00:00.20


sys%DMT> drop table ops$tkyte.t;
Table dropped.
Elapsed: 00:00:00.71


sys%DMT> select count(*) from dba_extents where tablespace_name = 'DMT';

  COUNT(*)
----------
         0
Elapsed: 00:00:00.02

Roy, February 03, 2008 - 10:35 pm UTC

Thanks Tom. We are attempting the cleanup for 2 reasons. First, we are having intermittent performance issues, and are trying to rule this out. Statspack frequently shows SQL against uet$/fet$ in the Top SQL, and Support has identified SMON as getting hammered during some of the system state dumps we have uploaded. And second, we eventually will need to do partition maintenance on this table and don't want it to take forever trying to drop partitions once the system is in production.

Regarding the conversion to LMT, based on your experience and the environment as I've described, any SWAGs on how long the conversion to LMT (and then the subsequent drop) might take? It's a 4 cpu HP server.

Chris, February 04, 2008 - 10:25 am UTC

>> Just curious, wouldn't converting to lmt hit uet$ and fet$ just the same?

If you use dbms_space_admin to convert the tablespace to locally managed, the updates to uet$ and fet$ happen as a single bulk operation rather than 1.1 million separate transactions as the extents are deallocated.

Roy, February 05, 2008 - 1:32 pm UTC

Hi Tom,

I conducted a similar test and did not see encouraging results. I created a table with 100,000 4k extents in a DMT. I then created 2 compound indexes on the table; each had 100,000 4k extents, but I interleaved them (used a PL/SQL block to issue alter index ... allocate extent against each in alternation). I then dropped 1 index; it took 2 hrs 7 minutes. Next I ran migrate_to_local on the tablespace; it took 2 hrs 43 minutes. The last drop index then took just 4 seconds. However, the migration took about 28% longer than dropping the index had. This was on 8i on a 2 CPU AIX box. I'm going to look for a 10g install to test on, but so far this doesn't seem as though it will help.

Any other thoughts? Also, is there a way to check the progress of the tablespace migration process? v$session_longops didn't show anything. Thanks.
Tom Kyte
February 05, 2008 - 2:09 pm UTC

I just did a migrate to local with hundreds of thousands of extents and it was "fast" - it was 10g however - I don't have an 8i with the resources needed to create 100,000's of extents.


v$session_longops only works for single contiguous long operations - eg like a full scan.
HASH JOIN
   FULL SCAN T1
   FULL SCAN T2


session long ops would be populated for the full scans (single long operation)

but

nested loops
   full scan t1
   index range scan t2_idx
       table access by index rowid t2



would not - because there is no SINGLE long running thing.


and freeing the extents is millions of really short operations - but do anything a million times and it adds up.
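
When a drop or migration does surface as a single long operation, a query along these lines (standard v$session_longops columns) is the usual way to watch it:

select sid, opname, target, sofar, totalwork,
       round(100 * sofar / totalwork, 1) pct_done
  from v$session_longops
 where totalwork > 0
   and sofar < totalwork;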

Roy, February 05, 2008 - 2:31 pm UTC

Okay, thanks for your help. I'm trying to find a 10g install that doesn't have a locally managed SYSTEM tablespace so I can create a DMT.
Tom Kyte
February 05, 2008 - 2:42 pm UTC

hah, I had the same problem :)

I had to create one....

Roy, February 05, 2008 - 9:12 pm UTC

Well ... same here, had to create a new DB with a DM SYSTEM tablespace. Didn't make a difference though. Must be some significant difference in our environments (last test was 10.2.0.3 on Linux), as my 10g test yielded similar results to my 8i test (just faster overall). Index drop - DMT - 37 minutes; TBS conversion - 43 minutes; index drop - LMT - 18 seconds. Doesn't look like there's any way to get this done by Saturday, so we'll have to resort to partition-by-partition online rebuilds.


SQL> @chk_size

SEGMENT_NAME                EXTENTS         MB
------------------------ ---------- ----------
T1                           100000        781
IDX2                         100000        781
IDX1                         100000        781

Elapsed: 00:00:00.01
SQL> drop index idx1;

Index dropped.

Elapsed: 00:37:51.72
SQL> @chk_size

SEGMENT_NAME                EXTENTS         MB
------------------------ ---------- ----------
T1                           100000        781
IDX2                         100000        781

Elapsed: 00:00:00.06
SQL> exec dbms_space_admin.tablespace_migrate_to_local('ROY');

PL/SQL procedure successfully completed.

Elapsed: 00:43:07.76
SQL> @chk_size

SEGMENT_NAME                EXTENTS         MB
------------------------ ---------- ----------
T1                           100000        781
IDX2                         100000        781

Elapsed: 00:00:00.08
SQL> drop index idx2;

Index dropped.

Elapsed: 00:00:18.99


Just curious ... in your test case, are you allocating all your extents contiguously? (Should that even matter?)
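
For reference, the partition-by-partition fallback mentioned above would look something like the following. The index, partition, and tablespace names are hypothetical, and which clauses can be combined varies by version:

alter index big_part_idx rebuild partition p200801
  tablespace new_lmt_ts online parallel 4 nologging;

Rebuilding each partition into a locally managed tablespace also means any later drop avoids the slow DMT extent cleanup entirely.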

Tom Kyte
February 05, 2008 - 9:50 pm UTC

10.2.0.3 on linux for me.

can I have your test scripts?

Roy, February 05, 2008 - 10:09 pm UTC

I had to tweak it a bit between versions but I think this should work.

set timing on

create tablespace roy datafile '/exports/roy01.dbf' size 5001m
extent management dictionary;

create table t1 (x date, y number, z varchar2(20)) tablespace roy
storage (initial 4k next 4k minextents 1 maxextents unlimited pctincrease 0);

alter table t1 add (a number, b varchar2(20));

create index idx1 on t1(x,y,z) tablespace roy
storage (initial 4k next 4k minextents 1 maxextents unlimited pctincrease 0);

create index idx2 on t1(x,a,b) tablespace roy
storage (initial 4k next 4k minextents 1 maxextents unlimited pctincrease 0);

-- the three blocks below allocate 19,999 + 40,000 + 40,000 = 99,999 extents
-- per segment; counting the initial extent, each segment ends at 100,000
declare
x number :=1;
begin
while x < 20000
loop
execute immediate 'alter table t1 allocate extent';
execute immediate 'alter index idx1 allocate extent';
execute immediate 'alter index idx2 allocate extent';
x:=x+1;
end loop;
end;
/

declare
x number :=1;
begin
while x <= 40000
loop
execute immediate 'alter table t1 allocate extent';
execute immediate 'alter index idx1 allocate extent';
execute immediate 'alter index idx2 allocate extent';
x:=x+1;
end loop;
end;
/

declare
x number :=1;
begin
while x <= 40000
loop
execute immediate 'alter table t1 allocate extent';
execute immediate 'alter index idx1 allocate extent';
execute immediate 'alter index idx2 allocate extent';
x:=x+1;
end loop;
end;
/

drop index idx1;

exec dbms_space_admin.tablespace_migrate_to_local('ROY');

drop index idx2;


Tom Kyte
February 06, 2008 - 1:10 pm UTC

Hmm, I do not see what you see.

ops$tkyte%DMT> drop index idx1;

Index dropped.

Elapsed: 00:56:13.05
ops$tkyte%DMT>
ops$tkyte%DMT> exec dbms_space_admin.tablespace_migrate_to_local('DMT');

PL/SQL procedure successfully completed.

Elapsed: 00:07:49.29
ops$tkyte%DMT>
ops$tkyte%DMT> drop index idx2;

Index dropped.

Elapsed: 00:00:22.22
ops$tkyte%DMT> spool off



it is not like I have a huge super powerful machine (it is a 3 year old dell workgroup server, 2 cpu)

I was the only user

Roy, February 06, 2008 - 8:53 pm UTC

Yes that is very strange. I retested by migrating the tablespace back to dictionary and repeating the test case. My results were very similar to yours. Then, I dropped and recreated the tablespace from scratch and started over, and my results went back to what I had seen before.

SQL> drop index idx1;

Index dropped.

Elapsed: 00:40:20.64
SQL>
SQL> execute dbms_space_admin.tablespace_migrate_to_local('ROY');

PL/SQL procedure successfully completed.

Elapsed: 00:55:05.45
SQL>
SQL>
SQL> drop index idx2;

Index dropped.

Elapsed: 00:00:18.34


Who knows ... could be that I have some other activity going on, could be some params different ... don't know. Anyway, thanks a lot for all the help Tom.

workaround?

Ke Qiu, March 31, 2008 - 5:37 pm UTC

As a follow up, I ran some more tests with SQL trace turned on and found that the time it takes to convert a dictionary managed tablespace to locally managed depends more on the number of rows in fet$ than in uet$. When I compared the tkprof output between a fast conversion (seconds) and a slow conversion (hours), the difference was due to the CPU time of the following SQL:

select length from fet$ where file#=:1 and block#=:2 and ts#=:3

Here are the tkprof stats of the fast run:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse       94      0.00       0.00          0          0          0          0
Execute     94      0.00       0.00          0          0          0          0
Fetch       94      0.01       0.00          0        318          0         48
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total      282      0.01       0.00          0        318          0         48

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS CLUSTER FET$ (cr=2 pr=0 pw=0 time=16 us)
      1   INDEX UNIQUE SCAN I_TS# (cr=1 pr=0 pw=0 time=5 us)(object id 7)


And the corresponding stats from the slow run:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse     1084      0.03       0.02          0          0          0          0
Execute   1084      0.08       0.08          0          0          0          0
Fetch     1083    325.09     355.85          0     890912          0       1060
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total     3251    325.20     355.96          0     890912          0       1060

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS CLUSTER FET$ (cr=2947 pr=0 pw=0 time=1065592 us)
      1   INDEX UNIQUE SCAN I_TS# (cr=1 pr=0 pw=0 time=12 us)(object id 7)

Although the access paths are the same for both runs, it takes much longer to access the clustered table FET$ in the slow run. Using a script from your book "Expert Oracle Database Architecture", I found there are over 2.3 million rows in FET$ for the tablespace to convert, or 2946 chained blocks for that specific tablespace.
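
A read-only way to gauge that per tablespace, without touching the sys-owned tables directly, is to count rows in dba_free_space, which sits over fet$ for dictionary managed tablespaces (a generic diagnostic, not from the original post):

select tablespace_name, count(*) free_extents
  from dba_free_space
 group by tablespace_name
 order by count(*) desc;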

I suppose that if I could create an index on FET$(file#, block#, ts#, length), then I wouldn't need to scan those chained blocks at all. However, I am not allowed to create indexes on a data dictionary table. Is there any way to get around this?
Tom Kyte
March 31, 2008 - 5:49 pm UTC

I cannot suggest anything that would muck with the sys owned tables at all, no.