Question and Answer

Tom Kyte

Thanks for the question, Amit.

Asked: May 10, 2001 - 2:27 pm UTC

Last updated: July 16, 2013 - 2:22 pm UTC

Version: 8.1.6.1

Viewed 50K+ times.

You Asked

Tom -

I am required to drop a number of columns from a few tables in a PROD schema.
The tables are big tables (for us), consisting of 13 million rows each, and they
already have a large number of columns.
The tables have referential integrity constraints and a lot of indexes.
The columns being dropped are not indexed.

I want to know what could be the best way to go about dropping these columns from those tables:

a. Do I need to create intermediate tables using "create table as select", then drop the original tables, create the indexes and constraints on the new tables, and rename them to the originals?

b. Do I drop the columns one statement at a time using "alter table drop column ..." statements?
c. Do I drop a series of columns using an "alter table drop (...)" statement?
d. Do I set the columns "unused" and not worry about the space they already hogged? We are looking at reclaiming the space...

I also need to know about side effects like "fragmentation" and "undo size".
We are "still" using dictionary managed tablespaces.

Do I need to use the checkpoint option if I decide to use drop column?

Does dropping columns leave the table fragmented, or does it rewrite it?

The downtime we can take is about 4 hours including some other release stuff.

Let me know what you think would be the best option, keeping in perspective fragmentation, time and space usage.

Please elaborate on any side effects of using "set unused ...".

Any help would be highly appreciated.

Thanks,
Amit.




and Tom said...

if the goal is to reclaim space then the choice would probably be

e) set unused all columns and then DROP with checkpoints.


that'll rewrite each block (it will not move the rows off of the block they are on -- that would force us to rebuild all indexes as well if we did that). so you should end up with more blocks in the freelists after you are finished.


the table will be very much as it was before, just each block will have more free space (and, depending on your pctused of course, more blocks may go back onto the freelist).
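
In command form, option (e) amounts to something like the following (the table and column names are placeholders for the poster's actual tables, and the checkpoint interval is arbitrary):

-- mark all of the unwanted columns unused in one fast dictionary operation
alter table big_prod_table set unused ( col_a, col_b, col_c );

-- then physically remove them, checkpointing as it goes so only a bounded
-- amount of active undo is held at any one time
alter table big_prod_table drop unused columns checkpoint 10000;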

Here is an example (search for dbms_space on my web site for show_space):

ops$tkyte@DEV816> create table t ( x int, y char(2000), z char(2000) );
Table created.

ops$tkyte@DEV816> insert into t select rownum, 'x', 'y' from all_objects where rownum < 1001;
1000 rows created.

ops$tkyte@DEV816> exec show_space( 'T' )
Free Blocks.............................5
Total Blocks............................1024
Total Bytes.............................8388608
Unused Blocks...........................19
Unused Bytes............................155648
Last Used Ext FileId....................7
Last Used Ext BlockId...................203209
Last Used Block.........................45

PL/SQL procedure successfully completed.


that shows the table is using 1005 blocks and 5 of those blocks are on the freelist

1* alter table t set unused column x
ops$tkyte@DEV816> /

Table altered.

ops$tkyte@DEV816> alter table t set unused column y;

Table altered.

ops$tkyte@DEV816> exec show_space('T')
Free Blocks.............................5
Total Blocks............................1024
Total Bytes.............................8388608
Unused Blocks...........................19
Unused Bytes............................155648
Last Used Ext FileId....................7
Last Used Ext BlockId...................203209
Last Used Block.........................45

PL/SQL procedure successfully completed.

that shows that setting the columns unused didn't do anything spacewise -- we are still at the same point

ops$tkyte@DEV816> alter table t drop unused columns checkpoint 500;

Table altered.

ops$tkyte@DEV816> exec show_space('T')
Free Blocks.............................1004
Total Blocks............................1024
Total Bytes.............................8388608
Unused Blocks...........................19
Unused Bytes............................155648
Last Used Ext FileId....................7
Last Used Ext BlockId...................203209
Last Used Block.........................45

PL/SQL procedure successfully completed.


That shows that we now have 1004 blocks on the free list (pretty much EVERY block is).


Comments

DBA

Pauline, May 10, 2001 - 9:36 pm UTC

Good answer with the example.

Amit Barve, May 11, 2001 - 9:45 am UTC


checkpoint 500

A reader, September 18, 2001 - 4:03 am UTC

Tom,

How did you come up with the figure 500 ?

Tom Kyte
September 18, 2001 - 8:51 am UTC

I "made it up". It was just a demo. 500 was about half of 1005 which is the number of blocks in the table. It was "half way". It is not a rule of thumb, it is not guidance -- it was just a demo.

Cool - but why...

A reader, September 18, 2001 - 9:21 am UTC

In the example, why do the total bytes remain the same?

Ramification for checkpoint 500

A Reader, September 18, 2001 - 9:28 pm UTC

From the Manual:
"If this statement is interrupted after a checkpoint has been applied, the table remains in an unusable state. While the table is unusable, the only operations allowed on it are DROP TABLE, TRUNCATE TABLE, and ALTER TABLE DROP COLUMNS CONTINUE. Submitting DROP COLUMNS CONTINUE while the table is in a valid state results in an error."

Seems like it must be used with great care. Any alternatives?

Thanks



Tom Kyte
September 19, 2001 - 7:32 am UTC

I don't see the need for "great care". All it is saying is that if the instance fails, you must resume this command from where it left off when you restart -- that's all.

To me that would be preferable if I was halfway through my 100gig table when the failure occurred. Rather than looking at the glass being half empty here, I'm looking at it as being half full. I only have 1/2 the work to finish -- I didn't have to ROLLBACK the 1/2 of the work I did. I just pick up EXACTLY where I left off.
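
For reference, a minimal sketch of that restart sequence (the table name and checkpoint values here are hypothetical):

-- the original space-reclaiming drop, checkpointing as it goes
alter table big_prod_table drop unused columns checkpoint 1000;

-- if the instance fails part way through, the table is left in a
-- partially dropped state; finish the operation from where it stopped:
alter table big_prod_table drop columns continue;

-- a checkpoint interval may also be supplied on the resume
alter table big_prod_table drop columns continue checkpoint 1000;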

How come Total Blocks = 1005?

PAwanputra, September 27, 2001 - 11:46 am UTC

Tom,
Excellent example. But you say
that shows the table is using 1005 blocks and 5 of those blocks are on the
freelist

But I see total blocks = 1024. Is this a typo?



Tom Kyte
September 27, 2001 - 2:18 pm UTC

No, but I could be more clear. The numbers are:

Free Blocks.............................5
Total Blocks............................1024
Total Bytes.............................8388608
Unused Blocks...........................19


When I said "using" I meant "really really using, not just allocated".

1005 = 1024-19

there are 19 UNUSED blocks.


monitoring drop column progress

A reader, September 10, 2002 - 5:54 am UTC

Hi Tom

How can we monitor how far the drop column command has progressed?
I thought of watching the checkpoint (setting checkpoint 1000) per session, but I can't find any view that shows that.

thank you

Tom Kyte
September 10, 2002 - 7:38 am UTC

v$session_longops is useful for monitoring any long running operation.

I kicked off an alter table drop column on a 1.4 million row table. Then, waited a bit and queried v$session_longops from another session -- using "where sid=<sid of the session doing the drop>"

It showed me:

SID : 8
SERIAL# : 3928
OPNAME : Rowid Range Scan
TARGET : BIG_TABLE.BIG_TABLE
TARGET_DESC :
SOFAR : 3603
TOTALWORK : 19630
UNITS : Blocks
START_TIME : 10-sep-2002 07:34:13
LAST_UPDATE_TIME : 10-sep-2002 07:35:56
TIME_REMAINING : 458
ELAPSED_SECONDS : 103
CONTEXT : 0
MESSAGE : Rowid Range Scan: BIG_TABLE.BIG_TABLE: 3603 out of 19630 Blocks done
USERNAME : BIG_TABLE
SQL_ADDRESS : 829556CC
SQL_HASH_VALUE : 2608864360
QCSID : 0
-----------------

PL/SQL procedure successfully completed.


Which shows how many blocks it has to process in total, how many it's done so far, how long it has been running, a guesstimate as to when it'll be done....

Very useful.
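
For reference, a hedged sketch of that query from the second session (the bind value is whatever SID the dropping session reports; the column list is trimmed to the interesting ones):

-- run from another session while the ALTER TABLE ... DROP is executing
select sid, opname, sofar, totalwork, units,
       time_remaining, elapsed_seconds, message
  from v$session_longops
 where sid = :dropping_sid
   and totalwork > 0
 order by last_update_time;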

drop /unused columns

Jack, January 08, 2003 - 4:25 pm UTC

Tom,
We have a production db and we want to keep the downtime as minimal as possible for db changes.
a) Is it better to set some columns to unused and then drop, OR directly drop the columns (approx. 5 million rows)? Which is faster (I know unused takes less time than drop)? I'm not sure how much time it takes to drop unused columns.
Can you provide some numbers like
drop columns ...x time.
unused columns and drop unused.....y time.

b) Can I drop unused columns when users are accessing the tables (online customers)?

c) If I keep unused columns, will the space usage still be the same, or will Oracle not store data for the unused columns?

thanks for your help.

Tom Kyte
January 08, 2003 - 6:27 pm UTC

or how about

a) set them unused. stop. period.

otherwise -- benchmark -- it is what I would do (and only you have the hardware you are running on).

My advice though -- just set unused.

b) no, but you could use DBMS_REDEFINITION in 9i to accomplish that.

c) oracle will not RECLAIM existing used space. Space will not be used for new rows.

Info is good. I have one more question that I am currently struggling with

src, November 16, 2006 - 12:23 pm UTC

Below is the sequence of steps I have done.

1) One column was set to unused a few days back
2) I started dropping the column with checkpoint (requested 25 hours of downtime from the users; it's a very big table).
3) I killed it after 25 hours. Only then did I come to know that the table is not accessible until I complete the drop column statement.
4) I have executed again "alter table drop columns continue".

Now I would like to see, the current status of the "alter table drop columns continue" statement.

The SID of this session doesn't have any entry in v$session_longops.

Please advise me.



Tom Kyte
November 16, 2006 - 3:35 pm UTC

not sure that you can monitor that one. sorry.

Thank you.

A reader, November 16, 2006 - 5:54 pm UTC

Thank you for the clarification Tom. Is it possible to find the original column name (which was set to unused)? After setting the column to unused, Oracle changes it to 'SYS_...' in dba_tab_cols. I am unable to find when it was set to unused and what the column name was before it was set to unused. Could you please help me?

Tom Kyte
November 17, 2006 - 3:13 am UTC

nope, it would not be relevant anyway, it is not like you can get to the data or anything.

Thank you

A reader, November 17, 2006 - 2:10 pm UTC

Thank you for the reply Tom. I found a surprising thing yesterday. After the system was rebooted, I started the drop columns continue statement with the checkpoint option. Now I can see 2/3/.../8/9...29/30 entries coming and going in v$session_longops for that SID. Opname is "Rowid Range Scan". The number of entries is not static.
I have a question currently. As per the plans, we are upgrading this machine to 10g tomorrow. If this process is still running tomorrow and we shut down the instance and upgrade to 10g, would there be any issues during the upgrade? Can we restart the process again after 10g?

Tom Kyte
November 20, 2006 - 2:24 am UTC

you should of course let this finish before upgrading.

locking done by "drop unused columns"

Jeremy Portzer, January 29, 2007 - 4:54 am UTC

Thanks for a great article and thread.

We are trying to drop an unused column from a large table, and are finding that the "drop unused columns" statement holds a table lock, causing all other sessions to wait on its completion before performing inserts and updates on this table. Is there any way around this? Maybe this is necessary because of the nature of the operation, but it is problematic as we need this table to remain fully online.

We are not using checkpoint because we don't want the risk of the table being invalid if the transaction doesn't succeed, and are willing to take the hit on necessary UNDO space. However it isn't clear to me if this affects the locking profile at all.

Thanks,
Jeremy
Tom Kyte
January 31, 2007 - 1:35 pm UTC

why not just set unused?

and if space reclaiming must be done - dbms_redefinition to online redefine the segment?
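
If the space reclaim really must happen online, a minimal DBMS_REDEFINITION sketch (assuming 10g or later, where copy_table_dependents exists; the schema, table and column names are hypothetical, and T_INTERIM is pre-created without the unwanted column):

-- 1) check the table can be redefined online using its primary key
exec dbms_redefinition.can_redef_table( 'SCOTT', 'T', dbms_redefinition.cons_use_pk )

-- 2) start the redefinition, mapping only the columns to be kept
exec dbms_redefinition.start_redef_table( 'SCOTT', 'T', 'T_INTERIM', 'id id, name name' )

-- 3) copy indexes, triggers, constraints and grants onto the interim table
variable nerrs number
exec dbms_redefinition.copy_table_dependents( 'SCOTT', 'T', 'T_INTERIM', -
       dbms_redefinition.cons_orig_params, true, true, true, false, :nerrs )
print nerrs

-- 4) catch up and swap; the table stays usable throughout
exec dbms_redefinition.sync_interim_table( 'SCOTT', 'T', 'T_INTERIM' )
exec dbms_redefinition.finish_redef_table( 'SCOTT', 'T', 'T_INTERIM' )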

Set UNUSED and DROP or Just Set UNUSED

Lakshaman, April 19, 2007 - 6:48 am UTC

We ship a 360-column table. Customers have an average of 1/2 million rows in this table.
Would setting columns [nearly 30] to UNUSED improve performance, or do we have to drop them too?

The columns are a mix of NUMBER and VARCHAR2 and are not part of any index on the table.
Tom Kyte
April 19, 2007 - 7:29 am UTC

it would not improve performance to set them unused, it would immediately make them invisible (they are still there, you just cannot access them anymore)

it may or may not improve performance to drop them. "it depends".

for a full scan - it'll reduce the work to process a given row (less data to parse over), and if you have millions of rows, this might be noticeable - it will not make the table smaller, so IO won't change, but the cpu used might go down a little.

for an index retrieval of a few rows, you likely won't notice much if any change.

drop unused checkpoint xxx

Alvin, March 18, 2008 - 4:21 pm UTC

Great answer as always.

A couple of follow up questions:

From the Documentation
---8<---
" Checkpointing cuts down the amount of undo logs accumulated during the DROP COLUMN operation to avoid running out of undo space. "

How does it achieve reducing the undo? Every time it does a checkpoint, does it tag the just-used undo blocks as "free to re-use again", so that the whole operation just re-uses whatever chunk of undo blocks it was using?

2. In your first answer to this thread.

I see that 1005 of 1024 blocks were relocated to the free list. Does the "checkpoint" option tell Oracle to "forget about the pctused/pctfree criteria"? Or is it relying on/assuming that we are using ASSM?


Tom Kyte
March 24, 2008 - 9:20 am UTC

it should read more like "reducing the total amount of active UNDO held by this process at any given point in time"

it increases the overall amount of undo generated, however, at any given point in time - it reduces the amount of ACTIVE undo that cannot be reused.


For example, let us say that it would take 100mb of UNDO if no checkpointing happened. So, near the end you would need 100mb UNDO to be allocated to this single process, it would 'own' all 100mb

With checkpointing, it might only need 1mb, then 'commit' and 'release' this 1mb. It might have to do that 200 times to accomplish the same work - so, it generated 200mb of undo, but only needed 1mb of undo at any point in time.

Since this process is written to be RESTARTABLE (picks up where it left off) and since this process is ok with people seeing the table before it is entirely done (it hides the column from view, and then starts dropping it) and since this process would not be subjecting itself to an ora-1555 because of the way it is developed - it is safe for it to checkpoint.

But please remember, the developers wrote a lot of code to make it restartable - safe for people to see it partially done - and to make it not be subject to a 1555, so they can checkpoint.



as for #2, not sure what you mean. pctfree is never ignored for table blocks (that is used during insert processing and used by both manual and automatic segment space management)

and ASSM has the concept of pctused, you just do not set it, it does it automatically.

Need some suggestions

Anil, July 18, 2009 - 12:29 pm UTC

Hello Tom,
Most of my queries have been answered in this section.

If I go with the manual method (drop unused columns) of dropping the columns, then do I need to re-org the table?

Our db is on Oracle 9i.


Tom Kyte
July 24, 2009 - 9:38 am UTC

there are two answers to all technical questions

a) why? (why are you wanting to do that...)
b) it depends

(b) is the answer here.


If you drop a column, it will leave that space "free" on the block. Suppose you drop a column that is 50% of the row width on average (it is big). You will end up with a table that is more than half empty - it might benefit you to reorganize it.

Suppose you drop a column that is tiny; the resulting new free space on each block is small, and it might not benefit you to reorganize.
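
If the reorganization is judged worthwhile, a minimal sketch under the assumption of a heap table that can take a brief outage (table, index and statistics calls are illustrative; an ALTER TABLE MOVE leaves every index UNUSABLE until rebuilt):

-- rewrite the table compactly below a new high water mark
alter table t move;

-- a move changes rowids, so each index must be rebuilt afterwards
alter index t_pk rebuild;

-- refresh optimizer statistics on the reorganized segment
exec dbms_stats.gather_table_stats( user, 'T' )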

Column seeming as unused in a table but not in dba_unused_col_tabs

Sumit Thapar, April 01, 2010 - 2:58 am UTC

Hi Tom,

There is a plethora of knowledgeable material to read in any of the threads, and I thank you every time for enlightening us.

I have a question with regards to unused columns. We have a table in the database which shows a column named "SYS_NC00041$" when we query the user_tab_cols view, but it does not appear in normal table selects.
Assuming this column to be an unused column, I queried the dba_unused_col_tabs view.
But the table containing this column is not listed in it.

As a result, I am clueless and need your guidance to find out what exactly it is.

Thanks a lot in advance..
Tom Kyte
April 05, 2010 - 12:54 pm UTC

that is just a hidden column. it is not unused.

could be a virtual column, a function based index, some object column whatever.

give us the output of dbms_metadata.get_ddl/get_dependent_ddl for that table and we can shed more light.
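
In the meantime, a hedged sketch of a dictionary query that often helps distinguish these cases (column names assumed from *_TAB_COLS; VIRTUAL_COLUMN only exists from 11g, so leave it out on older releases):

-- list the hidden columns on the table; columns supporting function-based
-- indexes typically show up with SYS_NC... names
select column_name, data_type, hidden_column, virtual_column
  from user_tab_cols
 where table_name = 'PRODUCT'
   and hidden_column = 'YES';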

Thanks, required details...

Sumit Thapar, April 09, 2010 - 2:28 am UTC

Hi Tom,

Thanks a lot for addressing the question. Below are the results as required...


SQL> SELECT DBMS_METADATA.GET_DDL(object_type => 'TABLE',name => 'PRODUCT',schema => 'SCH_OWNER') DETAILS FROM DUAL

DETAILS
--------------------------------------------------------------------------------

  CREATE TABLE "SCH_OWNER"."PRODUCT"
   ( "PROD_ID" NUMBER(15,0) NOT NULL ENABLE,
 "PROD_NM" VARCHAR2(35) NOT NULL ENABLE,
 "NDC" VARCHAR2(11) NOT NULL ENABLE,
 "BRND_CD" VARCHAR2(10) NOT NULL ENABLE,
 "PROD_NM_EXTN_TXT" VARCHAR2(35) NOT NULL ENABLE,
 "OTC_IND_CD" CHAR(1) NOT NULL ENABLE,
 "PKG_SIZE_QTY" NUMBER(11,3) NOT NULL ENABLE,
 "PKG_SIZE_UOM_CD" VARCHAR2(2) NOT NULL ENABLE,
 "PKG_QTY" NUMBER(*,0) NOT NULL ENABLE,
 "PKG_DESC_TXT" VARCHAR2(10) NOT NULL ENABLE,
 "DSG_FORM_ABBR_CD" VARCHAR2(4) NOT NULL ENABLE,
 "DEA_CLAS_CD" VARCHAR2(5),
 "THER_CLAS_CD" NUMBER(*,0) NOT NULL ENABLE,
 "GNRC_CD" VARCHAR2(1) NOT NULL ENABLE,
 "MTRC_STRN_QTY" NUMBER(11,3) NOT NULL ENABLE,
 "UOM_NM" VARCHAR2(11) NOT NULL ENABLE,
 "SELL_UOM_NM" VARCHAR2(10) NOT NULL ENABLE,
 "THER_EQVL_CD" VARCHAR2(2) NOT NULL ENABLE,
 "GCN_TXT" VARCHAR2(5) NOT NULL ENABLE,
 "ACTV_STAT_CD" CHAR(1) NOT NULL ENABLE,
 "NHUI_CD" VARCHAR2(1) NOT NULL ENABLE,
 "GPI_TXT" VARCHAR2(14) NOT NULL ENABLE,
 "ANCL_CD" VARCHAR2(2),
 "MULT_DSG_CD" CHAR(1),
 "PKG_STRN_QTY" NUMBER(11,3),
 "MEDI_PKG_SIZE_TXT" VARCHAR2(4),
 "MEDI_PKG_DESC_TXT" VARCHAR2(10),
 "PKG_STRN_UOM_CD" VARCHAR2(4),
 "MEDI_PKG_STRN_QTY" NUMBER(11,3),
 "MFG_ID" NUMBER(15,0),
 "CRTE_BY" VARCHAR2(61) NOT NULL ENABLE,
 "CRTE_DT" TIMESTAMP (6) DEFAULT SYSDATE NOT NULL ENABLE,
 "UPDT_BY" VARCHAR2(61) NOT NULL ENABLE,
 "UPDT_DT" TIMESTAMP (6) DEFAULT SYSDATE NOT NULL ENABLE,
 "EFF_STRT_DT" TIMESTAMP (6) DEFAULT SYSDATE NOT NULL ENABLE,
 "EFF_END_DT" TIMESTAMP (6) DEFAULT to_date('12/30/9999', 'MM/DD/YYYY') NOT NULL
 ENABLE,
 "REC_VER_ID" NUMBER(15,0) NOT NULL ENABLE,
 "REC_ORIG_CD" VARCHAR2(10) DEFAULT 'NOT_SPEC' NOT NULL ENABLE,
  CONSTRAINT "PRODUCT_PK" PRIMARY KEY ("PROD_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "DATA01_1M"  ENABLE,
  CHECK (REC_ORIG_CD IN ('CATS', 'HBS_SPARCS', 'EPH', 'NOT_SPEC', 'RXCON
N', 'HMS')) ENABLE,
  CHECK (ANCL_CD IN (0, 1, 2, 3, 4, 5, 6)) ENABLE,
  CHECK (MULT_DSG_CD IN ('N', 'Y')) ENABLE,
  CHECK (REC_ORIG_CD IN ('ESPS', 'CATS', 'HBS_SPARCS', 'EPH', 'NOT_SPEC', 'RXCON
N', 'HMS')) ENABLE,
  CHECK (ANCL_CD IN (0, 1, 2, 3, 4, 5, 6)) ENABLE,
  CHECK (MULT_DSG_CD IN ('N', 'Y')) ENABLE,
  CHECK (REC_ORIG_CD IN ('ESPS', 'CATS', 'HBS_SPARCS', 'EPH', 'NOT_SPEC', 'RXCON
N', 'HMS')) ENABLE,
  CHECK (ANCL_CD IN (0, 1, 2, 3, 4, 5, 6)) ENABLE,
  CHECK (MULT_DSG_CD IN ('N', 'Y')) ENABLE,
  CHECK (REC_ORIG_CD IN ('ESPS', 'CATS', 'HBS_SPARCS', 'EPH', 'NOT_SPEC', 'RXCON
N', 'HMS')) ENABLE,
  CHECK (ANCL_CD IN (0, 1, 2, 3, 4, 5, 6)) ENABLE,
  CHECK (MULT_DSG_CD IN ('N', 'Y')) ENABLE,
  CHECK (REC_ORIG_CD IN ('ESPS', 'CATS', 'HBS_SPARCS', 'EPH', 'NOT_SPEC', 'RXCON
N', 'HMS')) ENABLE,
  CHECK (ANCL_CD IN (0, 1, 2, 3, 4, 5, 6)) ENABLE,
  CHECK (MULT_DSG_CD IN ('N', 'Y')) ENABLE,
  FOREIGN KEY ("MFG_ID")
   REFERENCES "SCH_OWNER"."MANUFACTURER" ("MFG_ID") ENABLE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "DATA01_1M"


--==============================================================================================


SQL> SELECT DBMS_METADATA.GET_DEPENDENT_DDL(object_type => 'INDEX',base_object_name => 'PRODUCT') DETAILS from dual;

CREATE INDEX "SCH_OWNER"."PRODUCT_N1" ON "SCH_OWNER"."PRODUCT" ("MFG_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "DATA01_1M"

CREATE INDEX "SCH_OWNER"."PRODUCT_N2" ON "SCH_OWNER"."PRODUCT" (UPPER("BRND_CD"))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "DATA01_1M"

CREATE INDEX "SCH_OWNER"."PRODUCT_N3" ON "SCH_OWNER"."PRODUCT" (UPPER("PROD_NM"))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "DATA01_1M"

CREATE UNIQUE INDEX "SCH_OWNER"."PRODUCT_PK" ON "SCH_OWNER"."PRODUCT" ("PROD_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "DATA01_1M"

CREATE UNIQUE INDEX "SCH_OWNER"."PRODUCT_U2" ON "SCH_OWNER"."PRODUCT" ("NDC", TRUNC("EFF_STRT_DT"))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "DATA01_1M"

---------------
Thanks for the help and guidance being provided.

Tom Kyte
April 13, 2010 - 8:35 am UTC

.. (UPPER("BRND_CD"))..

function based indexes do that - a hidden column is used for the index.

Thanks a lot for clearing out the cloud...

Sumit Thapar, April 22, 2010 - 4:21 am UTC


Simple Drop Statement

Sumit, July 16, 2010 - 10:51 am UTC

Hello Tom,
For a DW application, I have a table with 200 columns and 10 Million rows.
I added a column, which took about a second. But the drop statement on the newly added column took 10 minutes.

alter table test drop column val5;

The column to be dropped didn't have any value.

Can you please let me know why the drop operation took such a long time?

Tom Kyte
July 19, 2010 - 1:24 pm UTC

... The column to be dropped didn't have any value....

and you think we knew that why? We had to go look and see. If you wanted "fast drop", use "set unused" instead.

Drop directly vs. set unused and drop

Murat, October 08, 2010 - 7:22 am UTC

Dear Tom,

Thanks for your share.
I tried to drop a column from a table that has 6 million rows without setting it to unused and it took 9 minutes, while setting the column to unused and then dropping it took 10 minutes.

P.S. The DB was only running my DDLs. Nothing else was running on the DB during the executions.

Tom Kyte
October 11, 2010 - 11:39 am UTC

and the difference between 9 and 10 minutes of elapsed time is zero as far as I'm concerned.

Run it again and the 9 will turn into 10 and the 10 into 9 and so on. In the grand scheme of "time" that isn't much of a difference at all - and the runtimes will vary.

You didn't need to drop it at all, setting unused would have achieved your goal.

Dropping Unused takes time.

Snehasish Das, January 13, 2011 - 2:12 am UTC

Hi Tom,

Can you please let me know why DROP UNUSED COLUMNS takes a lot more time than DROP COLUMN? What is the difference in the internal mechanism of DROP UNUSED and DROP COLUMN?

Also, can you let me know how to find which columns are unused in a table? I know we can see the number of columns which are unused, but if I want to know what the column names and data types were, can I find that out?

Currently we have to drop some unused columns in production on a huge table (45GB) and want to know how long it would take to drop the unused columns.

I tested it on a 3GB table and the time it took was about 1 hour and 45 minutes.

Regards,
Snehasish Das,
Technical Architect.
Tom Kyte
January 13, 2011 - 10:20 am UTC

I believe you either have a really really really slow machine - or your machine was quite busy doing something else and didn't have the time to pay attention to you - or you had outstanding transactions and were waiting for them.


They do about the same thing - dropping a column and dropping unused columns.

There is no direct way to see which columns are unused - however, if you truly truly have a "production" table and no idea how many columns are unused - you have a big problem that needs to be fixed straight up. You apparently have no configuration management - it is a little like the wild wild west. That needs to get under control.

I just tested on my laptop, in a virtual machine, while everything else normal on my laptop was running - another virtual machine, web browser, email, gadgets, IM client and so on.... And it only took a few minutes on a 1.1gb table to drop a column.


You might want to trace it next time (dbms_monitor.session_trace_enable( waits=>true )) to see if you are waiting a long time on some resource. If you see long elapsed and low CPU with no other waits - you were cpu starved; if you see long elapsed with low CPU and other waits - you were waiting for that resource, be it IO, log space, whatever.

Also - remember - the fastest way to do something is to NOT DO IT. Perhaps - you should just set unused and be done with it. If you want to 'reclaim the space', you could use an online reorg with dbms_redefinition to do it - then it doesn't really matter how long it takes since it is.... online.

ops$tkyte%ORA11GR2> prompt building big table
building big table
ops$tkyte%ORA11GR2> set termout off
ops$tkyte%ORA11GR2> select blocks*8/1024/1024 gb from user_tables where table_name = 'BIG_TABLE';

        GB
----------
1.12217712

Elapsed: 00:00:00.06
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set timing on
ops$tkyte%ORA11GR2> alter table big_table drop column object_name;

Table altered.

Elapsed: 00:02:19.21
ops$tkyte%ORA11GR2> alter table big_table drop column owner;

Table altered.

Elapsed: 00:02:30.84
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> prompt building big table
building big table
ops$tkyte%ORA11GR2> set termout off
ops$tkyte%ORA11GR2> select blocks*8/1024/1024 gb from user_tables where table_name = 'BIG_TABLE';

        GB
----------
1.12217712

Elapsed: 00:00:00.01
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> alter table big_table set unused (object_name);

Table altered.

Elapsed: 00:00:00.05
ops$tkyte%ORA11GR2> alter table big_table drop unused columns;

Table altered.

Elapsed: 00:02:31.52
ops$tkyte%ORA11GR2> alter table big_table set unused (owner);

Table altered.

Elapsed: 00:00:00.10
ops$tkyte%ORA11GR2> alter table big_table drop unused columns;

Table altered.

Elapsed: 00:02:44.68
ops$tkyte%ORA11GR2> 
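
For completeness, a minimal sketch of the tracing suggested above (the trace file name and tkprof options are illustrative and vary by release and configuration):

-- in the session that will run the drop
exec dbms_monitor.session_trace_enable( waits => true, binds => false )

alter table big_table drop unused columns checkpoint 1000;

exec dbms_monitor.session_trace_disable

-- then, on the server, format the resulting trace file, for example:
-- $ tkprof orcl_ora_12345.trc drop_col.prf sys=no sort=exeela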



Alternative method of altering large tables

Karan, February 08, 2011 - 11:33 pm UTC

Hi Tom,
I face a similar problem of altering numerous large tables (>10 million rows) in production. I wanted to know whether we can follow the steps below to speed up our alters instead of using the unused columns method:
1) Take a backup with indexes and constraints (using create table as select *.... and taking a backup of constraints/indexes)
2) Drop the indexes
3) Truncate the existing table
4) Apply the alter statements on the original table
5) Populate the table from the backup (insert into tbl as select col1,col2...from bkp_tbl)
6) Apply the indexes and constraints and drop the bkp table

Since these are production tables and I've never done it before, I'm a little apprehensive about setting columns as unused and dropping/modifying columns.
Tom Kyte
February 10, 2011 - 4:04 pm UTC

why do you think this would be faster/easier? It would take longer and be error prone and a lot more complex.

If you are just trying to get rid of a column why not just set it unused??? period - done - fast, clean, simple.

There are good reasons to drop columns

Rich, March 04, 2011 - 1:39 pm UTC

In the preceding threads, the question is repeatedly posed, "Why not just set the columns to UNUSED?"  If the source table, from which the column needs to be removed, is a partitioned table, you may want to set the column UNUSED and DROP it.  

For example, if you create table as select (CTAS), using the partitioned table as the source, and then attempt an ALTER TABLE EXCHANGE PARTITION ....., you can encounter this issue if there are UNUSED or HIDDEN columns in the partitioned source.

create table non_partitioned_exchange as select * from partitioned_table where 0 = 1;

Table created.

SQL> alter table partitioned_table exchange partition anypartition with table non_partitioned_exchange;

alter table partitioned_table exchange partition anypartition with table non_partitioned_exchange
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

I recently ran into this due to a HIDDEN column (M_ROW$$) that was created by using DBMS_REDEFINITION.  My only real workaround was to set UNUSED AND DROP.  

Tom Kyte
March 07, 2011 - 9:40 am UTC

So, tell me, does this mean you always want to drop the column?

No, you hardly ever need to drop the column physically (just wait for a reorg at some future point if you want).

So, this is why I ask "WHY".

If you don't have any solid technical reason "WHY", I just say "the fastest and easiest way to do it then would be to not do it".

I can come up with many "WHY"s - but if the person asking "how do i" doesn't have one of them - "why" bother?

There are only two answers to all technical questions - the first answer should be:

"why", "why do you want to do that"

and upon discovering a valid line of reasoning for doing something, the second answer is:

"it depends" :)

We almost always need more information before answering.


Your subject should have been:

There are sometimes, in certain specific cases, good reasons to drop columns.

In general, you probably don't need to.

Partition exchange with hidden columns

Drew, April 26, 2011 - 8:01 am UTC

I ran into the partition swap mismatch error when the partitioned table had some columns set to "unused". The only workaround I could think of was to add "dummy" columns to my working table and set them to UNUSED. I used all_unused_tab_cols to determine how many "dummy" columns I needed.

Are there any other alternatives to dropping the unused columns in this scenario?

Drop column in a BIG table

Marcel, June 01, 2011 - 6:18 am UTC

I set a column unused in a big table (100GB) and after that dropped the unused column with checkpoint. At that point the table was totally inaccessible: I could not do a select or any DML on the table. If I cancel the session, the table is still inaccessible and a new Oracle error shows up saying to continue the drop column, and I cannot even monitor the progress of the drop column. But if I run the command without checkpoint, I can select from the table, I can cancel the session, and I can monitor the rollback in v$session_longops. So I think drop column with checkpoint is very risky for big tables.
Does Oracle 11g have the same behavior for drop column?

thanks.

Drop Column from HASH Partition Table

Rajeshwaran, Jeyabal, September 07, 2011 - 8:54 pm UTC

Tom:

We have a table of 300M records, 62GB in size, HASH partitioned (8 partitions). We need to drop a column from this table.

(i) Column datatype is NUMBER
(ii) No index defined on this column
(iii) A foreign key constraint is available on this column

alter table t drop column x
This took nearly 2 hrs 10 mins.

alter table t set unused column x
- This took less than 1 sec

I am planning to go with the UNUSED column option rather than DROP. Please let me know your thoughts.
Tom Kyte
September 08, 2011 - 5:13 pm UTC

I'm a fan of unused myself. It gets rid of the column in a virtual sense - without having to rewrite the entire table.

I'm a fan of "set unused" whenever possible

Drop Column from HASH Partition Table

Rajeshwaran, Jeyabal, September 09, 2011 - 9:10 am UTC

Tom:

I was trying to find out why the DROP COLUMN command is getting slowed down, so I enabled tracing, and TKPROF shows me this.

********************************************************************************

alter table big_table drop column OBJECT_NAME


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1     53.12      61.70      43631      46978    6182028           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     53.12      61.70      43631      46978    6182028           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 244  

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.03          0.03
  rdbms ipc reply                               223        0.00          0.05
  i/o slave wait                               2745        0.26          7.42
  db file scattered read                       2745        0.26          7.99
  db file sequential read                         1        0.00          0.00
  log file switch completion                      8        0.07          0.17
  latch: checkpoint queue latch                   1        0.00          0.00
********************************************************************************

Why is 43K of physical IO involved with this DROP column? Is it that Oracle internally executes the below statement as part of the DROP column command?
update big_table set object_name = NULL;

Tom Kyte
September 09, 2011 - 9:38 am UTC

the drop column actually rewrites the blocks - set unused is just a flag delete.

we don't set the column null - we go and rewrite every row removing the column

Drop Column from HASH Partition Table

Rajeshwaran, Jeyabal, September 09, 2011 - 8:29 pm UTC

we don't set the column null - we go and rewrite every row removing the column

rajesh@ORA10GR2> exec show_space(user,'BIG_TABLE','TABLE');
l_total_blocks****************  46080
l_total_bytes*****************  377487360
l_unused_blocks***************  680
l_unused_bytes****************  5570560
l_last_used_extent_file_id****  267
l_last_used_extent_block_id***  3023377
l_last_used_block*************  344
l_unformatted_blocks**********  0
l_unformatted_bytes***********  0
l_fs1_blocks******************  0
l_fs1_bytes*******************  0
l_fs2_blocks******************  0
l_fs2_bytes*******************  0
l_fs3_blocks******************  0
l_fs3_bytes*******************  0
l_fs4_blocks******************  0
l_fs4_bytes*******************  0
l_full_blocks*****************  45117
l_full_bytes******************  369598464

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.43
rajesh@ORA10GR2>
rajesh@ORA10GR2> alter table big_table drop column object_name;

Table altered.

Elapsed: 00:00:53.20
rajesh@ORA10GR2>
rajesh@ORA10GR2>
rajesh@ORA10GR2> exec show_space(user,'BIG_TABLE','TABLE');
l_total_blocks****************  46080
l_total_bytes*****************  377487360
l_unused_blocks***************  680
l_unused_bytes****************  5570560
l_last_used_extent_file_id****  267
l_last_used_extent_block_id***  3023377
l_last_used_block*************  344
l_unformatted_blocks**********  0
l_unformatted_bytes***********  0
l_fs1_blocks******************  0
l_fs1_bytes*******************  0
l_fs2_blocks******************  38279
l_fs2_bytes*******************  313581568
l_fs3_blocks******************  0
l_fs3_bytes*******************  0
l_fs4_blocks******************  0
l_fs4_bytes*******************  0
l_full_blocks*****************  6838
l_full_bytes******************  56016896

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
rajesh@ORA10GR2>


1) Are these free blocks created by the DROP COLUMN statement the free blocks below the HWM? If yes, will these blocks be reused by subsequent data loading?

Tom Kyte
September 10, 2011 - 3:12 pm UTC

yes, they would all be - by definition - below the high water mark, they contain data.

You just put them on the free list by rewriting them all and removing that column from them. They can be used by subsequent conventional (not direct) path loads.

curious duration

Michael, September 21, 2011 - 4:19 am UTC

hi tom,

we had a column defined as NUMBER in a table on the production system (11gR1).

development did an "update set column = NULL" on this column in the production system.

we did a conventional path export/import of this table (production => development).

development did a "alter table drop column" on this development system - it took a few minutes.

development did a "alter table drop column" on the production system - it took several hours.

Do you have any clue on this? Could it be that the physical column was already cleaned out by the exp/imp operation? I remember NULLs are not physically stored anymore if they are at the end of the row.

Cheers,
Michael


Tom Kyte
September 21, 2011 - 9:36 am UTC

on production - was there anyone else logged in doing stuff? I assume on development you had the machine all to yourself.

but yes, the export/import likely had something to do with this as well - not that it cleaned out the column - but rather that you ended up with a nicely compacted table - with fewer blocks - no migrated rows - etc.

Or, if the column was last in the table - then it would not have existed in the newly imported table and the cleanup would have been fast (no bytes to change in the block)

I like "set unused" myself - instantaneous. You might not get the space back right away, but it is instantaneous.

curious duration

Michael, September 22, 2011 - 6:40 am UTC

tom,

I don't think there was anything else running on production, as it was an "only us" maintenance window.

The segment seems to be quite similar on both sides: 51M rows in 55K extents or 3.5M blocks, 505 bytes avg row size.

The column was at the tail of the row, but I found the "update set null" itself doing the physical column cleanout when I played around with some block dumps.

I did an "analyze table list chained rows" now and found the production table has 3.3M migrated rows (development 0). This was not shown in dba_tab_statistics (chain_cnt=0) even though we did a full dbms_stats.gather_schema_stats on the weekend!? Any clue? (It's an ASSM tablespace / 11.1.0.7 EE.)

The "analyze" took 1:30 hours on production while running in 19 minutes on development. Do you think this could be only because of the ~7% migrated rows in production?

cheers,
michael
Tom Kyte
September 22, 2011 - 5:49 pm UTC

Were the rows migrated BEFORE the change? I believe so. So, there was a lot of tail chasing in production that was not done in your test environment.

dbms_stats does not collect chained rows counts, the optimizer doesn't use it so dbms_stats doesn't collect it.


I can say the analyze might have been slower because:

o there was other stuff in production going on maybe - contention for shared resources.

o the chained rows and the chasing down of the tails (lots of single block IO instead of big reads)

o the IO on production is botched somehow and very slow.


You might sql trace one of these to see what sort of IO waits you see - or use ASH/AWR to analyze what you've already done.

curious duration

Michael, September 23, 2011 - 6:45 am UTC

hi tom,

Yes, the rows had been migrated before the change. DB statistics (e.g. v$filestat) show IO on production to be faster than on development. Anyway, I will do a SQL trace to look into the details.

Do you really think 7% migrated rows can have such a large impact? How is the "alter table drop column" getting the rows? I saw a lot of "rowid range scans". Does it follow [a|every] migration pointer even if it points to "another rowid range"? (Simply getting and manipulating every row in the segment, like a full table scan does, would also work.)

cheers,
michael

Tom Kyte
September 23, 2011 - 9:42 am UTC

it has to chase the row down, it gets a request to read all rows between rowid X and Y - that would be a nice multiblock read.

until it hits a row that is migrated, all it gets then is a pointer to where the row really is. And it has to read that row using a single block IO.

If you do that 3.3 million times and it incurs a physical IO (say that is 5ms), then you are looking at

ops$tkyte%ORA11GR2> select 3300000 * (5/1000) / 60/ 60 hours from dual; 

     HOURS
----------
4.58333333



4.5 hours of IO just to do these single block reads - let alone the writes and so on that have to take place.



Is there a security issue with set unused?

A reader, September 26, 2011 - 12:52 pm UTC

Hi Tom,

I am also a fan of set unused; however, as we become more security conscious in this day and age, I am looking at the security aspect of "set unused".

Is dropping a column more secure, since "set unused" does not remove the old data? Let's say my column contained sensitive data and I don't want to be subject to hackers getting at data files (yes, I understand there should be security in place that even prevents access at that level), but I just want to understand: if I do a "set unused", what is my vulnerability? Should I be worried about that when using "set unused"?

Thanks!


Tom Kyte
September 26, 2011 - 6:53 pm UTC

who is to say that dropping the column doesn't leave bits about? It can and will - as will alter table T move, create table as select - etc. We don't necessarily overwrite anything.

And then there is the existence of the data in undo, redo, temp and so on.

If you need to wipe data out - you have to move *everything* out of the tablespace that the data was in (except the data you want to wipe out of course) and then drop that tablespace with the datafiles. And then think about your archives, redo, undo and temp and what you need to do with them.

set unused would tend to leave more bits around, but everything short of removing the datafiles will do that to some degree.

And then you have to figure out how to wipe the filesystem :)

Alexander, January 18, 2012 - 8:29 am UTC

Tom,

Can you explain what happens behind the scenes when you drop a table? Someone I know tried to drop a 300GB table and it ran for quite a long time, a couple hours I think. So what's it doing? I figured it would be the same as a truncate. It was too big to go into the recycle bin, so I don't think that came into play.
Tom Kyte
January 18, 2012 - 9:52 am UTC

It cannot be too big to go into the recycle bin, since your tablespace is the recycle bin - the recycle bin is a logical concept - not a "place". When you drop a table in 10g and above, by default, we just update the segment name mostly to say "this is really not here anymore, ignore it" - but we don't move or touch the data. In fact, in general, the recycle bin makes things able to be dropped faster since it really isn't doing much.

One thing we do is checkpoint the object out to disk - so if the table had lots of blocks in the cache, that would take a while.

If the recycle bin was de-activated (they turned the feature off or used purge on the drop) then we would have had to actually deallocate the extents.

Was this table in a dictionary managed tablespace?

Alexander, January 18, 2012 - 10:16 am UTC

I was told it wasn't in the recycle bin after the drop so that's why I said that.

I'm pretty sure it was a DMT, I'm getting confirmation on that.
Tom Kyte
January 18, 2012 - 10:23 am UTC

dictionary managed tablespaces cannot do the recycle bin trick.

Here is what happened:

you were using a DMT (dictionary managed tablespace)

the object had a lot of dictionary managed extents - hundreds would be enough, more would just be piling on

every extent had to be deallocated, one by one

it is very very very expensive to allocate or deallocate a dictionary managed extent, every single one is a recursive transaction with a commit.


so, it took hours to deallocate the hundreds of dictionary managed extents.



This is one of the reasons we introduced locally managed tablespaces - extent allocation and deallocation become very very fast.



It could take days (literally, days) to drop a table with 10's of thousands of dictionary managed extents.
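
For what it's worth, a minimal sketch of moving such a segment into a locally managed tablespace (names, file path and sizes are hypothetical; the MOVE leaves indexes to be rebuilt as usual):

-- create a locally managed tablespace; extents are tracked in a bitmap
-- in the datafile header instead of in uet$/fet$
create tablespace data_lmt
  datafile '/u01/oradata/ORCL/data_lmt01.dbf' size 10g autoextend on
  extent management local autoallocate
  segment space management auto;

-- relocate the segment and rebuild its indexes
alter table big_table move tablespace data_lmt;
alter index big_table_pk rebuild tablespace data_lmt;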

Alexander, January 20, 2012 - 9:51 am UTC

Makes sense, thanks.

So what is so much different about how LMT deallocates space? I know they store block information in a bitmap and not in the data dictionary, but I can't picture how deallocating extents would be different.
Tom Kyte
January 20, 2012 - 10:00 am UTC

there is no big recursive set of transactions to take place, just lock the bitmap image in the header of the datafile (each datafile has its own bitmap - increases concurrency; operations against the dictionary were serialized in general at the database level), zero out a bunch of ones, and voila - they are deallocated.

as opposed to the delete against uet$, the insert into fet$, the other operations that had to take place in addition to those, and a commit... for each and every extent.

Alexander, January 20, 2012 - 10:32 am UTC

Top notch explanation as usual.

(By the way it's kind of scary you know the DD tables being modified under the covers off the top of your head).

Dropping Unused columns

Shimmy, April 02, 2012 - 10:59 am UTC

I am trying to drop an unused column from a table. Why would I see a record in USER_UNUSED_COL_TABS, even after executing 
ALTER TABLE SK_20120402 DROP UNUSED COLUMNS;

The script I used :
SQL> DROP TABLE sk_20120402;

Table dropped.

SQL> 
SQL> create table sk_20120402 as
  2  (select mod(level, 10) dept, level id
  3  from dual
  4  connect by level <= 100);

Table created.

SQL> 
SQL> SELECT * from USER_UNUSED_COL_TABS WHERE TABLE_NAME='SK_20120402';

no rows selected

SQL> 
SQL> 
SQL> ALTER TABLE SK_20120402
  2  SET UNUSED (DEPT);

Table altered.

SQL> 
SQL> SELECT * from USER_UNUSED_COL_TABS WHERE TABLE_NAME='SK_20120402';

TABLE_NAME                          COUNT                                       
------------------------------ ----------                                       
SK_20120402                             1                                       

SQL> 
SQL> ALTER TABLE SK_20120402
  2  DROP UNUSED COLUMNS;

Table altered.

SQL> 
SQL> SELECT * from USER_UNUSED_COL_TABS WHERE TABLE_NAME='SK_20120402';

TABLE_NAME                          COUNT                                       
------------------------------ ----------                                       
SK_20120402                             1                                       


Tom Kyte
April 03, 2012 - 6:29 am UTC

version?

...
ops$tkyte%ORA11GR2> SELECT * from USER_UNUSED_COL_TABS WHERE TABLE_NAME='SK_20120402';

TABLE_NAME                          COUNT
------------------------------ ----------
SK_20120402                             1

ops$tkyte%ORA11GR2> ALTER TABLE SK_20120402
  2  DROP UNUSED COLUMNS;

Table altered.

ops$tkyte%ORA11GR2> SELECT * from USER_UNUSED_COL_TABS WHERE TABLE_NAME='SK_20120402';

no rows selected

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from v$version;

BANNER
-------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production


Dropping Unused columns

Shimmy, April 03, 2012 - 7:35 am UTC

Sorry, forgot to mention Oracle version.

SQL> select * from v$version;

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production    
PL/SQL Release 11.2.0.2.0 - Production                                          
CORE 11.2.0.2.0 Production                                                      
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production               
NLSRTL Version 11.2.0.2.0 - Production                                          


Tom Kyte
April 03, 2012 - 1:04 pm UTC

ugh, I just upgraded my database to 11.2.0.3 last week and now I cannot reproduce your issue - Please contact support, it looks like it could have been an issue resolved in the 11.2.0.3 patch set

ways of dropping columns from table

A reader, April 04, 2012 - 10:20 pm UTC

Hi Tom,

After reading through this thread, I noticed a lot of information and got a little confused. I would like to seek
more clarification from you; thank you for your time in advance.

Basically speaking, we have the below list of ways to drop columns from a table; I have several questions for each of them:

1. CTAS, create associated indexes and constraints, drop old, rename new to old
If there are many indexes and constraints, the downtime will be unacceptable, so not a good recommendation?
This one does not need a table reorg, no matter how the user defines 'table fragmentation', right?
2. alter table drop column(s)
Not recommended, especially when the table is huge?
Dropping more than one column will use more undo at any given point in time than dropping one column repeatedly?
But dropping one column repeatedly will use more undo in total than dropping them all at once?
If the 'checkpoint' option is added, does it tell Oracle to commit periodically?
After the drop is done, the space will go to the free list but will still be occupied by this specific table?
3. set unused, drop unused (optionally)
Set unused is just a flag change; does it change the data dictionary tables?
While the 'set unused' is processing, is the table still available, with no downtime?
While the 'drop unused columns' is running, is it done transparently with no impact on the transactions working with the table?
If we do not care about the space, is there no need to drop them, and just setting them unused is OK since it is very fast?
If the 'checkpoint' option is added, does it tell Oracle to commit periodically?
4. redefine
Only when downtime is not acceptable?
Tom Kyte
April 06, 2012 - 9:45 am UTC

1) I wouldn't call that a method to drop a column really, that is an entire re-org

2) whether it is "recommended or not" is an "it depends" sort of thing. It depends.

3) I like set unused because it is just fast - gets rid of the column for all intents and purposes, any future re-org would clean anything out, but as far as we are concerned the column is gone.

4) nothing wrong with an online redefine.



In the end - it all depends on your needs, your situation, your goals. Tell us that and then someone could recommend the "best" approach (but it will only be best for you in your situation - not in general)

probably a very basic question...please bear with me...

biju george, September 27, 2012 - 12:25 pm UTC

Hi Tom,
We added say three columns to a table with 50 million records. It took a couple of seconds.
In order to roll back the change, we executed an alter drop of these three columns, and it's taking more than 20 minutes.
Why? These columns are empty and are not part of any indexes.
Thanks a lot for your response...
Tom Kyte
September 27, 2012 - 3:22 pm UTC

the alter add needs to do nothing to the blocks in the table.

the alter drop is going out and looking at the blocks to see if there is anything it needs to do to get rid of the data. it cannot know that there are no rows that have a value, that there is nothing in fact to do.


you could just use "set unused", that doesn't touch the blocks.

Dropping columns from Compressed tables in Oracle

Kishore, December 09, 2012 - 2:53 pm UTC

Hi Tom,

Greetings!

In one of our 11G R2 databases, COMPRESSION FOR OLTP is ENABLED for all the tables. As per a few requirements, one of those tables needs to drop a few columns and add a few columns very frequently.

As compression is ENABLED for this particular table, whenever we issue the statement to DROP a column, it is being SET as an UNUSED column, and soon the count in dba_unused_col_tabs will reach 1000. As that is the maximum number of columns allowed for a table, further operations will be affected by this.

Can you please let us know if there is a way to drop the columns completely, without their being set to UNUSED, when we issue ALTER TABLE <tab_name> DROP COLUMN <col_name>?

Please let us know.

Question on table availability when we kill the alter table drop with and without checkpoint option

Poorna, July 05, 2013 - 9:17 pm UTC

Hi Tom,
I have a question as below.
What happens to the table's availability if we kill the below?

1. ALTER TABLE TABLE_1 drop column column_1;
2. ALTER TABLE TABLE_2 drop column_1 checkpoint 250;


Tom Kyte
July 16, 2013 - 2:22 pm UTC

nothing. neither will cause the table to "disappear"

checkpointing allows a restart of the command to skip over already completed work. it makes it faster to recover in the event of an error and complete the operation.

Reclaiming the space after dropping the columns

A reader, December 19, 2013 - 12:42 pm UTC

Hi Tom,

A developer made a mistake and dropped a few columns in a big table directly. After dropping, the average row size remains the same and the table size is also the same. It's a partitioned table. When we created a new table and populated it with the same number of records, the new table's size is 10 GB, and after compression it is 1.8 GB, whereas the original table partition's size is 22 GB and after compression it's 17 GB.
The avg row length of the new table is 359 and of the original table is 762. How do we reclaim the space from the dropped columns? It's a huge table; we can't recreate it. Please suggest and help us on this issue.
