Overhead/locks of adding/dropping partitions

Question and Answer

Tom Kyte

Thanks for the question, Susie.

Asked: June 06, 2011 - 10:40 am UTC

Last updated: June 08, 2011 - 10:51 am UTC

Version: 11.2.0

Viewed 10K+ times

You Asked

Hi, Tom,

I'm planning to use 11G's interval partitioning for a 'large' table (14M inserts per day, no updates) for which we need to keep the most recent 4 days' data.

I'll partition on the day part of a TS column and use local indexes.

My biggest concern (so far) is the overhead/locking related to the automatic adding of the new partition (upon the first insert of a day) and the coded-by-me-and-schedulable dropping of the oldest partition.

Can you tell me the best way to predict/measure the impact on (1) this table and (2) the db server in general whenever a partition is added or dropped?


Also, I've read ORA-14758: Last Partition In The Range Section Cannot Be Dropped (Doc ID 882681.1), and I interpret this as saying you cannot drop the last surviving 'statically created' partition. I thought I could create a partition using a range that will never contain data; as new data is inserted daily, new partitions would automatically be added, and nightly we could drop the oldest partition that actually has data in it. Does this make sense?

thanks!

and Tom said...

The creation of a new partition will have minimal impact. It is done as a recursive transaction - by the first session that does an insert into that new partition. If other sessions try to insert into that same partition at just about the same instant - they will wait for the very first session to complete and then continue on. So, at most, you would expect a very, very short 'pause' for just the users that happen to be inserting into the new partition at the instant it is being created. The other partitions will not be affected.

The same will be true for your drop as well - since you have no global indexes to deal with. The drop would only affect things if people were actively using that partition - but given that you are dropping it, I think it is safe for me to assume "no one would be accessing it".
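
As for measuring (rather than predicting) the impact, one rough sketch - my assumption here, not part of the original answer: you are licensed for the Diagnostics Pack, which v$active_session_history requires - is to look for enqueue waits around the first insert of the day, when the new partition gets created:

select sample_time, session_id, event, sql_id
  from v$active_session_history
 where event like 'enq%'                                 -- sessions blocked on locks
   and sample_time > systimestamp - interval '10' minute -- window around the first insert
 order by sample_time;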


And yes, every interval partitioned table has at least one "old-fashioned" range partition - I loosely call this the "low high water mark" partition. So, this one will exist - but, as you suggest, it can exist and be empty forever.




ops$tkyte%ORA11GR2> create table audit_trail
  2  ( ts    date,
  3    data  varchar2(30)
  4  )
  5  partition by range(ts)
  6  interval (numtodsinterval(1,'day'))
  7  store in (users, example )
  8  (partition p0 values less than
  9   (to_date('23-feb-2007','dd-mon-yyyy'))
 10  )
 11  /

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select partition_name, tablespace_name, high_value
  2    from user_tab_partitions
  3   where table_name = 'AUDIT_TRAIL'
  4   order by partition_position;

PARTITION_ TABLESPACE HIGH_VALUE
---------- ---------- ------------------------------------------------------------------------------------------
P0         USERS      TO_DATE(' 2007-02-23 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into audit_trail (ts,data) values
  2  ( to_date('02-jun-2007','dd-mon-yyyy'), 'xx' );

1 row created.

ops$tkyte%ORA11GR2> column partition_name new_val PN
ops$tkyte%ORA11GR2> select partition_name, tablespace_name, high_value
  2    from user_tab_partitions
  3   where table_name = 'AUDIT_TRAIL'
  4   order by partition_position;

PARTITION_ TABLESPACE HIGH_VALUE
---------- ---------- ------------------------------------------------------------------------------------------
P0         USERS      TO_DATE(' 2007-02-23 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P827   USERS      TO_DATE(' 2007-06-03 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select '&PN' from dual;
old   1: select '&PN' from dual
new   1: select 'SYS_P827' from dual

'SYS_P82
--------
SYS_P827

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into audit_trail (ts,data) values
  2  ( to_date('16-sep-2007','dd-mon-yyyy'), 'xx' );

1 row created.

ops$tkyte%ORA11GR2> select partition_name pname, tablespace_name, high_value
  2    from user_tab_partitions
  3   where table_name = 'AUDIT_TRAIL'
  4   order by partition_position;

PNAME                          TABLESPACE HIGH_VALUE
------------------------------ ---------- ------------------------------------------------------------------------------------------
P0                             USERS      TO_DATE(' 2007-02-23 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P827                       USERS      TO_DATE(' 2007-06-03 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P828                       USERS      TO_DATE(' 2007-09-17 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> alter table audit_trail drop partition &PN;
old   1: alter table audit_trail drop partition &PN
new   1: alter table audit_trail drop partition SYS_P827

Table altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select partition_name pname, tablespace_name, high_value
  2    from user_tab_partitions
  3   where table_name = 'AUDIT_TRAIL'
  4   order by partition_position;

PNAME                          TABLESPACE HIGH_VALUE
------------------------------ ---------- ------------------------------------------------------------------------------------------
P0                             USERS      TO_DATE(' 2007-02-23 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P828                       USERS      TO_DATE(' 2007-09-17 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
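
To automate the nightly drop from the question, a minimal sketch along these lines should work (it reuses the AUDIT_TRAIL table above and relies on the INTERVAL column of USER_TAB_PARTITIONS, which is 'YES' only for system-created interval partitions, so the anchor partition P0 is never touched):

declare
  l_pname user_tab_partitions.partition_name%type;
begin
  -- pick the oldest system-created interval partition; the anchor
  -- range partition (interval = 'NO') is deliberately excluded
  select min(partition_name) keep (dense_rank first order by partition_position)
    into l_pname
    from user_tab_partitions
   where table_name = 'AUDIT_TRAIL'
     and interval = 'YES';

  execute immediate 'alter table audit_trail drop partition "' || l_pname || '"';
end;
/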



Comments

ORA-01466 when managing partitions

Martin Vajsar, June 07, 2011 - 5:06 am UTC

I've run into some trouble truncating partitions in 10g, and I assume the same would happen when creating or dropping them too. These are all DDL operations, so (in 10g at least) they change the timestamp of the whole table. I had queries against the affected tables running in long (under a minute, actually) read-only transactions when the truncate happened, and they failed with ORA-01466 even though they were not accessing the partition being truncated. I suppose that, with just the right timing, a query in a read-committed transaction could raise 1466 too; it just didn't happen in our environment.

Admittedly, an OLTP system might be expected not to run long transactions (or they could be scheduled for times when the partitions are not being managed), and the exception can be properly handled if need be, but it is better to anticipate it right from the start.

Is there any difference in 11g regarding ORA-01466?
Tom Kyte
June 07, 2011 - 2:24 pm UTC

The 1466 should be happening when you are using flashback query - and flashback query is documented to not support flashing back over most DDL operations (adding a partition doesn't fall into that category). That might also happen with READ ONLY (not read committed) transactions - because they are basically big old flashback queries (flashing back to the start of the transaction).


You cannot flashback over DDL. It won't affect read committed transactions (the default) - only things that flashback.


With flashback data archive - you can flashback query over DDL - that is supported, but the environment there is radically different. The flashback data archive isn't really undo based - it uses history tables.
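
A minimal sketch reproducing the read-only case (using the AUDIT_TRAIL table from above; the partition name here is hypothetical):

-- session 1: a read-only transaction is pinned to its start time,
-- effectively one long flashback query
set transaction read only;
select count(*) from audit_trail;          -- works

-- session 2: any DDL on the table changes its definition timestamp
alter table audit_trail truncate partition sys_p828;

-- session 1: the next read must flash back over the DDL and fails with
-- ORA-01466: unable to read data - table definition has changed
select count(*) from audit_trail;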

What about single plus default partition

Dana, June 08, 2011 - 7:30 am UTC

I'm trying to recode a set of batch processes.
The db is newly upgraded to 11.2.0.2.

The typical process flow is:

delete from table;
merge into table over several queries, sequentially.
commit;
dbms_stats.gather_table_stats(... estimate_percent=>100);

It's slower than users want, and the delete and stats are the primary resource consumers according to AWR.

My plan at this point is to set up a number of tables, either permanent or temporary, one for each query in the batch, process them in parallel, then union them into a new stand-in table.

My original thought was to do:
alter table table_current rename to table_old;
alter table table_new rename to table_current;

And to skip the dbms_stats call, as the data distribution won't change much from run to run - just run it once at the start of the week and use copy_table_stats.

In single user testing, the process improves run times from 50% to several times better, depending on number of parallel queries executed.

But this question makes me think about a single plus default partition and doing partition swapping.

It sounds like a valid approach. In your experience, do you see potential upside/downside? Would you offer an opinion?


Tom Kyte
June 08, 2011 - 10:51 am UTC

you could do the partition swapping thing - but it wouldn't bring any tangible benefit that I can think of.

It would still progress table by table, just like the rename.

It would still invalidate query plans, just like the rename (you'll be hard parsing after this process is complete)

The syntax would be harder - including indexes, novalidate... The rename is pretty simple.
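
For comparison, the exchange-based variant might look like this (a sketch with hypothetical names; the staging table must match the partitioned table column for column):

-- swap the freshly built staging table into the single data partition
alter table table_current
  exchange partition p_data
  with table table_new
  including indexes
  without validation;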

Thank you sir.

Dana, June 08, 2011 - 1:34 pm UTC

I appreciate the opinion.
