Delete then Insert -- every 5 minutes

Question and Answer

Tom Kyte

Thanks for the question, raul.

Asked: February 18, 2005 - 9:52 pm UTC

Last updated: September 30, 2005 - 11:57 am UTC

Version: 9.2.0


You Asked

Hi Tom,

We have a busy OLTP database that hosts many schemas for many applications. A workflow application has a DBMS_JOB that runs every 5 minutes. Here is what it does:

1. delete from <a wide de-normalized summary table>;
2. insert into <this de-normalized summary table>
   select *
     from <a code table>,
          <inline view with sum(case when ...) as period1,
           ...,
           sum(case when ...) as period10,
           count(*) from ( select ... from tranxn tables ... )
           group by ..., outer join>
    order by ...;   <-- complex data-warehouse-type query

Inserts ~150,000 rows.

Online operators query the summary table actively. This works well for them.

I see two issues. A single execution generates:
1) ~150M of redo
2) ~5 million buffer gets

This is not a problem yet. Future needs would increase the number of rows 2 or 3 times.

For the excess redo, I cannot use NOLOGGING/APPEND, as the database is set to FORCE LOGGING mode (for the standby database). Nor can I use a session-specific GLOBAL TEMPORARY TABLE, since the operators query from other sessions -- though when I tried it, it generated only 900K of redo - nice!
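For reference, the GTT variant tried would look something like this -- a sketch with hypothetical names; a global temporary table generates no redo for its own blocks, only the redo protecting its undo, which is why the figure drops to ~900K:

create global temporary table summary_gtt
on commit preserve rows
as
select * from summary_table where 1=0;   -- summary_table: the real summary table

It cannot be used here, though, because data in a session-specific GTT is private to the loading session.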

I'd appreciate it if you could suggest how to tune or redesign for both issues.

*** Additional Information ***
1. The table being inserted into has just one primary key index, on an ID VARCHAR2(160) (!) column.

2. The select statement in the insert uses 2 indexes with a range scan; I ran the select alone separately, and it generates ~2,500 buffer gets and ~6,000 physical reads.

3. The delete generates ~93M of redo; the insert generates ~52M.

4. Statistics are below for the delete and insert:

Delete:
Statistics
----------------------------------------------------------
295 recursive calls
644084 db block gets
3580 consistent gets
188 physical reads
97846596 redo size
795 bytes sent via SQL*Net to client
687 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
153720 rows processed

Insert:
Statistics
----------------------------------------------------------
79 recursive calls
3309832 db block gets
2338020 consistent gets
6306 physical reads
54814692 redo size
799 bytes sent via SQL*Net to client
6902 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
1 sorts (disk)
153720 rows processed

Thank you!


and Tom said...

Well, we can use partitions to reduce the redo to a tiny fraction (and insert /*+ append */ would "help" you reduce redo, for it skips undo generation on the table insert -- but since you don't truncate, you'd always be writing above the HWM....)


So, how can we reduce the redo?

By significantly reducing the undo generated. We can in fact skip undo on both the TABLE and the INDEX by using a partitioned table with a single partition. Consider this small example:


ops$tkyte@ORA9IR2> CREATE TABLE t
2 (
3 OWNER,
4 OBJECT_NAME,
5 SUBOBJECT_NAME,
6 OBJECT_ID,
7 DATA_OBJECT_ID,
8 OBJECT_TYPE,
9 CREATED,
10 LAST_DDL_TIME,
11 TIMESTAMP,
12 STATUS,
13 TEMPORARY,
14 GENERATED,
15 SECONDARY
16 )
17 PARTITION BY RANGE (object_id)
18 (
19 PARTITION junk VALUES LESS THAN (MAXVALUE)
20 )
21 as
22 select * from all_objects where rownum <= 5
23 /

Table created.

a single partition holding everything (object_id < MAXVALUE) -- it starts with just 5 rows in this case


ops$tkyte@ORA9IR2> alter table t add constraint t_pk primary key(object_id) using index local;

Table altered.

ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );
PL/SQL procedure successfully completed.

so, that is the table you want to refresh from time to time... We'll set up a one-time "mirror" of that table:


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table new_t
2 as
3 select *
4 from all_objects
5 where 1=0;

Table created.

it'll start empty. We create a view and a procedure on table T to see what effect this technique will have on them, "invalidation-wise" (none, as we'll see):


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace view v as select * from t;

View created.

ops$tkyte@ORA9IR2> create or replace procedure p
2 as
3 begin
4 for x in ( select * from t ) loop null; end loop;
5 end;
6 /

Procedure created.

ok, here is the refresh routine; I'll inject comments into it along the way:


ops$tkyte@ORA9IR2> create or replace procedure refresh_t
2 as
3 no_constraint exception;
4 pragma exception_init( no_constraint, -2443 );
5 l_numrows number;
6 l_numblks number;
7 l_numlblks number;
8 l_avgrlen number;
9 l_numdist number;
10 l_avglblk number;
11 l_avgdblk number;
12 l_clstfct number;
13 l_indlevel number;
14 begin

First, remove pretty much all of the redo from the "delete" (your biggest part!). Truncate is 100% standby safe; it just updates the dictionary:

15 execute immediate 'truncate table new_t';
16

Now, we don't want to maintain the index in real time, so, if one exists, drop it, ignore "no constraint" errors:

17 begin
18 execute immediate 'alter table new_t drop constraint new_t_pk';
19 exception
20 when no_constraint
21 then
22 null;
23 end;
24

Now, using insert /*+ append */ (which skips undo generation on the table), load the new table:

25 insert /*+ append */ into new_t
26 select *
27 from all_objects;
28

and index it; the index creation doesn't have to generate undo (to undo it, we just roll back the dictionary change) -- standby "safe":

29 execute immediate
30 'alter table new_t
31 add constraint new_t_pk
32 primary key(object_id)';
33

let's get the correct stats and save them; if you use column stats, you can add that as well:

34 dbms_stats.gather_table_stats
35 ( user, 'NEW_T', cascade=>true );
36
37 dbms_stats.get_table_stats
38 ( user, 'NEW_T', numrows => l_numrows,
39 numblks => l_numblks, avgrlen => l_avgrlen );
40
41 dbms_stats.get_index_stats
42 ( user, 'NEW_T_PK',
43 numrows => l_numrows, numlblks => l_numlblks,
44 numdist => l_numdist, avglblk => l_avglblk,
45 avgdblk => l_avgdblk, clstfct => l_clstfct,
46 indlevel => l_indlevel );
47

now, do the swap -- make the newly loaded table become the partitioned table and vice versa:

48 execute immediate
49 'alter table t
50 exchange partition junk
51 with table new_t
52 including indexes
53 without validation';
54

and update the table-level stats; the swap brings the partition stats along -- this just makes the table-level stuff "correct" as well:


55 dbms_stats.set_table_stats
56 ( user, 'T',
57 numrows => l_numrows, numblks => l_numblks,
58 avgrlen => l_avgrlen );
59
60 dbms_stats.set_index_stats
61 ( user, 'T_PK',
62 numrows => l_numrows, numlblks => l_numlblks,
63 numdist => l_numdist, avglblk => l_avglblk,
64 avgdblk => l_avgdblk, clstfct => l_clstfct,
65 indlevel => l_indlevel );
66
67 end;
68 /

Procedure created.

Ok, let's see what we see; remember, the table starts with 5 rows:

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select table_name, num_rows from user_tables where table_name = 'T';

TABLE_NAME NUM_ROWS
------------------------------ ----------
T 5

ops$tkyte@ORA9IR2> select index_name, num_rows from user_indexes where index_name = 'T_PK';

INDEX_NAME NUM_ROWS
------------------------------ ----------
T_PK 5

ops$tkyte@ORA9IR2> select table_name, num_rows from user_tab_partitions where table_name = 'T';

TABLE_NAME NUM_ROWS
------------------------------ ----------
T 5

ops$tkyte@ORA9IR2> select index_name, num_rows from user_ind_partitions where index_name = 'T_PK';

INDEX_NAME NUM_ROWS
------------------------------ ----------
T_PK 5


we do the swap:


ops$tkyte@ORA9IR2> exec refresh_t;

PL/SQL procedure successfully completed.


code and views stay "valid", they are safe

ops$tkyte@ORA9IR2> select object_name, status from user_objects where object_name in ('V','P');

OBJECT_NAME STATUS
------------------------------ -------
P VALID
V VALID


and the stats are all in place...


ops$tkyte@ORA9IR2> select table_name, num_rows from user_tables where table_name = 'T';

TABLE_NAME NUM_ROWS
------------------------------ ----------
T 26334

ops$tkyte@ORA9IR2> select index_name, num_rows from user_indexes where index_name = 'T_PK';

INDEX_NAME NUM_ROWS
------------------------------ ----------
T_PK 26334

ops$tkyte@ORA9IR2> select table_name, num_rows from user_tab_partitions where table_name = 'T';

TABLE_NAME NUM_ROWS
------------------------------ ----------
T 26334

ops$tkyte@ORA9IR2> select index_name, num_rows from user_ind_partitions where index_name = 'T_PK';

INDEX_NAME NUM_ROWS
------------------------------ ----------
T_PK 26334


Now, how about redo, what effect will this have on it?

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @mystat "redo size"
ops$tkyte@ORA9IR2> set echo off

NAME VALUE
------------------------------ ----------
redo size 29524876

ops$tkyte@ORA9IR2> exec refresh_t;

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off

NAME V DIFF
------------------------------ ---------- ----------------
redo size 30085276 560,400

that way took about 560K of redo; let's check out the delete+insert approach:

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @mystat "redo size"
ops$tkyte@ORA9IR2> set echo off

NAME VALUE
------------------------------ ----------
redo size 30085276

ops$tkyte@ORA9IR2> delete from new_t;

26334 rows deleted.

ops$tkyte@ORA9IR2> insert into new_t select * from all_objects;

27894 rows created.

ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off

NAME V DIFF
------------------------------ ---------- ----------------
redo size 48908380 18,823,104


18MB -- a very significant difference. You'll find the consistent gets go way down too (that was the index, I think you'll discover).
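For reference, the @mystat and @mystat2 scripts used throughout simply snapshot and then diff the session's "redo size" statistic; a minimal equivalent sketch (assumes select access to v$mystat and v$statname):

column value new_val V
select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

-- later, after the work being measured, report the delta:
select a.name, b.value, to_char( b.value - &V, '999,999,999,999' ) diff
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';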





Comments

A reader, February 19, 2005 - 6:02 pm UTC

Excellent!

select * from t when creating temporary table

a reader, February 19, 2005 - 6:11 pm UTC

Tom,

I am aware we should use column names when creating a table using CTAS (create table as select) syntax. However, when we are building an intermediate (temporary) table to swap a partition of a table with the data in the temporary table, could we use select *, or should we use individual columns even when we know that both tables have identical structures? I understand that you use select * from t just for demos.

Thanks for your time


Regards


Tom Kyte
February 20, 2005 - 9:38 am UTC

if the select * gets the columns in the right order (I used create table as select more out of convenience than anything else), it would be safe -- since this table is created exactly ONCE (and it'll either get it right or not, and you would know right away)
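For the one-time create, an explicit column list removes any dependence on dictionary column order entirely; a minimal sketch (hypothetical column names):

create table new_t
as
select owner, object_name, object_id   -- name every column, in the partitioned table's order
  from t
 where 1=0;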

handling the hash-partitioned ones

Alberto Dell'Era, February 19, 2005 - 6:43 pm UTC

> using a partitioned table with a single partition

Interesting addendum: if the original table is partitioned by hash (quite common in a dwh), you can make the mirror table SUBpartitioned by hash and everything works just dandy, e.g.:

create table algnc_extract_umbox (
username varchar2(50) not null
)
partition by hash (username) partitions 32;

create table algnc_extract_umbox_stage (
username varchar2(50) not null
)
partition by range (username)
subpartition by hash (username) subpartitions 32
(
partition p_max values less than (maxvalue)
);

alter table algnc_extract_umbox_stage
exchange partition p_max
with table algnc_extract_umbox
including indexes
without validation;

Obviously you can swap roles - the staging table can be the single-partition one or vice-versa.

Alberto

Out of the box!!!

raul, February 19, 2005 - 10:14 pm UTC

Excellent! Thanks for taking the time to give a neat solution.

'redo size' came down to 24M (from 145M).

The APPEND hint did not make a difference though.

SQL> select LOG_MODE, FORCE_LOGGING from v$database;

LOG_MODE     FOR
------------ ---
ARCHIVELOG   YES

SQL> create table t as select * from dba_objects where 1=2;

Table created
SQL> exit
SQL> insert into t select * from dba_objects;

36811 rows created.

SQL> @mystat
Enter value for 1: redo size

NAME                 VALUE
--------------- ----------
redo size          4474248
SQL> truncate table t;

Table truncated.

SQL> exit
SQL> insert /*+ APPEND */ into t select * from dba_objects;

36811 rows created.

SQL> @mystat
Enter value for 1: redo size

NAME                 VALUE
--------------- ----------
redo size          4492168
 

Tom Kyte
February 20, 2005 - 10:01 am UTC

what it will reduce is the "delete rowid" messages placed into UNDO and the REDO associated with that. It can make a difference depending on the size of the rows (if you have really wide rows, you might not notice this).

ops$tkyte@ORA9IR2> select LOG_MODE, FORCE_LOGGING from v$database;
 
LOG_MODE     FOR
------------ ---
ARCHIVELOG   YES
 
ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table t as select id from big_table.big_table where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2> @mystat "redo size"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                VALUE
------------------------------ ----------
redo size                        11441736
 
ops$tkyte@ORA9IR2> insert into t select id from big_table.big_table;
 
20000000 rows created.
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V DIFF
------------------------------ ---------- ----------------
redo size                       357189944      345,748,208
 
ops$tkyte@ORA9IR2> truncate table t;
 
Table truncated.
 
ops$tkyte@ORA9IR2> @mystat "redo size"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                VALUE
------------------------------ ----------
redo size                       357311544
 
ops$tkyte@ORA9IR2> insert /*+ append */ into t select id from big_table.big_table;
 
20000000 rows created.
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V DIFF
------------------------------ ---------- ----------------
redo size                       609519956      252,208,412
 

so, lots of rows -- small rows -- 70% of the redo is generated (no undo)


but again, it is a function of the width of the row; using wider rows, this savings will drift toward zero as the amount of data to truly be logged overwhelms the "delete rowid" messages placed into undo -- the biggest redo savings came from

a) not deleting
b) not maintaining the indices 

Yes! Sir Tom

raul, February 20, 2005 - 12:31 pm UTC

I have 'really' learned and understood some cool stuff here. I sincerely appreciate your time and patience. THANK YOU!!!

Is merging an option?

SFT, February 21, 2005 - 2:58 am UTC

If the summary table has a relatively small delta, i.e. very few new rows have to be inserted and few old rows deleted, would it be an option to use a MERGE statement in a scenario approximately like this:
1. Generate a unique sequence number or timestamp
2. MERGE the new rows into the existing summaries using the just-generated ID, taking care to avoid unnecessary updates
3. Delete old rows whose ID <> the current one

Then the redo related to insert/delete would not be that large, just because there were not that many INSERTs and DELETEs, and therefore not that many index entries had to be created/deleted. On the other hand, update-related redo is a function of the actual data that are updated.

Is that correct?

Tom Kyte
February 21, 2005 - 10:16 am UTC

if you are not updating -- just inserting and deleting, where does merge come into play?

"unnecessary updates"?

sorry, not following you on this one.

Merging example

SFT, February 21, 2005 - 2:41 pm UTC

Hello Tom,

What I meant is that in some situations you need to replace not all the data, but some portion of it, e.g. all rows within a given range of key column values. Then your truncate/exchange approach does not work (or at least requires some modifications and presumably more than one partition).
Here is an example with benchmarking. It shows that truncate/exchange is clearly superior (375K redo), but MERGE/DELETE (2.5M) is better than DELETE/INSERT (6.5M). BTW, elapsed time is also better.

SYS@ORA9> create table t
2 (
3 c1 number constraint t_pk primary key
4 ,c2 varchar(100)
5 ,c3 number -- session id
6 );

Table created.

SYS@ORA9>
SYS@ORA9> -- DELETE/INSERT case
SYS@ORA9> truncate table t;

SYS@ORA9> insert into t
2 select * from
3 (select
4 ROWNUM c1
5 ,TO_CHAR(ROWNUM) c2
6 ,null c3
7 from user_objects
8 where rownum <= 10000
9 )
10 where c1 between 1 and 9990
11 ;

9990 rows created.

Elapsed: 00:00:02.00
SYS@ORA9> @mystat "redo size"

NAME VALUE
-------------------- ----------
redo size 25284744

SYS@ORA9>
SYS@ORA9> delete from t;

9990 rows deleted.

Elapsed: 00:00:04.09
SYS@ORA9>
SYS@ORA9> insert into t
2 select * from
3 (select
4 ROWNUM c1
5 ,TO_CHAR(ROWNUM) c2
6 ,null c3
7 from user_objects
8 where rownum <= 10000
9 )
10 where c1 between 11 and 10000
11 ;

9990 rows created.

Elapsed: 00:00:04.04
SYS@ORA9> @mystat2

NAME V DIFF
-------------------- -------------------- --------------------
redo size 31743616 6458872

SYS@ORA9>
SYS@ORA9> -- MERGE/DELETE case
SYS@ORA9> var session_id number;
SYS@ORA9>
SYS@ORA9> truncate table t;

Table truncated.

SYS@ORA9> exec :session_id := 1;

SYS@ORA9> insert into t
2 select * from
3 (select
4 ROWNUM c1
5 ,TO_CHAR(ROWNUM) c2
6 ,:session_id
7 from user_objects
8 where rownum <= 10000
9 )
10 where c1 between 1 and 9990
11 ;

9990 rows created.

Elapsed: 00:00:02.00
SYS@ORA9> @mystat "redo size"

NAME VALUE
-------------------- ----------
redo size 32513408

SYS@ORA9>
SYS@ORA9> exec :session_id := 2;

SYS@ORA9> merge
2 into t t1
3 using
4 (
5 select * from
6 (select
7 ROWNUM c1
8 ,TO_CHAR(ROWNUM) c2
9 from user_objects
10 where rownum <= 10000
11 )
12 where c1 between 11 and 10000
13 ) t2
14 on (t2.c1 = t1.c1)
15 when matched then
16 update set
17 t1.c2 = t2.c2
18 ,t1.c3 = :session_id
19 when not matched then
20 insert
21 values
22 (
23 t2.c1
24 ,t2.c2
25 ,:session_id
26 )
27 ;

9990 rows merged.

Elapsed: 00:00:04.08
SYS@ORA9>
SYS@ORA9> delete from t where c3 <> 2;

10 rows deleted.

Elapsed: 00:00:00.00
SYS@ORA9> @mystat2

NAME V DIFF
-------------------- -------------------- --------------------
redo size 34961168 2447760

SYS@ORA9>
SYS@ORA9>
SYS@ORA9> -- TRUNCATE/INSERT/ADD CONSTRAINT case
SYS@ORA9>
SYS@ORA9> @mystat "redo size"

NAME VALUE
-------------------- ----------
redo size 34961168

Elapsed: 00:00:00.00
SYS@ORA9> truncate table t;

Table truncated.

Elapsed: 00:00:00.01
SYS@ORA9> alter table t drop constraint t_pk;

Table altered.

Elapsed: 00:00:00.02
SYS@ORA9> insert /*+ append */ into t
2 select * from
3 (select
4 ROWNUM c1
5 ,TO_CHAR(ROWNUM) c2
6 ,null c3
7 from user_objects
8 where rownum <= 10000
9 )
10 where c1 between 11 and 10000
11 ;

9990 rows created.

Elapsed: 00:00:02.02
SYS@ORA9> alter table t add constraint t_pk primary key (c1);

Table altered.

Elapsed: 00:00:00.06
SYS@ORA9> @mystat2

NAME V DIFF
-------------------- -------------------- --------------------
redo size 35336896 375728

Thanks for your patience (and for mystat* scripts:)


Tom Kyte
February 21, 2005 - 3:54 pm UTC

ok, I did not understand the question, which really was:

a) should I delete existing rows and insert all as new
or
b) update existing data and delete ones I don't want


and you would find the answer is sometimes a) and sometimes b) -- it depends on the volume.

in 10g, you might be interested in MERGE with the DELETE option -- you can insert new rows, update existing ones, and, if an updated row meets some criteria, delete it.
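A minimal sketch of that 10g syntax (hypothetical tables t and s; the DELETE WHERE clause is evaluated against the row values as they stand after the update):

merge into t
using ( select c1, c2 from s ) src
   on ( t.c1 = src.c1 )
 when matched then
      update set t.c2 = src.c2
      delete where ( t.c2 = 'OBSOLETE' )   -- removes matched rows the update made obsolete
 when not matched then
      insert ( c1, c2 ) values ( src.c1, src.c2 );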

Superb solution

A reader, February 22, 2005 - 6:12 am UTC


A reader, March 10, 2005 - 5:05 pm UTC

Hi Tom,

In your original answer, why are you gathering stats on NEW_T and using those stats to set the stats on table T?

You could have directly gathered the stats on table T after the exchange partition operation (skipping the gather on NEW_T)?

thanks

Tom Kyte
March 10, 2005 - 7:39 pm UTC

I figured you want the table in there with stats ASAP, so gather on the table that is not yet part of the big table first, slide it in, and set the stats.

That means less time for the table we are sliding the data into to have 'wrong' stats.

Any performance benefit using Materialized views doing full refresh

A reader, March 11, 2005 - 4:20 pm UTC

Hi Tom,

Thanks for the reply.

Based on your method (initial post), we are going to rewrite the way we do Materialized View (MV) refreshes.

We are going to replace all our MVs with ordinary tables. I don't think we gain anything specific from MVs, since all our refreshes are full refreshes.

We create a partitioned NOLOGGING temporary table (not a global temporary table) with a definition similar to the original MV (now a table, not an MV) -- this create is not inside the refresh procedure. Then:


a) Truncate the partitioned temporary table
b) Insert into this temporary table with the /*+ append */ hint, using the select from the MV definition
d) Compress this temporary table using alter ... move compress (since it is a data warehouse and our queries are faster with a compressed table)
e) Drop the indexes on the original table
f) Use exchange partition between the partitioned temporary table and the original table
g) Create all the indexes on the original table
h) Gather stats on the original table


During initial tests, the refresh time almost halved.

Now, a few clarifications:

a) I assume there is no performance benefit to using materialized views rather than ordinary tables (for full refreshes) -- other than the ease of maintenance via the standard packages available for their refresh/maintenance etc. Am I right here?

b) Instead of using an ordinary table as the temporary table, will it help (performance-wise) if we use a global temporary table instead?

c) Is there any way of refreshing the MV using the 'append' hint?

thanks



Tom Kyte
March 12, 2005 - 8:22 am UTC

d) would not be necessary; insert /*+ append */ will load compressed.

As for your questions:

a) well, you'll be missing out on MV query rewrites.
b) you cannot.
c) they already do; MV full refreshes do what you described already! trace it
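To see that for yourself, a quick sketch (assumes a single MV named MY_MV; process the trace file with tkprof afterwards):

alter session set sql_trace = true;
exec dbms_mview.refresh( 'MY_MV', method => 'C' )   -- 'C' = complete refresh
alter session set sql_trace = false;

The tkprof report will show the truncate and the insert the complete refresh performs.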

A reader, March 12, 2005 - 11:51 am UTC

"would not be necessary, insert /*+ append */ will load compressed"

Even after doing 'insert /*+ append */ ...', we were able to bring down the size of the table still further by using 'alter table ... move compress'.


"second a) well, you'll be missing out o MV rewrites."

Neither query rewrite nor the fast refresh option worked for us for MVs -- either due to the complexity of the MV definitions or maybe due to lack of knowledge -- there simply are too many restrictions in Oracle on fast refresh and query rewrite.


Tom Kyte
March 12, 2005 - 2:14 pm UTC

how much? that does not make sense if the table you are loading into is COMPRESSED already.

give us some metrics -- show how you measured. And the table was compressed during the insert append, correct?
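For what it's worth, one way to show such metrics (a sketch; assumes access to user_segments and a hypothetical table name):

select segment_name, sum(bytes)/1024/1024 mbytes
  from user_segments
 where segment_name = 'NEW_T'
 group by segment_name;

Run it after the insert /*+ append */, and again after the alter table ... move compress, and compare.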



A reader, March 12, 2005 - 11:54 am UTC

"c) they already do; MV full refreshes do what you described already! trace it"

Maybe for a single MV refresh the above is true.

But if you refresh the MV using dbms_mview.refresh along with some other dummy MV, it always does a delete and then an ordinary insert.

Tom Kyte
March 12, 2005 - 2:14 pm UTC

if you use mv refresh groups -- yes. But if they could do the above, then they would by definition have no need of refresh groups, so -- it is already doing what they are doing.

A reader, March 13, 2005 - 7:29 pm UTC

Basically we want the data to be available for querying. So the only time it is not available (in the above method) is during the 'alter table ... exchange partition', which anyway takes only a fraction of a second.

Whereas if you refresh a single MV (which is a truncate and then an insert /*+ append */), the data is not available to other users for querying during the insert /*+ append */ operation.

Truncate -- does it not affect standby?

Balasubramanian., March 21, 2005 - 5:17 am UTC

Tom,

You have stated that "truncate is 100% standby safe; it just updates the dictionary."

Could you please explain?

Thanks and regards




Tom Kyte
March 21, 2005 - 10:29 am UTC

well, it just updates the dictionary; that update is logged, and those dictionary updates are applied to the standby, so the data disappears on the standby as well.

Great stuff

Randy, March 21, 2005 - 3:26 pm UTC

I was planning on implementing something similar to this, but I have multiple, related tables. How can this work for that situation? I then started to look at the MERGE command, since it is transactional. I couldn't see how to make the DDL (partitions) solution work for a set of tables. What do you think?

Tom Kyte
March 22, 2005 - 11:00 am UTC

DDL won't let you do N tables at a time.

If you need them refreshed "consistently", you'll be back to using SQL -- just like an MV refresh. A single table refresh:

truncate + insert /*+ append */

a multi-table refresh (sketched below):

delete + insert
delete + insert
delete + insert
....
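A minimal sketch of such a consistent multi-table refresh (hypothetical tables t1/t2 refreshed from sources src1/src2) -- one transaction, so readers see all old data or all new data, never a mix:

begin
    delete from t1;
    insert into t1 select * from src1;
    delete from t2;
    insert into t2 select * from src2;
    commit;   -- publish both tables atomically
end;
/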





execute immediate 'truncate table new_t';

A reader, March 22, 2005 - 10:37 am UTC

I do not know the requirements, but with the truncate there will be a moment in time when a different session selects no data from the table. If you use delete then insert, the table will always be populated.

15 execute immediate 'truncate table new_t';
16

Tom Kyte
March 22, 2005 - 11:03 am UTC

new_t is never queried by anyone.

so, not so.

Possible to have "table t logging / table new_t nologging"?

A reader, March 23, 2005 - 6:00 pm UTC

Tom, dumb question perhaps ...

If I create (partitioned) table "t" *logging* and (exchanged) table "new_t" *nologging* in an instance which is in archivelog mode (think: standby db), will "t" be in logging mode again AFTER the exchange has happened?

Can I really have the cake, or did I miss something here?

Thanks,
Tomasz


Tom Kyte
March 23, 2005 - 6:31 pm UTC

well, was new_t loaded unlogged? If so, the standby will not have any data for it -- even after the exchange.

Possible to have "table t logging / table new_t nologging"?

A reader, March 24, 2005 - 5:13 am UTC

Ok, that's clear (that was a stupid question, of course).
Let me put it in another way. Forget standby now and simply
assume archive logging mode:

1) If I load "new_t" unlogged and exchange with "t", after
that exchange any changes to "t" would be logged, right?
2) But that wouldn't make sense, since I would lose the
very value of archive logging. I couldn't recover from
media failure afterwards, since the rows that have been
loaded unlogged would be missing, correct?

Thank you,
Tomasz


Tom Kyte
March 24, 2005 - 8:52 am UTC

1) direct path operations would not be; conventional path operations always are. As this was in support of a "full refresh", there would be no modifications to the "new_t" partition -- it wasn't a consideration in this example.

2) this is a "do it yourself mv", they reload it every 5 minutes. It wasn't applicable, they would just recreate it.

"table t logging / table new_t nologging"

A reader, March 24, 2005 - 6:15 pm UTC

Thanks Tom, I grokked it :)

Mille grazie,
Tomasz


delete - /*+APPEND*/ combination

A Reader, April 07, 2005 - 9:23 am UTC

Tom,
I just want to make sure my understanding is correct: delete + /*+ append */ is not a good combination, because each insert would go above the HWM and leave the deleted blocks empty. We cannot use truncate because, in case the procedure fails, we need to be able to roll back the delete and keep the old data. Should we simply remove the /*+ append */?
Thanks.

Tom Kyte
April 07, 2005 - 11:08 am UTC

insert is the only DML (well, merge but only for the insert part of it) that append could apply to.


I would remove the append, if only because someone might question you in the future -- "you don't believe append actually works there, do you" sort of stuff.

Index Creation

Andy, April 07, 2005 - 2:39 pm UTC

Tom,

Great solution, but I do not understand your comment about the constraint creation. Does this mean that the index was not dropped when the constraint was dropped?

'and index it; the index creation doesn't have to generate undo (to undo it, we just roll back the dictionary change) -- standby "safe"'

Thank you for this site.

Tom Kyte
April 07, 2005 - 4:32 pm UTC

I should have said "and constrain it, which will create an index in this case".

The add constraint is adding an index, and that doesn't generate undo, just redo.
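An equivalent, more explicit form of that step might be (a sketch with hypothetical names; the index build generates no undo, though in a FORCE LOGGING database it is still fully logged as redo):

create index new_t_pk on new_t ( object_id );

alter table new_t
  add constraint new_t_pk primary key ( object_id )
  using index new_t_pk;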

A slight variation on the theme...

Mark Brady, August 29, 2005 - 4:18 pm UTC

Oracle 8i

Let's say we have a partitioned table. It has a column called effective date, which is the range partition key. Every day new data is added to the current month's partition. Data is stale if, after two months, a given ID has not been written with an effective date in the last two months. Most IDs are written at least once a month, but some are not. I want to wring the stale data out of my older partitions.

So I would modify the procedure you defined above in the following way.

Instead of creating the table in advance and inserting the rows-to-keep into it, I would create the table using CTAS, with the select defining the records that need to be saved.

Once I have the saved records in the new table, I index, gather stats as above, then swap the partition in the main table with the partition in the new table, then drop the new table.

Eventually the CTAS will select no records; then I know there is no data needing to be kept, and I can just drop the partition outright.

Did I miss something?
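For concreteness, a sketch of that variation (hypothetical names; the CTAS table must match the partitioned table's structure for the exchange to succeed):

create table keep_rows
as
select *
  from big_t partition ( p_old )
 where id in ( select id from recent_activity );   -- the rows that must be kept

-- index and gather stats on keep_rows here, as in the original routine, then:

alter table big_t
  exchange partition p_old with table keep_rows
  without validation;

drop table keep_rows;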

reader

A reader, September 22, 2005 - 2:54 pm UTC

I know that:

delete generates a lot of undo
insert generates the least undo

Is it true that:

insert generates a lot of redo
delete generates the least redo (because to redo the change is just to delete by rowid, so the only information required is the rowid to be deleted)?

Tom Kyte
September 22, 2005 - 5:59 pm UTC

well, remember that indexes will affect this as well (they are undoable and logged as well)

but since UNDO is logged...... when you talk about redo, you have to talk about UNDO as well :)




reader

A reader, September 27, 2005 - 10:46 pm UTC

If the indexes are altered unusable/disabled, will the load still generate the undo? The indexes can be rebuilt online after the load.

Tom Kyte
September 28, 2005 - 9:17 am UTC

if the indexes are not maintained, there is no undo to be generated for them.
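A sketch of that approach (hypothetical names; note this works for non-unique indexes -- a unique index enforcing an enabled constraint cannot be skipped this way):

alter index t_idx unusable;
alter session set skip_unusable_indexes = true;   -- let DML ignore the unusable index
insert /*+ append */ into t select * from src;
commit;
alter index t_idx rebuild online;                 -- rebuild after the load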

redo for delete

steve, September 30, 2005 - 9:20 am UTC

Hi Tom,

I'm on an Oracle course at the moment and I'd like to clarify something on deletes.

Q1. In the case of a delete, if we ignore the UNDO portion
that is logged in the redo logs, then is it true that
the amount of redo logged for a delete is small (i.e.
just the rowid to delete)?

Q2. Would this be one of the reasons why we cannot use
the redo logs (and archived redo logs) to flashback
the database? i.e. we do not have the before image.
Also, I guess things like NOLOGGING also influence
the decision....

The reason I ask is that the instructor said the redo logs
could have been used to flashback the database but a design
decision was made to use a new structure (flashback logs).

Thanks

Steve

Tom Kyte
September 30, 2005 - 11:05 am UTC

q1) forgetting about indexes, yes.

q2) well, in theory you can -- because I cannot forget about the undo myself; it is there. But UNDO is used to UNDO things in Oracle.



Technically, flashback database could be done with only redo.


Thanks

Steve, September 30, 2005 - 11:09 am UTC

Thanks Man.

But then you would be dependent on the UNDO_RETENTION in that case - correct?

Tom Kyte
September 30, 2005 - 11:57 am UTC

if you used redo, you would be dependent on the amount of redo you had - undo would be in there.

but they don't, so it is all hypothetical

DML's performance with partitioned table

Pablo Rovedo, September 13, 2006 - 8:58 am UTC

We have an application that inserts/deletes many times per second in a table. Because this table was generating too many archive logs, I created a one-partition table as you suggested, and archive generation was reduced. My question is whether this change could affect DML performance on the table (we have a two-node RAC, version 10.2.0.2).

Thanks in Advance
