Why there is so much logging for "insert /*+ append */ into tab select"?


Connor McDonald

Thanks for the question, Prince.

Asked: March 28, 2002 - 7:10 pm UTC

Last updated: July 24, 2016 - 9:03 pm UTC

Version: 8.1.7


You Asked

Hi Tom,

Q1
~~~~

I was executing the following SQL statement with the table in nologging mode, but it seems the changes are logged (not just the dictionary changes).

Please have a look at the following.

Case1:
========

SQL> insert /*+ append */ into testusr.contactversion nologging select * from perm_owner.contactversion ;

297585 rows created.


Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=781)
1 0 LOAD AS SELECT
2 1 TABLE ACCESS (FULL) OF 'CONTACTVERSION' (Cost=1 Card=1 B
ytes=781)





Statistics
----------------------------------------------------------
12975 recursive calls
1798143 db block gets
56538 consistent gets
9882 physical reads
125409912 redo size
1015 bytes sent via SQL*Net to client
874 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
31 sorts (memory)
0 sorts (disk)
297585 rows processed



Case2:
==========

SQL> alter table contactversion nologging;

Table altered.

SQL> insert /*+ append */ into testusr.contactversion select * from perm_owner.contactversion ;

297585 rows created.


Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=781)
1 0 LOAD AS SELECT
2 1 TABLE ACCESS (FULL) OF 'CONTACTVERSION' (Cost=1 Card=1 B
ytes=781)





Statistics
----------------------------------------------------------
11544 recursive calls
1798175 db block gets
55735 consistent gets
10464 physical reads
125449904 redo size
1020 bytes sent via SQL*Net to client
865 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
297585 rows processed

SQL> commit ;

Commit complete.

SQL> truncate table testusr.contactversion ;

Table truncated.

Case3:
==========

SQL> alter table contactversion logging;

Table altered.

SQL> insert into testusr.contactversion select * from perm_owner.contactversion ;

297585 rows created.


Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=781)
1 0 TABLE ACCESS (FULL) OF 'CONTACTVERSION' (Cost=1 Card=1 Byt
es=781)





Statistics
----------------------------------------------------------
11782 recursive calls
1798464 db block gets
56059 consistent gets
9720 physical reads
125413008 redo size
1020 bytes sent via SQL*Net to client
851 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
297585 rows processed



Q2:
~~~~
Why is there no user_context table? Are contexts global to the DB, and what's the reason for it?


Thanks,


and Tom said...

q1) you have indexes on the table. indexes cannot be "appended" to, they must be modified. These modifications must be logged. The redo/undo for the table itself is not generated, the redo/undo for the INDEXES always is.

Consider:

ops$tkyte@ORA8I.WORLD> create table t nologging as select * from all_objects where 1=0;
Table created.

ops$tkyte@ORA8I.WORLD> set autotrace on statistics;

ops$tkyte@ORA8I.WORLD> insert /*+ append */ into t select * from all_objects;
34855 rows created.


Statistics
----------------------------------------------------------
151 recursive calls
116 db block gets
474434 consistent gets
2173 physical reads
10208 redo size

1000 bytes sent via SQL*Net to client
827 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
34855 rows processed

ops$tkyte@ORA8I.WORLD> truncate table t;
Table truncated.

ops$tkyte@ORA8I.WORLD> create index object_name_idx on t(object_name);
Index created.
ops$tkyte@ORA8I.WORLD> insert /*+ append */ into t select * from all_objects;

34856 rows created.


Statistics
----------------------------------------------------------
205 recursive calls
3424 db block gets
474764 consistent gets
2465 physical reads
4963388 redo size

1000 bytes sent via SQL*Net to client
827 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
3 sorts (memory)
1 sorts (disk)
34856 rows processed

ops$tkyte@ORA8I.WORLD>
q2) don't know what you mean by "user_context". if you are referring to a SYS_CONTEXT context -- then that is in session_context (since a context is specific to a SESSION not a user)
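As an aside, a minimal sketch of how this hangs together (my_ctx and my_pkg are illustrative names; you need the CREATE ANY CONTEXT privilege):

create context my_ctx using my_pkg;

create or replace package my_pkg
as
    procedure set_attr( p_val in varchar2 );
end;
/
create or replace package body my_pkg
as
    procedure set_attr( p_val in varchar2 )
    is
    begin
        -- only my_pkg is allowed to write into the my_ctx namespace
        dbms_session.set_context( 'my_ctx', 'attr', p_val );
    end;
end;
/

exec my_pkg.set_attr( 'hello' )

select namespace, attribute, value from session_context;

Until set_attr is called in your session, nothing shows up in session_context -- the context is registered database-wide, but populated per session.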





Comments

I will try disabling the index and do the test, but in the meantime...

Prince, March 28, 2002 - 10:45 pm UTC

Tom,

Thanks for the response.

Sorry if I was not clear on Q2:

What I meant there was, the dictionary view like ???_CONTEXT.

ex.

SQL> desc dba_context
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 NAMESPACE                                             NOT NULL VARCHAR2(30)
 SCHEMA                                                NOT NULL VARCHAR2(30)
 PACKAGE                                               NOT NULL VARCHAR2(30)

SQL> desc user_context
ERROR:
ORA-04043: object user_context does not exist


Also, in Q1:


SQL>  alter table contactversion nologging;

Table altered.

SQL> insert /*+ append */ into testusr.contactversion  select * from perm_owner.contactversion ;

125449904  redo size

SQL>  alter table contactversion logging;

Table altered.

SQL> insert into testusr.contactversion  select * from perm_owner.contactversion ;

125413008  redo size

SQL> select sum(bytes) from dba_extents where segment_name = 'CONTACTVERSION' and owner = 'PERM_OWNER' ;

SUM(BYTES)
----------
  43515904

Shouldn't the redo size be much less, even though the index entries are logged? From the above two there doesn't seem to be much difference at all (in fact, the one with nologging generated more redo).

Also, I was testing this in a test DB which is not in "ARCHIVELOG" mode. Does this have anything to do with the disparity?

Thanks,   

Tom Kyte
March 29, 2002 - 10:21 am UTC

q2) there is no "user_context" because a context (like a rollback segment, or directory object or a public synonym) isn't really "owned" by anyone.

The package/procedure used to set it is owned by someone but the context itself is "public" in nature. Hence no "user_context"

q1) no one said that append generated NO redo. The redo you see generated in the non-indexed example is used to protect the data dictionary. We allocated space above the HWM for the table. If the system failed in the middle of that insert /*+ append */ -- we need to roll back that data dictionary update. There will be redo generated but it is to protect the data dictionary -- not your newly inserted data (we just undo the space allocation if it fails, your data "disappears" as if by magic)

As for the indexed example -- you have to remember that an index is a complex (very very complex) data structure. As you add each row, blocks are perhaps splitting, new leaf nodes are added, data is moved from block to block and so on. Every 4 byte key you add might need to move many kilobytes of data (generating REDO to redo that operation). Index maintenance is expensive. There is never really a one to one correspondence between bytes inserted and redo generated when it comes to an index.
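(If you want to measure this yourself outside of autotrace, a sketch -- it assumes you can query the v$ views -- is to snapshot your session's redo statistic before and after the statement:

select b.value redo_size
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

Run it once before and once after the insert; the delta is the redo that statement generated -- the same number autotrace reports.)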

WHY

A reader, March 29, 2002 - 9:41 am UTC

Hi,

Why "10208 redo size" when there is no index? Is it
suppose to be 0 redo size?

What is the reason, please clarify?

Thanks


Tom Kyte
March 29, 2002 - 10:26 am UTC

no one said that append generated NO redo. The redo you see generated in
the non-indexed example is used to protect the data dictionary. We allocated
space above the HWM for the table. If the system failed in the middle of that
insert /*+ append */ -- we need to roll back that data dictionary update. There
will be redo generated but it is to protect the data dictionary -- not your
newly inserted data (we just undo the space allocation if it fails, your data
"disappears" as if by magic)



But, why more redo log when "nologging" compared to logging?

Prince., March 29, 2002 - 2:22 pm UTC

I understand completely that the insert operation may generate redo for the dictionary operations. But my question is: shouldn't the redo generated be comparatively less when the table is in "nologging" mode?

Thanks,


Tom Kyte
March 29, 2002 - 2:39 pm UTC

not necessarily. MOST of the redo generated by an insert is due to its indexes. The index maintenance is done very differently for /*+ append */ inserts than for non-append inserts. This difference accounts for what you are seeing here.

/*+ append */ into a heavily indexed (or just indexed for that matter ) table is mostly a waste of keystrokes. It's best for a bulk load into a table with no indexes.



How about PK

Rob Balter, March 29, 2002 - 8:58 pm UTC

Tom:
You say
/*+ append */ into a heavily indexed (or just indexed for that matter ) table is mostly a waste of keystrokes. It's best for a bulk load into a table with no indexes.

Does this apply if the only index is a Primary Key or
unique index, or is it better to drop the PK, bulk
load and then recreate the index




Tom Kyte
March 30, 2002 - 10:15 am UTC

an index is an index is an index -- be it primary key or otherwise. 

Consider:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t ( x int );

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on statistics
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select rownum from all_objects;

17183 rows created.


Statistics
----------------------------------------------------------
         11  recursive calls
          7  db block gets
      80334  consistent gets
          0  physical reads
        224  redo size
        996  bytes sent via SQL*Net to client
        839  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      17183  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> truncate table t;

Table truncated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t select rownum from all_objects;

17183 rows created.


Statistics
----------------------------------------------------------
          7  recursive calls
        220  db block gets
      80371  consistent gets
          0  physical reads
     257208  redo size
       1012  bytes sent via SQL*Net to client
        825  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      17183  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> truncate table t;

Table truncated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t add constraint t_pk primary key(x);

Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select rownum from all_objects;

17184 rows created.


Statistics
----------------------------------------------------------
        162  recursive calls
        772  db block gets
      80425  consistent gets
          0  physical reads
     920844  redo size
        995  bytes sent via SQL*Net to client
        839  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
      17184  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> truncate table t;

Table truncated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t select rownum from all_objects;

17184 rows created.


Statistics
----------------------------------------------------------
          7  recursive calls
        995  db block gets
      80459  consistent gets
          1  physical reads
    1179008  redo size
       1013  bytes sent via SQL*Net to client
        825  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      17184  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM>


it would be better to disable the index, do the insert and then rebuild it unrecoverable.
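In SQL, that recipe looks something like this (a sketch -- t and object_name_idx are illustrative, and note the unique-index caveat discussed further down this page):

alter index object_name_idx unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into t select * from all_objects;
commit;

alter index object_name_idx rebuild nologging;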

 

Mahomed Suria, July 29, 2002 - 3:23 am UTC

This was very helpful. Sorry, but I have tried to get a question in for a long time and failed. In desperation I have resorted to getting my question in this way.

I have a problem: my database has suddenly started producing too many redo log switches -- from 10 switches per day to 300 a day. How do I find out what has caused this, i.e. which SQL(s) and user(s)?


Mahomed

Tom Kyte
July 29, 2002 - 7:24 am UTC

A reader, July 29, 2002 - 12:07 pm UTC

Hi tom,

In your response below i saw sort operation.

ops$tkyte@ORA8I.WORLD> insert /*+ append */ into t select * from all_objects;
34855 rows created.


Statistics
----------------------------------------------------------
151 recursive calls
116 db block gets
474434 consistent gets
2173 physical reads
10208 redo size
1000 bytes sent via SQL*Net to client
827 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
3 sorts (memory) <----------------
0 sorts (disk)
34855 rows processed

We are just inserting rows, so how come sorts occur here?

Thanks,

Tom Kyte
July 29, 2002 - 1:57 pm UTC

select * from all_objects;

that query required some sorts.

DMT or LMT

Rob, July 29, 2002 - 3:25 pm UTC

You said:

q1) no one said that append generated NO redo. The redo you see generated in the non-indexed example is used to protect the data dictionary. We allocated space above the HWM for the table. If the system failed in the middle of that insert /*+ append */ -- we need to roll back that data dictionary update. There will be redo generated but it is to protect the data dictionary -- not your newly inserted data (we just undo the space allocation if it fails, your data "disappears" as if by magic)

Tom:

Are you assuming a dictionary managed tablespace when you talk about the redo generated in the above insert? Would your answer be different if it was an LMT?


Tom Kyte
July 29, 2002 - 4:35 pm UTC

Nope, you would still get some redo generated, just probably less.

Here users is an LMT, system is DMT:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t tablespace users as select * from all_objects where 1=0;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on statistics

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

22829 rows created.


Statistics
----------------------------------------------------------
         91  recursive calls
         67  db block gets
      74678  consistent gets
          0  physical reads
       5960  redo size
        996  bytes sent via SQL*Net to client
        827  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      22829  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter session set sql_trace=false;

Session altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> drop table t;

Table dropped.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t tablespace system as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

22829 rows created.


Statistics
----------------------------------------------------------
       2740  recursive calls
        382  db block gets
      76122  consistent gets
          0  physical reads
      48708  redo size
        997  bytes sent via SQL*Net to client
        827  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      22829  rows processed


I traced it and found:

update tsq$ set blocks=:3,maxblocks=:4,grantor#=:5,priv1=:6,priv2=:7,priv3=:8 where ts#=:1 and user#=:2

for the LMT (more for the DMT). TSQ$ -- tablespace quotas.  Still some dictionary stuff happening, albeit a lot less.

Thanks a lot!!!

Anny, October 25, 2002 - 11:54 am UTC

I had the same bottleneck!!!
And I overcame it!

Could you please give an example of disable pk, insert, rebuild nologging?

Ramakrishna, January 10, 2003 - 5:31 am UTC

Dear Tom,
I have the following situation - a large table with a primary key index. My aim is to minimize logging for this table (without resorting to partitioning if possible) during daily data loads. I want to:
a. Disable the pkey index
b. Insert /*+ APPEND */ data into it
c. Rebuild the pkey index nologging
However, I am unable to do the insert once the pkey constraint is disabled. Could you please explain how this can be done?

thanks
ramakrishna



Tom Kyte
January 10, 2003 - 7:31 am UTC

alter session set skip_unusable_indexes=true;
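Note -- as comes up further down this page -- skip_unusable_indexes does NOT skip UNIQUE indexes; a direct path insert with the unique index unusable fails with ORA-26026. For a primary key, a commonly used alternative (a sketch; big_t, big_t_pk and stage_t are illustrative names) is to disable the constraint, load, and re-enable it against a nologging index:

alter table big_t disable constraint big_t_pk;    -- drops the index if the constraint created it

insert /*+ append */ into big_t select * from stage_t;
commit;

create unique index big_t_pk_idx on big_t(id) nologging;
alter table big_t enable constraint big_t_pk;     -- picks up the existing unique index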


And in this case? Is it a waste of keystrokes here?

A reader, January 10, 2003 - 10:12 am UTC

INSERT /*+ append */ INTO t1
SELECT * FROM t2
WHERE NOT EXISTS ( SELECT 'X'
FROM t1
WHERE t1.f1 = t2.f1);
commit;

If both t1 and t2 have a non-unique index on f1.

Tom Kyte
January 10, 2003 - 10:19 am UTC

no?

large insert

Vikas Sharma, February 25, 2003 - 6:01 pm UTC

Hi Tom,

I have the following scenario:
User a has a table temp_a.
User b has a table temp_b.

When connected to user a, I want to load data from a.temp_a into b.temp_b.

I want to use
insert into b.temp_b(i,j,k) select i,j,k from a.temp_a;

This insert statement of mine is dynamically created; I don't know how many columns will be selected by the user, so I have to use execute immediate.

Now I should first disable the primary key of temp_b, then
disable the indexes, if any, on temp_b, and create a statement like:

insert /*+ append */ into b.temp_b(i,j,k) select i,j,k from a.temp_a;

and finally execute the statement for better performance by generating less redo.

What if the insert fails in between?

2. What will the following

alter session set skip_unusable_indexes=true;

do? Do I need to set this before starting my insert?

Regards,
Vikas Sharma



Tom Kyte
February 25, 2003 - 8:51 pm UTC

1) the premise that disabling the indexes will create better run times may be flawed.

Are you adding 1,000,000 rows to a 50,000,000 row table? Just insert.
Are you adding 1,000,000 rows to an empty table? disable/rebuild
Are you adding N rows to M rows ? depends, might disable might not. use common sense and testing to see what is best.


what if the insert fails indeed? so?

2) you need to DROP unique indexes; they are not skipped. But yes, you would need that.

Saminathan

Saminathan, May 06, 2003 - 8:24 am UTC

Dear Tom,

I'm getting an UNEXPECTED result in both 8i and 9iR2.

Could you please tell me why insert /*+APPEND*/ generates more redo.

I'm pretty sure that there is NO index involved in my test table (redolog_test). Also, the user is not SYS or SYSTEM.
The DB is running in ARCHIVELOG mode.

SQL> show user
USER is "MUT"

SQL> drop table redolog_test;
Table dropped.

SQL> create table redolog_test pctfree 99 pctused 1 as select * from all_objects where 1=2;
Table created.

SQL> insert into redolog_test select * from all_objects;
19412 rows created.

Statistics
----------------------------------------------------------
       2003  recursive calls
      89992  db block gets
     161720  consistent gets
          0  physical reads
   10661144  redo size  <========
        438  bytes sent via SQL*Net to client
        433  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      19412  rows processed

SQL> truncate table redolog_test;
Table truncated.

SQL> insert /*+APPEND*/ into redolog_test select * from all_objects;
19412 rows created.

Statistics
----------------------------------------------------------
       2012  recursive calls
        365  db block gets
     142229  consistent gets
          0  physical reads
  159534376  redo size  <=======(15 Times higher)
        430  bytes sent via SQL*Net to client
        445  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      19412  rows processed
SQL>

What could be wrong?  

Tom Kyte
May 06, 2003 - 9:03 am UTC

Interesting.  INSERT /*+ APPEND */ into a table in an archivelog mode database that is not marked "nologging" appears to do full block logging instead of changed bytes only.


You have an absurdly high pctfree.  This causes every row to go into a separate block.  Normally, insert /*+ append */ (built for speed) fills a block up -- logs it.  You are causing each and every row to fill up a block and instead of taking the time to parse out just the changed bytes -- it logs the block.

If you alter the table to be NOLOGGING (then it would actually make sense to use append) OR if you set pctfree to a reasonable value, this does not occur.  Consider:


ops$tkyte@ORA9I> create table t pctfree 99 pctused 1 as select * from all_objects where 1=2;
Table created.

ops$tkyte@ORA9I> set autotrace on statistics
ops$tkyte@ORA9I> insert /*+ append */ into t select * from all_objects where rownum <= 19412;

19412 rows created.


Statistics
----------------------------------------------------------
  162,794,128  redo size
       19,412  rows processed

ops$tkyte@ORA9I> set autotrace off

ops$tkyte@ORA9I> analyze table t compute statistics for table;
Table analyzed.

ops$tkyte@ORA9I> select blocks from user_tables where table_name = 'T';

    BLOCKS
----------
     19412

one row per block, 8k blocks,

ops$tkyte@ORA9I> select 19412*8*1024 from dual;

19412*8*1024
------------
 159,023,104

we logged full block images and since you had pctfree 99, most of the block was in fact *empty*...


ops$tkyte@ORA9I> truncate table t;
Table truncated.

ops$tkyte@ORA9I> set autotrace on statistics
ops$tkyte@ORA9I> insert into t select * from all_objects where rownum <= 19412;
19412 rows created.


Statistics
----------------------------------------------------------
 11,116,860  redo size
     19,412  rows processed

ops$tkyte@ORA9I> set autotrace off
ops$tkyte@ORA9I> analyze table t compute statistics for table;
Table analyzed.

ops$tkyte@ORA9I> select blocks from user_tables where table_name = 'T';

    BLOCKS
----------
     19416

same number of blocks -- however, "changed bytes" logging -- not full block

ops$tkyte@ORA9I> truncate table t;
Table truncated.

ops$tkyte@ORA9I> alter table t nologging;
Table altered.

ops$tkyte@ORA9I> set autotrace on statistics
ops$tkyte@ORA9I> insert /*+ append */ into t select * from all_objects where rownum <= 19412;

19412 rows created.


Statistics
----------------------------------------------------------
     447,376  redo size
      19,412  rows processed

ops$tkyte@ORA9I> set autotrace off

ops$tkyte@ORA9I> analyze table t compute statistics for table;
Table analyzed.

ops$tkyte@ORA9I> select blocks from user_tables where table_name = 'T';

    BLOCKS
----------
     19412

ops$tkyte@ORA9I> set autotrace off

so, if you use APPEND in conjunction with NOLOGGING -- very little redo -- or if you use a sane pctfree and actually fill up the blocks:

ops$tkyte@ORA9I> truncate table t;

Table truncated.

ops$tkyte@ORA9I> alter table t logging;

Table altered.

ops$tkyte@ORA9I> alter table t pctfree 10;

Table altered.

ops$tkyte@ORA9I>
ops$tkyte@ORA9I> set autotrace on statistics
ops$tkyte@ORA9I> insert /*+ append */ into t select * from all_objects where rownum <= 19412;

19412 rows created.


Statistics
----------------------------------------------------------
        301  recursive calls
         55  db block gets
      93649  consistent gets
        164  physical reads
    2212388  redo size
        779  bytes sent via SQL*Net to client
        844  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
      19412  rows processed

ops$tkyte@ORA9I> set autotrace off
ops$tkyte@ORA9I> analyze table t compute statistics for table;

Table analyzed.

ops$tkyte@ORA9I> select blocks from user_tables where table_name = 'T';

    BLOCKS
----------
       267


it generates about the same amount of redo as a "normal" insert would:

ops$tkyte@ORA9I> truncate table t;

Table truncated.

ops$tkyte@ORA9I> set autotrace on statistics;
ops$tkyte@ORA9I> insert into t select * from all_objects where rownum <= 19412;

19412 rows created.


Statistics
----------------------------------------------------------
         91  recursive calls
       1549  db block gets
      93947  consistent gets
          0  physical reads
    2166700  redo size
        794  bytes sent via SQL*Net to client
        830  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      19412  rows processed


So here -- it is because

a) you have a very very high pctfree leaving almost the entire block empty
b) the table is in LOGGING mode


If you want insert /*+ append */ to work fast you will set the table NOLOGGING anyhow.  So, this is something of "academic interest" since I cannot fathom the business case whereby you would leave most of the block empty OR use append regularly on a LOGGING table.





 

Dear Guru, Thank you so much for the explanation with an example

Saminathan, May 06, 2003 - 9:42 am UTC

Sorry for bothering you again...

One more question,

What is the difference between <CASE1> and
<CASE2> below w.r.t. redo log? What is the logic behind CASE1
generating a huge volume of REDO?

Kindly explain why <CASE1> generates
~30MB of redo logs and <CASE2> generates <= ~5MB of redo log.

<CASE1>
SQL> create table redolog_test1 pctfree 99 pctused 1
as select * from all_objects;
Table created.
</CASE1>
5MB redo logs (log switch happened 6 times)

<CASE2>
SQL> create table redolog_test2 pctfree 99 pctused 1
as select * from all_objects where 1=2;
Table created.

SQL> insert into redolog_test2 select * from
all_objects where rownum <=10000;
2741 rows created.
</CASE2>
No log switch at all.

Since CASE1 generates more REDO (basically more I/O), it would be better to avoid CASE1. Am I right?
Need your comment, Tom.

Tom Kyte
May 06, 2003 - 10:00 am UTC

I told you --


when you do an insert /*+ append */ AND the database is in archivelog mode AND the table is LOGGING

THEN

full blocks are written to the redo log.
In most every case -- this is a performance enhancing feature -- your mixture of an excessively high pctfree combined with logging mode defeated that. In your extreme case, the blocks were almost entirely EMPTY due to the excessively high pctfree setting.


when you do an insert and the database is in archivelog mode

THEN

changed bytes are painstakingly computed and written to the redo log
-- so the 90% or more of the block that was not used in your case was not written to the redo log.



The bottom line to using APPEND

a) the table should be NOLOGGING for it to make total sense.


done, period -- as demonstrated above -- either set the table NOLOGGING or use a REASONABLE pctfree (as demonstrated)
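A quick way to check which case you are in before a big load (a sketch; T is the illustrative table name):

select log_mode from v$database;

select logging from user_tables where table_name = 'T';

archivelog + LOGGING table + /*+ append */ = full block images into the redo log; archivelog + NOLOGGING table + /*+ append */ = almost no redo for the table data.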





A reader, May 19, 2003 - 6:54 pm UTC

It seems that /*+ APPEND */ only works for INSERT SELECT not for just INSERT.

FOr example,

INSERT /*+ APPEND */ INTO a VALUES (:1, :2);

will still generate redo.

Is it true?

thanks

Tom Kyte
May 20, 2003 - 7:31 am UTC

as the documentation says -- insert append is for BULK inserts done with insert select.

It would be an utterly terrible idea to direct path insert a single row. think of what a direct path insert does and what would happen.

you would

a) get one row per block
b) never reuse any free space

all to save a couple of bytes of redo. it would not be even remotely close to "worth it"

A reader, May 27, 2003 - 6:30 am UTC


Vipin, June 05, 2003 - 9:57 pm UTC

Hi Tom,

I think the second question was: how will anyone know what CONTEXTS are created in my database? With sys_context and session_context, I think we will be able to know only which contexts are active.

Please tell how we will get this info.



Tom Kyte
June 06, 2003 - 7:01 am UTC

you mean like ALL_CONTEXT? did you look at the data dictionary tables? That is where we register pretty much everything.

Vipin, June 06, 2003 - 10:34 am UTC

Hi Tom,

What I mean is: suppose I create contexts using the following statements:

Create context context1 using package1;
Create context context2 using package2;

Now where can I go and see that I have two contexts which can be used?

Tom Kyte
June 06, 2003 - 10:49 am UTC

dba_context = all contexts that can be used.
all_context = all contexts in use in your session.


if you, as the developer, don't know the context -- you have a bigger problem at hand. You have to create the package after all to populate it.

as an end user, you don't have a need to know the universe of contexts out there, it is simply "not relevant".

more redo?

Reader, June 06, 2003 - 12:17 pm UTC

Please see the <--------- comments below. Why does a direct insert with nologging into my table, which is defined as logging, produce a lot of redo?

SQL> create table t ( x int ) tablespace users; <------------- LOGGING table

Table created.

SQL> insert /*+ append */ into t select rownum from all_objects;

29106 rows created.


Statistics
----------------------------------------------------------
        135  recursive calls
        158  db block gets
     133565  consistent gets
          0  physical reads
     383520  redo size <--------------------------- A lot of REDO - no surprise
        612  bytes sent via SQL*Net to client
        554  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      29106  rows processed


SQL> alter table t nologging; <----------------------- NOLOGGING 

Table altered.

SQL> insert /*+ append */ into t select rownum from all_objects;

29106 rows created.


Statistics
----------------------------------------------------------
        285  recursive calls
        164  db block gets
     133582  consistent gets
          2  physical reads
      11952  redo size <------------------------ much reduced REDO - no surprise
        612  bytes sent via SQL*Net to client
        554  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
      29106  rows processed


SQL> truncate table t;

Table truncated.

SQL> alter table t logging; <------------ again LOGGING table

Table altered.

SQL> insert /*+ append */ into t nologging select rownum from all_objects;<---- NOLOGGING

29106 rows created.


Statistics
----------------------------------------------------------
        285  recursive calls
        164  db block gets
     133582  consistent gets
          2  physical reads
     383840  redo size    <------------------------ surprise- why more redo as 1st case
        612  bytes sent via SQL*Net to client
        564  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
      29106  rows processed 

Tom Kyte
June 06, 2003 - 12:57 pm UTC

why surprise? the table is logging the inserts as directed?

the only thing "surprising" is the surprise you are experiencing....

BUT

Reader, June 06, 2003 - 1:04 pm UTC

I specified nologging in my direct insert as below.

insert /*+ append */ into t nologging select rownum from all_objects

Tom Kyte
June 06, 2003 - 1:27 pm UTC

Oh, sorry -- didn't notice that nologging -- but only because it is not relevant... It is not any different than:


insert /*+ append */ into t foobar_this_is_just_an_alias select rownum from all_objects;


that "nologging" word was just a correlation name, an alias for T in the query, not a keyword.

why the syntax has nologging

Reader, June 06, 2003 - 1:55 pm UTC

I think I am confused by the syntax.

INSERT /*+APPEND */ INTO [SCHEMA. ] table
[ [NO]LOGGING ]
sub-query;

also, the Oracle doc says: if the NOLOGGING option is used, changes to data are not recorded in the redo log buffer.

Am I missing something here? You are right: I tried specifying an alias in place of nologging and the behavior was the same -- it took it as an alias. So, what is this NOLOGGING clause in the syntax, and how should I use it properly to not generate redo, as suggested by the doc? Thanks.

Tom Kyte
June 06, 2003 - 2:54 pm UTC

where do you see that?

give me a pointer into otn.oracle.com please.

I looked at the 817 and 9iR2 SQL Reference and didn't see that.


the docs I'm reading say "if the TABLE is in nologging mode or the database is in NOARCHIVELOG mode"

in OU course book

Reader, June 06, 2003 - 3:15 pm UTC

Tom, please look in the Oracle University 9i DBA Fundamentals I course book on pages 19-4 and 19-5 (chapter 19, Loading data into a database). I consider these course books also as Oracle docs! I am glad I am checking with you, because that syntax and explanation in the course book do not reflect what we/I see from testing.

Tom Kyte
June 06, 2003 - 4:02 pm UTC

I don't consider them as docs and haven't really ever seen them.

It is *wrong*, dead wrong. It conflicts with the real documentation. You can use the contact links on education.oracle.com to ask them to update it.

alter index idx nologging

A reader, June 19, 2003 - 11:57 am UTC

hi

if we set the index to nologging, can we avoid redo in an insert append operation?

Tom Kyte
June 19, 2003 - 1:19 pm UTC

no

All this nologging for performance..what happened to recoverability?

Chuan Satcha, June 20, 2003 - 1:27 am UTC

Tom

This nologging seems to be a great performance tool when done right. However, since not all redo is generated, while other logged transactions occur against other tables, will the database be recoverable even in archivelog mode?

Tom Kyte
June 20, 2003 - 4:47 pm UTC

the "database" -- yes, always.

the "objects you performed non-logged operations on", no NOT until you backup the affected datafiles.

Question

PRS, June 20, 2003 - 2:34 pm UTC

Hi Tom,
Can I use /*+ append */ for the new Oracle9i feature insert all (multitable insert)? Do I need to put all the tables into NOLOGGING mode?

Thanks,
PRS

Tom Kyte
June 20, 2003 - 5:46 pm UTC

Yes and 

Yes if you are in archive log mode
No if you are in noarchive log mode (which would only be true on your prototype/test/scratch database)

so, Yes and Yes

ops$tkyte@ORA920> create table t ( x varchar2(50) );

Table created.

ops$tkyte@ORA920> alter table t nologging;

Table altered.

ops$tkyte@ORA920>
ops$tkyte@ORA920> set autotrace on statistics
ops$tkyte@ORA920>
ops$tkyte@ORA920> insert all
  2  into t (x) values (object_id)
  3  into t (x) values (object_name)
  4  select * from all_objects;

63156 rows created.


Statistics
----------------------------------------------------------
        495  recursive calls
       3004  db block gets
     146505  consistent gets
          0  physical reads
    1667532  redo size
        772  bytes sent via SQL*Net to client
        874  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
      63156  rows processed

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert /*+ APPEND */ all
  2  into t (x) values (object_id)
  3  into t (x) values (object_name)
  4  select * from all_objects;

63156 rows created.


Statistics
----------------------------------------------------------
         28  recursive calls
         33  db block gets
     146023  consistent gets
          0  physical reads
       2564  redo size
        772  bytes sent via SQL*Net to client
        888  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      63156  rows processed

ops$tkyte@ORA920>
ops$tkyte@ORA920> set autotrace off
ops$tkyte@ORA920> select * from t;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

 

Hmm....insert all....I've never seen that before.....

Mark J Bobak, June 20, 2003 - 6:03 pm UTC

It's interesting how the conventional insert interleaved
rows in blocks of 255, but for direct path, the interleave
factor was 1.

-Mark

Insert /*+ Append */

Bharath, June 23, 2003 - 1:11 pm UTC

Can a direct insert be used in a distributed transaction? We have a Java procedure (i.e. a simple Java class) which uses JDBC to populate the data from the distributed database using a DBLINK.
E.g.

Insert /*+ Append */ into tableA select * from remotetableA@dblink

Insert /*+ Append */ into tableB select * from remotetableB@dblink

When executing the second statement I am getting an Oracle error, or if I do any select statement from the local database I am getting the same error. Is this a limitation?


Tom Kyte
June 23, 2003 - 1:51 pm UTC

well, if you are getting an ora-12840 (guessing, you opted not to share today), it is fairly self explanatory:

scott@ORA817DEV> !oerr ora 12840
12840, 00000, "cannot access a remote table after parallel/insert direct load txn"
// *Cause: Within a transaction, an attempt was made to perform distributed
// access after a PDML or insert direct statement had been issued.
// *Action: Commit/rollback the PDML transaction first, and then perform
// the distributed access, or perform the distributed access before the
// first PDML statement in the transaction.


So, can a direct path insert be used in a distributed transaction?

scott@ORA817DEV> insert into emp@ora920.us.oracle.com
2 select -empno, ename, job, mgr, hiredate, sal, comm, deptno
3 from t1;

14 rows created.

scott@ORA817DEV> insert /*+ append */ into t1 select * from emp@ora920.us.oracle.com;

28 rows created.

scott@ORA817DEV> commit;

Commit complete.


Yes


But, there is the limitation that only one direct path load may take PLACE when using a distributed query in a single transaction.
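So, following the action text of the error message, the pattern for loading several tables over a database link is to end each direct-load transaction before the next distributed access (a sketch, reusing the table names from the question):

insert /*+ append */ into tableA select * from remotetableA@dblink;
commit;   -- end the direct-load transaction before touching a remote table again

insert /*+ append */ into tableB select * from remotetableB@dblink;
commit;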

Thanks

A reader, June 23, 2003 - 2:49 pm UTC


question

PRS, June 24, 2003 - 11:00 am UTC

Tom,
Once I put the table into nologging mode, do the
data load using insert all with the APPEND hint (direct path load), then rebuild the indexes in nologging mode, then
I turn the table and all its indexes back to logging mode.
Then I take a cold backup, as there was no archivelog.

So my question is: will it start generating archive log for the tables and indexes after turning back to logging mode?
Is this assumption right?

Thanks,
PRS



Tom Kyte
June 25, 2003 - 9:18 am UTC

nologging only applies to direct path type operations.

all insert/update/delete after that would be logged regardless of the logging mode.
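That is, on the very same NOLOGGING table, only the direct path operation skips redo (a sketch; measure 'redo size' around each statement as shown earlier on this page):

alter table t nologging;

insert /*+ append */ into t select * from all_objects;   -- direct path: (almost) no redo for the table data
commit;

insert into t select * from all_objects;                  -- conventional path: fully logged, NOLOGGING or not
commit;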

Thanks a lot

A reader, June 25, 2003 - 9:44 am UTC


Question

PRS, June 25, 2003 - 11:46 am UTC

Hi Tom,
Why is the below INSERT ... SELECT statement failing with the APPEND hint with ORA-01401: inserted value too large for column? As soon as I remove the APPEND hint, it does insert rows. We are on Oracle 9.2.0.3.0.

insert /*+ APPEND */ all
into PS_RSF_OPP_NOTE
(OPPORTUNITY_ID -- VARCHAR2 (15) NOT NULL
,NOTE_SEQ_NBR -- INTEGER NOT NULL
,RC_SUMMARY -- VARCHAR2 (80) NOT NULL
,RC_NOTE_TYPE -- VARCHAR2 (5) NOT NULL
,RC_VISIBILITY -- VARCHAR2 (4) NOT NULL
,ROW_ADDED_DTTM -- DATE
,ROW_ADDED_OPRID -- VARCHAR2 (30) NOT NULL
,ROW_LASTMANT_DTTM -- DATE
,ROW_LASTMANT_OPRID -- VARCHAR2 (30) NOT NULL
,SYNCID -- INTEGER
,SYNCDTTM -- DATE
,DESCRLONG) -- LONG
values (SWOBJECTID -- NUMBER (15,5) OPPORTUNITY_ID VARCHAR2 (15) NOT NULL
,note_seq_num -- NOTE_SEQ_NBR INTEGER NOT NULL
,WSI_SUBJECT -- VARCHAR2 (30) RC_SUMMARY VARCHAR2 (80) NOT NULL
,' ' -- RC_NOTE_TYPE VARCHAR2 (5) NOT NULL
,' ' -- RC_VISIBILITY VARCHAR2 (4) NOT NULL
,SWDATECREATED -- DATE ROW_ADDED_DTTM DATE,
,SWCREATEDBY -- VARCHAR2 (20) ROW_ADDED_OPRID VARCHAR2 (30) NOT NULL
,null -- ROW_LASTMANT_DTTM DATE,
,' ' -- ROW_LASTMANT_OPRID VARCHAR2 (30) NOT NULL
,WSIOPPORTUNITYCOMMENTID -- NUMBER (15,5) SYNCID INTEGER,
,sysdate -- SYNCDTTM DATE,
,WSI_NOTES) -- VARCHAR2 (4000) DESCRLONG LONG
select WSIOPPORTUNITYCOMMENTID -- NUMBER (15,5)
,SWOBJECTID -- NUMBER (15,5)
,SWOBJECTTYPE -- VARCHAR2 (20)
,WSI_SUBJECT -- VARCHAR2 (30)
-- ,WSI_NOTE_SAMPLE -- VARCHAR2 (75)
,WSI_NOTES -- VARCHAR2 (4000)
-- ,SWMASTERVER -- VARCHAR2 (30)
,SWDATECREATED -- DATE
,SWCREATEDBY -- VARCHAR2 (20)
-- ,TIMESTAMP -- VARCHAR2 (16)
,row_number() over (partition by SWOBJECTID order by WSIOPPORTUNITYCOMMENTID) note_seq_num
from WSI_OPPORTUNITY_COMMENT@vant8iqa
where (SWOBJECTID >= 1) and (SWOBJECTID < 100000)
/


Thanks,
PRS


Tom Kyte
June 25, 2003 - 7:24 pm UTC

seems to be that long column...


Here is a simple test case you can "bug" with support if you like:

ops$tkyte@ORA920> create table t ( x long );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t select * from dual@ora920@loopback;

1 row created.

ops$tkyte@ORA920> insert /*+ APPEND */ into t select * from dual@ora920@loopback;
insert /*+ APPEND */ into t select * from dual@ora920@loopback
                                   *
ERROR at line 1:
ORA-01401: inserted value too large for column


ops$tkyte@ORA920>
ops$tkyte@ORA920> drop table t;

Table dropped.


However, you should be using a CLOB -- and if you do:


ops$tkyte@ORA920> create table t ( x clob );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t select * from dual@ora920@loopback;

1 row created.

ops$tkyte@ORA920> insert /*+ APPEND */ into t select * from dual@ora920@loopback;

1 row created. 

ORA-12838

A reader, July 11, 2003 - 4:57 am UTC

Hi

I am testing two insert append SQL statements; one gets ORA-12838 and the other does not. The test cases are as follows:


>INSERT /*+ APPEND */ INTO y select * from dba_objects;

26006 rows created.


Statistics
----------------------------------------------------------
688 recursive calls
458 db block gets
20029 consistent gets
2 physical reads
33876 redo size
604 bytes sent via SQL*Net to client
549 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
26006 rows processed

>r
1* INSERT /*+ APPEND */ INTO y select * from dba_objects
INSERT /*+ APPEND */ INTO y select * from dba_objects
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel




working case (small table)

>INSERT /*+ APPEND */ INTO x select * from dept;

4 rows created.


Statistics
----------------------------------------------------------
0 recursive calls
7 db block gets
4 consistent gets
0 physical reads
1140 redo size
620 bytes sent via SQL*Net to client
542 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
4 rows processed

>r
1* INSERT /*+ APPEND */ INTO x select * from dept

4 rows created.


Statistics
----------------------------------------------------------
0 recursive calls
6 db block gets
4 consistent gets
0 physical reads
1016 redo size
620 bytes sent via SQL*Net to client
542 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
4 rows processed


how so?

Tom Kyte
July 11, 2003 - 10:20 am UTC

all that means is that X is NOT a candidate for a direct path insert, so the append hint was ignored.  I can simulate your findings, but since you gave us an incomplete example from which to start -- I don't know if it is exactly your case.

In a nutshell, the first insert was in fact a direct path, the second was just a normal old fashioned insert.

(does make my job more interesting when I play forensic scientist to try and reassemble what might have happened -- but it would be easier to answer if I knew all of the facts :)

ops$tkyte@ORA817DEV> create table t1 ( x int primary key );

Table created.

ops$tkyte@ORA817DEV> create table t2 ( x int primary key ) organization index;
Table created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> insert /*+ append */ into t1 select rownum from dual;

1 row created.

ops$tkyte@ORA817DEV> select * from t1;
select * from t1
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


ops$tkyte@ORA817DEV> insert /*+ append */ into t2 select rownum from dual;

1 row created.

ops$tkyte@ORA817DEV> select * from t2;

         X
----------
         1

since an IOT is one of the objects you cannot direct path load, the append hint was ignored -- as hints will be when they do not apply

 

direct insert and benchmark

A reader, July 17, 2003 - 7:00 pm UTC

Hi Tom
While creating a benchmark, sometimes you need to insert
lots of data. Is it "safe" to use direct insert in
terms of benchmarking? What I mean is, do I risk getting
misleading results if I use direct insert versus
normal insert, simply because the actual application
uses normal insert?

thanx!


Tom Kyte
July 17, 2003 - 9:10 pm UTC

direct path inserts result in the same data being loaded.

it is NOT a benchmark special tool -- it is a tool to use when appropriate, in the real world.

Why in procedure

Ashok, July 18, 2003 - 11:11 am UTC

>> 12840, 00000, "cannot access a remote table after parallel/insert direct load txn"

I am not getting 12840 when I execute multiple direct path inserts inside a procedure and execute it...

for e.g
Create or replace proc
as
Insert /*+append*/ from tableA@remote
Insert /*+append*/ from tableB@remote
commit;
end;
No problem when I execute the above procedure. But if I execute the individual statements, why am I getting 12840?

Pls clarify.



Tom Kyte
July 19, 2003 - 11:16 am UTC

that is not a stored procedure in oracle -- don't know what you are doing wrong based on that.

Why In procedure

Ashok, July 21, 2003 - 8:50 am UTC

I was trying to do a direct insert (i.e. APPEND + NoLogging) from multiple remote tables. I am getting the Oracle error
12840, 00000, "cannot access a remote table after parallel/insert direct load txn" when I try to execute the statements (i.e. insert /*+APPEND*/ into ...) from JDBC -- i.e. on the second execute of a direct path insert statement. But if I put all the direct path insert statements in a stored proc and execute it, no problem: all the direct path insert statements are successful. Why?

(Note: all the views (v_edgetrc_) are accessed from the remote tables through a DBLINK)

for e.g
procedure TR_PROC_REFRESH_AGENT_PLUS
IS

BEGIN
/* Insert Into tr_ap_deals */
insert/*+APPEND*/ into tr_ap_deals(FNEN, ENTITY, DEAL_ID, DEAL_NAME)
select FNEN, ENTITY, DEAL_ID, DEAL_NAME from v_edgetrc_deals;

/* Insert Into tr_ap_borrowers */
insert /*+APPEND*/ into tr_ap_borrowers(DEAL_ID, BORROWER_ID, BORROWER_NAME, PARENT_BORROWER_IND, MHUCN, MHUCN_SFX)
select VEB.DEAL_ID
, VEB.BORROWER_ID
, UPPER(VEB.BORROWER_NAME)
, VEB.PARENT_BORROWER_IND
, VEB.MHUCN
, VEB.MHUCN_SFX
from v_edgetrc_borrowers VEB
, v_edgetrc_deals VED
where VEB.PARENT_BORROWER_IND = 'Y'
and VEB.DEAL_ID = VED.DEAL_ID;

/* Insert Into tr_ap_investors */
insert /*+APPEND*/ into tr_ap_investors(INVESTOR_ID, INVESTOR_NAME, PARENT_SAME_AS_INVESTOR_IND, PARENT_INVESTOR_ID, MHUCN, MHUCN_SFX)
select INVESTOR_ID, UPPER(INVESTOR_NAME), PARENT_SAME_AS_INVESTOR_IND, PARENT_INVESTOR_ID, MHUCN, MHUCN_SFX from v_edgetrc_investors;


/* Insert Into tr_ap_fac_info */
insert /*+APPEND*/ into tr_ap_fac_info
( FACILITY_ID
, FAC_NAME
, FAC_DESC
, FAC_TYPE
, FAC_TYPE_DESC
, BASE_CURRENCY_CODE
, OPEN_FAC_AMT
, LC_LINE_AMT
, BORROWING_BASE_AMT
, CREDIT_AGRMNT_DATE
, UNDER_AGRMT_DATE
, MATURITY_DATE
, CANCEL_DATE
, ENTRY_DATE
, BILATERAL_IND
, GES_FACID
, CLIENT_ID
, RRB_NAME
, RRB_DESC
, CUR_FAC_AMT
, FAC_OUTS
, NOTE_OUTS
, LC_OUTS
, OTHER_OUTS
)
select VEFS.FACILITY_ID
, VEFS.FAC_NAME
, VEFS.FAC_DESC
, VEFS.FAC_TYPE
, VEFS.FAC_TYPE_DESC
, VEFS.BASE_CURRENCY_CODE
, VEFS.OPEN_FAC_AMT
, VEFS.LC_LINE_AMT
, VEFS.BORROWING_BASE_AMT
, VEFS.CREDIT_AGRMNT_DATE
, VEFS.UNDER_AGRMT_DATE
, VEFS.MATURITY_DATE
, VEFS.CANCEL_DATE
, VEFS.ENTRY_DATE
, VEFS.BILATERAL_IND
, VEFS.GES_FACID
, VEFS.CLIENT_ID
, VEFC.RRB_NAME
, VEFC.RRB_DESC
, VEFC.CUR_FAC_AMT
, VEFO.FAC_OUTS
, VEFO.NOTE_OUTS
, VEFO.LC_OUTS
, VEFO.OTHER_OUTS
from v_edgetrc_fac_select VEFS
, v_edgetrc_fac_calc VEFC
, v_edgetrc_fac_outs VEFO
where VEFS.FACILITY_ID = VEFC.FACILITY_ID (+)
and VEFS.FACILITY_ID = VEFO.FACILITY_ID (+);


/* Insert Into tr_ap_ishare_open */
insert /*+APPEND*/ into tr_ap_ishare_open(FACILITY_ID, INVESTOR_ID, INV_SHARE, SHARE_PRO_RATA)
select VEIO.FACILITY_ID
, VEIO.INVESTOR_ID
, VEIO.INV_SHARE
, VEIO.SHARE_PRO_RATA
from v_edgetrc_ishare_open VEIO
, tr_ap_fac_info tafi
where VEIO.FACILITY_ID = TAFI.FACILITY_ID;
END TR_PROC_REFRESH_AGENT_PLUS;



Tom Kyte
July 21, 2003 - 3:17 pm UTC

I'm going to guess - you are running a really old version of Oracle and if you set sql_trace=true, and looked at the tkprof file, you would find that the FIRST character in the /*+APPEND*/ hint is missing so it is really /*+PPEND*/ - hence, it is NOT doing a direct path insert.

there was a bug, a long time ago, whereby the first character in a hint was "gone", so you needed to /*+ APPEND */ (add a space)

The problem however is intractable here, you CANNOT do those multiple direct path inserts, so "don't"

Why in a procedure

Ashok, July 21, 2003 - 4:49 pm UTC

Thanks!


Problem with insert /*+append*/

Sridhar, October 09, 2003 - 4:01 pm UTC

Tom,
I have a problem while inserting.
I have a unique index on 3 columns together.
ALTER index idx_U1 unusable;
ALTER SESSION SET skip_unusable_indexes=true;
When I do an insert /*+ append */ it says
ORA-26026: unique index idx_u1 initially in unusable state.
There are many indexes on this table which I am able to mark unusable and still do the insert, but when I mark this one unusable it throws an error. The only difference is that this index is a unique index and the rest are nonunique indexes.
Thanks,
Sridhar

Tom Kyte
October 09, 2003 - 6:59 pm UTC

Question regarding Direct load

Suhail, December 17, 2003 - 2:23 pm UTC

I have a customer table which has millions of rows. I receive rows in a text file and, using the UTL_FILE package, I extract and load the data first into a staging table, staging_customer. This table is a staging table in nature, i.e. after every 30 days we delete the rows from the table. From this table we load data into another permanent table, and we use INSERT INTO customer ( field1, field2, ... ) with a VALUES clause to insert the data, i.e. we load row by row (which is synonymous with slow by slow).

Is there a better way to load data into this table? Should I create this table with the nologging option and use the direct load approach, i.e. insert /*+ append */ into customer select * from staging_customer where fileid=p_fileid
or
should I enable parallel DML, i.e.

execute immediate 'ALTER SESSION ENABLE PARALLEL DML';
INSERT INTO customer SELECT * FROM staging_customer where fileid=p_fileid;

Is there any problem in using one over the other? Does the nologging option mean that, in case of some database failure, recovery is not possible?

Please explain.

Thank you






Tom Kyte
December 18, 2003 - 8:49 am UTC

I would

a) skip utl_file
b) skip stage table
c) use an external table
d) with insert /*+ APPEND */
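A skeleton of that approach (a sketch -- the directory path, file name and columns are illustrative):

create directory load_dir as '/u01/app/loads';

create table customer_ext
( last_name  varchar2(20),
  first_name varchar2(20)
)
organization external
( type oracle_loader
  default directory load_dir
  access parameters
  ( records delimited by newline
    fields terminated by ',' )
  location ( 'customers.txt' )
);

insert /*+ append */ into customer select * from customer_ext;

The external table replaces both the utl_file read and the staging table -- the flat file effectively *is* the staging table.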




I agree with you

Suhail, December 18, 2003 - 9:21 am UTC

Tom,
I agree with you that I should use external tables, but the client is not ready, and we consultants have to do our job, so I am using UTL_FILE and staging tables.

So if I understand right, your recommendation is that my main table (not staging) should be created with the nologging option and I will use the insert /*+ append */ option?

Thanks

Tom Kyte
December 18, 2003 - 11:27 am UTC

tell them "hey, if you want this to go faster, we need to change the process, else it runs at this speed"

then, see what they say.

UTL_FILE will be the choke point in this processing. It is the low hanging fruit here.

I tried

Suhail, December 18, 2003 - 11:55 am UTC

Tom,

Thanks for your help. I have one more question: how to write a multi-table insert in my situation, i.e. I am reading data from the staging table and I have to insert data into multiple tables which are joined by a sequence id.

Following are my tables

1)Stage_cust table
SSD_SEQ_ID
FILE_SEQID
LAST_NAME
FIRST_NAME
MID_INITIAL
GENDER
DATE_OF_BIRTH
ADDRESS1
ADDRESS2
CITY
STATE
ZIP_CODE
PHONE_cell
PHONE_home
PHOME_office
)
My other two main tables are

Master table Customer
2) CUST_MASTER
(CUST_ID -- a sequence id -PK
LAST_NAME
FIRST_NAME
MID_INITIAL
GENDER
DATE_OF_BIRTH
ADDRESS1
ADDRESS2
CITY
STATE
ZIP_CODE
)
Detail table
3)cust_phones
(ph_id a sequence id PK
cust_id
phone
phone_type
)

I have a database trigger which handles automatic generation of the sequence id when data is loaded into these tables. I know there is a way to do a multi-insert using INSERT ALL. Could I do it in my case? Could you please give me the code? I will only add a record to the cust_phones table if
phone_cell, phone_home etc. are not null. How am I going to write all this in one INSERT ALL and still use the /*+ append */ hint?
I would like to do
insert /*+ append */ into cust_master
insert /*+ append */ into cust_phones
select from stage_cust where file_seqid=200, and similarly for the detail table.
Thanks

Tom Kyte
December 18, 2003 - 12:59 pm UTC

code your triggers like I coded the one on "P" below -- so you can supply the sequence value in the insert itself if you want (and you want)

ops$tkyte@ORA920PC> create table t
  2  ( lname varchar(20),
  3    phone_cell varchar2(20),
  4    phone_home varchar2(20),
  5    phone_office varchar2(20)
  6  )
  7  /
 
Table created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create table c
  2  ( id number constraint c_pk primary key,
  3    lname varchar2(20)
  4  )
  5  /
 
Table created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create table p
  2  ( id number constraint p_pk primary key,
  3    c_id references c,
  4    phone varchar2(20),
  5    phone_type varchar2(20)
  6  )
  7  /
 
Table created.
 
ops$tkyte@ORA920PC> create sequence s1;
 
Sequence created.
 
ops$tkyte@ORA920PC> create sequence s2;
 
Sequence created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create or replace trigger p_trigger
  2  before insert on p for each row
  3  begin
  4          if ( :new.id is null )
  5          then
  6                  select s2.nextval into :new.id from dual;
  7          end if;
  8  end;
  9  /
 
Trigger created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into t values ( 'xxx', 'abc', 'def', 'ghi' );
 
1 row created.
 
ops$tkyte@ORA920PC> insert into t values ( 'yyy', 'lmn', 'opq', 'rst' );
 
1 row created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert all
  2  into c ( id, lname ) values ( s1.nextval, lname )
  3  into p ( c_id, phone, phone_type ) values ( s1.currval, phone_cell, 'cell' )
  4  into p ( c_id, phone, phone_type ) values ( s1.currval, phone_home, 'home' )
  5  into p ( c_id, phone, phone_type ) values ( s1.currval, phone_office, 'office' )
  6  select * from t
  7  /
 
8 rows created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> select *
  2    from c, p
  3   where c.id = p.c_id;
 
 
        ID LNAME         ID       C_ID PHONE PHONE_TYPE
---------- ----- ---------- ---------- ----- --------------------
         1 xxx            1          1 abc   cell
         1 xxx            2          1 def   home
         1 xxx            3          1 ghi   office
         2 yyy            4          2 lmn   cell
         2 yyy            5          2 opq   home
         2 yyy            6          2 rst   office
 
6 rows selected.
 

Good one

Suhail, December 18, 2003 - 1:35 pm UTC

Tom,

I tried following on my set of tables EMP, EMP_ADDR and EMP_PH

and the code is as follows, but it's giving me an ORA error. It seems to me a bug.
SQL> desc emp
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMP_ID                                    NOT NULL NUMBER
 LAST_NAME                                          VARCHAR2(20)
 FIRST_NAME                                         VARCHAR2(20)

SQL> desc emp_addr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ADDR_ID                                   NOT NULL NUMBER
 ADDRESS                                            VARCHAR2(30)
 CITY                                               VARCHAR2(30)
 EMP_ID                                    NOT NULL NUMBER

SQL> desc emp_ph
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PH_ID                                     NOT NULL NUMBER
 PHONE                                              VARCHAR2(10)
 EMP_ID                                             NUMBER
 TYPE                                               VARCHAR2(10)

SQL> 
SQL> insert all
  2  into emp ( emp_id,last_name,first_name) values(SEQ_EMP.nextval,last_name,first_name)
  3  into emp_addr(emp_id,address,city) values(seq_emp.nextval,address,city)
  4  into emp_ph(emp_id,phone,type) values(seq_emp.nextval,phone1,'Cell')
  5  into emp_ph(emp_id,phone,type) values(seq_emp.nextval,phone2,'Home')
  6  select last_name,first_name,address,city,phone1,phone2
  7  from emp_temp;
into emp ( emp_id,last_name,first_name) values(SEQ_EMP.nextval,last_name,first_name)
                                               *
ERROR at line 2:
ORA-00600: internal error code, arguments: [15852], [4], [3], [], [], [], [],
[]


I am using 9i Rel 1  9.0.1.4.0. 

Tom Kyte
December 18, 2003 - 3:44 pm UTC

the multiple references to the "nextval" are going to be problematic.

don't you mean to use currval (but yes, ora-600 = please open a tar)
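
for reference, here is a sketch of that statement rewritten with currval, mirroring the pattern in the "P" example above (it assumes ADDR_ID and PH_ID are filled in by before-insert triggers, and a release where the ORA-600 is fixed):

insert all
  into emp      ( emp_id, last_name, first_name ) values ( seq_emp.nextval, last_name, first_name )
  into emp_addr ( emp_id, address, city )         values ( seq_emp.currval, address, city )
  into emp_ph   ( emp_id, phone, type )           values ( seq_emp.currval, phone1, 'Cell' )
  into emp_ph   ( emp_id, phone, type )           values ( seq_emp.currval, phone2, 'Home' )
select last_name, first_name, address, city, phone1, phone2
  from emp_temp;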

however, this does not fail

Suhail, December 18, 2003 - 2:01 pm UTC

Tom,

The following INSERT ALL works, but when committing, it disconnects from the db.

SQL>  insert all
  2   into emp ( emp_id,last_name,first_name) values(SEQ_EMP.nextval,last_name,first_name)
  3   into emp_addr(emp_id,address,city) values(seq_emp.nextval,address,city)
  4   select last_name,first_name,address,city
  5  from emp_temp;

10 rows created.

SQL> commit;
commit
     *
ERROR at line 1:
ORA-03113: end-of-file on communication channel


From TOAD this INSERT ALL works OK, but from SQL*Plus this and the previous one failed.



Thanks 

Tom Kyte
December 18, 2003 - 3:47 pm UTC

ora-3113 also means "please open a tar with support"

with CURRVAL, same problem

Suhail, December 19, 2003 - 10:44 am UTC

Sorry, Tom,

I tried CURRVAL, but it hits the same problem, i.e.

1 insert all
2 into emp ( emp_id,last_name,first_name)
3 values(SEQ_EMP.nextval,last_name,first_name)
4 into emp_addr(emp_id,address,city) values(seq_emp.currval,address,city)
5 into emp_ph(emp_id,phone,type) values(seq_emp.currval,phone1,'Cell')
6 into emp_ph(emp_id,phone,type) values(seq_emp.currval,phone2,'Home')
7 select last_name,first_name,address,city,phone1,phone2
8* from emp_temp
SQL> /
values(SEQ_EMP.nextval,last_name,first_name)
*
ERROR at line 3:
ORA-00600: internal error code, arguments: [15852], [4], [3], [], [], [], [],



Well, I opened a TAR; it seems my version of 9i has a bug.

Thanks for your help.

insert append and unusable indexes

A reader, March 17, 2004 - 7:02 am UTC

Hi

I am inserting into a table using insert /*+ append */; the table has 7 indexes and they are set to unusable. I am not expecting any large amount of redo to be generated, nor more than 1 used_ublk in v$transaction, however I see this:

SQL> r
  1* select used_ublk from v$transaction

 USED_UBLK
----------
      6642

SQL> r
  1* select used_ublk from v$transaction

 USED_UBLK
----------
      6643

SQL> @sesstat
Username:  [SYSTEM]
SID:  [9] 14
Statistic name:  [NONE] redo size

   SID USERNAME         NAME                                                                 VALUE
------ ---------------- ------------------------------------------------------------ -------------
    14 SYSTEM           redo size                                                       1321028904

1 row selected.

SQL> @sesstat
Username:  [SYSTEM]
SID:  [9] 14
Statistic name:  [NONE] redo size

   SID USERNAME         NAME                                                                 VALUE
------ ---------------- ------------------------------------------------------------ -------------
    14 SYSTEM           redo size                                                       1321233112

SQL> @sesstat
Username:  [SYSTEM]
SID:  [9] 14
Statistic name:  [NONE] redo size

   SID USERNAME         NAME                                                                 VALUE
------ ---------------- ------------------------------------------------------------ -------------
    14 SYSTEM           redo size                                                       1331016744


It has generated over 1GB of redo already (I ran these queries after the process had been running for 1.5 hours).


SQL> select status from dba_indexes where table_name='OM_TECHNICAL_INST';

STATUS
--------
UNUSABLE
UNUSABLE
UNUSABLE
UNUSABLE
UNUSABLE
UNUSABLE
UNUSABLE

SQL> select logging from dba_tables where table_name = 'OM_TECHNICAL_INST';

LOGGING
---------
NO


How so??? 

Tom Kyte
March 17, 2004 - 8:22 am UTC

that is just ~50MB of undo (rollback), assuming 8k blocks: 6,642 blocks x 8KB is about 52MB.

However, insert /*+ append */ does NOT bypass REDO. NOLOGGING combined with APPEND can.

I do not have a full example here, so, I cannot comment in any way. I cannot even see the insert append statement. don't know if the table is in fact 'nologging'. don't know if you are using append correctly.

sorry forgot to add this

A reader, March 17, 2004 - 8:29 am UTC

Hi

here is the statement,

insert /*+ APPEND */ into swb.OM_TECHNICAL_INST b
select /*+ parallel(a 2) */ *
  from swb.OM_TECHNICAL_INST_BK a;

and it's still going after 3 hours! (the table is 8GB, only 2GB done so far)

As for the logging attribute, the table is nologging, as shown above.

I just ran again to check redo size and the value is negative now!!!!

SQL> @sesstat
Username:  [SYSTEM]
SID:  [9] 14
Statistic name:  [NONE] redo size

   SID USERNAME         NAME                                                                 VALUE
------ ---------------- ------------------------------------------------------------ -------------
    14 SYSTEM           redo size                                                      -1859404656
 

Tom Kyte
March 17, 2004 - 8:35 am UTC

select force_logging from v$database;





it's 8.1.7.4

A reader, March 17, 2004 - 8:48 am UTC

I think it may be because my database is in noarchivelog mode?

SQL> select LOG_MODE from v$database; 

LOG_MODE
------------
NOARCHIVELOG

Will this insert go faster if I put the database in archive log and bypass redo generation? 

Tom Kyte
March 17, 2004 - 10:57 am UTC

archive log mode GENERATES redo with insert append

In noarchive log mode, it DOES NOT generate redo

you've got something else happening here, the insert append in a noarchive log mode database does not generate redo.
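
A quick sanity check (a sketch, with hypothetical names t / t_stage): if the insert really went direct path, you cannot read the table in the same transaction -- the select below should raise ORA-12838:

insert /*+ append */ into t select * from t_stage;
select count(*) from t;   -- ORA-12838 expected if the load was direct path
commit;                   -- after the commit, reads work again

if that select succeeds instead, the insert was conventional -- which would explain the redo.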

rollback

A reader, March 17, 2004 - 9:16 am UTC

I always thought a direct path insert only used 1 used_ublk block in v$transaction?

Tom Kyte
March 17, 2004 - 10:59 am UTC

(something else is happening here -- this is not a direct path insert, i don't see the actual sql out of v$sql, but I don't believe it is direct pathing)

I TKPROFED and this is the output

A reader, March 17, 2004 - 12:11 pm UTC

Hi

I tried with 100000 rows (table has 100 million rows) and it generates tons of redo...


insert /*+ APPEND */
into swb.OM_TECHNICAL_INST
select *
from swb.OM_TECHNICAL_INST_BK
where rownum < 100001

Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=RULE
1 0 LOAD AS SELECT
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'OM_TECHNICAL_INST_BK'

Statistics
----------------------------------------------------------
0 recursive calls
472934 db block gets
2168 consistent gets
1141 physical reads
6430016 redo size
643 bytes sent via SQL*Net to client
569 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
100000 rows processed

insert /*+ APPEND */
into swb.OM_TECHNICAL_INST
select /* parallel(a 2) */ *
from swb.OM_TECHNICAL_INST_BK
where rownum < 100001

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        2      0.00       0.01          0          0          0          0
Execute      2      5.89      13.41       1160       2202     472998     100000
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      5.89      13.42       1160       2202     472998     100000

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 5 (SYSTEM)
error during parse of EXPLAIN PLAN statement
ORA-26028: index SWB.RL_OM_TECHNICAL_INST_CMFID_FK initially in unusable
state


The indexes are unusable, the table is nologging, the hint is correct, and there are no triggers on this table... I am totally lost.

solved but is this how it works?

A reader, March 17, 2004 - 1:30 pm UTC

The APPEND works; the problem is the table had 4 FKs enabled, and that generates redo. I didn't know the table must not have enabled FKs in order for append to work. I disabled the FKs and now it runs fast.

Is this in the docs?

Tom Kyte
March 17, 2004 - 3:25 pm UTC

well, sort of. it mentions that "referential integrity constraints are ignored", but I think it really should say "with direct path loads they are ignored, but a direct path insert will be conventional if you have RI enabled" (cause that is what happens).
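
so the workaround is to disable the foreign keys for the duration of the load -- a sketch with a hypothetical constraint name (re-enabling will re-validate the data unless you use novalidate):

alter table swb.om_technical_inst disable constraint om_ti_cmfid_fk;

insert /*+ append */ into swb.om_technical_inst
select * from swb.om_technical_inst_bk;

commit;

alter table swb.om_technical_inst enable novalidate constraint om_ti_cmfid_fk;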

two appends to a table

A reader, April 01, 2004 - 5:49 pm UTC

Sorry to ask again: is there no way to avoid this error
(even some secret hint)? I need to do two or more appends to a table
before committing or rolling back.
Thanks

SQL> CREATE TABLE TEST( A NUMBER );

Table created.

SQL> INSERT /*+ APPEND */ INTO TEST SELECT ROWNUM FROM DBA_OBJECTS WHERE ROWNUM<
5;

4 rows created.

SQL> INSERT /*+ APPEND */ INTO TEST SELECT ROWNUM FROM DBA_OBJECTS WHERE ROWNUM<
5;
INSERT /*+ APPEND */ INTO TEST SELECT ROWNUM FROM DBA_OBJECTS WHERE ROWNUM<5
                          *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

 

Tom Kyte
April 02, 2004 - 9:46 am UTC

insert /*+ append */ into test ( .... )
query1
UNION ALL
query2
UNION ALL
query3
...........




A reader, April 05, 2004 - 4:36 pm UTC

Thanks Tom, but they are two independent inserts.

insert /*+ append */ into test
select * from query1
(several processes run against other tables and then against query1, then)
insert /*+ append */ into test
select * from query1

I investigated; this seems to be a limitation in Oracle 9.2,
and there is no way to do it :)


Tom Kyte
April 05, 2004 - 5:37 pm UTC

well, that would be a strange requirement -- but yes, it can in fact be done.


ops$tkyte@ORA9IR2> column SCN new_val S

ops$tkyte@ORA9IR2> select dbms_flashback.get_system_change_number SCN from dual; 
       SCN
----------
  20527895
 
ops$tkyte@ORA9IR2> drop table t;
Table dropped.
 
ops$tkyte@ORA9IR2> create table t as select * from scott.emp where 1=0;
Table created.
 
ops$tkyte@ORA9IR2> update scott.emp set empno = -empno;
14 rows updated.
 
ops$tkyte@ORA9IR2> insert /*+ append */
  2  into t
  3  select * from scott.emp as of scn &s
  4  union all
  5  select * from scott.emp;
old   3: select * from scott.emp as of scn &s
new   3: select * from scott.emp as of scn   20527895
 
28 rows created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select empno from t;
select empno from t
                  *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>proved it was an append...</b>
 
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> select empno from t;
 
     EMPNO
----------
      7369
      7499
      7521
      7566
      7654
      7698
      7782
      7788
      7839
      7844
      7876
      7900
      7902
      7934
     -7369
     -7499
     -7521
     -7566
     -7654
     -7698
     -7782
     -7788
     -7839
     -7844
     -7876
     -7900
     -7902
     -7934
 
28 rows selected.

<b>and for flashback table before flashback table :)</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from scott.emp;
 
14 rows deleted.
 
ops$tkyte@ORA9IR2> insert into scott.emp select * from scott.emp as of scn &s;
old   1: insert into scott.emp select * from scott.emp as of scn &s
new   1: insert into scott.emp select * from scott.emp as of scn   20527895
 
14 rows created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
 

A reader, April 06, 2004 - 6:34 pm UTC

Sorry Tom, and really, thanks for your help during your vacation.

Maybe I'm stupid, but I don't understand; could you give me an example of this, please?

Some experts told me this was not possible, so I think this is really THE TIP.

I have to execute, in this example, two times

INSERT /*+ append */ INTO FON.HICUENTASF_RW(
HCF_FECHA, HCF_CODCLI, HCF_CTACORR, HCF_DBCUO, HCF_CRCUO,
HCF_IVA, HCF_MONTO_IVA, HCF_STATUS, HCF_CIUDAD )
SELECT
sysdate,CUF_CODCLI, CUF_CTACORR, CUF_DBCUO, CUF_CRCUO,
CUF_IVA, CUF_MONTO_IVA, CUF_STATUS, CUF_CIUDAD
FROM CUENTASF;

--- Table CUENTASF is updated by several processes and by parameters users
define at the moment (a UNION ALL is not possible, I tried it), then the second insert:

INSERT /*+ append */ INTO FON.HICUENTASF_RW(
HCF_FECHA, HCF_CODCLI, HCF_CTACORR, HCF_DBCUO, HCF_CRCUO,
HCF_IVA, HCF_MONTO_IVA, HCF_STATUS, HCF_CIUDAD )
SELECT
dFechaHoy+1,CUF_CODCLI, CUF_CTACORR, CUF_DBCUO, CUF_CRCUO,
CUF_IVA, CUF_MONTO_IVA, CUF_STATUS, CUF_CIUDAD
FROM CUENTASF;


Thanks :)

Tom Kyte
April 07, 2004 - 8:59 am UTC

instead of

insert
process
insert again


do this

get SCN
process
insert the UNION ALL of the current version of the table and the previous version of the table



I used flashback to query the table "as of now" and "as of some time ago, before the process modified it"
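
applied to your tables, the sketch would look something like this (dFechaHoy is presumably a PL/SQL variable, shown here as a bind; in SQL*Plus the SCN is captured into &s first):

column scn new_value s
select dbms_flashback.get_system_change_number scn from dual;

-- ... run the processes that modify CUENTASF ...

insert /*+ append */ into fon.hicuentasf_rw (
  hcf_fecha, hcf_codcli, hcf_ctacorr, hcf_dbcuo, hcf_crcuo,
  hcf_iva, hcf_monto_iva, hcf_status, hcf_ciudad )
select sysdate, cuf_codcli, cuf_ctacorr, cuf_dbcuo, cuf_crcuo,
       cuf_iva, cuf_monto_iva, cuf_status, cuf_ciudad
  from cuentasf as of scn &s         -- the "before processing" image
union all
select :dFechaHoy+1, cuf_codcli, cuf_ctacorr, cuf_dbcuo, cuf_crcuo,
       cuf_iva, cuf_monto_iva, cuf_status, cuf_ciudad
  from cuentasf;                     -- the current image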

A reader, April 06, 2004 - 7:30 pm UTC

This is a simpler example of what I want to do, and the error I got :)


SQL> create table test ( a varchar2(1));

Table created.

SQL> iNSERT /*+ append */ into test select 'a' from dual;

1 row created.

SQL> iNSERT /*+ append */ into test select 'a' from dual;
iNSERT /*+ append */ into test select 'a' from dual
                          *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL> 

A reader, April 07, 2004 - 12:12 pm UTC

Thanks Tom, good idea I got it :)

What about index organized tables..

Nags, July 29, 2004 - 1:20 pm UTC

Is it the case that if the table is ORGANIZATION INDEX, then redo is always generated?

Tom Kyte
July 29, 2004 - 2:10 pm UTC

correct, just like if the table had an index on it.

Dave, August 10, 2004 - 11:14 am UTC

We are seeing a problem where insert /*+ APPEND */ into a subpartitioned table is causing missing index entries (9.2.0.4 and .5).

Support thinks it may have something to do with the append hint. This table, which has about 300 million rows, is indexed 11 times. Would you imagine the inserts (about 2 million rows at a time) will go at the same speed if we took the hint out?

Thanks


Tom Kyte
August 10, 2004 - 4:11 pm UTC

*thinks*???  this would be something they better *ascertain*.  please do pursue this aggressively with them.  do you have a tar?


the append hint doesn't affect index maintenance "much" (we build mini indexes off to the side and merge them in in bulk at the end).  It reduces the redo generated by the index maintenance, but that's about it.

Here the bulk of the work is probably (probably) in index maintenance -- with 11 indexes -- so the append might not be buying you too much.



ops$tkyte@ORA9IR2> create table t as select * from big_table.big_table where 1=0;
 
Table created.
 
Elapsed: 00:00:00.80
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index t_idx1 on t(id,owner);
 
Index created.
 
Elapsed: 00:00:00.01
ops$tkyte@ORA9IR2> create index t_idx2 on t(id,object_name);
 
Index created.
 
Elapsed: 00:00:00.03
ops$tkyte@ORA9IR2> create index t_idx3 on t(id,object_id);
 
Index created.
 
Elapsed: 00:00:00.01
ops$tkyte@ORA9IR2> create index t_idx4 on t(id,created);
 
Index created.
 
Elapsed: 00:00:00.01
ops$tkyte@ORA9IR2> create index t_idx5 on t(id,last_ddl_time);
 
Index created.
 
Elapsed: 00:00:00.01
ops$tkyte@ORA9IR2> create index t_idx6 on t(id,owner,foobar);
 
Index created.
 
Elapsed: 00:00:00.04
ops$tkyte@ORA9IR2> create index t_idx7 on t(id,owner,object_name);
 
Index created.
 
Elapsed: 00:00:00.14
ops$tkyte@ORA9IR2> create index t_idx8 on t(id,object_name,owner);
 
Index created.
 
Elapsed: 00:00:00.00
ops$tkyte@ORA9IR2> create index t_idx9 on t(id,object_id,object_name);
 
Index created.
 
Elapsed: 00:00:00.06
ops$tkyte@ORA9IR2> create index t_idx10 on t(id,created,object_name);
 
Index created.
 
Elapsed: 00:00:00.03
ops$tkyte@ORA9IR2> create index t_idx11 on t(id,last_ddl_time,object_name);
 
Index created.
 
Elapsed: 00:00:00.01
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set linesize 121
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace on statistics
ops$tkyte@ORA9IR2> insert into t select * from big_table.big_table where rownum <= 500000;
 
500000 rows created.
 
Elapsed: 00:06:23.97
 
Statistics
----------------------------------------------------------
      33442  recursive calls
     855919  db block gets
     104496  consistent gets
       8736  physical reads
  898276784  redo size
        793  bytes sent via SQL*Net to client
        839  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     500000  rows processed
 
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> truncate table t;
 
Table truncated.



ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t select * from big_table.big_table where rownum <= 500000;
 
500000 rows created.
 
Elapsed: 00:06:09.10
 
Statistics
----------------------------------------------------------
      38793  recursive calls
     638723  db block gets
      88055  consistent gets
      39281  physical reads
  901145048  redo size
        762  bytes sent via SQL*Net to client
        853  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
         11  sorts (disk)
     500000  rows processed



 

Dave, August 10, 2004 - 4:30 pm UTC

TAR number 3859980.996 if you are interested. The basic answer is to set up a test case to reproduce it (really quite hard, as it won't reproduce in test).

As for the APPEND hint, it looks like removing it won't hit me much; I'll give that a try, because adding a couple of minutes is preferable to missing entries!

Insert /*+ append parallel */ generates lots of redo

Andre, August 19, 2004 - 8:47 am UTC

Dear Tom,

Thanks for all your answers, they are very helpful. I have the following scenario.
The table in question is a 130-million-row table with 30GB worth of data in it and 5 indexes (25GB). We are trying to convert this table into a partitioned one. After reading your recommendations we created the table with the nologging clause and disabled the indexes on it. After this we tried to execute insert /*+ append parallel */ ... select ... from, but during execution it started to generate lots of redo (about 20GB). I double-checked, and the table is definitely in nologging mode and all indexes are disabled.
I tried to use LogMiner to analyze the source of this redo, but this is an example of what I get out of it:
OPERATION   SQL REDO
----------- ------------------------
START       set transaction read write;
INTERNAL
INTERNAL
INTERNAL
INTERNAL
INTERNAL
INTERNAL
UNSUPPORTED Unsupported
UNSUPPORTED Unsupported
COMMIT      commit;

Please help identify the source of the generated redo.

Thanks.

Tom Kyte
August 19, 2004 - 9:59 am UTC

I doubt your indexes are disabled.

ops$tkyte@ORA9IR2> create table t
  2  as
  3  select * from all_objects
  4  where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index t_idx on t(object_id);
 
Index created.
 
ops$tkyte@ORA9IR2> alter table t nologging;
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace on statistics
ops$tkyte@ORA9IR2> insert /*+ append */ into t select * from all_objects;
 
30710 rows created.
 
 
Statistics
----------------------------------------------------------
        780  recursive calls
       2285  db block gets
     173626  consistent gets
          0  physical reads
    2078488  redo size
        774  bytes sent via SQL*Net to client
        822  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
      30710  rows processed
 
ops$tkyte@ORA9IR2> rollback;
 
Rollback complete.
 
ops$tkyte@ORA9IR2> alter index t_idx unusable;
 
Index altered.
 
ops$tkyte@ORA9IR2> insert /*+ append */ into t select * from all_objects;
 
30710 rows created.
 
 
Statistics
----------------------------------------------------------
        226  recursive calls
         59  db block gets
     173224  consistent gets
          0  physical reads
       1024  redo size
        775  bytes sent via SQL*Net to client
        822  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
      30710  rows processed


<b>when they are, no redo -- when they aren't, redo</b>
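
putting the whole bulk-load pattern together (a sketch, hypothetical names t / t_stage / t_idx; note skip_unusable_indexes cannot skip UNIQUE indexes):

alter table t nologging;
alter index t_idx unusable;
alter session set skip_unusable_indexes = true;   -- let DML ignore the unusable index

insert /*+ append */ into t select * from t_stage;
commit;

alter index t_idx rebuild nologging;              -- rebuild without redo, where the database permits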

 

Redo generated from insert /*+ append*/

Andre, August 19, 2004 - 5:07 pm UTC

Thanks Tom for your prompt answer.

I checked the status of all indexes on the table right before running the command; 4 of them showed status 'INVALID' and the partitioned index showed 'N/A'.
Are these statuses what they should be?

We have Oracle 10.1 with flashback recovery enabled in archivelog mode.

Thanks.



Tom Kyte
August 19, 2004 - 8:10 pm UTC

'N/A' is the status of a partitioned index. so you have a partitioned index whose subpartitions are probably enabled.
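
you can see the per-partition status (which the top-level row hides behind 'N/A') with, e.g.:

select index_name, partition_name, status
  from user_ind_partitions;

select index_name, partition_name, subpartition_name, status
  from user_ind_subpartitions;

any partition or subpartition still showing USABLE will be maintained -- and logged -- during the insert.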

Why insert into ... select faster than its select part

Steve, November 05, 2004 - 5:08 pm UTC

Hi Tom,

When I am tuning a SQL, insert into ... select ... from ..
I focus on its select part. But I found that
insert into A select c1,c2,..ck from B is faster than
just select c1,c2,..ck from B. I was wondering why that is. Can you explain?

Thanks!

Steve



Tom Kyte
November 05, 2004 - 6:03 pm UTC

did you compare the plans?

ORA-12838: cannot read/modify an object after modifying it in parallel

steve, November 15, 2004 - 12:29 pm UTC

I run the following statements

1) create table t nologging as select * from all_objects
where 1=0;

2) set autotrace on statistics;

3) insert /*+ append */ into t select * from all_objects;

then
select count(*) from t;

and I got ORA-12838.
even though I run
alter session disable parallel dml
before 3), or change 3)'s hint to /*+ append noparallel */.

I can get rid of the message only with 'set autocommit on'.

I was wondering: which operation is in parallel? I ran ps on the Unix processes during the insert, and it seems there is just one Oracle process.
How do I know how many processes are involved in the parallel operations?

Thanks!

Steve








Tom Kyte
November 15, 2004 - 3:51 pm UTC

you cannot read an object in a transaction after a direct path (normally associated with parallel) operation.

it wasn't in parallel, it was the direct path (insert append) that did it.
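
the fix is simply to end the transaction before reading the table:

insert /*+ append */ into t select * from all_objects;
commit;                   -- direct path requires this before the table can be read again
select count(*) from t;   -- fine now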

Why insert into ... select faster than its select part

Steve, November 17, 2004 - 4:37 pm UTC

Hi Tom,
I ran the insert first and then the select; the insert ... select took less time. Is that normal? If yes, can you give me your thoughts?

1) insert into steve_t select * from all_objects;

30822 rows created.

Elapsed: 00:00:06.22

Statistics
----------------------------------------------------------
1724 recursive calls
2571 db block gets
150082 consistent gets
0 physical reads
3516544 redo size
792 bytes sent via SQL*Net to client
693 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
30822 rows processed


however,
2) select * from all_objects;
Elapsed: 00:00:07.14

Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
151377 consistent gets
0 physical reads
0 redo size
2315786 bytes sent via SQL*Net to client
23245 bytes received via SQL*Net from client
2056 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
30822 rows processed

Thanks!

Steve



Tom Kyte
November 17, 2004 - 4:49 pm UTC

wallclock time is almost meaningless here -- especially over networks shared by many, on a shared database, shared this, that and the other thing.

did the select have to "paint the screen"?
the insert as select had no network.
the select formatted all of the data (even if you didn't print it)
and so on.

tkprof shows you raw database performance.
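
for example (a sketch -- the trace file name varies by platform, and it lands in user_dump_dest on the server):

alter session set timed_statistics = true;
alter session set sql_trace = true;

insert into steve_t select * from all_objects;
select * from all_objects;

alter session set sql_trace = false;

-- then, on the server:
-- $ tkprof <your_trace_file>.trc report.txt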

missing index entries

dave, November 24, 2004 - 5:28 pm UTC

Tom, a while ago I mentioned missing index entries.

Well, it turns out it is an Oracle bug (a serious one as far as I'm concerned); TAR 4083830.994 has a very simple test case if you're interested.


Tom Kyte
November 24, 2004 - 5:41 pm UTC

thanks for the followup -- appreciate you following thru on it, getting the test case is the most important part.

A reader, November 24, 2004 - 5:49 pm UTC

It sure was interesting anyway.

Even got it to fail inserting 1 row :-) It made me chuckle once the test case had been proved.

Why not...

A reader, November 25, 2004 - 4:39 am UTC

Tom, just like alter table <> nologging... is there something like alter index <> nologging?

Tom Kyte
November 25, 2004 - 9:48 am UTC

did you look or try it?  we do document stuff like "the syntax" after all :)

ops$tkyte@ORA9IR2> alter index SYS_C004063 nologging;
 
Index altered.
 


Now, if I rebuild that index -- it won't generate redo (if the database permits that mode).

but inserts/updates/deletes/merges whatever -- they ALWAYS will. 

good stuff, but how about tables with nested table columns

Nik Malenovic, December 22, 2004 - 4:07 pm UTC

Tom,

This in fact may be sneaking in a "new question"; however, I believe it's still the same old type of question: "why is there so much logging for my insert /*+ append */ into table?" I have no indexes on the table(s), my database is NOT in archivelog mode, my tables are created with the nologging clause, and I am using insert /*+ append */ -- and I *still* get a load of redo.

I may be just being stupid here and may need to be slapped out of it - you have a poetic license to do so. So here's the full load:

I have a table with a nested table column. Both the table and the nested table are created with the nologging keyword, and my insert uses the /*+ append */ hint, yet I still generate loads of redo (~500KB). I am also showing (in the session log included below) that unwrapping the nested table with the /*+ append */ hint generates minimal redo (~500B).

At the last moment I thought that I was generating this much redo because there is a primary key index/constraint on the nested table, but then I remembered that my O-R views were doing table scans when retrieving nested values for a given row in the "master" table (included in the script below!). Also, my reading of Expert One-On-One (surprise!) Chapter 6 "Database Tables" - "Nested Table Storage" suggests that nested tables have no index structures attached to them, and that we should use an IOT and compression on the nested table to improve performance (index) and storage efficiency (compression).

So I guess it's still the same question: why is my insert /*+ append */ into a table with a nested table column generating so much redo, and how do I avoid it?


Thanks,


Nik

Here's the session output, with the colorful comments that go with it:

C:\Documents and Settings\nmalenovic>sqlplus /nolog

SQL*Plus: Release 10.1.0.2.0 - Production on Wed Dec 22 13:00:55 2004

Copyright (c) 1982, 2004, Oracle. All rights reserved.

@ > @connect "mmesys/content1@m4u"
MMESYS@m4u >
MMESYS@m4u > -- list of siblings
MMESYS@m4u > create type bar_table as table of varchar2(200);
2 /

Type created.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- table with nested table column - mark both the table and nested table as nologging
MMESYS@m4u > create table foo_table
2 (
3 foo varchar2(64) not null,
4 bar bar_table null
5 )
6 nologging
7 nested table bar store as bar_nested_table (nologging);

Table created.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- let's track our redo size
MMESYS@m4u > @mystat "redo size"

NAME VALUE
---------------------------------------------------------------- ----------
redo size 61348

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- create a read-query optimized table of my siblings
MMESYS@m4u > insert /*+ append */ into foo_table
2 select object_name foo,
3 (select cast(multiset(select object_name from all_objects a2 where a2.owner = a1.owner and rownum < 100) as bar_table) from dual) bar
4 from all_objects a1 where rownum < 100;

99 rows created.

Elapsed: 00:00:00.28
MMESYS@m4u >
MMESYS@m4u > -- how much redo did we just burn up?
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 526548 465200 <---------------- where did THIS come from????

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- we did a bulk insert, let's do a block cleanout
MMESYS@m4u > analyze table foo_table compute statistics;

Table analyzed.

Elapsed: 00:00:00.03
MMESYS@m4u >
MMESYS@m4u > create table unwrapped_foo_table
2 (
3 foo varchar2(64) not null,
4 bar varchar2(200) not null
5 )
6 nologging;

Table created.

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- let's see the execution plan for this one query
MMESYS@m4u > set autotrace on
MMESYS@m4u >
MMESYS@m4u > -- what happens if we query for a particular value? version @1
MMESYS@m4u > -- see Oracle Documentation B10799-01 "Application Developer's Guide - Object-Relational Features 10g Release 1 (10.1)",
MMESYS@m4u > -- p3-13 "Unnesting Results of Collection Queries" paragraph with query "SELECT e.* FROM department_persons d, TABLE(d.dept_emps) e;"
MMESYS@m4u > select count(*) /* in lieu of f.foo foo, b.column_value bar */
2 from foo_table f, table(f.bar) b
3 where f.foo = 'DUAL';

COUNT(*)/*INLIEUOFF.FOOFOO,B.COLUMN_VALUEBAR*/
----------------------------------------------
198

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=19 Card=1 Bytes=36)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=19 Card=107 Bytes=3852)
3 2 TABLE ACCESS (FULL) OF 'FOO_TABLE' (TABLE) (Cost=3 Card=1 Bytes=26)
4 2 TABLE ACCESS (FULL) OF 'BAR_NESTED_TABLE' (TABLE (NESTED)) (Cost=15 Card=9
801 Bytes=98010)





Statistics
----------------------------------------------------------
14 recursive calls
1 db block gets
135 consistent gets <--------- this must be an indicator of a full table scan
0 physical reads
176 redo size
432 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

MMESYS@m4u >
MMESYS@m4u > -- ah, you didn't like the cartesian product above? well, let's try version #2 straight out of
MMESYS@m4u > -- Oracle Documentation B10799-01 "Application Developer's Guide - Object-Relational Features 10g Release 1 (10.1)",
MMESYS@m4u > -- p3-14 "Unnesting Queries Containing Table Expression Subqueries" paragraph
MMESYS@m4u > -- with query "SELECT * FROM TABLE(SELECT d.dept_emps FROM department_persons d WHERE d.dept_no = 101);"
MMESYS@m4u > -- slice by rownum < 2 because there are two DUAL entries in all_objects with rownum < 100.
MMESYS@m4u > select count(*) /* in lieu of * */ from table ( select f.bar from foo_table f where f.foo = 'DUAL' and rownum < 2 );

COUNT(*)/*INLIEUOF**/
---------------------
99

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=18 Card=1 Bytes=10)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'BAR_NESTED_TABLE' (TABLE (NESTED)) (Cost=15 Card=98
Bytes=980)

3 2 COUNT (STOPKEY)
4 3 TABLE ACCESS (FULL) OF 'FOO_TABLE' (TABLE) (Cost=3 Card=1 Bytes=26)




Statistics
----------------------------------------------------------
13 recursive calls
0 db block gets
131 consistent gets <--------- this must be an indicator of a full table scan
0 physical reads
0 redo size
406 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

MMESYS@m4u >
MMESYS@m4u > -- let's track our redo size for the unwrapping statement
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 548072

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- compare to redo log generation for unwrapped nested table
MMESYS@m4u > insert /*+ append */ into unwrapped_foo_table
2 select f.foo foo, b.column_value bar
3 from foo_table f, table(f.bar) b;

9801 rows created.

Elapsed: 00:00:00.04
MMESYS@m4u >
MMESYS@m4u > -- how much redo did we just burn up?
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 548200 128 <--------- that's the way I like it!

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- be nice to your fellow PL/SQLer - clean up after yourself!
MMESYS@m4u > drop table unwrapped_foo_table;

Table dropped.

Elapsed: 00:00:00.01
MMESYS@m4u > drop table foo_table;

Table dropped.

Elapsed: 00:00:00.03
MMESYS@m4u > drop type bar_table;

Type dropped.

Elapsed: 00:00:00.04
MMESYS@m4u > purge recyclebin;

Recyclebin purged.

Elapsed: 00:00:00.03
MMESYS@m4u >


Tom Kyte
December 22, 2004 - 6:52 pm UTC

if you have a nested table, you sure do have an index -- it is on the HIDDEN 16 byte raw column added to your parent table and has a unique constraint on it.


and that child table, i seriously doubt it is appended to. you have a bunch of tiny inserts into it -- if you inserted 100 rows into the parent, you did like 100 inserts into the child. As a matter of fact, I'm now certain it isn't appended into:

scott@ORA9IR2> create or replace type myTable as table of varchar2(30)
2 /

Type created.

scott@ORA9IR2>
scott@ORA9IR2> create table t ( owner varchar2(30), onames myTable )
2 nested table onames store as onames_tab;

Table created.

scott@ORA9IR2>
scott@ORA9IR2>
scott@ORA9IR2> insert /*+ append */ into t
2 select object_name foo,
3 (select cast(multiset(select object_name
4 from all_objects a2
5 where a2.owner = a1.owner
6 and rownum < 100) as myTable) from dual) bar
7 from all_objects a1 where rownum < 100;

99 rows created.

scott@ORA9IR2>
scott@ORA9IR2> select /*+ nested_table_get_refs */ * from onames_tab where rownum = 1;

COLUMN_VALUE
------------------------------
/1005bd30_LnkdConstant

scott@ORA9IR2> select * from t where rownum = 1;
select * from t where rownum = 1
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

if we appended into it -- we couldn't have read it....


scott@ORA9IR2>
scott@ORA9IR2> commit;

Commit complete.

scott@ORA9IR2> set echo off
Wrote file /tmp/xtmpx.sql
Datatypes for Table t

Data Data
Column Name Type Length Nullable
------------------------------ -------------------- ---------- --------
OWNER VARCHAR2 30 null
ONAMES MYTABLE 16 null


Indexes on t

Index Is
Name Unique COLUMNS
------------------------------ ------ --------------------------------
SYS_C004477 Yes SYS_NC0000200003$


Triggers on t
scott@ORA9IR2>


there is your index -- it is definitely there




awesome, as usual... but I can't win!

Nik Malenovic, December 23, 2004 - 3:04 am UTC

Tom,

thanks for the speedy and, as usual, insightful response (although it does suggest an edit for Expert One-On-One's silence on the default unique constraints on nested tables).

I am trying to figure out if I can drop the "automagically created constraint" for nested tables before my insert /*+ append */ and then re-add it with the nologging option. This works just dandy for an index (drop/add). However, I see loads of redo being generated if it's a constraint (which is really a unique index under the hood).

[yes, I know this is a manually added constraint and not a system-generated one as in the nested table case - I figured if I can nip this one in the bud, the other one will fall in line, right?]

any ideas why this is and how to fix it so that adding constraints doesn't generate loads of redo?

Here's the session log:

MMESYS@m4u > create table foo
2 (
3 bar varchar(30) not null
4 ) nologging;

Table created.

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > alter table foo add constraint foo_pk primary key (bar);

Table altered.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- many actions later...
MMESYS@m4u >
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 112644

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- in preparation to insert /*+ append */
MMESYS@m4u > alter table foo drop constraint foo_pk;

Table altered.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > insert /*+ append */ into foo
2 select distinct object_name from all_objects;

20141 rows created.

Elapsed: 00:00:01.12
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 124052 11408 <--- that's the way I like it

Elapsed: 00:00:00.00
MMESYS@m4u > alter table foo add constraint foo_pk primary key (bar) nologging;

Table altered.

Elapsed: 00:00:00.06
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 955908 843264 <----- wow nelly!

Elapsed: 00:00:00.00
MMESYS@m4u > drop table foo;

Table dropped.

Elapsed: 00:00:00.03
MMESYS@m4u > purge recyclebin;

Recyclebin purged.

Elapsed: 00:00:00.03
MMESYS@m4u >


Tom Kyte
December 23, 2004 - 11:22 am UTC

Pages 246/247 :)

I covered the bits and bytes of the structures of nested tables in the chapter on tables...

My advice in the book was unambiguous regarding the use of nested tables as a persistent storage mechanism, wasn't it?

you don't need that 16 byte raw (you already have a primary key)
you don't need that extra constraint
you don't need that fake foreign key

You just need TWO TABLES.

If you like the OR syntax for querying, fine, create an OR view.
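
a sketch of that (names hypothetical; bar_table is the collection type from your first post):

create table parent_t
( id   number primary key,
  foo  varchar2(64) not null
);

create table child_t
( parent_id  references parent_t,
  bar        varchar2(200) not null
);

create or replace view foo_or_view
as
select p.foo,
       cast( multiset( select c.bar
                         from child_t c
                        where c.parent_id = p.id ) as bar_table ) bar
  from parent_t p;

same query syntax as the nested table, none of the hidden baggage.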

it's boring when the home team wins all the time!

Nik Malenovic, December 23, 2004 - 5:28 pm UTC

Tom,

I know you will just get the big bat out because I keep sneaking in "new questions" on top of the "old ones"...

but this is in direct response to your "My advice in the book was unambiguous regarding the use of nested tables as a persistent storage mechanism wasn't it?".

What if I had no choice? The heart of the problem is that I need nested tables to unwrap my recursive hierarchy, and I don't want to generate redo (as I can and do rebuild the hierarchy often, and on demand).

the root cause is that, for performance reasons, I need to convert the recursive structure

employee_id  manager_id
     1         null
     2         1
     3         1
     4         2
     5         2
     6         3

into a read-query-optimized table
employee_id  pion_id
     1          2
     1          3
     1          4
     1          5
     1          6
     2          4
     2          5
     3          6

the issue is that the only way I know how to unwrap the recursive hierarchy is with a query like this:

insert into org_h_unwrapped
select h.manager_id,
(
select cast
(
multiset
(
select h2.employee_id, level hops
from hr_data h2
where not h2.employee_id = h.manager_id
connect by nocycle prior h2.employee_id = h2.manager_id
start with h2.employee_id = h.manager_id
) as pion_table
) from dual
) all_reports
from hr_data_mgr h;

where hr_data_mgr contains the list of "managers" (there is no need to connect by on people that are nobody's manager - remember what YOU said: the best way to speed things up is to not do things you don't need to do).


I am including the complete script below - god knows I combed your site for examples of:

1. generate sample recursive hierarchy data
2. unwrap recursive hierarchy into a read optimized query
3. minimize redo log generation

I know that if I paid attention to your previous answer I'd realize that my quest to make the nested table insert /*+ append */ direct path is futile, based on your select /*+ nested_table_get_refs */ * from onames_tab where rownum = 1; detail.

so, whilst I bow to your big bat that is going to slap me for any number of reasons unbeknownst to me at this time (ignorance is bliss you know), I ask if you can nip my problem of redo log generation for nested tables right in the bud by providing me a way to unwrap my recursive hierarchy without using nested tables (again remember: the best way to fix the problem is to avoid the problem altogether, eh?)


thanks,


Nik

Here's the script, followed by the actual log

@connect "mmesys/************@m4u"
spool asktom_org_h_question

-- this is hr data
create table hr_data
(
employee_id varchar(64) not null,
manager_id varchar(64) null
) nologging;

-- create table to hold sample data
create table hr_data_temp
as
select rownum rnum, 'user-' || rownum employee_id from all_objects where 1=0;

-- insert data
@mystat "redo size"
insert /*+ append */ into hr_data_temp
select rownum rnum, 'user-' || rownum employee_id from all_objects where rownum < 30000;
@mystat2 "redo size"

-- block cleanout after bulk insert
analyze table hr_data_temp compute statistics;

-- we need an index to make fast lookups in the build query below!
create index hr_data_temp_rnum on hr_data_temp (rnum);

@mystat "redo size"
-- populate hierarchical sample data - a balanced tree with 5 reports per manager
insert /*+ append */ into hr_data
select employee_id,
(select employee_id from hr_data_temp where rnum = ceil((h.rnum-1)/5)) manager_id
from hr_data_temp h;
@mystat2 "redo size"

-- block cleanout after bulk insert and index rebuild
analyze table hr_data compute statistics;

-- now rebuild the constraints and indexes with the nologging option
-- this one will generate a load of redo, as per my previous comment
@mystat "redo size"
alter table hr_data
add constraint hr_data_pk
primary key (employee_id) nologging;
@mystat2 "redo size"

@mystat "redo size"
create index hr_data_mid on hr_data (manager_id) nologging;
create index hr_data_eid_mid on hr_data (employee_id, manager_id) nologging;
@mystat2 "redo size"

-- we need "managers only table" so that the unwrapping query doesn't swift through unneccessary data
create table hr_data_mgr
(
manager_id varchar(64) null
)
cache
nologging;

-- insert manager data
@mystat "redo size"
insert /*+ append */ into hr_data_mgr
select distinct manager_id
from (select h1.employee_id, h2.employee_id manager_id from hr_data h1 left outer join hr_data h2 on h1.employee_id = h2.manager_id) h3;
@mystat2 "redo size"

@mystat "redo size"

-- block cleanout after bulk insert
analyze table hr_data_mgr compute statistics;

-- create index for fast lookups
create index hr_data_mgr_mid on hr_data_mgr (manager_id) nologging;

-- remove the null entry (we know we have one!)
delete from hr_data_mgr where manager_id is null;
@mystat2 "redo size"

-- create "table of employee ids" concept
create or replace type pion_scalar as object
(
employee_id varchar2(64),
hops number(10,0)
);
/

-- create "table of employees and how many hops are they away from me" concept
create type pion_table as table of pion_scalar;
/

-- create our unwrapped view
create table org_h_unwrapped
(
employee_id varchar(64) not null,
all_reports pion_table null
) nologging
nested table all_reports store as org_h_unwrapped_ar_nt( nologging );

-- now let's unwrap this recursive nightmare of ours
@mystat "redo size"
insert /*+ append */ into org_h_unwrapped
select h.manager_id,
(select cast(multiset(select h2.employee_id, level hops from hr_data h2 where not h2.employee_id = h.manager_id connect by nocycle prior h2.employee_id = h2.manager_id start with h2.employee_id = h.manager_id) as pion_table) from dual) all_reports
from hr_data_mgr h;
@mystat2 "redo size"

-- block cleanout after bulk insert
analyze table org_h_unwrapped compute statistics;

-- now rebuild the constraints and indexes with the nologging option
-- this one will generate a load of redo, as per my previous comment
@mystat "redo size"
alter table org_h_unwrapped
add constraint org_h_unwrapped_pk
primary key (employee_id) nologging;
@mystat2 "redo size"

@mystat "redo size"
create index org_h_unwrapped_ar_1 on org_h_unwrapped_ar_nt (employee_id) nologging;
create index org_h_unwrapped_ar_2 on org_h_unwrapped_ar_nt (employee_id, hops) nologging;
create index org_h_unwrapped_ar_3 on org_h_unwrapped_ar_nt (hops) nologging;
@mystat2 "redo size"

-- now flatten out nested tables
create table org_h_flat
(
employee_id varchar(64) not null,
pion_id varchar(64) not null,
hops number(10,0) not null
) nologging;

-- now unwrap the hierarchy!
@mystat "redo size"
insert /*+ append */ into org_h_flat
select u.employee_id, ar_nt.employee_id pion_id, ar_nt.hops hops
from org_h_unwrapped u, table(u.all_reports) ar_nt;
@mystat2 "redo size"

-- block cleanout after bulk insert
analyze table org_h_unwrapped compute statistics;

-- now rebuild the constraints and indexes with the nologging option
@mystat "redo size"
alter table org_h_flat
add constraint org_h_flat_pk
primary key (employee_id, pion_id) nologging;
@mystat2 "redo size"

@mystat "redo size"
-- speed up queries of type "who are my pions"
create index org_h_flat_eid on org_h_flat (employee_id) nologging;

-- speed up queries of type "whose pion am I"
create index org_h_flat_pid on org_h_flat (pion_id) nologging;

-- speed up queries of type "who are my pions that are 3 steps away from me"
create index org_h_flat_eid_h on org_h_flat (employee_id, hops) nologging;
@mystat2 "redo size"

-- let's see those execution paths
set autotrace on

-- how well does the nested table perform?
select u.employee_id, ar_nt.employee_id pion_id, ar_nt.hops hops
from org_h_unwrapped u, table(u.all_reports) ar_nt
where u.employee_id in ('user-1999', 'user-2000');

-- now milk the fruits of your labor - 10x less consistent gets!
column employee_id format a30;
column pion_id format a30;
column hops format 999;
select *
from org_h_flat
where employee_id in ('user-1999', 'user-2000');

-- cleanup
-- for some odd reason the nested table indexes do not get cascade dropped when dropping the master table
drop index org_h_unwrapped_ar_1;
drop index org_h_unwrapped_ar_2;
drop index org_h_unwrapped_ar_3;

drop table hr_data_temp;
drop table hr_data_mgr;
drop table hr_data;
drop table org_h_flat;
drop table org_h_unwrapped;
drop type pion_table;
drop type pion_scalar;
purge recyclebin;



and here's the execution script log




MMESYS@m4u >
MMESYS@m4u > -- this is hr data
MMESYS@m4u > create table hr_data
2 (
3 employee_id varchar(64) not null,
4 manager_id varchar(64) null
5 ) nologging;

Table created.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- create table to hold sample data
MMESYS@m4u > create table hr_data_temp
2 as
3 select rownum rnum, 'user-' || rownum employee_id from all_objects where 1=0;

Table created.

Elapsed: 00:00:00.06
MMESYS@m4u >
MMESYS@m4u > -- insert data
MMESYS@m4u > @mystat "redo size"

NAME VALUE
---------------------------------------------------------------- ----------
redo size 35516

Elapsed: 00:00:00.00
MMESYS@m4u > insert /*+ append */ into hr_data_temp
2 select rownum rnum, 'user-' || rownum employee_id from all_objects where rownum < 30000;

29999 rows created.

Elapsed: 00:00:00.48
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 36000 484

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- block cleanout after bulk insert
MMESYS@m4u > analyze table hr_data_temp compute statistics;

Table analyzed.

Elapsed: 00:00:00.21
MMESYS@m4u >
MMESYS@m4u > -- we need an index to make fast lookups in the build query below!
MMESYS@m4u > create index hr_data_temp_rnum on hr_data_temp (rnum);

Index created.

Elapsed: 00:00:00.04
MMESYS@m4u >
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 592908

Elapsed: 00:00:00.01
MMESYS@m4u > -- populate hierarchical sample data - a balanced tree with 5 reports per manager
MMESYS@m4u > insert /*+ append */ into hr_data
2 select employee_id,
3 (select employee_id from hr_data_temp where rnum = ceil((h.rnum-1)/5)) manager_id
4 from hr_data_temp h;

29999 rows created.

Elapsed: 00:00:00.34
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 593444 536

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- block cleanout after bulk insert and index rebuild
MMESYS@m4u > analyze table hr_data compute statistics;

Table analyzed.

Elapsed: 00:00:00.20
MMESYS@m4u >
MMESYS@m4u > -- now rebuild the constraints and indexes with the nologging option
MMESYS@m4u > -- this one will generate a load of redo, as per my previous comment
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 598512

Elapsed: 00:00:00.01
MMESYS@m4u > alter table hr_data
2 add constraint hr_data_pk
3 primary key (employee_id) nologging;

Table altered.

Elapsed: 00:00:00.07
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 1329616 731104

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 1329616

Elapsed: 00:00:00.01
MMESYS@m4u > create index hr_data_mid on hr_data (manager_id) nologging;

Index created.

Elapsed: 00:00:00.06
MMESYS@m4u > create index hr_data_eid_mid on hr_data (employee_id, manager_id) nologging;

Index created.

Elapsed: 00:00:00.07
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 1354568 24952

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- we need a "managers only" table so that the unwrapping query doesn't sift through unnecessary data
MMESYS@m4u > create table hr_data_mgr
2 (
3 manager_id varchar(64) null
4 )
5 cache
6 nologging;

Table created.

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- insert manager data
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 1360984

Elapsed: 00:00:00.00
MMESYS@m4u > insert /*+ append */ into hr_data_mgr
2 select distinct manager_id
3 from (select h1.employee_id, h2.employee_id manager_id from hr_data h1 left outer join hr_data h2 on h1.employee_id = h2.manager_id) h3;

29999 rows created.

Elapsed: 00:00:00.09
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 1361468 484

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 1361468

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- block cleanout after bulk insert
MMESYS@m4u > analyze table hr_data_mgr compute statistics;

Table analyzed.

Elapsed: 00:00:00.17
MMESYS@m4u >
MMESYS@m4u > -- create index for fast lookups
MMESYS@m4u > create index hr_data_mgr_mid on hr_data_mgr (manager_id) nologging;

Index created.

Elapsed: 00:00:00.04
MMESYS@m4u >
MMESYS@m4u > -- remove the null entry (we know we have one!)
MMESYS@m4u > delete from hr_data_mgr where manager_id is null;

1 row deleted.

Elapsed: 00:00:00.00
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 1376364 14896

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- create "table of employee ids" concept
MMESYS@m4u > create or replace type pion_scalar as object
2 (
3 employee_id varchar2(64),
4 hops number(10,0)
5 );
6 /

Type created.

Elapsed: 00:00:00.06
MMESYS@m4u >
MMESYS@m4u > -- create "table of employees and how many hops are they away from me" concept
MMESYS@m4u > create type pion_table as table of pion_scalar;
2 /

Type created.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- create our unwrapped view
MMESYS@m4u > create table org_h_unwrapped
2 (
3 employee_id varchar(64) not null,
4 all_reports pion_table null
5 ) nologging
6 nested table all_reports store as org_h_unwrapped_ar_nt( nologging );

Table created.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- now let's unwrap this recursive nightmare of ours
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 1461252

Elapsed: 00:00:00.00
MMESYS@m4u > insert /*+ append */ into org_h_unwrapped
2 select h.manager_id,
3 (select cast(multiset(select h2.employee_id, level hops from hr_data h2 where not h2.employee_id = h.manager_id connect by nocycle prior h2.employee_id = h2.manager_id start with h2.employee_id = h.manager_id) as pion_table) from dual) all_reports
4 from hr_data_mgr h;

29998 rows created.

Elapsed: 00:00:15.06
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 12674460 11213208

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- block cleanout after bulk insert
MMESYS@m4u > analyze table org_h_unwrapped compute statistics;

Table analyzed.

Elapsed: 00:00:00.17
MMESYS@m4u >
MMESYS@m4u > -- now rebuild the constraints and indexes with the nologging option
MMESYS@m4u > -- this one will generate a load of redo, as per my previous comment
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 12680988

Elapsed: 00:00:00.01
MMESYS@m4u > alter table org_h_unwrapped
2 add constraint org_h_unwrapped_pk
3 primary key (employee_id) nologging;

Table altered.

Elapsed: 00:00:00.04
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 13416452 735464

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 13416452

Elapsed: 00:00:00.00
MMESYS@m4u > create index org_h_unwrapped_ar_1 on org_h_unwrapped_ar_nt (employee_id) nologging;

Index created.

Elapsed: 00:00:00.39
MMESYS@m4u > create index org_h_unwrapped_ar_2 on org_h_unwrapped_ar_nt (employee_id, hops) nologging;

Index created.

Elapsed: 00:00:00.51
MMESYS@m4u > create index org_h_unwrapped_ar_3 on org_h_unwrapped_ar_nt (hops) nologging;

Index created.

Elapsed: 00:00:00.56
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 13481308 64856

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- now flatten out nested tables
MMESYS@m4u > create table org_h_flat
2 (
3 employee_id varchar(64) not null,
4 pion_id varchar(64) not null,
5 hops number(10,0) not null
6 ) nologging;

Table created.

Elapsed: 00:00:00.01
MMESYS@m4u >
MMESYS@m4u > -- now unwrap the hierarchy!
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 13501444

Elapsed: 00:00:00.00
MMESYS@m4u > insert /*+ append */ into org_h_flat
2 select u.employee_id, ar_nt.employee_id pion_id, ar_nt.hops hops
3 from org_h_unwrapped u, table(u.all_reports) ar_nt;

155583 rows created.

Elapsed: 00:00:00.25
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 13514708 13264

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- block cleanout after bulk insert
MMESYS@m4u > analyze table org_h_unwrapped compute statistics;

Table analyzed.

Elapsed: 00:00:00.21
MMESYS@m4u >
MMESYS@m4u > -- now rebuild the constraints and indexes with the nologging option
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 13524760

Elapsed: 00:00:00.01
MMESYS@m4u > alter table org_h_flat
2 add constraint org_h_flat_pk
3 primary key (employee_id, pion_id) nologging;

Table altered.

Elapsed: 00:00:00.59
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 18779748 5254988

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > @mystat "redo size"
MMESYS@m4u > set echo off

NAME VALUE
---------------------------------------------------------------- ----------
redo size 18779748

Elapsed: 00:00:00.00
MMESYS@m4u > -- speed up queries of type "who are my pions"
MMESYS@m4u > create index org_h_flat_eid on org_h_flat (employee_id) nologging;

Index created.

Elapsed: 00:00:00.35
MMESYS@m4u >
MMESYS@m4u > -- speed up queries of type "whose pion am I"
MMESYS@m4u > create index org_h_flat_pid on org_h_flat (pion_id) nologging;

Index created.

Elapsed: 00:00:00.48
MMESYS@m4u >
MMESYS@m4u > -- speed up queries of type "who are my pions that are 3 steps away from me"
MMESYS@m4u > create index org_h_flat_eid_h on org_h_flat (employee_id, hops) nologging;

Index created.

Elapsed: 00:00:00.43
MMESYS@m4u > @mystat2 "redo size"
MMESYS@m4u > set echo off

NAME V DIFF
---------------------------------------------------------------- ---------- ----------
redo size 18841012 61264

Elapsed: 00:00:00.00
MMESYS@m4u >
MMESYS@m4u > -- let's see those execution paths
MMESYS@m4u > set autotrace on
MMESYS@m4u >
MMESYS@m4u > -- how well does the nested table perform?
MMESYS@m4u > select u.employee_id, ar_nt.employee_id pion_id, ar_nt.hops hops
2 from org_h_unwrapped u, table(u.all_reports) ar_nt
3 where u.employee_id in ('user-1999', 'user-2000');

EMPLOYEE_ID PION_ID HOPS
---------------------------------------------------------------- ---------------------------------------------------------------- ----------
user-1999 user-9992 2
user-1999 user-9993 2
user-1999 user-9994 2
user-1999 user-9995 2
user-1999 user-9996 2
user-2000 user-9997 2
user-2000 user-9998 2
user-2000 user-9999
Tom Kyte
December 23, 2004 - 8:07 pm UTC

you do have a choice.


this is way way WAYYYYYYYYY TOOOOOOO LOOOOONNNNGGGG i'm not reading it all.


you do have a choice..

what is a nested table?

It is oracle adding a 16 byte raw to a parent table.
It is oracle adding a unique constraint on that table.
It is oracle creating a child table (that you have to know to index).
It is oracle adding to that child table a foreign key back to this 16 byte raw.


Now, please tell me "why you have no choice" -- you cannot

a) create a parent table with a primary key (hey, you got that already)
b) create a child table with a foreign key back to the parent

and if you want the syntactic sugar of the nested table, create an OR view.


A nested table is nothing more than an inefficient parent child construct without any of the advantages of a true parent child relationship (eg: you have only LIMITS, no advantages, nothing "new", nothing "special")


So, why don't you have "a choice" again?
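For illustration - a minimal sketch of that explicit parent/child plus object-relational view approach, using hypothetical names and types patterned on the example above:

create table emp_parent
( employee_id varchar2(64) primary key );

create table emp_reports
( employee_id varchar2(64) not null references emp_parent(employee_id),
  report_id   varchar2(64) not null,
  hops        number(10,0) not null,
  constraint emp_reports_pk primary key (employee_id, report_id) );

-- optional syntactic sugar: a collection type and a view that looks
-- like the nested table
create type report_t as object ( report_id varchar2(64), hops number );
/
create type report_nt as table of report_t;
/
create view emp_or_view
as
select p.employee_id,
       cast( multiset( select report_t( c.report_id, c.hops )
                         from emp_reports c
                        where c.employee_id = p.employee_id )
             as report_nt ) all_reports
  from emp_parent p;

You keep real, indexable, constrainable parent/child tables - and the view gives back the collection-style access.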

Why insert /*+ append */ into is quicker than insert into

A reader, December 26, 2004 - 8:08 am UTC

Thank you for the answer

too long? here's a short version

Nik Malenovic, December 28, 2004 - 10:26 am UTC

Tom,

Here's the short version: Evil HR systems feed me write-optimized recursive employee-manager information. I need to build a read-query optimized table that holds "who are my reports". Here's the only way I know how to do this without using the slow-iterative-programmer-approach-that-takes-18-hours-to-do:

insert into org_h_unwrapped
select h.manager_id,
(
select cast
(
multiset
(
select h2.employee_id, level hops
from hr_data h2
where not h2.employee_id = h.manager_id
connect by nocycle prior h2.employee_id = h2.manager_id
start with h2.employee_id = h.manager_id
) as pion_table
) from dual
) all_reports
from hr_data_mgr h;

if there is a way to do this WITHOUT nested tables, I am all ears Mr. Spock!

thanks,

Nik


Tom Kyte
December 28, 2004 - 11:08 am UTC

you do know that you are doing a "slow-iterative-programmer-approach" -- that subquery is being run in a nested loop, just like you would be.

but yes, I can do this without nested tables -- it takes at most 2 sql statements, but probably just one (i don't see the need for the parent table whatsoever, you just want the flattened hierarchy -- that is all you need)


ops$tkyte@ORA10G> create table emp as select empno, ename, mgr from scott.emp;
Table created.
<b>
that is your HR_DATA table</b>
 
ops$tkyte@ORA10G> create table mgrs as select distinct mgr, (select ename from emp where empno=e.mgr) nm from scott.emp e;
Table created.


ops$tkyte@ORA10G> column nm format a20
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select * from mgrs;
 
       MGR NM
---------- --------------------
      7566 JONES
      7698 BLAKE
      7782 CLARK
      7788 SCOTT
      7839 KING
      7902 FORD
 
 
7 rows selected.

 
<b>that is your HR_DATA_MGR table -- which suffices to be your "unwrapped" table, unwrapped doesn't need to exist.  To populate the flattened structure:</b>
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select rpad('*',2*level, '*' ) || ename nm,
  2         empno,
  3         mgr,
  4         connect_by_root empno  top_mgr
  5    from emp
  6   start with empno in ( select mgr from mgrs )
  7  connect by nocycle prior empno = mgr
  8  /
 
NM                        EMPNO        MGR    TOP_MGR
-------------------- ---------- ---------- ----------
**SCOTT                    7788       7566       7788
****ADAMS                  7876       7788       7788
**FORD                     7902       7566       7902
****SMITH                  7369       7902       7902
**JONES                    7566       7839       7566
****SCOTT                  7788       7566       7566
******ADAMS                7876       7788       7566
****FORD                   7902       7566       7566
******SMITH                7369       7902       7566
**BLAKE                    7698       7839       7698
****ALLEN                  7499       7698       7698
****WARD                   7521       7698       7698
****MARTIN                 7654       7698       7698
****TURNER                 7844       7698       7698
****JAMES                  7900       7698       7698
**CLARK                    7782       7839       7782
****MILLER                 7934       7782       7782
**KING                     7839                  7839
****JONES                  7566       7839       7839
******SCOTT                7788       7566       7839
********ADAMS              7876       7788       7839
******FORD                 7902       7566       7839
********SMITH              7369       7902       7839
****BLAKE                  7698       7839       7839
******ALLEN                7499       7698       7839
******WARD                 7521       7698       7839
******MARTIN               7654       7698       7839
******TURNER               7844       7698       7839
******JAMES                7900       7698       7839
****CLARK                  7782       7839       7839
******MILLER               7934       7782       7839
 
31 rows selected.


<b>I see you are using 10g, nocycle, so connect by root is there, but sys_connect_by_path could do the same (which is what I use in a system we've had that does frankly the same thing :).....</b>


 

man of few words

Nik Malenovic, December 28, 2004 - 12:10 pm UTC

Tom,

you are a verbose-less webster. My hat off to you.

thanks,

Nik


A reader, March 23, 2005 - 3:09 pm UTC

Hi Tom,

I was doing a

insert /*+ append */ into TABA select * from TABB

it was taking around 20 minutes - TABA is a compressed table

But after giving a
ALTER SESSION FORCE PARALLEL DDL;

the above insert completes in around 7 minutes?

Any idea why this insert into a compressed table with append hint is faster after giving a

ALTER SESSION FORCE PARALLEL DDL;

thanks
Regi

Tom Kyte
March 23, 2005 - 6:27 pm UTC

did it perhaps run in parallel?

A reader, March 23, 2005 - 3:10 pm UTC

Even before giving

ALTER session force parallel ddl;

the insert /*+ append */ into ... was executed in parallel

So was wondering what effect this

ALTER session force parallel ddl;

had on the insert?

Tom Kyte
March 23, 2005 - 6:27 pm UTC

probably not the alter session then, probably the load on the machine was not "the same"

if you think it was, show us a direct cut and paste - a tkprof would be even better.

Purpose of index with nologging option

Sean, April 21, 2005 - 12:24 pm UTC

Hi Tom,

You mentioned that for a direct-load insert on a table with the nologging option, redo is still generated for a nologging index. What is the purpose of an index with the nologging option if that is the case?

Thanks so much for your help.




Tom Kyte
April 22, 2005 - 9:03 am UTC

create index
rebuild index

a total "do the index over" can be done nologging.
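For example - a sketch with a hypothetical index name:

-- a wholesale "do over" of the index can skip redo generation:
alter index t_idx rebuild nologging;
-- whereas row-at-a-time maintenance of T_IDX during DML is always logged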

generate log file for index with nologging option for table.

Sean, April 24, 2005 - 5:57 pm UTC

Hi Tom,

When one decides to direct-load data with the nologging option, it seems that there is no point in generating redo for the index. Shouldn't Oracle be smart enough to notice that and not generate redo for the index?

Thanks so much for your help.


Tom Kyte
April 24, 2005 - 8:25 pm UTC

in order to MERGE into an existing index -- you need to log.

Else, if something fails in the middle, you end up with corruption.

to add to the "end" of the table (append, above the high water mark) you can skip redo

to merge into an existing complex data structure, you have to be recoverable.

If you have no index, we can of course build a new one without logging (it is like an append), but to merge into an existing structure -- nope.

Direct insert with database link

A reader, April 25, 2005 - 8:42 am UTC

Hi Tom,

I want to insert about 2,000,000 records into an empty table from database A to B (both tables have the same structure). A is an archivelog database, B is a noarchivelog database. Below are 4 statements:

1) insert /*+ append */ into table@B select * from table
(in database A)
2) insert into table@B select * From table
(in database A)
3) insert /*+ append */ into table select * From table@A
(in database B)
4) insert into table select * from table@A
(in database B)

Q1) Which statement do you prefer? I tried and found that 3 is the best option in terms of redo generated, rollback segment used and performance. Am I right? But I remember you mentioned that direct insert does not "take effect" when using a database link (in other questions). However, I found that the redo generated in option 3 (18,648 redo size in autotrace) is much smaller than in option 4 (1,006,752 in autotrace). Does this mean that direct insert can still take effect when using a database link?

Q2) Do you think using this method is better than using imp/exp (assume both databases are 8.1.7)?

Thanks
David


Tom Kyte
April 25, 2005 - 9:07 am UTC

there is a difference between attempting to direct path insert OVER a dblink and attempting to direct path insert data retrieved FROM a dblink...

do this,

select * from table_you_inserted_into where rownum=1;

right after the insert append, before a commit. If you get a row, it was NOT direct path; if you get an error that you cannot read the table, it WAS a direct path operation.
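A minimal sketch of that test, with hypothetical table and dblink names:

insert /*+ append */ into t select * from t@remote_site;
select * from t where rownum = 1;
-- a row comes back  => it was NOT direct path
-- ORA-12838         => it WAS direct path
rollback;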



A reader, April 26, 2005 - 9:36 am UTC

Hi Tom,

I tried your suggestion and found that a pull can use the direct insert (while a push cannot). Thanks.


David

drop index to avoid redo log during direct load

Sean, April 28, 2005 - 3:57 pm UTC

You mentioned that one should drop the index, do the direct load and recreate the index with nologging in order to generate the least amount of redo. But I cannot drop the index used for the primary key. Any workaround?

Thanks so much for your help.



Tom Kyte
April 28, 2005 - 4:24 pm UTC

disable the constraint, away goes the unique index.
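A sketch of the sequence, with hypothetical names (note that while the constraint is disabled, duplicates could be loaded - the enable step would then fail, or at best spend time validating):

alter table t disable constraint t_pk;
insert /*+ append */ into t select * from stage_t;
commit;
alter table t enable constraint t_pk;   -- recreates the unique index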

ORA-12838

Yogesh, August 01, 2005 - 1:04 pm UTC



Can you please tell us why it is not allowed to read/modify a table after a direct load? I mean, internally, what would go wrong if one selected/modified the data?

Tom Kyte
August 01, 2005 - 1:31 pm UTC

there is no undo generated for a direct path load, it (the data) was written above the high water mark. Read consistency - something that cannot be bypassed - would have to be bypassed.

Why not select

Yogesh, August 02, 2005 - 6:25 am UTC

I can understand about modify. But why not select?

Tom Kyte
August 02, 2005 - 7:55 am UTC

because read consistency is all about SELECT (reading) not modification.

nologging in ARCHIVELOG database

sns, August 18, 2005 - 1:27 pm UTC

Does INSERT /*+ APPEND NOLOGGING */ make sense in an ARCHIVELOG database? Do the redo log files and archive log files grow in this case?

I am trying to do INSERT..SELECT for about 100 million records in an ARCHIVELOG database and I don't want the archive log files to get filled up in this case.

Is there any other way to bypass archive log/redo log?

Thanks,

Tom Kyte
August 18, 2005 - 4:45 pm UTC

insert /*+ append nologging */ is the same as:

insert /*+ append foobar */
or
insert /*+ append hello-world */


it is not meaningful, the word NOLOGGING that is.

You would have to alter the table to be nologging (remember -- INDEXES -- will be logged regardless)

BUT FIRST -- get the permission of your DBA since they must now schedule a backup right after you are done.
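A sketch of the combination that actually skips table redo in an archivelog database, assuming a table T with no indexes (big_source is a placeholder):

alter table t nologging;
insert /*+ append */ into t select * from big_source;
commit;
-- any indexes on T would still be maintained with full logging,
-- and the segment is unrecoverable until the next backup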

OK

Kumar, August 19, 2005 - 6:52 am UTC

Hi Tom,
I have a table as 

SQL> create table e as select * from emp where 1=0
  2  /

Table created.


I would like to insert rows from emp table into table "E"
and on success delete rows from source table (emp table) in a "SINGLE COMMAND".

Is that possible in SQL using a single command?


 

Tom Kyte
August 20, 2005 - 4:01 pm UTC

no, there is no "insert into t2 and delete from t1 in a single command" command.
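The conventional approach is two statements in a single transaction - a sketch using the tables above:

insert into e select * from emp;
delete from emp;
commit;   -- both changes become permanent together, or roll back together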

nologging/archive log

sns, August 19, 2005 - 9:43 am UTC

Actually I collected this piece of information probably from one of your discussions.
Table Mode   Insert Mode   ArchiveLog Mode   Result
-----------  ------------  ----------------  --------------
LOGGING      APPEND        ARCHIVE LOG       redo generated
NOLOGGING    APPEND        ARCHIVE LOG       no redo
LOGGING      no append     ARCHIVE LOG       redo generated
NOLOGGING    no append     ARCHIVE LOG       redo generated
LOGGING      APPEND        NOARCHIVE LOG     no redo
NOLOGGING    APPEND        NOARCHIVE LOG     no redo
LOGGING      no append     NOARCHIVE LOG     redo generated
NOLOGGING    no append     NOARCHIVE LOG     redo generated

Why should the DBAs have to worry about the backups? If my 100 million record insert fails, then I will rerun the sql. In fact, I don't want these 100 million records to be recorded in the archive log files.

thanks,

Tom Kyte
August 20, 2005 - 4:11 pm UTC

you didn't collect a "nologging hint" from me, don't know what you mean by that comment?

the second after you do your insert append, the DBAs need to back up the data you just LOADED.


You in fact do want this in the archive log files most likely.

nologging/archivelog,

sns, August 20, 2005 - 4:54 pm UTC

I probably got the chart about table mode, insert mode, archive log mode, and result from somewhere else, maybe. Since I browse your website at least 10 times a day, I guessed you would have given that chart.

<quote>
insert /*+ append nologging */ is the same as:

insert /*+ append foobar */
or
insert /*+ append hello-world */
</quote>

I presented that chart to make sure your comment (in the quote) and the information in the chart are both the same.




Tom Kyte
August 20, 2005 - 5:19 pm UTC

ok, I gave you the chart, but I didn't give you that (non-functional) hint.


NOLOGGING is *not* a hint. Putting it into the hint makes it look like you believe it is.

chart,

sns, August 20, 2005 - 5:38 pm UTC

I totally agree with your statement.

NOLOGGING or avoiding too many entries in redo files and archive log files (if the database is in archive log mode) is done by setting the NOLOGGING parameter at the table level. Specifying NOLOGGING as a hint DOES NOT make any sense.

If my INSERT (using the append hint) into a table which is set to NOLOGGING does not generate sufficient information in the archive log files, why do we need to back up the archive log files (or redo log files)?

Say for example:
create table abc nologging as select * from all_objects where 1=2;

alter session enable parallel dml;
insert /*+ append */ into abc select * from all_objects;

commit;

In a NON ARCHIVE LOG database,

Will my redo log files or archive log files contain enough information to restore my table (abc) in case I lose my table?

Same question in an ARCHIVELOG database.

Thanks,




Tom Kyte
August 20, 2005 - 8:13 pm UTC

ok, you do this at time T0:

insert /*+ append */ into t select * from something;
commit;

At time t1, I do:

update t set the_most_important_data_setting_it_right = whatever;
commit;

at time t2, media failure happens


Table T is effectively GONE and all changes between T0 and T2 are gone as well. Bummer.

NOLOGGING indexes and direct-path inserts

Brandon Allen, August 24, 2005 - 1:07 pm UTC

Hi Tom, in your first reply on this thread, you said:

"q1) you have indexes on the table. indexes cannot be "appended" to, they must be modified. These modifications must be logged."

My own testing supports your claim, but the Oracle documentation says we are both wrong - see below, from:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_510a.htm#SQLRF01209

Here is the relevant text:

"logging_clause

Specify whether the creation of the index will be logged (LOGGING) or not logged (NOLOGGING) in the redo log file. This setting also determines whether subsequent Direct Loader (SQL*Loader) and direct-path INSERT operations against the index are logged or not logged. LOGGING is the default."

Am I missing something here, or is this a documentation bug?

Thanks,
Brandon Allen

Tom Kyte
August 24, 2005 - 6:34 pm UTC

ops$tkyte@ORA10GR1> create table t ( x int, constraint t_pk primary key(x) );

Table created.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> set autotrace on
ops$tkyte@ORA10GR1> insert /*+ APPEND */ into t select rownum from all_objects;

47804 rows created.


Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel


SP2-0612: Error generating AUTOTRACE EXPLAIN report

Statistics
----------------------------------------------------------
        954  recursive calls
       2979  db block gets
      61609  consistent gets
          0  physical reads
    2780840  redo size
        789  bytes sent via SQL*Net to client
        844  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
      47804  rows processed

ops$tkyte@ORA10GR1> set autotrace off
ops$tkyte@ORA10GR1> select * from t;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


ops$tkyte@ORA10GR1> alter table t drop constraint t_pk;

Table altered.

ops$tkyte@ORA10GR1> set autotrace on
ops$tkyte@ORA10GR1> insert /*+ APPEND */ into t select rownum from all_objects;

47803 rows created.


Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel


SP2-0612: Error generating AUTOTRACE EXPLAIN report

Statistics
----------------------------------------------------------
        474  recursive calls
        268  db block gets
      61185  consistent gets
          0  physical reads
      19584  redo size
        790  bytes sent via SQL*Net to client
        844  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
      47803  rows processed

ops$tkyte@ORA10GR1> set autotrace off
ops$tkyte@ORA10GR1> select * from t where rownum = 1;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>the documentation is in error, thanks -- one more to file when I get back :)</b>
 

LMT redo

Phil, November 12, 2007 - 1:35 pm UTC

Hi Tom
Great thread this - like the others, I am after a way of generating the fastest possible insert on a system that has reached its limit. We have moved the db to noarchivelog mode and moved the index tablespace to another disk, but still cannot increase performance. I guess that due to indexes we are generating redo - is there anything else we can do to improve the situation, such as moving the tablespace for redo?
Phil
Tom Kyte
November 16, 2007 - 1:30 pm UTC

no archive log mode won't go any faster. All that did was ensure that you WILL LOSE LOTS OF DATA someday very soon, maybe right now. Don't do that. That would be the least smart move.



Without knowing what your BOTTLENECKS are - no one can give you a silver bullet. Have you considered tracing this - or at least looking at a 10-15 minute statspack report during high load - to see what you are WAITING on?
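For reference, a rough sketch of taking such a report, assuming statspack is installed under the usual PERFSTAT schema:

exec statspack.snap
-- ... let the high-load period run for 10-15 minutes ...
exec statspack.snap
@?/rdbms/admin/spreport    -- prompts for the two snapshot ids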

Getting pressure from management.

Tom, February 11, 2008 - 1:52 pm UTC

Management wants a process to go faster and is trying to force us to implement no logging during the maintenance window on an OLTP table.

Scenario: OLTP system all day long doing several hundred thousand transactions a day. During the maintenance window we pull data from another system into a staging table as a nologging event.

Next we insert this data into a 200-million-plus row table as a logged transaction. The table has several indexes.

I stated that I was ok with the staging table being nologging because I can go back to the source and get the data if we have a media failure. But I want the insert into the OLTP table to be logged since the users will continue to add more data to this table through the course of the business day. If we have a failure I want the DBA group to be able to recover this OLTP table.

Does this logic make sense?

Will having a staging table as nologging cause the DBAs problems if they have to recover the database? Or is it just recovered without that table?

Thank you for your help.

Tom Kyte
February 11, 2008 - 10:46 pm UTC

... Management wants a process to go faster and is trying to force us to implement
no logging during the maintenance window on an OLTP table. ...

what has management done to ascertain that redo log generation is their bottleneck?

Unless and until I see numbers for this, I'll tell management to please go back to OK'ing my expense reports and making sure I have a career path.


so, how did they figure out "redo generation is causing us 1 hour of extra time, if we use insert /*+ append */ to skip undo and redo generation on the TABLE (but NEVER NEVER the indexes - and the indexes probably generate the BULK of the undo and redo), we'll get an hour back"??!?!?!?!?

numbers please.


you'll find that MOST of the undo/redo is actually.........

on the indexes, do a test like this with your volumes of data and see what you see.... measure the redo generated - you (and "management") might be really surprised....



ops$tkyte%ORA10GR2> create table t as select * from all_objects where rownum <= &rows;
ops$tkyte%ORA10GR2> create index t_idx1 on t(object_name);
ops$tkyte%ORA10GR2> create index t_idx2 on t(object_type);
ops$tkyte%ORA10GR2> create index t_idx3 on t(object_id,created);
ops$tkyte%ORA10GR2> create index t_idx4 on t(last_ddl_time);

ops$tkyte%ORA10GR2> @mystat "redo size"
ops$tkyte%ORA10GR2> set echo off
NAME                        VALUE
---------------------- ----------
redo size               126886324

ops$tkyte%ORA10GR2> insert into t select * from all_objects where rownum <= &rows;
49779 rows created.

ops$tkyte%ORA10GR2> commit;
Commit complete.

ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size               174686564       47,800,240

ops$tkyte%ORA10GR2> drop table t;
ops$tkyte%ORA10GR2> create table t as select * from all_objects where rownum <= &rows;
ops$tkyte%ORA10GR2> create index t_idx1 on t(object_name);
ops$tkyte%ORA10GR2> create index t_idx2 on t(object_type);
ops$tkyte%ORA10GR2> create index t_idx3 on t(object_id,created);
ops$tkyte%ORA10GR2> create index t_idx4 on t(last_ddl_time);
ops$tkyte%ORA10GR2> alter table t nologging;
ops$tkyte%ORA10GR2> alter index t_idx1 nologging;
ops$tkyte%ORA10GR2> alter index t_idx2 nologging;
ops$tkyte%ORA10GR2> alter index t_idx3 nologging;
ops$tkyte%ORA10GR2> alter index t_idx4 nologging;

ops$tkyte%ORA10GR2> @mystat "redo size"
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE
---------------------- ----------
redo size               186562680

ops$tkyte%ORA10GR2> insert /*+ append */ into t select * from all_objects where rownum <= &rows;

49764 rows created.

ops$tkyte%ORA10GR2> select * from t;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

ops$tkyte%ORA10GR2> commit;
Commit complete.

ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size               213955520       27,392,840

A reader, September 25, 2008 - 3:08 am UTC

Hi,

SQL> create table t ( x int );

Table created.

SQL> set autotrace on statistics
SQL> insert /*+ append */ into t select rownum from all_objects;

17904 rows created.


Statistics
----------------------------------------------------------
       1342  recursive calls
         99  db block gets
      40633  consistent gets
         86  physical reads
      13708  redo size
        772  bytes sent via SQL*Net to client
        707  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         16  sorts (memory)
          0  sorts (disk)
      17904  rows processed

SQL>
SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


Why?

Tom Kyte
September 25, 2008 - 3:16 pm UTC

because you did a direct path load and that does not generate undo and hence 'breaks' read consistency. You cannot read a table you have direct pathed unless and until you commit the transaction that direct pathed. It is the "rule", the way it works.

Index organized tables and redo with NOLOGGING

Ranga Chakravarthi, October 23, 2008 - 9:48 pm UTC

I understand that indexes generate redo and hence IOTs do too. What I don't get is why redo is generated in the first place when we specify nologging on the IOT and use the append hint to insert.
However, when creating the same IOT nologging using CTAS, it doesn't generate a lot of redo.

Doesn't make sense. I wish the behavior were the same as CTAS for inserts on IOTs.
The insert takes 3 times as long as the CTAS.
Tom Kyte
October 24, 2008 - 1:02 pm UTC

you understand of course that append doesn't make sense for an index right? You cannot "append" to an index, you have to sort of, well, place the data precisely where it belongs.


ops$tkyte%ORA9IR2> drop table t;

Table dropped.

ops$tkyte%ORA9IR2> create table t ( x int primary key, y int ) ;

Table created.

ops$tkyte%ORA9IR2> insert /*+ APPEND */ into t select 1, 1 from dual;

1 row created.

ops$tkyte%ORA9IR2> select * from t;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>that shows it was direct pathed into - it was "appended" to</b>

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> drop table t;

Table dropped.

ops$tkyte%ORA9IR2> create table t ( x int primary key, y int ) organization index;

Table created.

ops$tkyte%ORA9IR2> insert /*+ APPEND */ into t select 1, 1 from dual;

1 row created.

ops$tkyte%ORA9IR2> select * from t;

         X          Y
---------- ----------
         1          1

<b>that shows it was NOT direct pathed</b>

ops$tkyte%ORA9IR2>




You cannot direct path into an existing IOT, it is a complex structure, the data MUST BE MERGED with any existing data. During that merge process - we are modifying existing blocks (and maybe adding new ones - but the EXISTING root block at the very least will be modified if the index is 'empty'). Whenever you modify existing blocks - you have to be prepared to REDO THAT operation in the event of failure after the commit of that process.

Hence, redo must exist.


In the case of a create table as select - we can in fact direct path, there is nothing "existing" already.



and you know what, in a system whereby the redo is configured correctly, it should in no way cause it to take three times or even two times or even 1.x times really. You might want to trace that insert and create table as select and see what is different about them - what waits you experienced, if the plans used were the same and so on.

Remember also, a create table as select will bypass the buffer cache - I can show you lots of create table as selects that take lots longer than the equivalent inserts solely because the data isn't buffered as well. Direct path is not necessarily "fast path"

using /*+ APPEND */ for insert from an External Table

Thiru, February 04, 2009 - 8:41 am UTC

Hi Tom,

I have external tables that point to files of half a million to one million records, generated every minute. In order to access this data through regular relational tables, I do an /*+ APPEND */:
insert /*+ APPEND */ into regular_tbl select * from ext_tbl;

Will the following approach scale?

a. The regular_tbl is created without any indexes to avoid the huge redo.
b. Created a MV log on the regular_tbl with ROWID.
c. Created a MV as select * from regular_tbl with Fast Refresh option. Also indexed the MV on appropriate columns.
d. Did the MV refresh once every, say, 15 min.

Will the redo saving I got from the /*+ APPEND */ still hold good with the MV approach?

The insert using the APPEND takes 9 sec for 1 million records compared to 3 min without the APPEND. This way I thought that the database would catch up to the speed at which the external table files are being generated.

Thanks for the time.
Tom Kyte
February 04, 2009 - 11:58 am UTC

why are you blaming redo for the time differential?

You are doing a ton of extra work here, why wouldn't you just insert /*+append*/ into a table with indexes instead of this multi-step process????????

Why use a materialized view at all.

Time taken to insert with index

Thiru, February 04, 2009 - 1:04 pm UTC

Tom,

The time taken to insert with the append hint into an indexed table is around 11 min for 1 million records compared to 9 sec without indexes. As we are getting a data inflow of almost half a million records every minute, I thought going the MV refresh route would make sense.

Comments? Suggestions?

Thanks again for the response.
Tom Kyte
February 04, 2009 - 1:28 pm UTC

umm, how much time does maintaining the materialized view take, please - you need to consider that as well - it is part of the process.

So, the time to get the data from file into the table it needs to be in is???

MV Log not working with APPEND hint

A reader, February 05, 2009 - 11:37 am UTC

Tom,

This is what I did, and it looks like the MV log does not work with APPEND inserts.

truncate table data_from_ext_tbl;

drop materialized view log on data_from_ext_tbl

create materialized view log on data_from_ext_tbl with rowid;

drop materialized view CONCORD_DATA_MV;

CREATE MATERIALIZED VIEW CONCORD_DATA_MV
REFRESH FAST ON DEMAND
WITH ROWID
AS
select * from data_from_ext_tbl;

create index concord_data_mv_idx on CONCORD_DATA_MV ("id","di","hi","ti")

select count(*) from CONCORD_DATA_MV
0
select count(*) from DATA_FROM_EXT_TBL
0
insert /*+ APPEND */ into DATA_FROM_EXT_TBL select * from vizgems_data_ext_tbl;
commit;
begin
dbms_mview.refresh('CONCORD_DATA_MV','F');
end;
/
select count(*) from concord_data_mv;
0 records
insert into DATA_FROM_EXT_TBL select * from vizgems_data_ext_tbl;
commit;
begin
dbms_mview.refresh('CONCORD_DATA_MV','F');
end;
/
select count(*) from concord_data_mv;
--212 records

Why does the MV log not capture the inserts with APPEND hints? Any workaround?

Thanks a lot.
Tom Kyte
February 05, 2009 - 12:06 pm UTC

it doesn't have to. When you do a direct path load like that - we know that everything between rowid X and Y is "new" (direct path loads write above the high water mark in free space, not used space).

Hence we do magic and log the rowids.


ops$tkyte%ORA10GR2> create table t
  2  as
  3  select * from all_users;

Table created.

ops$tkyte%ORA10GR2> alter table t add constraint t_pk primary key(user_id);

Table altered.

ops$tkyte%ORA10GR2> create materialized view log on t
  2  with sequence, rowid (created)
  3  including new values;

Materialized view log created.

ops$tkyte%ORA10GR2> create materialized view mv
  2  refresh fast
  3  as
  4  select created, count(*) from t group by created;

Materialized view created.


so, there is my complete setup (hey, you would actually be able to replicate my findings - unlike ME with YOU).

Now, for conventional path inserts


ops$tkyte%ORA10GR2> insert into t (username,created,user_id)
  2  values ( 'test',sysdate,1234 );

1 row created.

ops$tkyte%ORA10GR2> commit;

Commit complete.


we log into the mv log and refresh from there:

ops$tkyte%ORA10GR2> select created, m_row$$, snaptime$$,
  2         dmltype$$, old_new$$
  3    from mlog$_t;

CREATED              M_ROW$$              SNAPTIME$$           D O
-------------------- -------------------- -------------------- - -
05-feb-2009 11:58:51 AAApcCAAEAAAAwVAAA   01-jan-4000 00:00:00 I N

ops$tkyte%ORA10GR2> select staleness from user_mviews;

STALENESS
-------------------
NEEDS_COMPILE

ops$tkyte%ORA10GR2> exec dbms_mview.refresh('MV');

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select staleness from user_mviews;

STALENESS
-------------------
FRESH


but if you direct path it:

ops$tkyte%ORA10GR2> insert into t (username,created,user_id)
  2  select /*+ APPEND */ username, created, -user_id
  3  from all_users where rownum <= 5;

5 rows created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select created, m_row$$, snaptime$$,
  2         dmltype$$, old_new$$
  3    from mlog$_t;

no rows selected


we don't (thankfully, why would you want a lot of conventional path inserts in the middle of your big direct path load?)

But, no worries, we still know what is new:


ops$tkyte%ORA10GR2> select dmloperation, timestamp, lowrowid, highrowid
  2    from sys.sumdelta$
  3   where tableobj# = (select object_id
  4                        from dba_objects
  5                       where object_name = 'T'
  6                         and object_type='TABLE'
  7                         and owner = 'OPS$TKYTE')
  8  /

D TIMESTAMP            LOWROWID           HIGHROWID
- -------------------- ------------------ ------------------
I 01-jan-4000 12:00:00 AAApcCAAEAAAAw5AAA AAApcCAAEAAAAw5H//



Magic table - we know it is there...

ops$tkyte%ORA10GR2> select t.*
  2    from t, ( select dmloperation, timestamp, lowrowid, highrowid
  3                from sys.sumdelta$
  4               where tableobj# = (select object_id
  5                                    from dba_objects
  6                                   where object_name = 'T'
  7                                     and object_type = 'TABLE'
  8                                     and owner = 'OPS$TKYTE')
  9             ) x
 10   where t.rowid between x.lowrowid and x.highrowid
 11  /

USERNAME                          USER_ID CREATED
------------------------------ ---------- --------------------
BIG_TABLE                             -58 14-dec-2005 20:58:32
DIP                                   -19 30-jun-2005 19:14:45
TSMSYS                                -21 30-jun-2005 19:20:34
LOTTOUSER                             -65 30-dec-2005 11:22:47
MDDATA                                -50 30-jun-2005 19:41:20


and it shows us the new stuff...


NEW VALUES keyword in MV log

A reader, February 05, 2009 - 1:42 pm UTC

Tom,

Thank you so much for not only responding so quickly but in so much detail.

I was able to get the refresh with the APPEND mode insert.

So, is it the "including new values" in the MV log creation that is doing the work/trick here?




Tom Kyte
February 05, 2009 - 2:24 pm UTC

they were needed for the materialized view itself, regardless of direct path.

APPEND Inserts

A reader, February 05, 2009 - 9:36 pm UTC

Tom,

Here is something very interesting.

When I do an insert with APPEND and specify the VALUES keyword, the MV log and the refresh go well. But doing an insert with APPEND as select * from .. does not do the job.

drop table t1
create table t1 ( a number)

drop materialized view log on t1
drop materialized view t_mv

create materialized view log on t1
with rowid
including new values;

create index t_mv_idx on t_mv(a)

create materialized view t_mv
refresh fast with rowid as select * from t1

INSERT /*+ APPEND */ INTO t1 values(1000)
commit;

select * from t1
--1000


begin
dbms_mview.refresh('T_MV','F');
end;
/

select * from t_mv
--1 records


INSERT /*+ APPEND */ INTO t1 select rownum from user_users;
commit;

select * from t1 ( 2 records)
--1000
--1


begin
dbms_mview.refresh('T_MV','F');
end;
/

select * from t_mv
--1 records

The new insert did not come through.

Any ideas? I am very surprised that if APPEND can create an MV log entry with a regular insert, it does not create one with an insert as select.

Tom Kyte
February 06, 2009 - 3:25 pm UTC

that is because insert /*+ append */ works with BULK inserts - you use insert as select with it, not values.

It did not direct path with values, so the append with values is "not useful"

NOTE: in 11g, there is an open bug currently, append with values is appending... beware of that...



It's the other way round

Thiru, February 06, 2009 - 3:29 pm UTC

Sorry Tom if you meant it the other way..

Actually, it is working when used with VALUES, but not working when doing insert /*+ APPEND */ into tbl select * from tbl2


Tom Kyte
February 06, 2009 - 4:19 pm UTC

what I was saying is

append + values = conventional path insert, not direct

append + select = possibly direct path insert.


what I wrote mirrors that:

...
that is because insert /*+ append */ works with BULK inserts - you use insert as select with it, not values.

It did not direct path with values, so the append with values is "not useful" ....
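A minimal sketch of that difference, assuming a simple heap table t1 as above (the pre-commit SELECT test from earlier in this thread shows which path was taken):

insert /*+ append */ into t1 values ( 1 );
select count(*) from t1;   -- succeeds: the VALUES form was conventional path
rollback;

insert /*+ append */ into t1 select 1 from dual;
select count(*) from t1;   -- ORA-12838: the SELECT form was direct pathed
rollback;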



Help !! Your Example works as expected

Thiru, February 06, 2009 - 4:44 pm UTC

I executed your test case exactly the way you did and it works. Now, trying almost the same way, the refresh does not work. Even after the refresh, the "magic table" still has the records. The only difference in my test case is that there is no index on the table on which the mv log is created. And that would be the way in our upcoming application.

----Please can you let me know where I am making this blunder:

drop table t1
create table t1 ( a number)

drop materialized view log on t1
drop materialized view t_mv

create materialized view log on t1
with sequence, rowid (a)
including new values;

create materialized view t_mv
refresh fast with rowid as select * from t1;

create index t_mv_idx on t_mv(a);

INSERT INTO t1(a) select /*+ APPEND */ ROWNUM FROM ALL_USERS where rownum < 5;

select * from t1
--4 records


select t.*
from t1 t, ( select dmloperation, timestamp, lowrowid, highrowid
from sys.sumdelta$
where tableobj# = (select object_id
from dba_objects
where object_name = 'T1'
and object_type = 'TABLE'
and owner = 'CONCORD')
) x
where t.rowid between x.lowrowid and x.highrowid

--4 records
----This refresh DOES NOT work

begin
dbms_mview.refresh('T_MV');
end;
/

select * from t_mv;
--0 records

select t.*
from t1 t, ( select dmloperation, timestamp, lowrowid, highrowid
from sys.sumdelta$
where tableobj# = (select object_id
from dba_objects
where object_name = 'T1'
and object_type = 'TABLE'
and owner = 'CONCORD')
) x
where t.rowid between x.lowrowid and x.highrowid
--still gives 4 records




Tom Kyte
February 09, 2009 - 5:47 pm UTC

The other thing: if you have no aggregates, no filters - it is a copy. Do you really have such a materialized view? Of what use is it? Why do you have it?

bug 8241756 for insert with APPEND

Thiru, February 10, 2009 - 9:03 am UTC

Tom,

Oracle has opened bug 8241756 for the issue that I am facing.

INSERT INTO t2(a) select /*+ APPEND */ ROWNUM FROM ALL_USERS where rownum < 5

(The mv refresh for the above insert does not work)

The MV route was just one of the many approaches we were thinking of for getting data into the db from the xml files. We tried XMLTYPE using SQL*Loader, tried using java OracleXML, tried using DBMS_XMLSAVE and other DBMS packages. Though SQL*Loader was very fast, extracting data (half a million to a million records every minute) from within the xmltype and using it as regular relational data has been very cumbersome.

The one thing that looks like it is going to work well is using external tables to read from the XML file, using tags like:

ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY CONCORD_DIR
ACCESS PARAMETERS
(
records delimited by "</stat>"
badfile CONCORD_DIR:'conextt%a_%p.bad'
logfile CONCORD_DIR:'conextt%a_%p.log'
fields
(
filler char(200) terminated by "<stat>",
"v" char(200) enclosed by "<v>" and "</v>",
"sid" char(200) enclosed by "<sid>" and "</sid>",
"di" char(200) enclosed by "<di>" and "</di>",


As we cannot create any index on external tables, we have a background job that processes this data.

As always, any advice/suggestions are very much appreciated.

Ricardo Queresma

A reader, July 25, 2009 - 10:57 pm UTC

Dear Tom

I know that uncommitted transactions are also written to datafiles.
After a commit, how does oracle modify the uncommitted blocks in the datafiles?
How does oracle know whether a block in the datafiles is committed or not?
Does it always check this?
Tom Kyte
July 26, 2009 - 8:15 am UTC

After commit, how does oracle modify the uncommitted blocks in the datafiles?

it does not, they are Ok as is.

How does oracle understand if the block in datafiles commited or not?

information in the block header tells us what transactions have actions against the block - we can tell what transactions are done and what transactions are not - therefore we can deduce if the data on the block is committed or not.
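For the curious, one way to see this for yourself (diagnostic only, never on production; the file and block numbers here are hypothetical) is to dump a block to a trace file and read its ITL - interested transaction list - entries:

alter system dump datafile 4 block 1234;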

A reader, August 01, 2009 - 12:41 am UTC

Assume uncommitted blocks are written to datafiles. After commit, oracle only updates the block header. Is that right?

Tom Kyte
August 04, 2009 - 12:37 pm UTC

... After commit, oracle only updates the block header. Is that right? ...

it might not ever touch those blocks ever again. We do not have to revisit them *ever*.

Scofield

A reader, August 14, 2009 - 9:58 pm UTC

Respected Sir,
I performed a transaction (touching less than 10% of the buffer cache).
I haven't committed yet, and somehow a checkpoint occurred and my uncommitted dirty blocks were written to the datafiles.
After I issue a commit, block cleanout will occur.
Does oracle retrieve these uncommitted blocks from disk into the cache, clean them in the cache, and rewrite them after a checkpoint?
Is that right Tom?

Tom Kyte
August 24, 2009 - 8:03 am UTC

... After I issue commit, block cleanout will occur. ...

only for blocks in the cache.


Archive generates more and more after bulk delete operation

Devid, July 22, 2016 - 6:24 am UTC

My database is in archivelog mode and I also run RMAN backups.

My question is: when we delete more than 1 million historical rows from an old table whose original size is 5 to 7 GB, Oracle Database 11g generates more than 10 GB of archive logs.

I do not understand why this happens.
Connor McDonald
July 22, 2016 - 10:41 am UTC

Well first up, what else is happening in your database? Is anyone else doing DML that generates lots of redo?

Secondly, do you have any indexes on the table? These will also generate redo. So if there are lots on the table, this may explain why you're getting so much.
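A quick way to measure the redo your own session generates for the delete - the same idea as the @mystat scripts used earlier in this thread (the table name below is a placeholder):

select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';

delete from old_table;

-- rerun the query above; the difference is the redo for your delete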

A reader, July 23, 2016 - 4:18 am UTC

You are right.
My table has two indexes and millions of rows.
Connor McDonald
July 24, 2016 - 9:03 pm UTC

Plus other possibilities.... eg supplemental logging
