
Question and Answer

Chris Saxon

Thanks for the question, Rajeshwaran.

Asked: October 02, 2008 - 10:31 am UTC

Last updated: September 27, 2022 - 3:41 pm UTC

Version: 9.2.0

Viewed 100K+ times!

You Asked

insert /*+ append */ into t
select rownum,mod(rownum,5)
from all_objects
where rownum <=1000

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.03       0.04          0       4292          1        1000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.03       0.05          0       4292          1        1000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59  


insert  into t
select rownum,mod(rownum,5)
from all_objects
where rownum <=1000

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.09          0          3          0           0
Execute      1      0.03       0.03          0       4294         17        1000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.04       0.12          0       4297         17        1000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59  

Hi Tom,
 comparing the above two outputs, the one with the APPEND hint executes better than the other.
 Then why doesn't the CBO use this append approach by default, so that all INSERT statements execute fast?
 Can you please explain in detail with an example?
 Please correct me if something is wrong in my understanding.
Many Thanks,
Rajesh.

and Tom said...

it does? This isn't supported by anything I see here.

Basically, the execute and fetch phases - where the work was done - are 100% equal and equivalent. There were no observable differences for such a small, tiny example.

If you are talking about the total line with 0.05 versus 0.12 - that was entirely due to a hard parse of the second statement taking 0.09 - the parse time shouldn't be included.

And even if you include it, well, there is still no real difference. Run the example 100 times, average the results, and you would see "they are the same".


Insert /*+ APPEND */ - why it would be horrible for Oracle to make that the "default".

a) it isn't necessarily faster in general. It does a direct path load to disk - bypassing the buffer cache. There are many cases - especially with smaller sets - where the direct path load to disk would be far slower than a conventional path load into the cache.

b) a direct path load always loads above the high water mark, since it is formatting and writing blocks directly to disk - it cannot reuse any existing space. Think about this - if you direct pathed an insert of a 100 byte row that loaded say just two rows - and you did that 1,000 times, you would be using at least 1,000 blocks (never reuse any existing space) - each with two rows. Now, if you did that using a conventional path insert - you would get about 70/80 rows per block in an 8k block database, so those 2,000 rows would fit in about 25 to 30 blocks. Which would you prefer?

c) you cannot query a table after direct pathing into it until you commit.

d) how many people can direct path into a table at the same time? One - one and only one. It would cause all modifications to serialize. No one else could insert/update/delete or merge into this table until the transaction that direct paths commits.

Direct path inserts should be used with care, in the proper circumstances. A large load - direct path. But most of the time - conventional path.
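
To see point (b) for yourself, here is a minimal sketch (the table names are made up for illustration; exact block counts will vary with version and tablespace settings) comparing repeated tiny direct path loads against conventional inserts:

-- sketch: repeated tiny direct path loads never reuse space
create table t_append ( x varchar2(100) );
create table t_conv   ( x varchar2(100) );

begin
    for i in 1 .. 1000 loop
        -- direct path: formats fresh blocks above the high water mark every time
        insert /*+ append */ into t_append
        select rpad('x',100,'x') from dual connect by level <= 2;
        commit;   -- required before we can touch t_append again

        -- conventional path: fills free space in existing blocks
        insert into t_conv
        select rpad('x',100,'x') from dual connect by level <= 2;
        commit;
    end loop;
end;
/

-- expect T_APPEND to consume vastly more blocks than T_CONV
select segment_name, blocks
  from user_segments
 where segment_name in ('T_APPEND','T_CONV');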


Comments

oracle, October 03, 2008 - 3:31 pm UTC

You say
c) you cannot query a table after direct pathing into it until you commit.

Is that true?
Tom Kyte
October 06, 2008 - 2:41 pm UTC

well, um, yes.... in the session that does the direct path operation (all other sessions see the data as it was before the direct path operation, but cannot TOUCH the table otherwise - it is all serial modifications)

Wolfgang Bauer, October 03, 2008 - 3:39 pm UTC

Well, you cannot query this table from the same session before you commit.
Another session can query the table without problems but, of course, does not see the newly loaded data.

ashish agrawal, February 09, 2009 - 10:57 am UTC

If a query with the APPEND hint is executing,
can't I execute a SELECT statement on that table from the same session or a different session?

Also, what should the preconditions be to use the APPEND hint and benefit from it?

Tom Kyte
February 09, 2009 - 7:32 pm UTC

did you check out the documentation at all to see what the append hint is or does?


each of the following is a benefit in some cases
each of the following is a disaster in other cases

append does a direct path load (if it can, it is not a promise, you are requesting and we may or may not do it for you - silently)

if you direct path load, the transaction that did the direct path load CANNOT query that segment - but other transactions can, they just cannot see the newly loaded data.

if you direct path load, you never use any existing free space, it always writes above the high water mark.

if you direct path load, we bypass UNDO on the table - only the table - modifications

if you direct path load, you'll maintain indexes - we build mini indexes on the newly loaded data and merge them into the 'real' indexes in bulk. A direct path load of large amounts of data will maintain indexes very efficiently.

if you direct path load, you can bypass redo on the TABLE (in archivelog mode) if the database is set up to allow nologging and you have the segment set to nologging - a sketch below shows one way to measure this

direct path loading bypasses the buffer cache, you write directly to the datafiles.

direct path loading is only appropriate for the first load of a segment, or an incremental load of lots of data - or an incremental load into a table that never has any deletes (so there is no free space to consider)

transactional systems - you probably won't use it.
warehouses - a tool you'll use a lot
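
As a rough way to see the redo point from that list for yourself, a sketch along these lines (assuming you can query v$mystat and v$statname) measures the 'redo size' delta around a direct path load into a NOLOGGING table:

-- sketch: measure redo generated by a direct path load into a NOLOGGING table
create table t_stage nologging
as select * from all_objects where 1=0;

select s.value redo_before
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'redo size';

insert /*+ append */ into t_stage select * from all_objects;
commit;

-- run the same query again and subtract: the delta should be far smaller
-- than the same load done without APPEND (or into a LOGGING table)
select s.value redo_after
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'redo size';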

direct path insert

A reader, November 06, 2009 - 6:44 am UTC

A direct path insert operation takes place when the APPEND hint is used.
http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c21dlins.htm
Referring to the above link, it says referential integrity constraints are ignored. I tested it: while inserting into a child table a foreign key value not present in the parent table, it gave the same error as a conventional insert.

Can anybody please tell me what kind of referential integrity constraint is ignored?
Also, let me know if I can use a combination of hints, as in /*+ NOAPPEND PARALLEL(tab, 2) */
Tom Kyte
November 11, 2009 - 1:39 pm UTC

what it says is:

.... During direct-path INSERT operations, Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the existing data is not reused, and referential integrity constraints are ignored. These procedures combined can enhance performance. ...


when you do a direct path insert, referential integrity is not validated during the load.

however, what that old documentation seems to neglect to say is that....

insert /*+ append */ will silently ignore the append hint and use conventional path loading when the table has referential integrity constraints or a trigger...


consider:

ops$tkyte%ORA9IR2> create table p( x int primary key);

Table created.

ops$tkyte%ORA9IR2> create table c ( x int );

Table created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> insert into p (x) values (1);

1 row created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> insert /*+ append */ into c select 1 from dual;

1 row created.

ops$tkyte%ORA9IR2> select * from c;
select * from c
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>that shows we did a direct path operation, definitely.  You cannot read from a table in the same transaction that direct path loads it... Now</b>

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> commit;

Commit complete.

ops$tkyte%ORA9IR2> alter table c add constraint c_fk_p foreign key(x) references p(x);

Table altered.

ops$tkyte%ORA9IR2> insert /*+ append */ into c select 1 from dual;

1 row created.

ops$tkyte%ORA9IR2> select * from c;

         X
----------
         1
         1

<b>that shows the append was ignored, referential integrity makes append "go away"</b>

ops$tkyte%ORA9IR2> alter table c drop constraint c_fk_p;

Table altered.

ops$tkyte%ORA9IR2> insert /*+ append */ into c select 1 from dual;

1 row created.

ops$tkyte%ORA9IR2> select * from c;
select * from c
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>so we are back to being able to direct path... with append</b>

ops$tkyte%ORA9IR2> create or replace trigger c before insert on c for each row
  2  begin
  3          null;
  4  end;
  5  /

Trigger created.

ops$tkyte%ORA9IR2> insert /*+ append */ into c select 1 from dual;

1 row created.

ops$tkyte%ORA9IR2> select * from c;

         X
----------
         1
         1
         1
         1

<b>and apparently we did not direct path - we can read the table..</b>

ops$tkyte%ORA9IR2> drop trigger c;

Trigger dropped.

ops$tkyte%ORA9IR2> insert /*+ append */ into c select 1 from dual;

1 row created.

ops$tkyte%ORA9IR2> select * from c;
select * from c
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>and now back to being able to again.</b>





sqlldr can and does ignore triggers/referential integrity and even uniqueness during a direct path load; insert /*+ append */ does not - you are not using direct path when they (constraints or triggers) exist.


parallel is always a direct path, if you go parallel, you will be appending.
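
A minimal sketch of that last point (t and big_source are hypothetical tables): parallel DML must be enabled in the session first, and once the insert actually runs in parallel it is a direct path load - the ORA-12838 afterwards confirms it:

alter session enable parallel dml;   -- parallel DML is off by default

-- if this actually runs parallel, it is a direct path (append) load
insert /*+ parallel(t, 2) */ into t
select * from big_source;

select count(*) from t;   -- expect ORA-12838 here, until...
commit;
select count(*) from t;   -- ...we commit; now it works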

A reader, December 03, 2009 - 4:29 pm UTC

Hi sir;
As far as I know, no undo is generated with direct path load.

When I issue :
insert /*+ append */

I can roll back this transaction. If undo is not used, how can I perform the rollback?

Tom Kyte
December 04, 2009 - 4:35 pm UTC

... As far as I know, no undo is generated with direct path load. ...

not stated with sufficient clarity.

In a direct path load, undo for the table being loaded into - but not for the indexes on said table, and not for any data dictionary modifications made as a result of the load (like extent allocations) - may be bypassed. A direct path load may result in LESS undo than a conventional path load, but it will always generate some amount of undo.




That aside, when you direct path load, we lock the segment(s) (no one else can modify it). Then we load into newly allocated space. If you say "rollback", all we do to rollback the table modifications is to release this newly allocated space - the modifications to the index and such are rolled back using undo as normal.
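
A quick sketch of that rollback behavior - the load is undone by de-allocating the newly formatted blocks, not by applying row-by-row undo to the table:

create table t_demo as select * from all_objects where 1=0;

insert /*+ append */ into t_demo select * from all_objects;

rollback;   -- releases the space that was allocated above the high water mark

select count(*) from t_demo;   -- 0 rows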

A reader, December 07, 2009 - 9:06 am UTC

Hi Tom,

Please confirm the below with respect to your above reply

(1) undo is not generated for the table, but it would be generated for the indexes defined on the table

(2) Also, undo is generated for the data dictionary tables, so that when there is any rollback (by the user or due to an error)
Oracle will release the allocated space


Please correct me if am wrong

Thanks

Tom Kyte
December 07, 2009 - 3:02 pm UTC

1) correct, undo is generated for the index maintenance.

2) well, all modifications to the dictionary tables will have undo generated - no matter what - for transactional consistency. It may or may not be used during rollback (dictionary modifications are generally done as recursive SQL and committed as they happen - not at the end - so the user's rollback won't undo the dictionary operations directly)

A reader, December 13, 2009 - 8:03 pm UTC

Suppose my datafile is 10gb.
9gb is free but I cannot resize it (make it smaller), cos there is an index at the end of the datafile.
If I rebuild this index (the one at the end of the datafile) in parallel, does Oracle allocate a new extent to the datafile, or use the existing free extents at the beginning of the datafile?
So that I can shrink the datafile.
Tom Kyte
December 14, 2009 - 9:22 am UTC

cos? I'm not familiar with that acronym. It is the airport code for Colorado Springs, Colorado - but I'm not sure what that has to do with this?


We'll always use free extents before we autoextend a datafile. It might not go right to the front - there is no assurance of that - but it won't occupy the last extent anymore.

if it is the only thing in the tablespace, probably you should just create new, move it, drop old, rename new (if needed) back to old.
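
For example, a sketch of that "create new, move, drop old" approach (tablespace, index, and file names are hypothetical; the rename needs 10g or later):

create tablespace new_index_ts
    datafile '/u01/oradata/db/new_idx01.dbf' size 1g;

alter index my_big_index rebuild tablespace new_index_ts;

drop tablespace old_index_ts including contents and datafiles;

alter tablespace new_index_ts rename to old_index_ts;   -- if the old name is needed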

A reader, December 15, 2009 - 2:48 pm UTC

..cos? I'm not familiar with that acronym. It is the airport code for Colorado Springs, Colorado - but I'm not sure what that has to do with this?

You have great sense of humour sir :-)

But I am rebuilding this index with a direct path load.
Does it again use the free extents at the beginning or in the middle of the datafile, or allocate a new extent to the datafile?
Tom Kyte
December 15, 2009 - 3:04 pm UTC


... But I am rebuilding this index with a direct path load.
Does it again use the free extents at the beginning or in the middle of the
datafile, or allocate a new extent to the datafile? ...

you can only rebuild an index with alter index rebuild using direct path, there isn't any other way.

so, my answer above stands. just re-read it, it is the same.

A reader, December 15, 2009 - 5:32 pm UTC

Sir, I wish I had just 1% of your Oracle knowledge.

How about if I create an index in parallel, or create a table as parallel (or other direct path options)?
Does your answer change in this case?
Tom Kyte
December 16, 2009 - 7:04 am UTC

...
Sir, I wish I had just 1% of your Oracle knowledge.
.....

You can have a lot more than that:
http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e10713/toc.htm


as for parallel or anything "so, my answer above stands. just re-read it, it is the same. "


We will not grow the datafile until all usable free space has been used, then we will autoextend the file if necessary and permitted.

A reader, December 16, 2009 - 7:41 pm UTC

...(1) undo is not generated for the table, but it would be generated for the indexes defined on the table

...1) correct, undo is generated for the index maintenance

Respected Sir,
As far as I remember from one of your other threads, a little undo is generated for the table and full undo is generated for the index.

Please correct me if I am wrong

or

no undo for the table, full undo for the index?
Tom Kyte
December 17, 2009 - 7:27 am UTC

undo would be generated for dictionary updates, but not the table data itself.

A little undo is going to be generated - but it is undo used to protect the dictionary modifications that take place to accomplish the direct path load.


ops$tkyte%ORA11GR2> create table t1 as select * from all_objects;

Table created.

ops$tkyte%ORA11GR2> create table t2 as select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA11GR2> create table t3 as select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert /*+ append */ into t2 select * from t1;

71838 rows created.

ops$tkyte%ORA11GR2> select used_ublk from v$transaction;

 USED_UBLK
----------
         1

ops$tkyte%ORA11GR2> insert /*+ noappend */ into t3 select * from t1;

71838 rows created.

ops$tkyte%ORA11GR2> select used_ublk from v$transaction;

 USED_UBLK
----------
        36


A reader, December 20, 2009 - 8:24 pm UTC

Thanks Sir

You confirmed that a little undo is generated.
Suppose I insert with a direct path load and didn't commit.

Now, if some other transaction wants to read that row, how can it read the before image of the block from the undo tablespace? (since there is no undo)


Tom Kyte
December 21, 2009 - 1:27 pm UTC

they do not have to - a direct path insert always writes ABOVE the high water mark, the data that is inserted, but not yet committed, is NOT intermingled with the other pre-existing data.

And the other sessions know not to read above the old high water mark until the direct path session commits - we don't need the undo for read consistency there - we just know "do not read there"

And this is exactly why a session that does a direct path load cannot query the table it loaded - since

a) it WOULD read up to the new high water mark (a transaction has to be allowed to see its changes)

b) it cannot read up to the new high water mark though, because we cannot do it in a read consistent fashion

ORA-01652 in Direct Path loading.

Rajeshwaran, Jeyabal, December 28, 2009 - 12:58 pm UTC

Tom,

I have a table 'T' where unique data (for some columns) needs to be pulled from the master table 'adt_ir_proc'.
I know that you would suggest analytics (rock & roll), so I went for the below query.

The database is in NOARCHIVELOG mode, table 'T' is in NOLOGGING mode, and all indexes on table T are set to the UNUSABLE state.

INSERT /*+ APPEND */ INTO T 
 SELECT  c1,
   c2,
   c3,
   c4,
   c5
 FROM (        
 SELECT  /*+ parallel (adt,8) */ c1,
   c2,
   c3,
   c4,
   c5,
   ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
 FROM adt_ir_proc adt
 ) WHERE rn = 1;
 
 but it failed with error ORA-01652: unable to extend temp segment by 1024 in tablespace TEMP.
 
 Then I tried to delete the duplicates from the master table.
 
 DELETE FROM adt_ir_proc t
 WHERE EXISTS
 (
  select 'x'
  from 
  (SELECT ROWID rid,ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
  FROM adt_ir_proc adt)
  where rid = t.rowid
 );
 
 but it again failed with error ORA-01652: unable to extend temp segment by 1024 in tablespace TEMP.
 
 referring to here 
  <b>http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:36859040165792</b>
  
 I checked my tablespace TS_IR_AU - about 175 GB is still free:
 
  Tablespace Name                         KBytes             Used             Free   Used          Largest
  ----------------------------- ---------------- ---------------- ---------------- ------ ----------------
  TS_IR_AU                           320,143,360      140,436,736      179,706,624   43.9        4,063,232
                                ---------------- ---------------- ----------------
  sum                                320,143,360      140,436,736      179,706,624

 desc adt_ir_proc;  
 c1       NOT NULL VARCHAR2(4000)
 c2       NUMBER(10)
 c3       VARCHAR2(10)
 c4       DATE
 c5       VARCHAR2(10)   
 
ir@IR> SELECT c1,
  2          c2,
  3          c3,
  4          c4,
  5          c5
  6  FROM (
  7  SELECT  /*+ parallel (adt,8) */ c1,
  8          c2,
  9          c3,
 10          c4,
 11          c5,
 12          ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
 13  FROM adt_ir_proc adt
 14  ) WHERE rn = 1;

Execution Plan
--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)|    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                     |   365M|   697G|       |    21M  (1)|     |         |            |
|   1 |  PX COORDINATOR                |                     |       |       |       |            |     |         |            |
|   2 |   PX SEND QC (RANDOM)          | :TQ10001            |   365M|   697G|       |    21M  (1)|  Q1,01 | P->S | QC (RAND)  |
|   3 |    VIEW                        |                     |   365M|   697G|       |    21M  (1)|  Q1,01 | PCWP |            |
|   4 |     WINDOW SORT PUSHED RANK    |                     |   365M|   692G|  1856G|    21M  (1)|  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE                |                     |   365M|   692G|       |    21M  (1)|  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH             | :TQ10000            |   365M|   692G|       |    21M  (1)|  Q1,00 | P->P | HASH       |
|   7 |        WINDOW CHILD PUSHED RANK|                     |   365M|   692G|       |    21M  (1)|  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR      |                     |   365M|   692G|       |   102K  (3)|  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL     | adt_ir_proc         |   365M|   692G|       |   102K  (3)|  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------- 

ir@IR> SELECT count(*) from adt_ir_proc;
  COUNT(*)
----------
 324892102

Questions
=========
(1) Can you please help me with how to load the data into table 'T' in a fast and efficient way?

(2) Regarding inline views:
a) Will the outermost query start returning the records (those matching rn = 1) as soon as it receives rows from the innermost query,
(or)
b) does the outermost query have to wait until the inner query completes execution?

If possible, can you please explain question (2) with a clear example? I am really confused about question (2).
I am working in Oracle 10gR2.

Thank you
Rajesh.
Tom Kyte
January 04, 2010 - 7:50 am UTC

    DELETE FROM adt_ir_proc t
    WHERE EXISTS
    (
        select 'x'
        from 
        (SELECT ROWID rid,ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
        FROM adt_ir_proc adt)
        where rid = t.rowid
    );
    


that of course could be shortened to:

truncate table adt_ir_proc;

that is the same as

delete from t where exists (select null from t t2 where t2.rowid = t.rowid);

which is of course the same as

delete from t;

which is probably better done as

truncate table t;




(1) easy and perhaps obvious:

increase temp - it would seem you have insufficient temporary tablespace allocated.
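
For instance (the file name and sizes are hypothetical - size it against the TempSpc column of your plan):

alter tablespace temp add tempfile '/u02/oradata/ir/temp02.dbf'
    size 32g autoextend on next 1g maxsize unlimited;

-- repeat with more tempfiles as needed; a smallfile tempfile is capped
-- at roughly 32GB with an 8k block size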


(2) it depends, in this case, the optimizer would look at that and go for all rows optimization. Doing it slow by slow by slow by slow would be - well, slow. It would not use any indexes (there are hundreds of millions of rows)

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6749454952894#6760861174154

Look what happens when you have hundreds of millions of index operations, slowness - it wouldn't need temp, but it would take the rest of your life to complete.






If you want to skip temp, avoid the sort, and load the data that you can load, use dml error logging and SKIP the direct path insert.


The duplicate rows will be logged into an error table, the rows that can be loaded will be loaded. But you must not use direct path in that case.
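
A sketch of that error-logging route (it assumes T has, or is given, a unique constraint so the duplicates actually fail and get logged; DBMS_ERRLOG is available in 10gR2):

-- create the ERR$_T error table for T
exec dbms_errlog.create_error_log( dml_table_name => 'T' )

-- conventional path (no APPEND): rows that violate the unique constraint
-- land in ERR$_T instead of failing the whole statement
insert into t ( c1, c2, c3, c4, c5 )
select c1, c2, c3, c4, c5
  from adt_ir_proc
   log errors into err$_t ('dedup load') reject limit unlimited;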



Alternatively, place a unique constraint on c1,c2 on the stage table using EXCEPTIONS INTO, that'll put the duplicate rowids into an exception table, and you can use that to clean up with.
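
A sketch of the EXCEPTIONS INTO route (the EXCEPTIONS table comes from $ORACLE_HOME/rdbms/admin/utlexcpt.sql; the constraint name is illustrative):

@?/rdbms/admin/utlexcpt.sql

-- fails because of the duplicates, but records every offending rowid
alter table adt_ir_proc
    add constraint adt_c1_c2_uq unique (c1, c2)
    exceptions into exceptions;

-- EXCEPTIONS lists ALL rows involved in a violation, so keep one row per key
delete from adt_ir_proc t
 where t.rowid in ( select row_id from exceptions )
   and t.rowid not in ( select min(t2.rowid)
                          from adt_ir_proc t2
                         where t2.rowid in ( select row_id from exceptions )
                         group by t2.c1, t2.c2 );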


Or, if the number of duplicates is small (whereas the number of non-duplicates is LARGE) do this:

INSERT /*+ APPEND */ INTO T
SELECT c1, c2, c3, c4, c5
FROM adt_ir_proc
where rowid NOT IN
( select rid from (select rowid rid,
ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
FROM adt_ir_proc adt)
where rn > 1 and rid IS NOT NULL )
/

that should create a small set of NOT IN items, and hash anti join to it. You'd be looking for a plan like this:

scott%ORA10GR2> create table t1 ( c1 int, c2 int, c3 int, c4 int, c5 int );

Table created.

scott%ORA10GR2> create table t2 ( c1 int, c2 int, c3 int, c4 int, c5 int );

Table created.

scott%ORA10GR2>
scott%ORA10GR2> exec dbms_stats.set_table_stats( user, 'T2', numrows => 365000000, NUMBLKS => 3650000 );

PL/SQL procedure successfully completed.

scott%ORA10GR2>
scott%ORA10GR2> delete from plan_table;

7 rows deleted.

scott%ORA10GR2> explain plan for
  2  INSERT /*+ APPEND */ INTO T1
  3  SELECT c1, c2, c3, c4, c5
  4    FROM t2
  5   where rowid NOT IN
  6   ( select rid from (select rowid rid,
  7              ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
  8                        FROM t2 )
  9      where rn > 1 and rid IS NOT NULL )
 10  /

Explained.

scott%ORA10GR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3474554274

--------------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT      |      |     1 |   102 |       |  5011K  (2)| 10:15:01 |
|   1 |  LOAD AS SELECT       | T1   |       |       |       |            |          |
|*  2 |   HASH JOIN RIGHT ANTI|      |     1 |   102 |    12G|  5011K  (2)| 10:15:01 |
|*  3 |    VIEW               |      |   365M|  8702M|       |  3845K  (2)| 07:51:53 |
|   4 |     WINDOW SORT       |      |   365M|    12G|    32G|  3845K  (2)| 07:51:53 |
|   5 |      TABLE ACCESS FULL| T2   |   365M|    12G|       |   258K  (3)| 00:31:42 |
|   6 |    TABLE ACCESS FULL  | T2   |   365M|    26G|       |   260K  (4)| 00:31:56 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(ROWID="RID")
   3 - filter("RN">1)

19 rows selected.

ORA-01652 in Direct Path loading.

Rajeshwaran, Jeyabal, December 29, 2009 - 12:25 am UTC

Sorry, the delete query is actually like the below.

DELETE FROM adt_ir_proc t
WHERE EXISTS
(
    select 'x'
    from 
    (SELECT ROWID rid,ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
    FROM adt_ir_proc adt)
    where rid = t.rowid
    AND  rn > 1
);

Tom Kyte
January 04, 2010 - 8:17 am UTC

which is equivalent to

DELETE FROM adt_ir_proc t
WHERE rowid in
(
select rid
from
(SELECT ROWID rid,ROW_NUMBER() OVER(PARTITION BY c1 ORDER BY c2) AS RN
FROM adt_ir_proc adt)
where rn > 1
);

do not think that "exists" is like a loop and "in" is like a set - they are both able to either "loop" or "big bulky set" operate on the data.

looping would be painfully slow - as mentioned above.

Re-writing as Insert /*+ APPEND */

reader, January 10, 2010 - 4:20 am UTC

Hi Tom,

I have the following script that I am playing around with.
It's basically a test script which orders some records within a key, then picks the first occurrence (after the order by) and inserts it into a new table ...

drop table babe;

create table babe(a number, b number, c number, d number);

insert into babe values (1,1,1,5);
insert into babe values (1,1,2,3);
insert into babe values (1,1,3,7);
insert into babe values (1,2,1,6);
insert into babe values (1,2,2,1);
insert into babe values (1,2,3,9);
insert into babe values (1,3,1,2);
insert into babe values (1,3,2,3);
insert into babe values (1,3,3,8);

commit;

drop table another_babe;
create table another_babe
(a number, b number, c number, d number)
/

insert into another_babe values (1,1,1,5);
insert into another_babe values (1,1,2,3);
insert into another_babe values (1,1,3,7);
insert into another_babe values (1,2,1,6);
insert into another_babe values (1,2,2,1);
insert into another_babe values (1,2,3,9);
insert into another_babe values (1,3,1,2);
insert into another_babe values (1,3,2,3);
insert into another_babe values (1,3,3,8);

drop table output;
create table output(a number,b number ,c number, d number);

declare
type a is table of babe.a%type;
type b is table of babe.b%type;
type c is table of babe.c%type;
type d is table of babe.d%type;

g_a a;
g_b b;
g_c c;
g_d d;

cursor x is select a,b from babe group by a,b;
begin

open x;
loop
fetch x bulk collect into g_a, g_b limit 3;

if (g_a.count = 0 ) then exit; end if;

forall element in g_a.first .. g_a.last
insert into output
(a,b,c,d)
select * from
(select a,b,c,d
from another_babe a
where a.a = g_a(element)
and a.b = g_b(element)
order by a ,b , c desc , d )
where rownum = 1;

end loop;

close x;

end;
/
commit
/
select * from output
/

I am now dealing with large volumes of data and I would like to do an insert /*+ APPEND */ into my new table. Is it possible to do the above logic in one single SQL statement?

Any input would be much appreciated!

Tom Kyte
January 18, 2010 - 5:57 am UTC

ops$tkyte%ORA10GR2> insert /*+ append */ into output(a,b,c,d)
  2  select a,b,c,d
  3    from (select a,b,c,d,row_number() over (partition by a,b order by c desc,d) rn
  4            from another_babe
  5           where (a,b) in (select a,b from babe))
  6   where rn = 1
  7   order by a,b,c desc, d
  8  /

3 rows created.

Re-write as Insert /*+APPEND */

reader, January 10, 2010 - 4:28 am UTC

Sorry Tom, I should have added that I would like to re-write this without using analytic functions. This is because when we have huge volumes of data it does huge sorts and goes to the TEMP tablespace. So can we do this SQL as a single statement without analytics?
Tom Kyte
January 18, 2010 - 5:59 am UTC

sorry, but you either want

a) bulky and fast

b) slow by slow and - slow.

You have slow by slow down pat, go with bulky and enjoy the speed. Temp is cheap, get more.

And don't blame analytics, blame PHYSICS - could I do this without analytics in a single sql statement? Sure could.

but guess what - it would run forever and likely consume even more temp space.

Analytics

reader, February 15, 2010 - 7:07 am UTC

Hi Tom - this is cool, thanks!

IOT

Eduardo, April 19, 2010 - 1:14 pm UTC

Hi Tom,

If I use insert /*+ append */ with an IOT (index-organized table), is there any performance improvement?

Thanks for your help.
Tom Kyte
April 19, 2010 - 1:30 pm UTC

ops$tkyte%ORA11GR2> create table t ( x int primary key, y int );

Table created.

ops$tkyte%ORA11GR2> insert /*+ append */ into t select 1, 1 from dual;

1 row created.

ops$tkyte%ORA11GR2> select * from t;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> drop table t;

Table dropped.

ops$tkyte%ORA11GR2> create table t ( x int primary key, y int ) organization index;

Table created.

ops$tkyte%ORA11GR2> insert /*+ append */ into t select 1, 1 from dual;

1 row created.

ops$tkyte%ORA11GR2> select * from t;

         X          Y
---------- ----------
         1          1

ops$tkyte%ORA11GR2>



that just shows that direct path is not something you do with structures like IOTs and clusters - things that force "location" on individual rows in a segment.

direct pathing mandates we can take a stream of data, format a block, write block, format a block, write block.

In an index - you do not get the rows in order during a load, hence we cannot format block/write block.


And remember, when you ARE able to use direct pathing one of three things will happen:

it'll go faster

it'll go neither faster nor slower

it'll go a lot slower



direct path is NOT fast=true.

Alexander, February 01, 2012 - 9:55 am UTC

Hi Tom,

I'm struggling with a vendor product's ETL job that runs an insert-select for 1.5 billion rows. Looking for any advice you can provide. I tried parallelism on the larger tables due to the necessary full scan operations; it didn't do much.
create sequence seq_f_object_auth_role
    start with 33067365774
    increment by 1
    nominvalue
    nomaxvalue
    nocycle
    cache 10000000000
    noorder;

create table c$_acl_policy_stg
(
    bl_auth_id number(38,0) not null,
    role_id    number(38,0) not null,
    bl_acl_id  number(38,0) not null,
    is_deleted number(1,0)  not null
);

create table c$_acl_policy_stg2
(
    role_id    number(38,0) not null,
    bl_acl_id  number(38,0)     null,
    bl_auth_id number(38,0) not null
);

create table d_blrole
(
    role_hist_id        number(38,0)  not null,
    role_id             number(38,0)  not null,
    bl_value_id         number(38,0)      null,
    created_by_role_id  number(38,0)      null,
    created_by_user_id  number(38,0)      null,
    modified_by_role_id number(38,0)      null,
    modified_by_user_id number(38,0)      null,
    bl_acl_id           number(38,0)  not null,
    bl_acl_template_id  number(38,0)      null,
    description         varchar2(255)     null,
    name                varchar2(255)     null,
    is_enabled          number(1,0)       null,
    is_system_role      number(1,0)   not null,
    date_created        date              null,
    date_modified       date              null,
    bl_site_id          number(38,0)  not null,
    is_deleted          number(1,0)   not null,
    is_latest_version   number(1,0)   not null,
    start_datetime      timestamp(6)  not null,
    end_datetime        timestamp(6)  not null
);

alter table d_blrole
    add constraint xpkblrole
    primary key (role_hist_id)
    using index;

EXEC dbms_stats.set_table_stats(ownname => user, tabname => 'C$_ACL_POLICY_STG', numrows => 117980864);
EXEC dbms_stats.set_table_stats(ownname => user, tabname => 'C$_ACL_POLICY_STG2', numrows => 5028918);
EXEC dbms_stats.set_table_stats(ownname => user, tabname => 'D_BLROLE', numrows => 4272);

SQL> SELECT
  2          SEQ_F_OBJECT_AUTH_ROLE.NEXTVAL,
  3          DBR.ROLE_HIST_ID,
  4          STG.BL_AUTH_ID,
  5          1.0,
  6          STG.IS_DELETED,
  7          0,
  8          STG.BL_ACL_ID
  9      FROM
 10          C$_ACL_POLICY_STG STG
 11          INNER JOIN
 12          D_BLROLE DBR
 13          ON
 14              DBR.ROLE_ID = STG.ROLE_ID AND
 15              DBR.bl_site_id = 1.0
 16      WHERE NOT EXISTS (SELECT 1
 17                        FROM C$_ACL_POLICY_STG2 F2
 18                        WHERE
 19                            F2.ROLE_ID = STG.ROLE_ID AND
 20                            F2.BL_AUTH_ID = STG.BL_AUTH_ID AND
 21                            F2.BL_ACL_ID = STG.BL_ACL_ID);

1487340774 rows selected.

Elapsed: 06:53:48.38

Execution Plan
----------------------------------------------------------
Plan hash value: 3074245134

---------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name                   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                        |    29M|  1357M|       |   231K  (2)| 00:53:59 |
|   1 |  SEQUENCE              | SEQ_F_OBJECT_AUTH_ROLE |       |       |       |            |          |
|*  2 |   HASH JOIN            |                        |    29M|  1357M|       |   231K  (2)| 00:53:59 |
|*  3 |    TABLE ACCESS FULL   | D_BLROLE               |  4272 | 59808 |       |    22   (0)| 00:00:01 |
|*  4 |    HASH JOIN RIGHT ANTI|                        |  1179K|    39M|   134M|   231K  (2)| 00:53:57 |
|   5 |     TABLE ACCESS FULL  | C$_ACL_POLICY_STG2     |  5028K|    76M|       |  2840   (2)| 00:00:40 |
|   6 |     TABLE ACCESS FULL  | C$_ACL_POLICY_STG      |   117M|  2137M|       | 72466   (2)| 00:16:55 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DBR"."ROLE_ID"="STG"."ROLE_ID")
   3 - filter("DBR"."BL_SITE_ID"=1.0)
   4 - access("F2"."ROLE_ID"="STG"."ROLE_ID" AND "F2"."BL_AUTH_ID"="STG"."BL_AUTH_ID" AND
              "F2"."BL_ACL_ID"="STG"."BL_ACL_ID")


Statistics
----------------------------------------------------------
    5971052  recursive calls
    4502910  db block gets
   89820885  consistent gets
     189285  physical reads
 1035078644  redo size
61440199070  bytes sent via SQL*Net to client
 1090717085  bytes received via SQL*Net from client
   99156053  SQL*Net roundtrips to/from client
         50  sorts (memory)
          0  sorts (disk)
 1487340774  rows processed


There are some indexes I created for testing but they were not used and at this time I'm not convinced they should be.
SQL> select distinct count(*), bl_site_id from d_blrole
  2  group by bl_site_id;

  COUNT(*) BL_SITE_ID
---------- ----------
      4272          1


By the way, your preview button seems to be broken.
Tom Kyte
February 01, 2012 - 11:03 am UTC

thank goodness no indexes were used, that would be horrific. See this for an inkling of what would/could happen:

https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6749454952894#6760861174154



what are your pga settings on this machine?

how fast is your IO - how many Megabytes/second can you transfer from disk to server memory? (use dd to test it out to /dev/null if you don't know offhand). If parallel query didn't do much - it is likely that you don't have the IO bandwidth to read this in and write it back out quickly. That is typically the problem for most people today - insufficiently sized IO bandwidth

plan looks great - exactly what I'd want it to be - a hash anti join for the not exists, and then a normal hash join for the join itself.


if you look at the ASH data for this query - what did it spend most of its time doing?

Alexander, February 01, 2012 - 11:27 am UTC

Using memory_max_target, although I've had issues with that before spilling to temp prematurely and not using the PGA enough. I'm trying to figure out how I faked Oracle out into using the session memory settings and not memory_max_target.

What is ASH? ADDM said this:

Summary of Findings
-------------------
   Description            Active Sessions      Recommendations
                          Percent of Activity
   ---------------------  -------------------  ---------------
1  Virtual Memory Paging  1.33 | 100           1
2  Top SQL Statements     1.17 | 87.71         2
3  "User I/O" wait Class  .04 | 2.9            0


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


          Findings and Recommendations
          ----------------------------

Finding 1: Virtual Memory Paging
Impact is 1.33 active sessions, 100% of total activity.
-------------------------------------------------------
Significant virtual memory paging was detected on the host operating system.

   Recommendation 1: Host Configuration
   Estimated benefit is 1.33 active sessions, 100% of total activity.
   ------------------------------------------------------------------
   Action
      Host operating system was experiencing significant paging but no
      particular root cause could be detected. Investigate processes that do
      not belong to this instance running on the host that are consuming
      significant amount of virtual memory. Also consider adding more physical
      memory to the host.


Finding 2: Top SQL Statements
Impact is 1.17 active sessions, 87.71% of total activity.
---------------------------------------------------------
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.

   Recommendation 1: SQL Tuning
   Estimated benefit is .63 active sessions, 47.14% of total activity.
   -------------------------------------------------------------------
   Action
      Run SQL Tuning Advisor on the SELECT statement with SQL_ID
      "dm0nh81xd37yx".
      Related Object
         SQL statement with SQL_ID dm0nh81xd37yx.
         SELECT
         SEQ_F_OBJECT_AUTH_ROLE.NEXTVAL,
         DBR.ROLE_HIST_ID,
         STG.BL_AUTH_ID,
         1.0,
         STG.IS_DELETED,
         0,
         STG.BL_ACL_ID
         FROM
         C$_ACL_POLICY_STG STG
         INNER JOIN
         D_BLROLE DBR
         ON
         DBR.ROLE_ID = STG.ROLE_ID
         WHERE NOT EXISTS (SELECT 1
         FROM C$_ACL_POLICY_STG2 F2
         WHERE
         F2.ROLE_ID = STG.ROLE_ID AND
         F2.BL_AUTH_ID = STG.BL_AUTH_ID AND
         F2.BL_ACL_ID = STG.BL_ACL_ID)
   Rationale
      The SQL spent 97% of its database time on CPU, I/O and Cluster waits.
      This part of database time may be improved by the SQL Tuning Advisor.
   Rationale
      Database time for this SQL was divided as follows: 100% for SQL
      execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
      execution.


Finding 3: "User I/O" wait Class
Impact is .04 active sessions, 2.9% of total activity.
------------------------------------------------------
Wait class "User I/O" was consuming significant database time.
The throughput of the I/O subsystem was not significantly lower than expected.
The Oracle instance memory (SGA and PGA) was adequately sized.

   No recommendations are available.



~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

          Additional Information
          ----------------------

Miscellaneous Information
-------------------------
Wait class "Application" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
Hard parsing of SQL statements was not consuming significant database time.

Tom Kyte
February 01, 2012 - 4:39 pm UTC

... What is ASH? ...

active session history (ASH) information.

http://www.oracle.com/pls/db112/search?remark=quick_search&word=ash



how fast is your IO - how many Megabytes/second can you transfer from disk to server memory? (use dd to test it out to /dev/null if you don't know offhand). If parallel query didn't do much - it is likely that you don't have the IO bandwidth to read this in and write it back out quickly. That is typically the problem for most people today - insufficiently sized IO bandwidth

Alexander, February 02, 2012 - 8:37 am UTC

Thanks, I will check those out, but did you see this (I wasn't ignoring it...):

The throughput of the I/O subsystem was not significantly lower than expected.

Tom Kyte
February 02, 2012 - 9:11 am UTC

what did you expect? what were your expectations.

I saw that, it means nothing to me, it doesn't help me do math.

It could very well be that you do not have the IO bandwidth to accomplish what you want to do in the amount of time you desire to do it in. Happens all of the time, constantly.


was just working with someone trying to tune an update. Of 200gb of data. Given the amount of transfer capacity they had (not the disk ratings, but actual, observed "we can move this many bytes from a to b in this amount of time") - I was surprised it worked as fast as it did. They had to move 200gb from disk to server and then more than 200gb (table blocks, undo blocks when the cache filled up, redo) from server back to disk. All at a whopping 70MB per second - 200gb at 70MB/s is roughly 3,000 seconds, so it would take them close to an hour just to read it, and longer still to write it out (as their cache was less than 200gb - we'd definitely be waiting for that as well)

update was tuned, system wasn't architected to do what they wanted in the time they wanted to do it in.

Alexander, February 03, 2012 - 8:28 am UTC

Is this what you were looking for from ASH? I've never used this report before...

Top User Events                DB/Inst: OCP35T/OCP35T  (Feb 02 20:31 to 20:43)

                                                               Avg Active
Event                               Event Class        % Event   Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU                  CPU                  80.76       2.10
log file sync                       Commit                8.89       0.23
db file sequential read             User I/O              1.41       0.04
          -------------------------------------------------------------

Top Background Events          DB/Inst: OCP35T/OCP35T  (Feb 02 20:31 to 20:43)

                                                               Avg Active
Event                               Event Class     % Activity   Sessions
----------------------------------- --------------- ---------- ----------
db file async I/O submit            System I/O            3.32       0.09
log file parallel write             System I/O            3.26       0.08
          -------------------------------------------------------------




Tom Kyte
February 03, 2012 - 10:21 am UTC

run an ash report on this sql statement.


ops$tkyte%ORA11GR2> select count(*) from all_objects;

  COUNT(*)
----------
     72334

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> column sql_id new_val sqlid
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select sql_id from v$sql where sql_text = 'select count(*) from all_objects';

SQL_ID
-------------
9tz4qu4rj9rdp

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from table( dbms_workload_repository.ash_report_text(
  2  l_dbid => (select dbid from v$database),
  3  l_inst_num =>(select instance_number from v$instance),
  4  l_btime => sysdate-15*1/24/60,
  5  l_etime => sysdate,
  6  l_sql_id => '&SQLID') );

OUTPUT
--------------------------------------------------------------------------------
ASH Report For ORA11GR2/ora11gr2
(1 Report Target Specified)

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
ORA11GR2        52489347 ora11gr2            1 11.2.0.2.0  NO  localhost.lo

CPUs           SGA Size       Buffer Cache        Shared Pool    ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
   4        511M (100%)       180M (35.2%)       300M (58.7%)        8.0M (1.6%)


          Analysis Begin Time:   03-Feb-12 11:05:20
            Analysis End Time:   03-Feb-12 11:20:20
                 Elapsed Time:        15.0 (mins)
            Begin Data Source:   V$ACTIVE_SESSION_HISTORY
              End Data Source:   V$ACTIVE_SESSION_HISTORY
                 Sample Count:          10
      Average Active Sessions:        0.01
  Avg. Active Session per CPU:        0.00
                Report Target: SQL_ID like '9tz4qu4rj9rdp'
              Target Activity: 34.5% of total database activity

Top User Events            DB/Inst: ORA11GR2/ora11gr2  (Feb 03 11:05 to 11:20)

                                                               Avg Active
Event                               Event Class        % Event   Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU                  CPU                  90.00       0.01
db file sequential read             User I/O             10.00       0.00
......


Alexander, February 03, 2012 - 11:03 am UTC

(lx-eaoracl-t03p:oracle)> sqlplus /

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 3 11:55:06 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select count(*) from all_objects;

  COUNT(*)
----------
     88108

1 row selected.

SQL> column sql_id new_val sqlid
SQL>
SQL> select sql_id from v$sql where sql_text = 'select count(*) from all_objects';

SQL_ID
-------------
9tz4qu4rj9rdp

1 row selected.

SQL> select * from table( dbms_workload_repository.ash_report_text(
  2  l_dbid => (select dbid from v$database),
  3  l_inst_num =>(select instance_number from v$instance),
  4  l_btime => sysdate-15*1/24/60,
  5  l_etime => sysdate,
  6  l_sql_id => '&SQLID') );
old   6: l_sql_id => '&SQLID') )
new   6: l_sql_id => '9tz4qu4rj9rdp') )

OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
ASH Report For OCP35T/OCP35T
(1 Report Target Specified)

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
OCP35T        1371585656 OCP35T              1 11.2.0.2.0  NO  lx-eaoracl-t

CPUs           SGA Size       Buffer Cache        Shared Pool    ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
  16     24,466M (100%)     5,568M (22.8%)     9,792M (40.0%)       32.0M (0.1%)


          Analysis Begin Time:   03-Feb-12 11:43:08
            Analysis End Time:   03-Feb-12 11:58:08
                 Elapsed Time:        15.0 (mins)
            Begin Data Source:   V$ACTIVE_SESSION_HISTORY
              End Data Source:   V$ACTIVE_SESSION_HISTORY
                 Sample Count:           2
      Average Active Sessions:        0.00
  Avg. Active Session per CPU:        0.00
                Report Target: SQL_ID like '9tz4qu4rj9rdp'
              Target Activity: .3% of total database activity

Top User Events                DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                                                               Avg Active
Event                               Event Class        % Event   Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU                  CPU                 100.00       0.00
          -------------------------------------------------------------

Top Background Events          DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Event P1/P2/P3 Values      DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Service/Module             DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

Service        Module                   % Activity Action               % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS      SQL*Plus                     100.00 UNNAMED                100.00
          -------------------------------------------------------------

Top Client IDs                 DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top SQL Command Types          DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)
-> 'Distinct SQLIDs' is the count of the distinct number of SQLIDs
      with the given SQL Command Type found over all the ASH samples
      in the analysis period

                                           Distinct            Avg Active
SQL Command Type                             SQLIDs % Activity   Sessions
---------------------------------------- ---------- ---------- ----------
SELECT                                            1     100.00       0.00
          -------------------------------------------------------------

Top Phases of Execution        DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                                          Avg Active
Phase of Execution             % Activity   Sessions
------------------------------ ---------- ----------
SQL Execution                      100.00       0.00
          -------------------------------------------------------------
Top SQL with Top Events       DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                                                        Sampled #
                 SQL ID             Planhash        of Executions     % Activity
----------------------- -------------------- -------------------- --------------
Event                          % Event Top Row Source                    % RwSrc
------------------------------ ------- --------------------------------- -------
          9tz4qu4rj9rdp           2599610842                    1         100.00
CPU + Wait for CPU              100.00 HASH JOIN                          100.00
select count(*) from all_objects

          -------------------------------------------------------------

Top SQL with Top Row Sources   DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                                                        Sampled #
                 SQL ID             PlanHash        of Executions     % Activity
----------------------- -------------------- -------------------- --------------
Row Source                               % RwSrc Top Event               % Event
---------------------------------------- ------- ----------------------- -------
          9tz4qu4rj9rdp           2599610842                    1         100.00
HASH JOIN                                 100.00 CPU + Wait for CPU       100.00
select count(*) from all_objects

          -------------------------------------------------------------

Top SQL using literals         DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Parsing Module/Action      DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top PL/SQL Procedures          DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Java Workload              DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Call Types                 DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

Call Type                                     Count % Activity Avg Active
---------------------------------------- ---------- ---------- ----------
V8 Bundled Exec                                   2     100.00       0.00
          -------------------------------------------------------------
Top Sessions                  DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)
-> '# Samples Active' shows the number of ASH samples in which the session
      was found waiting for that particular event. The percentage shown
      in this column is calculated with respect to wall clock time
      and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
      when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
      the PQ slave activity into the session issuing the PQ. Refer to
      the 'Top Sessions running PQs' section for such statistics.

   Sid, Serial# % Activity Event                             % Event
--------------- ---------- ------------------------------ ----------
User                 Program                          # Samples Active     XIDs
-------------------- ------------------------------ ------------------ --------
     6596,22883     100.00 CPU + Wait for CPU                 100.00
OPS$ORACLE           sqlplus@lx-eao...p (TNS V1-V3)       2/900 [  0%]        0

          -------------------------------------------------------------

Top Blocking Sessions          DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------
Top Sessions running PQs      DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top DB Objects                 DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top DB Files                   DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Latches                    DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)

                  No data exists for this section of the report.
          -------------------------------------------------------------
Activity Over Time            DB/Inst: OCP35T/OCP35T  (Feb 03 11:43 to 11:58)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
   that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period

                         Slot                                   Event
Slot Time (Duration)    Count Event                             Count % Event
-------------------- -------- ------------------------------ -------- -------
11:54:00   (2.0 min)        2 CPU + Wait for CPU                    2  100.00
          -------------------------------------------------------------

End of Report

Tom Kyte
February 03, 2012 - 11:50 am UTC

umm, you should sort of do it for *your* sql statement don't you think?

Alexander, February 03, 2012 - 12:24 pm UTC

Ok, you're confusing me a little. You said "run an ash for this sql"? If you wanted an ASH report for my sql, I already have the report I pasted above - is that what you want? The query takes 5 hours to run, so if it's the same output as what you demonstrated, I'd rather use that.
Tom Kyte
February 03, 2012 - 1:25 pm UTC

when I said "run as ash for this sql" - i was talking about your sql and gave you a demo to show how to do that, sorry.



what you posted above was a teeny tiny snippet of something, not sure what, but just a snippet.

did you read the entire report if you ran it for your sql statement?
did it tell you anything?

(you don't need me to read it, you can do that... I wasn't asking for the report, I was telling you to run it for yourself).

going back to square one:

"if you look at the ASH data for this query - what did it spend most of its time doing? "

that's all we wanted to see.

If this is from your target sql and only your target sql:

Top User Events                DB/Inst: OCP35T/OCP35T  (Feb 02 20:31 to 20:43)

                                                               Avg Active
Event                               Event Class        % Event   Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU                  CPU                  80.76       2.10
log file sync                       Commit                8.89       0.23
db file sequential read             User I/O              1.41       0.04
          -------------------------------------------------------------


it spends most of its time on CPU. I'm curious about the log file sync waits - since you said you created the sequence with a huge increment value?

It doesn't look like physical IO is the problem - it just does a lot of work.

Alexander, February 03, 2012 - 2:01 pm UTC

Forgive me, I've never used this report before - I didn't know it existed. I'm just trying to show my work. (By the way, how does this differ from AWR? I don't really understand when I should use ASH based on its output.)

And yes, that was from my sql. The top two events don't make sense to me. I have 16 cores; it seems impossible I could be waiting for cpu running 1 select statement (the machine is quiet otherwise). Also, the log file sync waits - I'm not committing or doing dml at all. What I said about the sequence is that I set the cache number really high to see if that helps.

Since I'm new to ASH I will probably try it your way and see if that yields better results.
Tom Kyte
February 03, 2012 - 2:59 pm UTC

... (By the way how does this differ from AWR? ...

technically, ASH data is a subset of AWR. ASH is sort of like a fuzzy tkprof, not as detailed as a tkprof, but not at the entire system level like most of AWR. It allows you to see what a session, a sql statement, a group of sql statements, etc were doing.


... seems impossible I could be waiting for cpu running 1
select statement ...

it didn't say you were "definitely waiting" - that figure is cpu time and cpu wait time combined; we do not really differentiate them. So, if you were not contending for cpu, that is just cpu time.


... What I said about the sequence
is that I set the cache number really high to see if that helps. ...

if you had the default cache of 20 set when you ran this sql before - then that would account for the log file sync waits.
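
For reference, a minimal sketch of checking and raising a sequence cache - SEQ_F_OBJECT_AUTH_ROLE is the sequence from the report above, and the cache value is just an illustration:

-- check the current cache setting (assuming you own the sequence)
select sequence_name, cache_size
  from user_sequences
 where sequence_name = 'SEQ_F_OBJECT_AUTH_ROLE';

-- a larger cache means NEXTVAL updates the dictionary (each refill is a
-- recursive transaction whose commit can surface as log file sync) far less often
alter sequence seq_f_object_auth_role cache 100000;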

Alexander, February 08, 2012 - 12:44 pm UTC

So as it turns out, all there really is in the ASH report is CPU-related findings, which, as I said, I don't understand.


SQL> select * from table( dbms_workload_repository.ash_report_text(
  l_dbid => (select dbid from v$database),
  l_inst_num =>(select instance_number from v$instance),
  l_btime => sysdate-300*1/24/60,
  l_etime => sysdate,
  l_sql_id => 'appgrwzty7m32') );

OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
ASH Report For OCP35T/OCP35T
(1 Report Target Specified)

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
OCP35T        1371585656 OCP35T              1 11.2.0.2.0  NO  lx-eaoracl-t

CPUs           SGA Size       Buffer Cache        Shared Pool    ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
  16     24,466M (100%)     5,568M (22.8%)     9,792M (40.0%)       32.0M (0.1%)


          Analysis Begin Time:   07-Feb-12 10:07:27
            Analysis End Time:   07-Feb-12 15:07:27
                 Elapsed Time:       300.0 (mins)
            Begin Data Source:   V$ACTIVE_SESSION_HISTORY
              End Data Source:   V$ACTIVE_SESSION_HISTORY
                 Sample Count:       6,602
      Average Active Sessions:        0.37
  Avg. Active Session per CPU:        0.02
                Report Target: SQL_ID like 'appgrwzty7m32'
              Target Activity: 62.1% of total database activity

Top User Events                DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                                                               Avg Active
Event                               Event Class        % Event   Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU                  CPU                  99.94       0.37
          -------------------------------------------------------------

Top Background Events          DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Event P1/P2/P3 Values      DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Service/Module             DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

Service        Module                   % Activity Action               % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS      SQL*Plus                     100.00 UNNAMED                100.00
          -------------------------------------------------------------

Top Client IDs                 DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top SQL Command Types          DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)
-> 'Distinct SQLIDs' is the count of the distinct number of SQLIDs
      with the given SQL Command Type found over all the ASH samples
      in the analysis period

                                           Distinct            Avg Active
SQL Command Type                             SQLIDs % Activity   Sessions
---------------------------------------- ---------- ---------- ----------
SELECT                                            1     100.00       0.37
          -------------------------------------------------------------

Top Phases of Execution        DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                                          Avg Active
Phase of Execution             % Activity   Sessions
------------------------------ ---------- ----------
SQL Execution                       95.46       0.35
          -------------------------------------------------------------
Top SQL with Top Events       DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                                                        Sampled #
                 SQL ID             Planhash        of Executions     % Activity
----------------------- -------------------- -------------------- --------------
Event                          % Event Top Row Source                    % RwSrc
------------------------------ ------- --------------------------------- -------
          appgrwzty7m32           3074245134                    1         100.00
CPU + Wait for CPU               99.94 SEQUENCE                            63.75
SELECT SEQ_F_OBJECT_AUTH_ROLE.NEXTVAL, DBR.ROLE_HIST_ID,
 STG.BL_AUTH_ID, 1.0, STG.IS_DELETED, 0, STG.BL_
ACL_ID FROM C$_ACL_POLICY_STG STG INNER JOIN D_BLROL
E DBR ON DBR.ROLE_ID = STG.ROLE_ID AND DBR.bl_si

          -------------------------------------------------------------

Top SQL with Top Row Sources   DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                                                        Sampled #
                 SQL ID             PlanHash        of Executions     % Activity
----------------------- -------------------- -------------------- --------------
Row Source                               % RwSrc Top Event               % Event
---------------------------------------- ------- ----------------------- -------
          appgrwzty7m32           3074245134                    1         100.00
SEQUENCE                                   63.75 CPU + Wait for CPU        63.75
SELECT SEQ_F_OBJECT_AUTH_ROLE.NEXTVAL, DBR.ROLE_HIST_ID,
 STG.BL_AUTH_ID, 1.0, STG.IS_DELETED, 0, STG.BL_
ACL_ID FROM C$_ACL_POLICY_STG STG INNER JOIN D_BLROL
E DBR ON DBR.ROLE_ID = STG.ROLE_ID AND DBR.bl_si


SELECT STATEMENT                           24.43 CPU + Wait for CPU        24.40


TABLE ACCESS - FULL                         5.29 CPU + Wait for CPU         5.26

          -------------------------------------------------------------

Top SQL using literals         DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Parsing Module/Action      DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top PL/SQL Procedures          DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Java Workload              DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Call Types                 DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

Call Type                                     Count % Activity Avg Active
---------------------------------------- ---------- ---------- ----------
FETCH                                         6,602     100.00       0.37
          -------------------------------------------------------------
Top Sessions                  DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)
-> '# Samples Active' shows the number of ASH samples in which the session
      was found waiting for that particular event. The percentage shown
      in this column is calculated with respect to wall clock time
      and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
      when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
      the PQ slave activity into the session issuing the PQ. Refer to
      the 'Top Sessions running PQs' section for such statistics.

   Sid, Serial# % Activity Event                             % Event
--------------- ---------- ------------------------------ ----------
User                 Program                          # Samples Active     XIDs
-------------------- ------------------------------ ------------------ --------
     4714,41603     100.00 CPU + Wait for CPU                  99.94
OPS$ORACLE           sqlplus@lx-eao...p (TNS V1-V3)   6,598/18K [ 37%]        0

          -------------------------------------------------------------

Top Blocking Sessions          DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------
Top Sessions running PQs      DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top DB Objects                 DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top DB Files                   DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Latches                    DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------
Activity Over Time            DB/Inst: OCP35T/OCP35T  (Feb 07 10:07 to 15:07)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
   that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period

                         Slot                                   Event
Slot Time (Duration)    Count Event                             Count % Event
-------------------- -------- ------------------------------ -------- -------
10:07:27  (22.6 min)      470 CPU + Wait for CPU                  470    7.12
10:30:00  (30.0 min)      726 CPU + Wait for CPU                  725   10.98
                              direct path read                      1    0.02
11:00:00  (30.0 min)      671 CPU + Wait for CPU                  671   10.16
11:30:00  (30.0 min)      735 CPU + Wait for CPU                  735   11.13
12:00:00  (30.0 min)      670 CPU + Wait for CPU                  670   10.15
12:30:00  (30.0 min)      697 CPU + Wait for CPU                  697   10.56
13:00:00  (30.0 min)      689 CPU + Wait for CPU                  688   10.42
                              SQL*Net message to client             1    0.02
13:30:00  (30.0 min)      731 CPU + Wait for CPU                  730   11.06
                              SQL*Net message to client             1    0.02
14:00:00  (30.0 min)      713 CPU + Wait for CPU                  712   10.78
                              direct path read                      1    0.02
14:30:00  (30.0 min)      500 CPU + Wait for CPU                  500    7.57
          -------------------------------------------------------------

End of Report

APPEND hint while using PARALLEL processing

SK, July 09, 2012 - 2:06 am UTC

Hi TOM,
Currently we are using Oracle 11g R2. We have tables partitioned day-wise (ONE TABLE for ONE DAY),
created using PARALLEL, into which we are loading around 6 million records per day. And we are using
16 cores.

But our system's default parallel_automatic_tuning is not set to true

SQL> show parameter automatic;

parallel_automatic_tuning            boolean                          FALSE

But while inserting into the tables we are using the "append" hint

 INSERT /*+ append */  INTO DAILY_TABLE
 
After the above changes, we are facing slowness in the database inserts. Sometimes it takes only a
few seconds, but sometimes 15 to 20 mins ....


I am thinking the reasons below may be causing this problem:

1. parallel_automatic_tuning is false, which is why the "append" hint is not using PARALLEL processing
for the inserts, causing the slowness.
2. According to your explanation in one of the threads, it would cause all modifications to serialize.
No one else could insert/update/delete or merge into this table until the transaction that direct paths
commits.


Please suggest:
    do I really need to use the "append" hint?
    do I need to set parallel_automatic_tuning before using PARALLEL processing on tables?

Thanks 
SK


Tom Kyte
July 11, 2012 - 12:54 pm UTC

please only post things in ONE PLACE

Conventional Path Insert is much faster than Direct path Insert

Tarun Agrawal, November 23, 2012 - 4:08 am UTC

Hi team,

I'm running a test case, and found that "Conventional Path Insert is much faster than Direct path Insert".

Could you please tell me, why this is happening?

Below is the code:


alter table temp_recd nologging;

insert into temp_recd
select rownum, rownum from dba_source;

delete from temp_recd
where mod(start_id,2) = 0;

commit;

declare
start_time number;
end_time number;
begin

start_time := dbms_utility.get_time();

insert into temp_recd
select rownum, rownum from dba_source, (select * from dual connect by level < 11);

end_time := dbms_utility.get_time();

dbms_output.put_line('Conventional insert -- ' || sql%rowcount || ' rows inserted in time ' || to_char(end_time - start_time));

--execute immediate 'truncate table temp_recd';

start_time := dbms_utility.get_time();

insert /*+ append */ into temp_recd
select rownum, rownum from dba_source, (select * from dual connect by level < 11);

end_time := dbms_utility.get_time();

dbms_output.put_line('Direct path insert -- ' || sql%rowcount || ' rows inserted in time ' || to_char(end_time - start_time));

exception
when others then
null;
end;
/

Conventional insert -- 2476300 rows inserted in time 170
Direct path insert -- 2476300 rows inserted in time 342
Tom Kyte
November 29, 2012 - 7:13 am UTC

Conventional Path Insert is much faster than Direct path Insert


that depends. it really depends. please think about it - think about what is different between the two and when one might be advantageous over the other.

In fact, if you read the original answer from years ago, I wrote:

a) it isn't necessarily faster in general. It does a direct path load to disk - bypassing the buffer cache. There are many cases - especially with smaller sets - where the direct path load to disk would be far slower than a conventional path load into the cache.


you have a teeny table, it probably fits in your buffer cache, hence the conventional path insert - which probably flooded your buffer cache - did not have to write to disk in the foreground (do a bigger table that doesn't fit in the buffer cache and see what happens)



use sql_trace and tkprof to time things in the future, wall clock time is not very "good"



and did you do this:

exception
when others then
null;
end;

just to really ruin my day??????? SERIOUSLY????????


this is the second question in a row I've looked at today with this stupid piece of code. there is no excuse for it (even if you just say "it is only a demo", that doesn't excuse it).

stop it.
asktom.oracle.com/Misc/when-others-then-null-redux.html



Your test case is biased in another way. You did a big insert/delete - allocating space. the conventional path insert did not have to allocate space when you timed it. The direct path would. Not a fair comparison.

You don't see what things might have been waited on in each run either.

here is a less biased approach (and simpler to code!! and no "when others then null" silliness).

ops$tkyte%ORA11GR2> create table t1 (x number, y number) nologging;

Table created.

ops$tkyte%ORA11GR2> create table t2 (x number, y number) nologging;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec dbms_monitor.session_trace_enable( waits => true );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t1
  2  select rownum, rownum
  3    from dba_source, (select *
  4                        from dual
  5                     connect by level < 11);

6404960 rows created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert /*+ APPEND */ into t1
  2  select rownum, rownum
  3    from dba_source, (select *
  4                        from dual
  5                     connect by level < 11);

6404960 rows created.




Now, when I did that - my tkprof report showed:

insert into t1
select rownum, rownum
  from dba_source, (select *
                      from dual
                   connect by level < 11)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.03          0          0          0           0
Execute      1     24.57      25.24       1708      33003     153699     6404960
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     24.60      25.27       1708      33003     153699     6404960
********************************************************************************
insert /*+ APPEND */ into t1
select rownum, rownum
  from dba_source, (select *
                      from dual
                   connect by level < 11)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1     23.81      24.21       1744       2565      15689     6404960
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     23.83      24.24       1744       2565      15689     6404960



showing - they took about the same amount of time on my machine.


driving home the "your mileage may vary"

dbms_utility.get_time()

Sabita, December 13, 2012 - 4:13 am UTC

use sql_trace and tkprof to time things in the future, wall clock time is not very "good"

Hi Tom,

Can you please explain the above sentence?

If I want to track how long my procedure runs, logging it to a table inside the procedure, what would be the best way to do it?

Presently I track the time like below. Could you show me a better way to do this?

create table t (id number);

create table l (table_name varchar2(30), load_time_sec number);

declare
ln_start_time PLS_INTEGER := DBMS_UTILITY.GET_TIME;
ln_end_time PLS_INTEGER;
begin
insert into t
select rownum from dual connect by level < 2000000;
commit;

ln_end_time := (DBMS_UTILITY.GET_TIME - ln_Start_Time)/100;
insert into l(table_name, load_time_sec)
values('t',ln_end_time);
commit;
end;

select * from l;
TABLE_NAME                     LOAD_TIME_SEC
------------------------------ -------------
t                                          5


Thanks!
Tom Kyte
December 17, 2012 - 3:48 pm UTC

... Can you please explain the above sentence?
...

for benchmarking - you typically want to see cpu times. wall clock times can vary wildly depending on whether the system decided to checkpoint during one of the runs, or someone decided to run a big query, or whatever.


track both cpu and elapsed time for an audit trail like that - both are relevant. cpu time will tell you about the system resources you used, elapsed time will tell you about the end user experience.
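
A minimal sketch of tracking both, assuming the log table L gains a CPU_TIME_SEC column - DBMS_UTILITY.GET_CPU_TIME, like GET_TIME, returns hundredths of a second:

alter table l add (cpu_time_sec number);

declare
  l_start_cpu pls_integer := dbms_utility.get_cpu_time;
  l_start_ela pls_integer := dbms_utility.get_time;
begin
  insert into t
  select rownum from dual connect by level < 2000000;
  commit;

  -- log both: cpu = resources the load used, elapsed = what the user waited
  insert into l ( table_name, load_time_sec, cpu_time_sec )
  values ( 't',
           ( dbms_utility.get_time     - l_start_ela ) / 100,
           ( dbms_utility.get_cpu_time - l_start_cpu ) / 100 );
  commit;
end;
/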

Is this fixable?

AndyP, December 17, 2012 - 8:23 am UTC

Tom, I'm trying in version 11.2.0.1 to generate data sets into flat files, for which the recipient wants sequential record numbers starting at 1. For repeatability and audit purposes I'm loading a table with the data and allocating the "record number" from a sequence. I appreciate that this is not guaranteed gap-free, but the problem I have is that when I do a direct path load using the append hint, I often (not always, it seems) lose sequence number 1. So for example:
SQL > create sequence testseq minvalue 1 start with 1 cache 10000000;

Sequence created.

SQL > create table testrownum(idcol int,othercol varchar2(30));

Table created.

SQL > insert /*+ append */ into testrownum(select testseq.nextval,object_name from all_objects);

57187 rows created.

SQL > select count(*) from testrownum;
select count(*) from testrownum
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL > commit;

Commit complete.

SQL > select min(idcol) from testrownum;

MIN(IDCOL)
----------
         2

SQL > truncate table testrownum;

Table truncated.

SQL > drop sequence testseq;

Sequence dropped.

SQL > create sequence testseq minvalue 1 start with 1 cache 10000000;

Sequence created.

SQL > insert into testrownum(select testseq.nextval,object_name from all_objects);

57187 rows created.

SQL > select count(*) from testrownum;

  COUNT(*)
----------
     57187

SQL > select min(idcol) from testrownum;

MIN(IDCOL)
----------
         1


Is there something that can be set or specified that would allow this method to work as outlined or is a sequence in fact the wrong approach for the task?

Thanks,

Andy
Tom Kyte
December 18, 2012 - 12:34 pm UTC

I think that was an issue with deferred segment creation.


when you create the table, use "segment creation immediate". Otherwise - there was an issue (fixed) where the insert would try to insert the first row, find there was no segment, create the segment, and then try the insert again. The first NEXTVAL was consumed by the discarded attempt, which is why the load starts at 2.

that is why it works the second time, the segment is already there.


why not just use rownum?
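
For example, a minimal sketch of that suggestion reusing AndyP's table - rownum is generated by the query itself, so there is no sequence value to lose:

insert /*+ append */ into testrownum
select rownum, object_name from all_objects;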

@ AndyP from UK

Hoek, December 17, 2012 - 6:59 pm UTC

Did you try creating the table with segment creation immediate?

 SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> drop sequence testseq;

Sequence dropped.

SQL> create sequence testseq minvalue 1 start with 1 cache 10000000;

Sequence created.

SQL> drop table testrownum purge;

Table dropped.

SQL> create table testrownum(idcol int,othercol varchar2(30));

Table created.

SQL> insert /*+ append */ into testrownum(select testseq.nextval,object_name from all_objects);

71701 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from testrownum;

  COUNT(*)
----------
     71701

SQL> select min(idcol) from testrownum;

MIN(IDCOL)
----------
         2

SQL> drop sequence testseq;

Sequence dropped.

SQL> create sequence testseq minvalue 1 start with 1 cache 10000000;

Sequence created.

SQL> drop table testrownum purge;

Table dropped.

SQL> create table testrownum(idcol int,othercol varchar2(30)) segment creation immediate;

Table created.

SQL> insert /*+ append */ into testrownum(select testseq.nextval,object_name from all_objects);

71701 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from testrownum;

  COUNT(*)
----------
     71701

SQL> select min(idcol) from testrownum;

MIN(IDCOL)
----------
         1 


Another example:
https://forums.oracle.com/forums/thread.jspa?messageID=10735645&#10735683 


Something new every day

AndyP, December 18, 2012 - 2:35 am UTC

Thanks very much for that explanation, Hoek, and for the link. I had missed that new fact about 11g behaviour, and I had also misled myself during testing into thinking that the starting-from-2 happens even after loading and truncating the table. I see now that this is not the case.

Andy

Seeing the wood for the trees

AndyP, December 19, 2012 - 7:50 am UTC

Tom asks "why not just use rownum?"

As usual, you've cut straight through to the core of the question and provided the perfect solution.

Hadn't thought of it!

sangeetha, January 22, 2013 - 2:17 am UTC

I have a process and it has a statement like this (please ignore the table structure). V_TAB_ETOP_NOCOVER_MARKED is a nested table.


For the insert portion I tried all combinations, such as:

1) /*+ APPEND PARALLEL(SR_FLYING_DUTIES 4) */ -> made the table nologging + parallel

2) /*+ APPEND */ -> made the table logging

3) Normal inserts with a logging table.

4) Sometimes truncated the table instead of deleting, to reset the HWM.

Note: I rewrote the logic to avoid the update and made it always an insert.

Of all these approaches, which one is the best? Can you please suggest?

merge INTO SR_FLYING_DUTIES FD USING
(SELECT FD_NEW.FLIGHT_DATE,
FD_NEW.FLIGHTNO,
FD_NEW.STATION,
FD_NEW.RETURN_FLIGHTNO,
FD_NEW.STD,
FD_NEW.STR,
FD_NEW.MRG_ID,

FD_NEW.CAECODE_ID,
CASE
WHEN NVL(FD_NEW.NOCOVER,0) >0
THEN 'Y'
ELSE 'N'
END NOCOVER,
CASE
WHEN NVL(FD_NEW.ETOPS,0) >0
THEN 'Y'
ELSE 'N'
END ETOPS
FROM TABLE(V_TAB_ETOP_NOCOVER_MARKED) FD_NEW
WHERE MRG_ID IS NOT NULL
AND STD IS NOT NULL
) FS ON (FS.MRg_ID =FD.MRg_ID AND FS.std=FD.std )
WHEN MATCHED THEN
UPDATE
SET FD.NOTREQUIRED =FS.NOCOVER,
FD.STR =FS.STR,
FD.FLIGHTNO =FS.FLIGHTNO,
FD.CAECODE_ID =FS.CAECODE_ID,
FD.ETOPS_REQUIRED =FS.ETOPS
when not matched then
insert
/*+ APPEND PARALLEL(SR_FLYING_DUTIES 4) */
(
FLYINGDUTY_ID,
FLIGHT_DATE ,
FLIGHTNO ,
STATION ,
RETURN_FLIGHTNO ,
STD ,
STR ,
CAECODE_ID ,
MRG_ID ,
ETOPS_REQUIRED,
NOTREQUIRED
)
VALUES
(
SR_FLYING_DUTIES_SEQ.NEXTVAL,
FS.FLIGHT_DATE ,
FS.FLIGHTNO ,
FS.STATION ,
FS.RETURN_FLIGHTNO ,
FS.STD ,
FS.STR ,
FS.CAECODE_ID ,
FS.MRg_ID,
FS.etops,
fs.NOCOVER
);
Tom Kyte
January 22, 2013 - 2:48 am UTC

benchmark it - you have your scenarios set up, now in an environment that represents real life (what you expect to see in production) - run the series of tests and evaluate the results.

it is what I would have to do to speak definitively as to the "best way" in this case.

NOTE: if you use delete and do not reset the high water mark, insert /*+ APPEND */ will never reuse that space. append will write above the high water mark. So, doing the delete is probably not the right approach if you empty the table and want to refill it.

NOTE2: if the table is empty, why are you using merge? why not just insert /*+ append */ if the update part never happens?
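
A minimal sketch of that first NOTE, reusing the poster's table name and a hypothetical staging source:

-- truncate resets the HWM (add REUSE STORAGE to keep the allocated extents)
truncate table sr_flying_duties;

-- the direct path load now starts at the beginning of the segment,
-- instead of stacking new blocks on top of deleted space
insert /*+ append */ into sr_flying_duties
select * from stage_flying_duties;

commit;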

A reader, January 22, 2013 - 3:50 am UTC

Hi Tom,

Thank you very much for the prompt reply.

No, if the table is empty my code does not contain a merge; it is a simple insert only.

Yes, on your site I read that APPEND will not reuse space below the HWM, which is why I tried to implement the truncate logic.

Actually the process does not exactly truncate the table and redo everything. It inserts/updates data from sysdate-10 to sysdate+some limit and deletes all data prior to sysdate-90. This runs as a nightly process. But through a screen, users can edit some portion of the data and we need to retain that; based on this data the process does some updates to the same table, and the user needs to see the output then and there. Business-wise, processing only the user-touched data is also not accepted.

Since the merge was taking time and I was not able to tune it, I thought of truncating; the idea was to retain the user changes in a collection, flush the whole table, insert again, and later do an update.


Now I have implemented it with:

a) delete past data
b) merge with the append parallel insert option
c) the table is nologging and parallel

Yes, I know that APPEND will not reuse the space below the HWM. Is it OK if I do a regular truncate on a nightly basis and reinsert the data? I saw commands to reset the HWM, but I do not have DBA privileges.

Thank you very much
Sangeetha
Tom Kyte
January 23, 2013 - 12:57 am UTC

sounds like a classic case for partitioning - no deletes, no merges, just a drop of old partition, exchange in new partition.
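
A sketch of that pattern, with hypothetical partition and staging-table names - old data leaves as a metadata-only DROP, new data arrives as a metadata-only EXCHANGE:

-- purge old data: no DELETE, no per-row undo
alter table sr_flying_duties drop partition p_old;

-- load the new period into a standalone staging table, then swap it in
alter table sr_flying_duties
  exchange partition p_new with table stage_flying_duties
  including indexes without validation;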


Sangeetha, January 22, 2013 - 11:34 pm UTC

Hi Tom,

To reset the HWM (continuing the previous thread) I put the following lines in a nightly job process. I hope my entire process will be OK now.

execute immediate 'alter table SR_FLYING_DUTIES enable row movement';
EXECUTE immediate 'drop index FD_FLIGHTDATE_IDX'; --> I had a functional index.
execute immediate 'alter table SR_FLYING_DUTIES shrink space';
EXECUTE immediate 'CREATE INDEX FD_FLIGHTDATE_IDX ON SR_FLYING_DUTIES (CASE WHEN STAFFID IS NULL THEN FLIGHT_DATE END)
NOLOGGING
PARALLEL tablespace QR_TDR_INDEX_TS';

Thank you very much
Sangeetha
Tom Kyte
January 23, 2013 - 1:08 am UTC

why drop the index, why not just shrink it too?

the worry I have is: the alter enable works, the drop index works, the alter shrink fails - the script exits, and you are left missing an index.


just alter shrink table, alter shrink index, done. Never lose a schema object by accident.

Sangeetha P, January 23, 2013 - 5:12 am UTC

Hi Tom,

Thank you very much for the review.

The index is a functional index.
When I tried to shrink the table with that index in place it showed this error. Only after dropping the index am I able to shrink the table.

Error report:
SQL Error: ORA-10631: SHRINK clause should not be specified for this object
10631. 00000 - "SHRINK clause should not be specified for this object"
*Cause: It is incorrect to issue shrink on the object
*Action: Verify the object name and type and reissue the command

Thanks and Regards
Sangeetha

update

A reader, January 23, 2013 - 9:05 pm UTC

Tom,

can we do a direct path update/delete, in either serial or parallel mode?

Our DBA is telling me that an Oracle update in parallel is not direct path; instead, the data is read into the SGA and all the parallel processes update it. The statement does not look right to me, but I am looking for a test case to prove it. Please help
Tom Kyte
January 30, 2013 - 12:43 pm UTC

no, direct path is for reading and loading only.

create table as select - direct path
insert /*+ append */ - direct path
sqlldr direct=y - direct path

update/delete/merge - not direct path, they do not write directly to database blocks on disk, they do not write above the high water mark.
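
A rough test case along the lines the reader asked for, using a hypothetical heap table T - the 'physical writes direct' session statistic moves for a direct path insert but not for an update:

create table t ( x number, y number );

select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'physical writes direct';

insert /*+ append */ into t
select rownum, rownum from dual connect by level <= 100000;
commit;
-- re-run the statistic query: the value has increased

update t set y = y + 1;
commit;
-- re-run the statistic query: unchanged - the update went through the buffer cache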

To: Sangeetha P from Qatar

Rajeshwaran, Jeyabal, January 24, 2013 - 4:05 am UTC

When I tried to shrink the table with that index in place it showed this error.

The docs say this:

http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_3001.htm#CJAHHIBI

Restrictions on the shrink_clause The shrink_clause is subject to the following restrictions:

 Segment shrink is not supported for tables with function-based indexes, domain indexes, or bitmap join indexes.



Sangeetha P, January 27, 2013 - 2:20 am UTC

Thanks for the review. Yes it is not allowing the shrink with functional indexes.
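
When SHRINK is not supported, one hedged alternative is ALTER TABLE ... MOVE, which rebuilds the segment and resets the HWM - at the cost of marking the indexes UNUSABLE, so they must be rebuilt afterwards:

alter table sr_flying_duties move;

-- the move invalidates every index on the table, including the function-based one
alter index fd_flightdate_idx rebuild;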

Parallel hint added with Append

DILLIP KUMAR SAHOO, May 09, 2013 - 1:43 pm UTC

Hi Tom,
The question may sound a bit funny:

Does adding a parallel hint with APPEND make any difference? I have seen many developers using:
insert /*+ append parallel(x 4) */ .....

I mean, is there a limit to the number of parallel threads that a direct path insert initiates? Or can we increase the number of parallel threads using the parallel hint?

Thanks in advance.
Tom Kyte
May 09, 2013 - 3:10 pm UTC

you may or may not use the parallel hint, up to you.
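
One hedged point worth adding: the hint alone does not parallelize the insert itself - parallel DML must be enabled in the session first (X is the target table, BIG_SOURCE a hypothetical source):

alter session enable parallel dml;

insert /*+ append parallel(x 4) */ into x
select * from big_source;

commit;   -- required before this session can read x again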

append hint using memory

Anupam, December 30, 2014 - 8:19 pm UTC

Hi Tom,
I am executing the below query with the append hint.

insert /*+ append */ into MYUSER.MONTH_DATA
select 201411 as month_key,
1 as id,
7 as key ,
24 as md,
t3.wd,
t3.mt,
t3.ht,
t3.pi,
t3.th ,
decode( t3.td1,'NaN','0',t3.td1) ,
decode( t3.td2,'NaN','0',t3.td2) ,
decode( t3.td3,'NaN','0',t3.td3) ,
decode( t3.td4,'NaN','0',t3.td4) ,
decode( t3.td5 ,'NaN','0',t3.td5) ,
decode( t3.td6 ,'NaN','0',t3.td6) ,
decode( t3.td7 ,'NaN','0',t3.td7)
from MYUSER.STAGE_MONTH t1 ,
XMLTable(xmlnamespaces(default 'http://mydataproc.com/New' ),
'REPORT/TABLE/TBODY/TR'
PASSING t1.object_value
COLUMNS TH VARCHAR2(100) PATH 'TH',
wd VARCHAR2(200) PATH 'TH/@wd',
mt VARCHAR2(200) PATH 'TH/@mt',
ht VARCHAR2(200) PATH 'TH/@ht',
pi VARCHAR2(200) PATH 'TH/@pi',
TD1 VARCHAR2(100) PATH 'TD[1]',
TD2 VARCHAR2(100) PATH 'TD[2]',
TD3 VARCHAR2(100) PATH 'TD[3]',
TD4 VARCHAR2(100) PATH 'TD[4]',
TD5 VARCHAR2(100) PATH 'TD[5]',
TD6 varchar2(100) PATH 'TD[6]',
TD7 VARCHAR2(100) PATH 'TD[7]') t3 ;


But it fails after some time with the below error:
ORA-04030: out of process memory when trying to allocate 55600 bytes (kxs-heap-w,qmemNextBuf:Large Alloc)

I was thinking that with the append hint it would take the route of loading directly into the datafiles, bypassing memory.

I have the below doubts:
1. Is there a way to force direct path writes and bypass memory?

2. Is there any other way I should rewrite the insert statement to avoid the memory error? The total number of rows being loaded here is 150K.



Thanks,
Anupam

How much longer will insert /*+ append */ take to complete

kls, January 04, 2015 - 10:35 pm UTC

Hi Tom,
I have a table "Y" into which I am loading 100million records from another table "x". The size of table "x" is approximately 12gb in size. I am using insert/*+append*/ to load data from table "x" into table "Y". On table "Y" there are no indexes and it is in "nologging" mode. However all the hunderend million records need to have data validation done on them before loading it into table "Y". I have created check constraints on table "Y" to do the validation before loading the data. I also have two functions which I use in the select clause of the insert /^+append*/ statement to do the database validations before loading it into the database. I can see the validation error details in the "err$_Y" table but dont know how long it is going to take for the insert /*+append*/ to complete. My question is, is there a way to monitor how far along the insert ./*+append*/ has gotten or how much more time is it going to take to complete. My insert /*+append*/ looks like this

insert /*+ append */
into schemaB.dorra_cdd_ng
SELECT dt.ctrno AS CONTRACT_NUMBER,
func1 (TRIM (dt.fieldb)) AS fieldb_code,
TRIM (dt.fieldc) AS fieldc,
dt.p AS PRICE,
TO_DATE (dt.awd_dt, 'YYYYMMDD') AS AWARD_DATE,
SYSDATE AS ADDED_DATE,
'product' AS type,
func2 (TRIM (dt.product)) AS CODE,

dt.PO_ITMNO,
dt.PO_NO
FROM dr_temp dt,
( SELECT MIN (row_addr) AS row_address
FROM (SELECT a.ROWID AS row_addr,
ctrno,
fieldc,
p
FROM schemaB.dr_temp a
WHERE a.ctrno IS NOT NULL
AND SUBSTR (a.ctrno, 1, 13) NOT IN
(SELECT SUBSTR (CONTRACT_NUMBER, 1, 13)
FROM schemaB.cdd_ng))
GROUP BY ctrno, fieldc, p) dt_rid
WHERE dt.ROWID = dt_rid.row_address
LOG ERRORS REJECT LIMIT UNLIMITED;
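
One common way to watch progress of a statement like this - a sketch, not a guarantee: the long-running full scans that feed a direct path insert usually surface in V$SESSION_LONGOPS:

select sid, opname, target, sofar, totalwork,
       round( sofar / totalwork * 100, 1 ) pct_done,
       time_remaining
  from v$session_longops
 where totalwork > 0
   and sofar < totalwork;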

Very useful insight

Karthik Palaniappan, September 22, 2015 - 12:08 pm UTC

Spent the whole day trying to get APPEND working and later realized it was very slow. Your explanation helped me understand I was going in the wrong direction. Thank you.

Thanks,
Karthik
Connor McDonald
September 23, 2015 - 12:00 am UTC

Glad we could help

Index In Direct Path load

Samby, November 18, 2015 - 8:47 am UTC

Hi,

I have one small doubt about Oracle direct path load.

We are doing a direct path insert of a huge number of records into a table.

The table has 3-4 indexes. After the load completes, do we need to rebuild the indexes and gather the table stats, or is index rebuilding not required? Please suggest.

Thanks
Samby
Connor McDonald
November 18, 2015 - 11:13 pm UTC

No you do not need to rebuild.

Direct path is fastest on tables without indexes, or tables where the indexes are marked unusable and rebuilt afterwards, but there are still performance benefits even when the indexes are in place. It's just not as great.
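
A sketch of the "mark unusable and rebuild" variant, with a hypothetical table T, index T_IDX and source BIG_SOURCE (unique indexes cannot be skipped this way):

alter index t_idx unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into t
select * from big_source;
commit;

alter index t_idx rebuild;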


BULK COLLECT VS DIRECT PATH LOAD

Samby, November 19, 2015 - 12:36 pm UTC

Hi Tom,

I am working on a data warehouse project on Oracle Exadata. From the data warehouse, we need to create 2-3 datamarts.
We have some business logic to load data from the DWH to the datamarts, with join conditions over huge fact and dimension tables.


I wanted to know which loading approach will be best:
1) Using bulk collect for all, limit, etc.
2) Using a direct path load with the APPEND hint


Could you please suggest on this.

Thanks a lot in Advance
Samby
Connor McDonald
November 20, 2015 - 1:17 am UTC

Both are appropriate depending on your requirements.

direct path load
- very fast (especially if no indexes on the target table)
- always loads ABOVE the high water mark, so any existing free space is not reclaimed
- locks the object
- you must commit/rollback at completion

bulkcollect/bulkbind
- fast (better than row at a time), but not as fast as direct load
- can utilise existing free space
- does not lock the object
- additional work can be done in same transaction
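
A minimal sketch of the bulk collect / forall pattern, assuming hypothetical tables SRC and TGT with matching shapes:

declare
  cursor c is select * from src;
  type t_tab is table of src%rowtype;
  l_rows t_tab;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 1000;   -- LIMIT caps PGA use

    forall i in 1 .. l_rows.count                  -- one execution per batch
      insert into tgt values l_rows(i);

    exit when c%notfound;
  end loop;
  close c;
  commit;   -- other work could share this transaction first
end;
/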

Index In Direct Path load

Samby, November 23, 2015 - 4:17 am UTC

Thanks a lot for your valuable suggestion.
Have a nice day :)

ARON, Todd, December 28, 2015 - 2:12 pm UTC

Question: Can a table have a FK and/or trigger, but disabled, and still have the direct path work?
Answer: YES!

SQL> alter table Codes enable constraint Cats_FK ;

Table altered.

SQL>
SQL> alter table Codes enable all triggers ;

Table altered.

SQL> insert /*+ append */ into Codes
  2  ( PK, ACTIVE, CREATED_DATE, CATS_PK, NAME, NOTE
  3  ) select CCI_PK_SEQ.nextval, 1, sysdate as CREATED_DATE, 'debug', OBJECT_NAME, SUBOBJECT_NAME
  4  from all_objects
  5  where rownum <= 10
  6  ;

10 rows created.

SQL> select * from Codes where CREATED_DATE > trunc( sysdate ) ;

PK ACTIVE CREATED_D CATS_PK NAME
------------- ---------- --------- ------------- ------------------------------
NOTE
--------------------------------------------------------------------------------

41 1 28-DEC-15 debug C_OBJ#


42 1 28-DEC-15 debug I_OBJ#


43 1 28-DEC-15 debug TAB$


44 1 28-DEC-15 debug CLU$


45 1 28-DEC-15 debug C_TS#


46 1 28-DEC-15 debug I_TS#


47 1 28-DEC-15 debug C_FILE#_BLOCK#


48 1 28-DEC-15 debug I_FILE#_BLOCK#


49 1 28-DEC-15 debug C_USER#


50 1 28-DEC-15 debug I_USER#



10 rows selected.

SQL> rollback
2 ;

Rollback complete.

SQL> alter table Codes disable constraint Cats_FK ;

Table altered.

SQL>
SQL> alter table Codes disable all triggers ;

Table altered.

SQL> insert /*+ append */ into Codes
  2  ( PK, ACTIVE, CREATED_DATE, CATS_PK, NAME, NOTE
  3  ) select CCI_PK_SEQ.nextval, 1, sysdate as CREATED_DATE, 'debug', OBJECT_NAME, SUBOBJECT_NAME
  4  from all_objects
  5  where rownum <= 10
  6  ;

10 rows created.

SQL> select * from Codes where CREATED_DATE > trunc( sysdate ) ;
select * from Codes where CREATED_DATE > trunc( sysdate )
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL> rollback ;

Rollback complete.

SQL>
Chris Saxon
December 29, 2015 - 12:00 am UTC

Thanks for your input.

direct path restrictions

A reader, June 19, 2016 - 6:13 am UTC

According to this - https://docs.oracle.com/cd/E18283_01/server.112/e17118/statements_9014.htm#SQLRF01604

Conventional path insert is done when this restriction is encountered: "The target table cannot have any triggers or referential integrity constraints defined on it."

I've found it doesn't seem to apply to reference-partitioned tables, where the child table has a FK constraint.

create table parent (some_date date,
                     some_val  varchar2(10),
                     f1 number primary key
                    )
  PARTITION BY range (some_date)
  SUBPARTITION BY HASH (some_val) 
  SUBPARTITION template (
    subpartition sp1, 
    subpartition sp2)
  (PARTITION P_201505 VALUES less than (to_date('20150601','yyyymmdd')),
   PARTITION P_201506 VALUES less than (to_date('20150701','yyyymmdd'))
  );

create table child (id number primary key, 
                    f1 number not null,
  CONSTRAINT fk_parent FOREIGN KEY (f1) REFERENCES parent (f1)
)
partition by reference (fk_parent);

SELECT n.name, s.value FROM v$mystat s, v$statname n WHERE s.statistic# = n.statistic# AND n.name = 'physical writes direct';

NAME                           VALUE
physical writes direct            24

insert /*+ append */ into parent 
select to_date('20150531','yyyymmdd'), level, level
from dual
connect by level <2;

SELECT n.name, s.value FROM v$mystat s, v$statname n WHERE s.statistic# = n.statistic# AND n.name = 'physical writes direct';

NAME                           VALUE
physical writes direct            25

select * from parent;

ORA-12838: cannot read/modify an object after modifying it in parallel

insert /*+ append */ into child 
select level,level
from dual
connect by level <2;

SELECT n.name, s.value FROM v$mystat s, v$statname n WHERE s.statistic# = n.statistic# AND n.name = 'physical writes direct';

NAME                           VALUE
physical writes direct            26

select *
from child;

ORA-12838: cannot read/modify an object after modifying it in parallel


Is my test valid? Am I checking whether direct path occurred correctly?
Connor McDonald
June 19, 2016 - 11:54 am UTC

Yes, I would suspect that is a documentation bug (ie, not updated since reference partitioning was introduced)

Direct path on IOT -12c

Rajeshwaran, Jeyabal, June 29, 2016 - 12:14 pm UTC

that just shows that direct path is not something you do with structures like IOT's, clusters.. things that force "location" on individual rows in a segment.

I was reading through this entire thread and found a response from Tom showing direct path was not possible in IOT (in 11g).

but things changed in 12c - direct path is now possible on an IOT:

demo@ORA12C> create table t(x int primary key, y int) organization index;

Table created.

demo@ORA12C> insert /*+ append */ into t select 1 x, 1 y from dual;

1 row created.

demo@ORA12C> select * from t;
select * from t
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


demo@ORA12C>


Tables which are loaded using direct insert

Banti, October 19, 2019 - 12:58 pm UTC

Hi,

I have a requirement to find all the tables that are being loaded using direct path insert in our database. We want to rebuild those tables so that we can reclaim the space. Is there any way to find out?
Regards,
Sanjay.
Connor McDonald
October 21, 2019 - 6:21 am UTC

I'm not sure I follow your logic. There isn't really a correlation between a direct load and space reclaiming.

Reclaiming space

Asim, August 24, 2022 - 4:28 am UTC

I think he means that if a table is direct-path inserted several times, and a lot of rows are also deleted several times along the way, then there is a lot of empty space, and one may need to shift the directly loaded rows into that space so that the HWM is lowered back and the freed space can be used for further direct or conventional loads. This requirement seems logical.

One more question: why, after a direct load, can one not at least read the table in the same session? Why can read consistency not be provided?
Chris Saxon
August 24, 2022 - 1:06 pm UTC

Direct path inserts load data above the high water mark. This is updated when you commit.

Full table scans only read data up to the high water mark. So allowing reads before the commit (and thus high water mark update) could return wrong results.
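
A tiny illustration, assuming a heap table T with one numeric column:

insert /*+ append */ into t select 1 from dual;

select count(*) from t;   -- raises ORA-12838: the HWM move is not yet committed

commit;

select count(*) from t;   -- succeeds - the new HWM was published at commit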

ORA-12838

Asim, August 24, 2022 - 4:36 am UTC

Is there any other proper way to confirm whether a direct load occurred, other than querying and observing ORA-12838?

And does that mean that after each direct load one cannot do any further SELECT, UPDATE, DELETE, or conventional INSERT on that table until rollback or commit?
Not even another direct load, i.e.
INSERT /*+ APPEND */?
Chris Saxon
August 24, 2022 - 1:10 pm UTC

There are various "direct" stats - you could check for changes in these to see if direct path operations have happened:

create table t (
  c1 int
);

select name, value 
from   v$mystat
join   v$statname
using  ( statistic# )
where  name like '%direct%'
and    value > 0;
  
no rows selected

insert /*+ append */into t 
with rws as (
  select level x from dual
  connect by level <= 1000
)
  select * from rws;
    
select name, value 
from   v$mystat
join   v$statname
using  ( statistic# )
where  name like '%direct%'
and    value > 0;

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
db block gets direct                                                      2
physical writes direct                                                    2
redo size for direct writes                                           16472


And yes, you have to commit/rollback to do any further DML against the table in the session after a direct load.

Drop truncate

Asim, August 24, 2022 - 5:11 am UTC

After a direct load, can we drop or truncate the same table before commit or rollback?
Chris Saxon
August 24, 2022 - 1:12 pm UTC

DDL in Oracle Database commits before the statement, so yes you can drop/truncate after direct loads.

(though I'd question why you'd ever want to load data to immediately drop it!)
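
A short sketch of the effect, again with a hypothetical table T:

insert /*+ append */ into t select 1 from dual;

-- no explicit commit: the TRUNCATE commits the direct load first,
-- then runs as its own transaction
truncate table t;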

What if the DDL fails

Asim, September 12, 2022 - 9:49 am UTC

Does DDL also commit after successful or failed execution?

And another question arises: if DDL commits beforehand, what if the DDL then fails for some reason? The user may remain under the impression that no commit occurred, or may think a rollback occurred.
It is strange to learn that merely initiating a DDL statement issues a commit, regardless of whether the DDL succeeds or fails.


Chris Saxon
September 27, 2022 - 3:41 pm UTC

Yes, DDL also commits after it completes. If it fails the database rolls back whatever work the DDL statement did. Everything before is still committed:

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:7072180788422

Note there is an exception - private temporary tables. These issue NO commits when you create them:

create table t (
  c1 int
);
insert into t values ( 1 );
select count(*) from t;

  COUNT(*)
----------
         1

create private temporary table ora$ptt_t ( c1 int );

rollback;

select count(*) from t;

  COUNT(*)
----------
         0
