
Question and Answer

Tom Kyte

Thanks for the question, Rob.

Asked: December 15, 2002 - 9:55 pm UTC

Answered by: Tom Kyte - Last updated: January 16, 2014 - 5:50 pm UTC

Category: Database - Version: 9.2.0


You Asked

Tom:

I was interested in using the multi-table insert feature in 9i but am stumped by the restriction that the subquery can't contain a sequence. I use a sequence to assign a Primary Key in one of the tables that is the target of the multi-table insert. Can you suggest a workaround? Will calling a user-defined function that returns the nextval of the sequence work in the subquery?

Thanks as always,

Rob

and we said...

ops$tkyte@ORA920> create table t1 ( x int primary key, y int );

Table created.

ops$tkyte@ORA920> create table t2 ( x int primary key, y date );

Table created.

ops$tkyte@ORA920> create sequence s;

Sequence created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert
2 when mod( object_id, 2 ) = 1 then
3 into t1 ( x, y ) values ( SS, object_id )
4 when mod( object_id, 2 ) = 0 then
5 into t2 ( x, y ) values ( SS, created )
6 select s.nextval SS, object_id, created from all_objects
7 /
select s.nextval SS, object_id, created from all_objects
*
ERROR at line 6:
ORA-02287: sequence number not allowed here


ops$tkyte@ORA920> insert
2 when mod( object_id, 2 ) = 1 then
3 into t1 ( x, y ) values ( s.nextval, object_id )
4 when mod( object_id, 2 ) = 0 then
5 into t2 ( x, y ) values ( s.nextval, created )
6 select object_id, created from all_objects
7 /

29371 rows created.


put the nextval right in the values clause.

and you rated our response

  (111 ratings)


Reviews

February 06, 2003 - 11:10 pm UTC

Reviewer: Daryl

What about the situation where the targets of the multi-table insert have a sequence value in common between them? In the following example, I'd like the s_id column value to be identical for each row inserted into both tables:

create table s
    (s_id number not null primary key,
     s_name varchar2(500) not null );
create sequence s_seq;

create table s_sub
    (s_sub_id number not null primary key,
     s_id number not null references s(s_id),
     s_sub_name varchar2(500) not null );
create sequence s_sub_seq;

INSERT ALL
INTO s (s_id, s_name)
    VALUES (id, name)
INTO s_sub (s_sub_id, s_id, s_sub_name)
    VALUES(id2, id, name )
SELECT s_seq.NEXTVAL AS id,
    s_sub_seq.NEXTVAL AS id2,
    object_name AS name
FROM all_objects;

SQL> INSERT ALL
  2     INTO s (s_id, s_name)
  3             VALUES (id, name)
  4     INTO s_sub (s_sub_id, s_id, s_sub_name)
  5             VALUES(id2, id, name )
  6  SELECT s_seq.NEXTVAL AS id,
  7             s_sub_seq.NEXTVAL AS id2,
  8             object_name AS name
  9  FROM all_objects;
                s_sub_seq.NEXTVAL AS id2,
                          *
ERROR at line 7:
ORA-02287: sequence number not allowed here
 

Tom Kyte

Followup  

February 07, 2003 - 7:54 am UTC

Not going to happen -- you'll have to put the subquery into a global temporary table I suppose and then do the multi-table insert.
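For illustration, that global-temporary-table workaround might be sketched like this (an untested sketch; table and column names are taken from the example above, and the staging table name is invented):

```sql
-- One-time setup: a GTT to hold the pre-drawn key values
create global temporary table s_stage
    ( id number, id2 number, name varchar2(500) )
on commit delete rows;

-- Step 1: draw both sequences in an ordinary single-table insert
-- (sequences ARE allowed in the select list here)
insert into s_stage (id, id2, name)
select s_seq.nextval, s_sub_seq.nextval, object_name
  from all_objects;

-- Step 2: the multi-table insert now reads plain columns, so no ORA-02287
insert all
    into s (s_id, s_name) values (id, name)
    into s_sub (s_sub_id, s_id, s_sub_name) values (id2, id, name)
select id, id2, name from s_stage;
```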

February 09, 2003 - 9:13 am UTC

Reviewer: Daryl

I put the SELECT seq.NEXTVAL FROM dual into two separate functions (outside of packages), and it lets me use the functions in the subselect.

Tom Kyte

Followup  

February 09, 2003 - 3:24 pm UTC

careful with that -- i tried that as well.  

ops$tkyte@ORA920> INSERT ALL
  2  INTO s (s_id, s_name)
  3      VALUES (id, name)
  4  INTO s_sub (s_sub_id, s_id, s_sub_name)
  5      VALUES(id2, id, name )
  6  SELECT s_seq_func AS id,
  7      s_sub_seq_func AS id2,
  8      object_name AS name
  9  FROM all_objects;
INSERT ALL
*
ERROR at line 1:
ORA-02291: integrity constraint (OPS$TKYTE.SYS_C003798) violated - parent key not found



Oracle is free to execute and re-execute the functions -- id and id2 may be evaluated UNPREDICTABLY

So I've discovered

February 16, 2003 - 4:41 pm UTC

Reviewer: Daryl

In my case, the functions are being called more often than it seems they should be. Based on the documentation for the multi-table insert, the subquery is built and then fed into the insert portions of the statement. That is, the docs make it sound as if the subquery that provides the source data is materialized before the inserting starts. If that were true, my proposal would have no problems. Oracle's just determined to make me incur the expense of multiple data passes over millions of rows.


Tom Kyte

Followup  

February 16, 2003 - 5:12 pm UTC

we are not determined -- everything in sql is just "set oriented".  you can never count on the order of evaluation in a predicate, how many times "foo" will get called.  The language is 100% declarative, not procedural at all.


what about this tho:

ops$tkyte@ORA920> create table s
  2      (s_id number not null primary key,
  3       s_name varchar2(30) not null );

Table created.

ops$tkyte@ORA920> create sequence s_seq;

Sequence created.

ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table s_sub
  2      (s_sub_id number not null primary key,
  3       s_id number not null references s(s_id),
  4       s_sub_name varchar2(30) not null );

Table created.

ops$tkyte@ORA920> create sequence s_sub_seq;

Sequence created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> INSERT ALL
  2  INTO s (s_id, s_name) VALUES (s_seq.nextval, name)
  3  INTO s_sub (s_sub_id, s_id, s_sub_name) VALUES(s_sub_seq.nextval, s_seq.currval, name )
  4  SELECT object_name AS name FROM all_objects where rownum <= 10;

20 rows created.

ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select *
  2    from s, s_sub
  3   where s.s_id = s_sub.s_id;

      S_ID S_NAME                           S_SUB_ID       S_ID S_SUB_NAME
---------- ------------------------------ ---------- ---------- ------------------------------
         1 /1005bd30_LnkdConstant                  1          1 /1005bd30_LnkdConstant
         2 /10076b23_OraCustomDatumClosur          2          2 /10076b23_OraCustomDatumClosur
         3 /10297c91_SAXAttrList                   3          3 /10297c91_SAXAttrList
         4 /103a2e73_DefaultEditorKitEndP          4          4 /103a2e73_DefaultEditorKitEndP
         5 /1048734f_DefaultFolder                 5          5 /1048734f_DefaultFolder
         6 /10501902_BasicFileChooserUINe          6          6 /10501902_BasicFileChooserUINe
         7 /105072e7_HttpSessionBindingEv          7          7 /105072e7_HttpSessionBindingEv
         8 /106ba0a5_ArrayEnumeration              8          8 /106ba0a5_ArrayEnumeration
         9 /106faabc_BasicTreeUIKeyHandle          9          9 /106faabc_BasicTreeUIKeyHandle
        10 /10744837_ObjectStreamClass2           10         10 /10744837_ObjectStreamClass2




currval and nextval together should do it I think 

Awesome

February 16, 2003 - 6:12 pm UTC

Reviewer: Daryl

The currval/nextval combo finally got it for us.

The behavior I was seeing with the functions was that the parent table's sequence (s_seq) was getting hit twice, so the child table's references to the parent table were off by 1 from what they should've been.



Excellent Thread but I need to do this going from a 9i db to 8i db

March 19, 2003 - 11:03 am UTC

Reviewer: Ken from NY,NY

I have a scenario that is basically identical to the one described above, but the insert has to go over a db link and insert into two tables in an 8i db.

INSERT ALL
INTO content.items@cm_dev (ITEM_ID,Display_Name,Function_ID,Location)
VALUES (content.seq_item_id.nextval,Display_Name,Function_ID,Location)
INTO content.item_meta_info (ITEM_META_INFO_ID, ITEM_ID, Category_ID,Display_Name,Keywords,Start_Date,Can_Be_Called_out,Can_Be_Displayed,Attractor)
VALUES (content.seq_item_meta_info_id.nextval,content.seq_item_id.currval,Display_Name,Keywords,Start_Date,Can_Be_Called_out,Can_Be_Displayed,Attractor)
SELECT Display_Name,Keywords,Start_Date,Can_Be_Called_out,Can_Be_Displayed,Attractor,Function_ID,Location FROM Content_items;

I receive ORA-02021.

Any help would be appreciated.

On a side note, I think this site is invaluable for any Oracle professional, and your book is great.

Tom Kyte

Followup  

March 19, 2003 - 11:30 am UTC

see

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_913a.htm#2080119


You cannot perform a multitable insert into a remote table.
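One hedged workaround (an untested sketch; the staging table names are invented and the column lists are simplified from the statement above) is to split the work in two: a multi-table insert into local staging tables, followed by conventional single-table inserts over the link, which are allowed:

```sql
-- Step 1: multi-table insert into LOCAL staging tables (legal)
insert all
    into items_stage (item_id, location)
        values (content.seq_item_id.nextval, location)
    into item_meta_info_stage (item_id, keywords)
        values (content.seq_item_id.currval, keywords)
select location, keywords from content_items;

-- Step 2: single-table inserts over the db link (also legal)
insert into content.items@cm_dev (item_id, location)
select item_id, location from items_stage;

insert into content.item_meta_info@cm_dev (item_id, keywords)
select item_id, keywords from item_meta_info_stage;
```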


Why duplicating

April 27, 2003 - 7:52 am UTC

Reviewer: Sikandar Hayat from Pindi Bhattian - Pakistan

SCOTT > drop table t1;

Table dropped.

SCOTT > drop table t2
2 ;

Table dropped.

SCOTT > create table t1 as select * from emp where 1=1;

Table created.

SCOTT > create table t2 as select * from emp where 1=1;

Table created.

SCOTT > desc emp
Name
------------------------------------------------------------------------------------------------------
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO

SCOTT > ed
Wrote file afiedt.buf

1 insert all
2 into t1 values(EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM,DEPTNO)
3 into t2 values(EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM,DEPTNO)
4* select * from emp
SCOTT > /

28 rows created.

SCOTT > select count(*) from emp;

COUNT(*)
------------
14

SCOTT > select count(*) from t1;

COUNT(*)
------------
28

SCOTT > select count(*) from t2;

COUNT(*)
------------
28

SCOTT > select empno from t1;
EMPNO
------------
7369
7369
7499
7499
7521
7521
7566
7566
7654
7654
7698
7698
7782
7782
7788
7788
7839
7839
7844
7844
7876
7876
7900
7900
7902
7902
7934
7934

28 rows selected.

SCOTT >

I want to know why the rows are duplicated in both target tables.
How many times was the select executed in the above code?
I was expecting just 14 rows in each table; maybe my expectation was wrong.

Your comments please.

Tom Kyte

Followup  

April 27, 2003 - 8:34 am UTC

SCOTT > create table t1 as select * from emp where 1=1;

Table created.

SCOTT > create table t2 as select * from emp where 1=1;

Table created.

you created the table with 14 rows from emp.

and then

1 insert all
2 into t1 values(EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM,DEPTNO)
3 into t2 values(EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM,DEPTNO)
4* select * from emp
SCOTT > /

you inserted those same 14 rows all over again.

Used 1=1 instead of 1=0

April 28, 2003 - 1:27 am UTC

Reviewer: Sikandar Hayat from Pindi Bhattian - Pakistan

You are right TOM.

How to handle exception in multi-table insert?

June 12, 2003 - 6:57 am UTC

Reviewer: Tony from India

1) How do you handle exceptions in a multitable insert? For example: some records are inserted into 4 different tables, and some exception (e.g. value_error or too_many_rows) is thrown for one of those tables. How do I find the table for which the exception was raised, and the row for which it was raised?

2) Can't I do a multi-table insert without the SELECT statement, as below?

insert all into emp1 (empno,ename,deptno,sal) values (333,'anthony', 10, 50000)
into emp2 (empno,ename,deptno,sal) values (333,'anthony', 10, 50000)

ERROR at line 2:
ORA-00928: missing SELECT keyword


Tom Kyte

Followup  

June 12, 2003 - 9:14 am UTC

1) the same as you would with

insert into t select * from t2;

the constraint name will be given to you. as for the "row", well, that is the same as always. you don't get the row

2) use "select :bv1, :bv2, :bv3, :bv4 from dual" for a single row insert.
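So, for the single-row case in question 2, the statement might look like this (same tables as in the question; dual supplies the one row):

```sql
insert all
    into emp1 (empno, ename, deptno, sal) values (empno, ename, deptno, sal)
    into emp2 (empno, ename, deptno, sal) values (empno, ename, deptno, sal)
select 333 empno, 'anthony' ename, 10 deptno, 50000 sal from dual;
```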

Duplicate exception

February 03, 2004 - 7:26 pm UTC

Reviewer: Hien Tran from Reston, VA USA

Using a multitable insert to load a row from a flat table into up to 30 rows of a single normalized table, is there a way to ignore duplicate-row exceptions? For instance, if 2 rows out of the 30 are duplicates of the other 28, is there a way to multitable insert the 28 unique rows and gracefully ignore the 2 duplicates? I can go back to 30 individual inserts, but it would be very nice .... Thanks in advance.

Tom Kyte

Followup  

February 04, 2004 - 7:16 am UTC

in the select statement, use SQL to remove the duplicates -- eg: distinct, group by, row_number() are techniques to remove the duplicates.
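For example (an untested sketch of the deduplication Tom describes; the table names flat_src and norm_t and columns k, v are hypothetical):

```sql
-- DISTINCT drops exact duplicates before the insert ever sees them
insert all
    into norm_t (k, v) values (k, v)
select distinct k, v
  from flat_src;

-- row_number() handles "same key, keep one row" style duplicates
insert all
    into norm_t (k, v) values (k, v)
select k, v
  from ( select k, v,
                row_number() over (partition by k order by v) rn
           from flat_src )
 where rn = 1;
```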

Multitable insert is slower than individual insert

March 09, 2004 - 4:24 pm UTC

Reviewer: Arun Panchal from New York, NY

Tom,

I thought a multitable insert would be faster than inserts into the individual tables, but to my surprise I found it the other way around. Any idea why?


CREATE TABLE M1 (M1 NUMBER);
CREATE TABLE M2 (M2 NUMBER);

declare
type a_type is table of number INDEX BY BINARY_INTEGER;
a_arr a_type;

samay number;
begin
for i in 1..10000 loop
a_arr(i) := i;
end loop;

dbms_output.put_line('Record Count : ' || A_ARR.COUNT);
samay := dbms_utility.get_time;

forall i in 1..a_arr.count
insert
when 1 = 1 then
into m1 values (a_arr(i))
when 2 = 2 then
into m2 values (a_arr(i))
select 1 from dual;

dbms_output.put_line('Multitable Inserts : ' || (dbms_utility.get_time-samay));
rollback;

samay := dbms_utility.get_time;
forall i in 1..a_arr.count
insert into m1(m1) values (a_arr(i));

forall i in 1..a_arr.count
insert into m2(m2) values (a_arr(i));

dbms_output.put_line('Individual Inserts : ' || (dbms_utility.get_time-samay));

rollback;

exception
when others then
dbms_output.put_line( SQLCODE || '||' || SQLERRM );
end;
/

And I got following DBMS output

Record Count : 10000
Multitable Inserts : 233
Individual Inserts : 53

Thanks,
Arun


Tom Kyte

Followup  

March 09, 2004 - 10:33 pm UTC

you added 10,000 "select * from duals" in there that did not exist in the single table inserts. 10,000 select * from duals add up.

this is not really a valid test of this particular feature. It is designed to take the rows from a big query and split them out into 2 or more tables. it is not really intended to be used as you did above -- you make it do MORE work this way by adding the dual query in there 10,000 times.



Then how to insert data into multiple tables at the same time?

March 11, 2004 - 11:41 am UTC

Reviewer: Arun Panchal from New York, NY

Agreed. Then what is the best method to insert data into more than one table in a single statement?

We have a single source and we need to insert data into two different tables (some fields are loaded into table1 and some into table2). We are using bulk binding on Oracle 9.0.2.

Can we insert data into two tables using a single forall statement? Or do we need to write an individual forall statement for each table?

Thanks,
Arun


Tom Kyte

Followup  

March 11, 2004 - 2:08 pm UTC

you did it. two forall loops.

merge logging

April 20, 2004 - 4:15 pm UTC

Reviewer: j.

tom, we have to "synchronize" the contents of tables residing in different schemas. we would like to use merge statements:

merge into TrgTab T
using (select ... from SrcTab) S on (...)
when matched then update ...
when not matched then insert ...

but we have to *LOG* the records that got processed into another, additional table.

is this something that can be achieved with *ONE* multitable DML operation?

Tom Kyte

Followup  

April 21, 2004 - 12:00 pm UTC

triggers can do that (and will have to, if you want a single statement).

Else, you'll need to two-step it.
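A trigger-based sketch of the logging (untested; it assumes TrgTab has an id column and invents a trg_log table):

```sql
create table trg_log
    ( id number, action varchar2(6), logged_at date );

create or replace trigger trg_tab_log
after insert or update on TrgTab
for each row
begin
    -- every row the MERGE inserts or updates is also recorded here,
    -- keeping the whole operation a single statement
    insert into trg_log (id, action, logged_at)
    values ( :new.id,
             case when inserting then 'INSERT' else 'UPDATE' end,
             sysdate );
end;
/
```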

the "I think" doubt ...

May 27, 2004 - 2:40 pm UTC

Reviewer: Gabe

In a prior reply you provided a test case regarding a multi-table insert into 2 tables having a FK between them ...

INSERT ALL
INTO s (s_id, s_name) VALUES (s_seq.nextval, name)
INTO s_sub (s_sub_id, s_id, s_sub_name) VALUES(s_sub_seq.nextval, s_seq.currval, name )
SELECT object_name AS name FROM all_objects where rownum <= 10;

and said ... <quote>currval and nextval together should do it I think</quote>

I'm not really saying you were incorrect ... just that one could do without the currval ... although maybe a bit counter-intuitive ...

INSERT ALL
INTO s (s_id, s_name) VALUES (s_seq.nextval, name)
INTO s_sub (s_sub_id, s_id, s_sub_name) VALUES(s_sub_seq.nextval, s_seq.nextval, name )
SELECT object_name AS name FROM all_objects where rownum <= 10;

does the same thing (see the s_seq.nextval in both) because of ...

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/sql_elements6a.htm#4148

<quote>For a multi-table insert, the reference to NEXTVAL must appear in the VALUES clause, and the sequence is updated once for each row returned by the subquery, even though NEXTVAL may be referenced in multiple branches of the multi-table insert</quote>

Anyway, my question is ... is the insert into s going to always happen before the insert into s_sub? ... does it matter? (is it an atomic operation and hence the FK constraint is not going to be violated !?!?!)

Thanks




Tom Kyte

Followup  

May 27, 2004 - 8:37 pm UTC

atomic statement -- doesn't matter the order.

Multitable insert with FK

June 03, 2004 - 8:30 pm UTC

Reviewer: witten_j from NY-NY

However, I am currently debugging an INSERT ALL with several FKs across 5 different tables.
If the select returns around 50 rows there is no problem,
but as soon as I try 200 rows and up, the insert fails with an FK error, as if the s_sub row is inserted before its parent row.
I would paste my example but I am still trying to see first if there is something simple I am missing.

Hi

August 18, 2004 - 1:45 pm UTC

Reviewer: A reader

A very basic question on the "multi-table insert" feature of Oracle 9i.

Say I have a situation where my select statement from a table generates 1 million rows, and I have to insert the results into two different tables.
If I insert serially, each insert may take 10 minutes, so we end up waiting 20 minutes in total.

If I use "insert all", does Oracle insert the data into both tables in 10 minutes, or will it still take 20 minutes?

Any advantage of this feature in the real world data warehouse environment?

Thanks,

Tom Kyte

Followup  

August 18, 2004 - 7:01 pm UTC

say it takes 10 minutes to run the query.

in a multi-table insert, the query runs once.
in a single table insert -- you'll run the same query twice.

so, you save the time it takes to run the query -- the insertion rate will be "the same", you are inserting into the tables.

say it takes 5 minutes just to do the insert (into each table)

multi-table insert = 10+5+5
single table insert = 10+5+10+5



Hi,

September 16, 2004 - 4:47 pm UTC

Reviewer: A reader

Just trying to understand more about INSERT ALL feature.

I have a situation where I select from a fact table and have to insert the data into 3 different AGG tables, but each agg table has its own 'GROUP BY' clause.

Can I use the INSERT ALL concept to achieve this?

Right now I have 3 different insert statements reading the same 200M-row fact table.

Thanks,

Tom Kyte

Followup  

September 16, 2004 - 7:53 pm UTC

well, if they each have their own level of aggregation - are you really interested in ALL (seems each table is mutually exclusive?)

ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA9IR2> drop table job_sal;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table job_sal as select job, sum(sal) sal from emp where 1=0 group by job;
 
Table created.
 
ops$tkyte@ORA9IR2> drop table dept_sal;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table dept_sal as select deptno, sum(sal) sal from emp where 1=0 group by deptno;
 
Table created.
 
ops$tkyte@ORA9IR2> drop table job_dept_sal;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table job_dept_sal as select job, deptno, sum(sal) sal from emp where 1=0 group by job, deptno;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select job, deptno, sum(sal) sal,
  2         grouping(job),
  3             grouping(deptno),
  4             bin_to_num( grouping(job), grouping(deptno) ) b2n
  5    from emp
  6   group by grouping sets((job),(deptno),(job,deptno))
  7  /
 
JOB           DEPTNO        SAL GROUPING(JOB) GROUPING(DEPTNO)        B2N
--------- ---------- ---------- ------------- ---------------- ----------
CLERK             10       1300             0                0          0
MANAGER           10       2450             0                0          0
PRESIDENT         10       5000             0                0          0
CLERK             20       1900             0                0          0
ANALYST           20       6000             0                0          0
MANAGER           20       2975             0                0          0
CLERK             30        950             0                0          0
MANAGER           30       2850             0                0          0
SALESMAN          30       5600             0                0          0
                  10       8750             1                0          2
                  20      10875             1                0          2
                  30       9400             1                0          2
ANALYST                    6000             0                1          1
CLERK                      4150             0                1          1
MANAGER                    8275             0                1          1
PRESIDENT                  5000             0                1          1
SALESMAN                   5600             0                1          1
 
17 rows selected.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert
  2    when (b2n=0) then
  3      into job_dept_sal ( job, deptno, sal ) values ( job, deptno, sal )
  4    when (b2n=2) then
  5      into dept_sal (deptno,sal) values( deptno,sal)
  6    when (b2n=1) then
  7      into job_sal (job,sal) values ( job, sal )
  8  select job, deptno, sum(sal) sal,
  9         grouping(job),
 10             grouping(deptno),
 11             bin_to_num( grouping(job), grouping(deptno) ) b2n
 12    from emp
 13   group by grouping sets((job),(deptno),(job,deptno))
 14  /
 
17 rows created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from job_dept_sal;
 
JOB           DEPTNO        SAL
--------- ---------- ----------
CLERK             10       1300
MANAGER           10       2450
PRESIDENT         10       5000
CLERK             20       1900
ANALYST           20       6000
MANAGER           20       2975
CLERK             30        950
MANAGER           30       2850
SALESMAN          30       5600
 
9 rows selected.
 
ops$tkyte@ORA9IR2> select * from dept_sal;
 
    DEPTNO        SAL
---------- ----------
        10       8750
        20      10875
        30       9400
 
ops$tkyte@ORA9IR2> select * from job_sal;
 
JOB              SAL
--------- ----------
ANALYST         6000
CLERK           4150
MANAGER         8275
PRESIDENT       5000
SALESMAN        5600
 
 

September 17, 2004 - 11:19 am UTC

Reviewer: A reader

Wow!!!!!!! that's wonderful. I am grateful to you for sharing your knowledge.
Grouping is something which I haven't used that often. Amazing feature.

Thanks a lot.

fk violation for 1000 rows

December 07, 2004 - 6:31 pm UTC

Reviewer: Rima parikh from MN USA

I took the example you gave, copied below. It worked as is. Then I just changed the number of rows to insert from 10 to 1000. Getting a foreign key violation :

create table s
      (s_id number not null primary key,
       s_name varchar2(30) not null );

create sequence s_seq;
create sequence s_sub_seq;

create table s_sub
  (s_sub_id number not null primary key,
   s_id number not null references s(s_id),
   s_sub_name varchar2(30) not null );

SQL> INSERT ALL
  2      INTO s (s_id, s_name) VALUES (s_seq.nextval, name)
  3      INTO s_sub (s_sub_id, s_id, s_sub_name) VALUES(s_sub_seq.nextval, 
  4  s_seq.currval, name )
  5      SELECT object_name AS name FROM all_objects where rownum <= 1000;
INSERT ALL
*
ERROR at line 1:
ORA-02291: integrity constraint (POOL.SYS_C00103986) violated - parent key not
found

What am I doing wrong? 

Tom Kyte

Followup  

December 07, 2004 - 8:59 pm UTC

confirmed, guess it is equally unsafe in this case. I'll have to research this further to see if it is a bug or not - for me it took lots more rows and a quick trace showed it "should have worked".

thanks for the followup -- i'll have to play more with this one.

Reply to "fk violation for 1000 rows" by Rima parikh

December 18, 2004 - 1:28 am UTC

Reviewer: Oren Nakdimon from Israel

We had the same problem and found that it was a known bug. Here is an excerpt from Note 265826.1 on Metalink:

...
CAUSE

This issue is an open bug:
<bug:2891576> MULTI TABLE INSERT (INSERT ALL) FAILS WITH ORA-2291
Based on the above:
"The order of the tables into which Oracle inserts data is not determinate. Therefore, before issuing a multitable insert statement, you should defer any constraints and disable any triggers that depend on a particular table order for the multitable insert operation."

FIX

WORKAROUND:
1. Disable the foreign key when run such MultiPath Inserts.
2. Use DEFERRED CONSTRAINTS so the checkout happens only at Commit time.
About Deferred Constraints check Metalink <Note:73647.1> "Deferred Constraints Example"
...

So "insert all" is not atomic after all (at least de facto).

Tom Kyte

Followup  

December 18, 2004 - 8:53 am UTC

thanks for the followup, interesting thread -- it is an in progress bug -- the goal is to make it atomic, but for now the workaround is to manually defer the constraints.
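Applied to the test case above, the deferred-constraint workaround might look like this (an untested sketch; the child table is recreated with a deferrable FK, so the parent/child check happens only at commit, after both branches of the insert have completed):

```sql
create table s_sub
    (s_sub_id number not null primary key,
     s_id number not null
          references s(s_id) deferrable initially deferred,
     s_sub_name varchar2(30) not null );

insert all
    into s (s_id, s_name) values (s_seq.nextval, name)
    into s_sub (s_sub_id, s_id, s_sub_name)
        values (s_sub_seq.nextval, s_seq.currval, name)
select object_name as name from all_objects where rownum <= 1000;

commit;   -- the FK is checked only here
```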

Multi-table insert with user-defined functions

December 18, 2004 - 8:47 am UTC

Reviewer: Oren Nakdimon from Israel

Tom, in your followup from 18-Aug-04 in this thread you wrote:

<quote>
in a multi-table insert, the query runs once.
in a single table insert -- you'll run the same query twice.
</quote>

I've got a problem that makes me suspect that in a multi-table insert the query may run more than once.
Consider the following:

create or replace package p
as
procedure init;
function f return number;
function get_counter return number;
end;
/

create or replace package body p
as
-- number of times the function f was executed (since init)
g_counter number := 0;

procedure init
is
begin
g_counter := 0;
end;

function get_counter return number
is
begin
return g_counter;
end;

function f return number
is
begin
g_counter := g_counter+1;
return g_counter;
end;

end;
/

create table t1 (id number,x number);
create table t2 (id number,x number);

I have 38 records in ALL_USERS:

ops$orenn@TIRAMISU> select count(*) from all_users;

COUNT(*)
----------
38

I'll reset my counter, and fill t1 and t2 using an "insert all" with a query from all_users that uses the function p.f:

ops$orenn@TIRAMISU> exec p.init

PL/SQL procedure successfully completed.

ops$orenn@TIRAMISU> insert all
2 into t1 (id,x) values (id,value)
3 into t2 (id,x) values (id,value)
4 select user_id id,p.f value from all_users;

76 rows created.

I expected that the query would be executed once, meaning that p.f would be executed 38 times. But checking the counter shows it has been executed 76 times:

ops$orenn@TIRAMISU> select p.get_counter from dual;

GET_COUNTER
-----------
76

More specifically, p.f was executed once for each record inserted into t1 and once for each record inserted into t2:

ops$orenn@TIRAMISU> select t1.id,t1.x x1,t2.x x2
2 from t1,t2
3 where t1.id=t2.id
4 order by t1.x;

ID X1 X2
---------- ---------- ----------
0 1 2
5 3 4
19 5 6
11 7 8
21 9 10
...
83 73 74
84 75 76

38 rows selected.

I find it strange. What I find even stranger is that by adding "rownum" to the select list (without even referencing it in the "into" clauses) it works as expected:

ops$orenn@TIRAMISU> rollback;

Rollback complete.

ops$orenn@TIRAMISU> exec p.init

PL/SQL procedure successfully completed.

ops$orenn@TIRAMISU> insert all
2 into t1 (id,x) values (id,value)
3 into t2 (id,x) values (id,value)
4 select rownum,user_id id,p.f value from all_users;

76 rows created.

ops$orenn@TIRAMISU> select p.get_counter from dual;

GET_COUNTER
-----------
38

ops$orenn@TIRAMISU> select t1.id,t1.x x1,t2.x x2
2 from t1,t2
3 where t1.id=t2.id
4 order by t1.x;

ID X1 X2
---------- ---------- ----------
0 1 1
5 2 2
19 3 3
11 4 4
21 5 5
...
83 37 37
84 38 38

38 rows selected.

This inconsistent behavior troubles me because my "real life" p.f is kind of a sequence manager, and its return value must be the same for all the "into" clauses.

Tom, am I missing something here, or do you find it weird as well?

Many thanks,
Oren.

Tom Kyte

Followup  

December 18, 2004 - 9:02 am UTC

functions can be run over and over and over. I can show you similar queries where the function runs once per query or once per row per query or many numbers in between or numbers greatly exceeding the number of rows.


you are ascribing procedural "things" to SQL, which is inherently non-procedural.


In your case, 'value' is probably getting replaced with p.f so your statement is really:

ops$orenn@TIRAMISU> insert all
  2     into t1 (id,x) values (id,p.f)
  3     into t2 (id,x) values (id,p.f)
  4     select user_id id,p.f value from all_users;



 

December 18, 2004 - 9:17 am UTC

Reviewer: Oren Nakdimon from Israel

Tom, is there a way to make sure the function is called only once? In my tests I succeeded in doing it by adding "rownum" to the query, but I don't know if it's a deterministic solution.

Thanks.

Tom Kyte

Followup  

December 18, 2004 - 9:25 am UTC

I've never seen it *not* work, I use it to tune with, but I will not commit to calling it deterministic :)

Using it to tune a query is one thing, if they change the behavior later -- so be it. All I've done is suffered a performance hit, the data is OK.

Using it to predict the order of operations and rely on that and they change the behavior, I'm in a world of hurt.

bulk collect then forall inserts is what I would recommend for you if you have an explicit order of operations that is necessary.
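Using Oren's tables and package, that recommendation might be sketched as follows (untested; the point is that p.f is called exactly once per source row, in PL/SQL, before either insert runs):

```sql
declare
    type num_tab is table of number index by binary_integer;
    l_id  num_tab;
    l_val num_tab;
begin
    -- fetch once; p.f runs here and nowhere else
    select user_id, p.f
      bulk collect into l_id, l_val
      from all_users;

    -- both tables receive the SAME computed values
    forall i in 1 .. l_id.count
        insert into t1 (id, x) values ( l_id(i), l_val(i) );

    forall i in 1 .. l_id.count
        insert into t2 (id, x) values ( l_id(i), l_val(i) );
end;
/
```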

What about sys_guid?

December 19, 2004 - 4:21 am UTC

Reviewer: Mike Friedman from Shanghai

All of these answers are in reference to Sequences.

I'm beginning to lean in the direction of obsoleting sequences and relying 100% on sys_guid(). Would this be one of the advantages?

Tom Kyte

Followup  

December 19, 2004 - 11:31 am UTC

i wouldn't use a sys_guid() unless I had a truly really good technical reason.

they are 16 bytes
they are raw
they will probably therefore be 32 bytes
they are long
they look unusual to end users (therefore they are frightening to them)


would "what" be one of the advantages?

Extract dimensions and facts from one single flat file

April 29, 2005 - 2:08 pm UTC

Reviewer: A reader

Hi Tom,

I have one big flat file which contains both fact data and many dimension data as well, which is highly denormalized.

What I need to do is to extract the data from this big file to both fact table and dimension tables as fast as possible.

Do you know what is the best way for this kind of problem?

The flat file looks like:
ORDERNO, LINEITEMNO, PRODNAME, PRODDESC, CUSTOMERNO, CUSTOMERNAME, TRANSTYPE, TRANSTYPEDESC, ....


It contains 10M rows and probably 100 products, 1,000 customers, and 10 transtypes. Am I able to read this file (as an external table) once and generate all the fact/dimension rows?



Thanks,


Tom Kyte

Followup  

April 29, 2005 - 6:34 pm UTC

what are you going to use for keys?

Remapping FK Values Using Multi-Table Inserts and Sequences

May 17, 2005 - 5:44 am UTC

Reviewer: John Gilmore from Worcester, United Kingdom

Hi Tom,

We're doing a data migration and have to insert several million rows into a pair of existing tables which are related by a foreign key. For example, invoice and invoice_lines. Note that the target tables are already populated with several million rows of data.

The primary key of the parent source table needs to be remapped and will be derived from a sequence. The question is what is the most efficient way to populate the new FK value in invoice_lines which references the new PK value in invoices.

I expect that it might be most efficient to use a multi-table insert for the PK table which populates a new lookup table (probably an index organised table) containing the old and new PK values called, say, invoice_pks. This new table can then be joined to the source invoice_lines table so as to get the new FK value when inserting into the target invoice_lines table.

For example, we would have a multi-table insert to populate the parent table and the lookup table as follows.

insert all
into invoice_pks (
old_inv_pk,
new_inv_pk)
values (
old_inv_num,
inv_seq.nextval)
into new_invoices (
inv_num,
cust_num,
inv_date)
values (
inv_seq.nextval,
old_cust_num,
old_inv_date)
select
inv_num old_inv_num,
cust_num old_cust_num,
inv_date old_inv_date
from old_invoices;

We then use the lookup table to populate the child table with the new FK value as follows.

insert into new_invoice_lines (
inv_num,
line_num,
sku,
qty,
price)
select
ip.new_inv_pk,
il.line_num,
il.sku,
il.qty,
il.price
from old_invoice_lines il, invoice_pks ip
where il.inv_num = ip.old_inv_pk;

What are your views on this approach? Can you see a more efficient way to achieve this?

Tom Kyte

Followup  

May 17, 2005 - 9:08 am UTC

yes, that would work and be the safe way to do it.

need some sequence values repeated, and some new

August 04, 2005 - 2:03 pm UTC

Reviewer: Darin from USA

Hey Tom,

Beautiful explanations above.
I have a scenario which seems to be (maybe) the only one not touched upon above.

I have a very denormalized table 'flat'. I want to select from it and insert into 2 separate tables (t1 - parent; t2 - child, FKed to t1); however, a row in flat can contain multiple entries for t2.
I am using sequences to generate PK IDs for both tables, and for t2 I need a new ID for each entry, but referencing the same ID from t1.

Here is a simplified set up:
lackey> drop table flat;
drop table t2;
drop table t1;
drop sequence s1;
drop sequence s2;

Table dropped.
Table dropped.
Table dropped.
Sequence dropped.
Sequence dropped.

create table flat(name varchar2(2), --product name
bp1 number, --price break point 1
pd1 number, --price discount 1
bp2 number, --price break point 2
pd2 number, --price discount 2
bp3 number, --price break point 3
pd3 number); --price discount3
Table created.

insert into flat(name, bp1, pd1, bp2, pd2, bp3, pd3)
values('aa', 10, 100, 20, 200, 30, 300);
insert into flat(name, bp1, pd1, bp2, pd2)
values('bb', 40, 400, 50, 500);
insert into flat(name, bp1, pd1)
values('cc', 60, 600);
insert into flat(name)
values('dd');
1 row created.
1 row created.
1 row created.
1 row created.

select * from flat;
NA BP1 PD1 BP2 PD2 BP3 PD3
-- --- --- --- --- --- ---
aa 10 100 20 200 30 300
bb 40 400 50 500
cc 60 600
dd


create table t1(t1_id number primary key, name varchar2(2));
Table created.
create sequence s1;
Sequence created.

create table t2(t2_id number primary key, bp number, pd number, t1_id references t1(t1_id));
Table created.
create sequence s2;
Sequence created.

-*-*-*-*-
attempting to use nextval/currval to get the appropriate IDs gives a unique constraint violation:
-*-*-*-*-

INSERT ALL
WHEN (1 = 1) then
INTO t1 (t1_id, name) values(s1.nextval, name)
WHEN (bp1 is not null and pd1 is not null) then
INTO t2 (t2_id, bp, pd, t1_id) values(s2.nextval,bp1,pd1,s1.currval)
WHEN (bp2 is not null and pd2 is not null) then
INTO t2 (t2_id, bp, pd, t1_id) values(s2.nextval,bp2,pd2,s1.currval)
WHEN (bp3 is not null and pd3 is not null) then
INTO t2 (t2_id, bp, pd, t1_id) values(s2.nextval,bp3,pd3,s1.currval)
SELECT name, bp1, pd1, bp2,pd2,bp3,pd3 from flat;

2 3 4 5 6 7 8 9 10 INSERT ALL
*
ERROR at line 1:
ORA-00001: unique constraint (MEDECAT.SYS_C00102007) violated

-*-*-*-*-
I assume this is because using s2.nextval for each of the 3 inserts into t2 gives me the same value from the sequence.
However, I can put the s2.nextval for t2_id in a function vd_id and call it in the select clause, and it performs as I want...
(also, I could use either s1.currval or s1.nextval in all the INTO statements for the same ID, yes?)

[all sequences and tables dropped and recreated]
-*-*-*-*-

create or replace function vd_id return number is
id number := 0;
begin
select s2.nextval into id from dual;
return id;
end;
/
Function created.

INSERT ALL
WHEN (1 = 1) then
INTO t1 (t1_id, name)
values(s1.nextval, name)
WHEN (bp1 is not null and pd1 is not null) then
INTO t2 (t2_id, bp, pd, t1_id)
values(id2,bp1,pd1,s1.currval)
WHEN (bp2 is not null and pd2 is not null) then
INTO t2 (t2_id, bp, pd, t1_id)
values(id2,bp2,pd2,s1.currval)
WHEN (bp3 is not null and pd3 is not null) then
INTO t2 (t2_id, bp, pd, t1_id)
values(id2,bp3,pd3,s1.currval)
SELECT name, bp1, pd1, bp2, pd2, bp3, pd3, vd_id as id2 from flat;

10 rows created.

select * from t1;

T1_ID NA
----- --
1 aa
2 bb
3 cc
4 dd

select * from t2 order by t1_id;

T2_ID BP PD T1_ID
----- -- --- -----
1 10 100 1
2 20 200 1
3 30 300 1
4 40 400 2
5 50 500 2
6 60 600 3

-*-*-*-*-
Is this the only way to achieve this? Are there any declarative/procedural concerns using this approach?

Thanks in advance.

Tom Kyte

Followup  

August 04, 2005 - 2:25 pm UTC

can you model t1/t2 like this:


t1( x int primary key, ..... );
t2( x references t1, y int, ...., primary key(x,y) );

instead? else -- nextval/currval return constants, and calling plsql from sql is not predictable long term -- there is no reason we couldn't just decide to call that function ONCE (i'm hoping that happens, that vd_id gets correctly optimized - thus changing your behaviour; as far as I am concerned, the above behavior "isn't correct", it should not do that)
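For reference, here is a minimal sketch of the composite-key model Tom suggests, reusing Darin's flat table. The names and branch conditions are illustrative, not Darin's exact code; the point is that the child is keyed by (parent id, line number), so the second sequence disappears entirely:

```sql
-- Sketch of the suggested model (illustrative names): the child's primary key
-- is (x, y), so only the parent needs a sequence-generated surrogate.
create table t1 ( x int primary key, name varchar2(2) );
create table t2 ( x references t1, y int, bp number, pd number,
                  primary key (x, y) );
create sequence s1;

-- s1.nextval is evaluated once per source row; every branch of the insert
-- reuses that value via currval, and y distinguishes the price breaks.
insert all
  when (1 = 1) then
    into t1 (x, name) values (s1.nextval, name)
  when (bp1 is not null and pd1 is not null) then
    into t2 (x, y, bp, pd) values (s1.currval, 1, bp1, pd1)
  when (bp2 is not null and pd2 is not null) then
    into t2 (x, y, bp, pd) values (s1.currval, 2, bp2, pd2)
  when (bp3 is not null and pd3 is not null) then
    into t2 (x, y, bp, pd) values (s1.currval, 3, bp3, pd3)
select name, bp1, pd1, bp2, pd2, bp3, pd3 from flat;
```

Because the t1 branch fires for every source row, currval in the t2 branches always refers to the parent just inserted for that row - no PL/SQL function call is needed.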

Question about using nextval and currval

August 17, 2005 - 8:40 pm UTC

Reviewer: Juan Velez from Bellevue, WA, USA

In this scenario (taken from one of Tom's responses)

ops$tkyte@ORA920> create table s
2 (s_id number not null primary key,
3 s_name varchar2(30) not null );

Table created.

ops$tkyte@ORA920> create sequence s_seq;

Sequence created.

ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table s_sub
2 (s_sub_id number not null primary key,
3 s_id number not null references s(s_id),
4 s_sub_name varchar2(30) not null );

Table created.

ops$tkyte@ORA920> create sequence s_sub_seq;

Sequence created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> INSERT ALL
2 INTO s (s_id, s_name) VALUES (s_seq.nextval, name)
3 INTO s_sub (s_sub_id, s_id, s_sub_name) VALUES(s_sub_seq.nextval,
s_seq.currval, name )
4 SELECT object_name AS name FROM all_objects where rownum <= 10;


Which one of s_seq.nextval and s_seq.currval is executed first? It seems that when both nextval and currval are present, currval is "ALWAYS?" executed last?

Tom Kyte

Followup  

August 18, 2005 - 3:31 pm UTC

currval and nextval would be the same in this context.

Question about using nextval and currval

August 18, 2005 - 5:55 pm UTC

Reviewer: Juan Velez from Bellevue, WA, USA

Does "currval and nextval are the same in this context" mean that nextval is ALWAYS executed first? I am really asking about the order in which they are evaluated.

Tom Kyte

Followup  

August 18, 2005 - 7:06 pm UTC

nextval would be "first", they are the same in that statement.

Error?

August 18, 2005 - 6:09 pm UTC

Reviewer: Bob B from Albany, NY

Using the sequences and tables above...

I've found that the following doesn't work:
INSERT
WHEN RN = 1 THEN INTO s (s_id, s_name) VALUES (s_seq.nextval, VAL)
WHEN 1 = 1 THEN INTO s_sub (s_sub_id, s_id, s_sub_name) VALUES(s_sub_seq.nextval, s_seq.currval, RN )
WITH T AS (
SELECT MOD( ROWNUM, 3 ) VAL
FROM DUAL
CONNECT BY LEVEL <= 15
)
SELECT
T.VAL,
ROW_NUMBER() OVER (
PARTITION BY T.VAL
ORDER BY 1
) RN
FROM T
ORDER BY T.VAL, RN

However, the next one does work:
INSERT
WHEN RN = 1 THEN INTO s (s_id, s_name) VALUES (s_seq.nextval, VAL)
WHEN 1 = 1 THEN INTO s_sub (s_sub_id, s_id, s_sub_name) VALUES(s_sub_seq.nextval, s_seq.currval - RN + 1, RN )
WITH T AS (
SELECT MOD( ROWNUM, 3 ) VAL
FROM DUAL
CONNECT BY LEVEL <= 15
)
SELECT
T.VAL,
ROW_NUMBER() OVER (
PARTITION BY T.VAL
ORDER BY 1
) RN
FROM T
ORDER BY T.VAL, RN

I'm not brave enough to trust that this behavior won't change (plus the need for an order by to do an insert would baffle anyone).

Hypothetically, let's say I have a 10 million row external table. Each row goes into exactly one of 10 different tables and the distribution is statistically even. Each target table has a sequence. Based on the above observations, a multi-table insert that uses 1 sequence per table insert will do 10 million sequence calls on each of 10 sequences (100 million sequence calls) instead of 10 million sequence calls. This is consistent with the declarative nature of SQL; however, it seems more like a bug (or feature) that could be fixed.

Multi-table insert and sequences

August 19, 2005 - 12:59 pm UTC

Reviewer: Jay

One important point to be noted with multi-table insert and sequences is that even though a sequence is used in some of the branches of the multi-table insert, it is still incremented for every row returned by the subquery. This could result in big gaps in the sequence values inserted into the table.

This is mentioned here:
</code> http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/sql_elements6a.htm#4148 <code>

"For a multi-table insert, the reference to NEXTVAL must appear in the VALUES clause, and the sequence is updated once for each row returned by the subquery, even though NEXTVAL may be referenced in multiple branches of the multi-table insert."

Put it another way:
For a multi-table insert, the reference to NEXTVAL must appear in the VALUES clause, and the sequence is updated once for each row returned by the subquery, even though NEXTVAL may be referenced in only some of the branches of the multi-table insert.

Is there any way to avoid the big gaps in the sequence values inserted into the table (if the sequence is used in some of the branches of the multi-table insert) ?

Tom Kyte

Followup  

August 20, 2005 - 4:32 pm UTC

big gaps are fine, you have gaps regardless (no is the answer). Why care if they are 1 or 100000000000 away?

September 14, 2005 - 7:16 am UTC

Reviewer: Ravi

Why does this cause an error?

drop table ty;
create table ty (a number(10) primary key);
insert all
when (b>0 and not exists (Select null
from ty where b=a)) then
into ty values(b)
(Select 1 b from user_objects)
/

A
*
ERROR at line 1:
ORA-00001: unique constraint (Z001698.SYS_C00423297) violated

It's as if the NOT EXISTS does not work?

Thanks


Tom Kyte

Followup  

September 14, 2005 - 8:54 am UTC

read consistency, the table TY is empty at the beginning of the insert and will remain that way from the perspective of the read component.

Else, the statement would be entirely "non-deterministic", same data - two different databases and different results come out.
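Given that read consistency means the statement cannot see rows it is itself inserting, one workaround (a sketch, using Ravi's TY table) is to do the duplicate elimination in the subquery instead, so each distinct value reaches the insert at most once:

```sql
-- The insert sees TY as it was when the statement began, so the duplicate
-- filtering must happen in the subquery itself: deduplicate the source rows,
-- and screen out only values that already existed before the statement started.
insert into ty
select b
  from (select distinct 1 b from user_objects) src
 where not exists (select null from ty where ty.a = src.b);
```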

September 14, 2005 - 7:37 am UTC

Reviewer: A reader

Regarding the previous posting - I guess it's because the NOT EXISTS runs against the data as it existed BEFORE the insert BEGAN?

Because the following fails as well:
insert into ty
Select 1 b from user_objects
where not exists (select null
from ty where a != 1)
/

Further on sequences on multi-table inserts

September 30, 2005 - 3:55 am UTC

Reviewer: Scott Wesley from Perth, Western Australia

G'day Tom,

Further on sequences in multi-table inserts: in regard to Jay's comment about the evaluation of NEXTVAL for each row returned by the query, this causes problems in the parent-child situation, which I think has been touched on briefly here, but only on a one-to-one basis, not one-to-many.

sw>create table swp (p number);

Table created.

sw>create table swc (c number,p number);

Table created.

sw>create sequence swps;

Sequence created.

sw>create sequence swcs;

Sequence created.

sw>insert all
2 when mod(rownum+2,3)=0 then -- insert into p every third row, starting now
3 into swp values (swps.nextval)
4 when 1=1 then -- insert a child each time
5 into swc values(swcs.nextval,swps.currval)
6 select * from all_objects where rownum<5
7 ;

6 rows created.

sw>
sw>select * from swp;

P
----------
1
4

2 rows selected.

sw>select * from swc;

C P
---------- ----------
1 1
2 2
3 3
4 4

4 rows selected.

swps.nextval is being evaluated for every row, leaving gaps in the insert into p, which is fine, but it causes a problem on the insert into the child because currval is incremented as well.
If there were a foreign key between these tables, it would be violated. The result set for swc should be
C P
---------- ----------
1 1
2 1
3 1
4 4

We've been playing around looking for a solution to this issue, do you have any thoughts?

Tom Kyte

Followup  

September 30, 2005 - 9:34 am UTC

this is not safe - sql is inherently not a procedural language. in this case, the assignment of these surrogates won't be safely done this way.

Insert

October 14, 2005 - 5:15 pm UTC

Reviewer: Anne from MN, USA

Hi Tom,

In a cursor loop, I need to do something like a nested insert :

insert into table t1 (t1_ID, t2_id)
values (t1_id
,(SELECT t2_id
FROM TABLE t2
where ......
if row not found in table t2 then
insert into t2 (t2_id)
values(t2_id_seq.nextval)
returning t2_id_seq.currval to be inserted into t1.

Could you please advise on how to do this in one statement. Can I use Merge ? Thanks for the help!

Tom Kyte

Followup  

October 14, 2005 - 6:06 pm UTC

no, i don't see merge applying here.


You'd more or less need to query t2 first - no data found, do the insert

and then insert what you retrieved or what you just inserted in to t1.
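One way to sketch that "query first, insert on no_data_found" approach in PL/SQL - the lookup condition and the :t1_id value are placeholders for the parts Anne elided, and under concurrent sessions this also needs a unique constraint on the lookup column plus a dup_val_on_index handler:

```sql
-- Sketch only: "where some_col = :some_val" stands in for the elided
-- lookup condition, and :t1_id for the value supplied in Anne's loop.
declare
   l_t2_id  t2.t2_id%type;
begin
   begin
      select t2_id
        into l_t2_id
        from t2
       where some_col = :some_val;          -- hypothetical lookup
   exception
      when no_data_found then
         insert into t2 ( t2_id )
         values ( t2_id_seq.nextval )
         returning t2_id into l_t2_id;      -- capture the newly generated key
   end;

   insert into t1 ( t1_id, t2_id )
   values ( :t1_id, l_t2_id );
end;
/
```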

Parallel and Append

December 19, 2005 - 11:34 am UTC

Reviewer: naresh from cyprus

HI Tom,

Is there any way to use APPEND and PARALLEL modes with this feature?

Thanks,
NAresh

Tom Kyte

Followup  

December 19, 2005 - 12:20 pm UTC

yes.

nologging

January 27, 2006 - 2:53 pm UTC

Reviewer: A reader from KY

Tom,

Can we use the "nologging" option in a multi-table insert, as in the single-table "insert into t nologging select ...."?
I couldn't find anything in the documentation either. Please advise.

Tom Kyte

Followup  

January 28, 2006 - 12:47 pm UTC

yes, it works with merge too (for the insert part)

ops$tkyte@ORA10GR1> create table t1 ( x int );

Table created.

ops$tkyte@ORA10GR1> create table t2 ( x int );

Table created.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> insert  /*+ APPEND */
  2    when (dummy='X') then into t1 (x) values (y)
  3    when (dummy='Y') then into t2 (x) values (y)
  4  select dummy, 1 y from dual;

1 row created.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select * from t1;
select * from t1
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


ops$tkyte@ORA10GR1> select * from t2;
select * from t2
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel



the fact we cannot read the table after the insert - shows that direct path was "done".  But, if any of the tables "preclude" direct path - none of them will use direct path (eg: add a foreign key from T2 to some other table T3 - neither T1 nor T2 will be direct pathed) 

January 30, 2006 - 3:31 pm UTC

Reviewer: A reader from KY

Thanks for the great explanation.

Nextval issue on Multi-table insert

February 02, 2006 - 9:54 am UTC

Reviewer: CK

Tom,
I'm getting a 'unique constraint' error when using a sequence in a 'multi-table' insert (actually inserting into one table). See the test script below.

What is the issue and is there a workaround for it?

Thanks

TEST@TEST> DROP TABLE T;
DROP TABLE T
*
ERROR at line 1:
ORA-00942: table or view does not exist


TEST@TEST>
TEST@TEST> CREATE TABLE T (ID NUMBER PRIMARY KEY, OBJECT_NAME VARCHAR2(50));

Table created.

TEST@TEST>
TEST@TEST> DROP SEQUENCE T_SEQ;
DROP SEQUENCE T_SEQ
*
ERROR at line 1:
ORA-02289: sequence does not exist


TEST@TEST>
TEST@TEST> CREATE SEQUENCE T_SEQ;

Sequence created.

TEST@TEST>
TEST@TEST> INSERT ALL
2 INTO T (ID, OBJECT_NAME)
3 VALUES (T_SEQ.NEXTVAL, OBJECT_NAME)
4 INTO T (ID, OBJECT_NAME)
5 VALUES (T_SEQ.NEXTVAL, SUBOBJECT_NAME)
6 SELECT OBJECT_NAME, SUBOBJECT_NAME
7 FROM ALL_OBJECTS
8 WHERE ROWNUM < 20;
INSERT ALL
*
ERROR at line 1:
ORA-00001: unique constraint (TEST.SYS_C0082206) violated


TEST@TEST>

Tom Kyte

Followup  

February 02, 2006 - 12:09 pm UTC

because the nextval is evaluated (as documented) once per row from the rowsource.

so nextval is exactly the same for both "inserts"

Multi Table insert

February 02, 2006 - 12:35 pm UTC

Reviewer: CK

Tom,
Thanks for the quick response. Is there another way of achieving the same functionality with the multi-table statement, or will I just have to use 2 separate insert statements?

Thanks

Tom Kyte

Followup  

February 02, 2006 - 2:22 pm UTC

create the sequence to increment by twos and add one to one of them would be one way - but makes the sequence creation pretty specific to this particular sql statement.

Using function instead of sequence nextval?

February 02, 2006 - 2:05 pm UTC

Reviewer: CK

Tom,
Could I use a function instead of 'T_SEQ.NEXTVAL' (see below)? Are there any issues with this method?

TEST@TEST> DROP TABLE T;

Table dropped.

TEST@TEST>
TEST@TEST> CREATE TABLE T (ID NUMBER PRIMARY KEY, OBJECT_NAME VARCHAR2(50));

Table created.

TEST@TEST>
TEST@TEST> DROP SEQUENCE T_SEQ;

Sequence dropped.

TEST@TEST>
TEST@TEST> CREATE SEQUENCE T_SEQ;

Sequence created.

TEST@TEST>
TEST@TEST> INSERT ALL
2 INTO T (ID, OBJECT_NAME)
3 VALUES (T_SEQ.NEXTVAL, OBJECT_NAME)
4 INTO T (ID, OBJECT_NAME)
5 VALUES (T_SEQ.NEXTVAL, SUBOBJECT_NAME)
6 SELECT OBJECT_NAME, SUBOBJECT_NAME
7 FROM ALL_OBJECTS
8 WHERE ROWNUM < 20;
INSERT ALL
*
ERROR at line 1:
ORA-00001: unique constraint (TEST.SYS_C0082611) violated


TEST@TEST>
TEST@TEST>
TEST@TEST> CREATE OR REPLACE FUNCTION T_NEXT_SEQ
2 RETURN NUMBER IS
3 v_t_next_seq NUMBER;
4 BEGIN
5 SELECT T_SEQ.NEXTVAL
6 INTO v_t_next_seq
7 FROM DUAL;
8
9 RETURN V_T_NEXT_SEQ;
10 END;
11 /

Function created.

TEST@TEST>
TEST@TEST> INSERT ALL
2 INTO T (ID, OBJECT_NAME)
3 VALUES (T_NEXT_SEQ, OBJECT_NAME)
4 INTO T (ID, OBJECT_NAME)
5 VALUES (T_NEXT_SEQ, SUBOBJECT_NAME)
6 SELECT OBJECT_NAME, SUBOBJECT_NAME
7 FROM ALL_OBJECTS
8 WHERE ROWNUM < 20;

38 rows created.


TEST@TEST> SELECT * FROM T ORDER BY ID;

ID OBJECT_NAME
---------- --------------------------------------------------
20 ICOL$
21
22 I_USER1
23
24 CON$
25
26 UNDO$
27
28 C_COBJ#
29
30 I_OBJ#
31
32 PROXY_ROLE_DATA$
33
34 I_IND1
35
36 I_CDEF2
37
38 I_PROXY_ROLE_DATA$_1
39
40 FILE$
41
42 UET$
43
44 I_FILE#_BLOCK#
45
46 I_FILE1
47
48 I_CON1
49
50 I_OBJ3
51
52 I_TS#
53
54 I_CDEF4
55
56 IND$
57

38 rows selected.

TEST@TEST>



Tom Kyte

Followup  

February 02, 2006 - 2:35 pm UTC

sure, if you would like it to be really slow.

At that point, I'd rather do two sql statements.

possibly this?

February 02, 2006 - 4:48 pm UTC

Reviewer: Tyler from Edmonton

Not entirely sure this is the approach I'd take, but it would perform better than the custom function: increment the sequence by 2 and use sequence.nextval and sequence.nextval + 1.


DROP TABLE T;

CREATE TABLE T (ID NUMBER PRIMARY KEY, OBJECT_NAME VARCHAR2(50));

DROP SEQUENCE T_SEQ;

CREATE SEQUENCE T_SEQ START WITH 1 INCREMENT BY 2;


INSERT ALL
INTO T (ID, OBJECT_NAME)
VALUES (T_SEQ.NEXTVAL, OBJECT_NAME)
INTO T (ID, OBJECT_NAME)
VALUES (T_SEQ.NEXTVAL + 1, SUBOBJECT_NAME)
SELECT OBJECT_NAME, SUBOBJECT_NAME
FROM ALL_OBJECTS
WHERE ROWNUM < 20;


Tom Kyte

Followup  

February 03, 2006 - 1:38 pm UTC

that's what I said above :)

....
create the sequence to increment by twos and add one to one of them would be one
way - but makes the sequence creation pretty specific to this particular sql
statement. ....


Thanks

February 03, 2006 - 6:15 am UTC

Reviewer: CK

Tom/Tyler,
Thanks for that. I think I'll go with Tyler's suggestion.

Idea Thief

February 03, 2006 - 2:23 pm UTC

Reviewer: Tyler from Edmonton

I suppose I should have read above a little more, missed Tom's reply earlier, and then ended up offering the same solution. :(

I do have 1 question for you Tom, why do you say
......
but makes the sequence creation pretty specific to this particular sql
statement. ....

I love this site and visit often. I agree with you that a sequence isn't, and never will be, gap free - so how would this particular solution limit or hinder an application?

Thanks!


Tom Kyte

Followup  

February 03, 2006 - 5:26 pm UTC

because in general, that sequence likely doesn't need to go two by two - we made it two by two specifically for that sql statement (hopefully someone doesn't wonder why 3 years from now and "fix" the sequence on us!)

Single row INSERT ALL within LOOP

April 05, 2006 - 6:03 pm UTC

Reviewer: Duke Ganote from the western edge of Clermont County, on the eastern edge of metro Cincinnati, Ohio USA

Am I correct in understanding from previously
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6915127515933#10361262450881 <code>
that we can't use INSERT ALL within a LOOP such as:

FOR rec IN ( select 1 as X from dual ) LOOP
INSERT ALL INTO X (x) VALUES (rec.x)
INTO Y (x) VALUES (rec.x);
END LOOP;

? Thank you.

Tom Kyte

Followup  

April 06, 2006 - 9:53 am UTC

when you tried it, what happened?

INSERT ALL within loop

April 06, 2006 - 11:07 am UTC

Reviewer: Duke Ganote from lovely western Clermont County, Ohio USA where a hawk sometimes circles the trees...

1 BEGIN
2 FOR rec IN ( select '1' as X from dual ) LOOP
3 INSERT ALL INTO FIELD (FIELD) VALUES (rec.x)
4 INTO FIELD (FIELD) VALUES (rec.x);
5 END LOOP;
6* END;
XPLT9173:GR2\dganote> /
INTO FIELD (FIELD) VALUES (rec.x);
*
ERROR at line 4:
ORA-06550: line 4, column 49:
PL/SQL: ORA-00928: missing SELECT keyword
ORA-06550: line 3, column 5:
PL/SQL: SQL Statement ignored

Tom Kyte

Followup  

April 07, 2006 - 3:58 pm UTC

there you go :)


of course, you would never ever do that, it would be

insert all into ....
into ....
select '1' as x from dual;


no loop. and if you needed the loop, you'd be selecting from dual still in the insert all - since the insert all needs a select.

INSERT ALL within loop

April 07, 2006 - 4:09 pm UTC

Reviewer: Duke Ganote from the warm and sunny western edge of Clermont County, Ohio USA

You mean like this?

SQL> ed
Wrote file afiedt.buf

  1  BEGIN
  2    FOR rec IN ( select '1' as X from dual ) LOOP
  3      execute immediate '
  4      INSERT ALL INTO FIELD (FIELD) VALUES (:bv)
  5                 INTO FIELD (FIELD) VALUES (:bv)
  6      select :bv from dual' using rec.x, rec.x, rec.x;
  7    END LOOP;
  8* END;
SQL> /

PL/SQL procedure successfully completed.

SQL> select * from field;

F
-
1
1
 

Tom Kyte

Followup  

April 08, 2006 - 9:38 am UTC

that would be 'one way' - but if you ask me the wrong way


I don't see the point of the LOOP in the first place.

INSERT ALL within LOOP using record

April 07, 2006 - 4:53 pm UTC

Reviewer: Duke Ganote from the western, already getting wet & soggy edge of Clermont County, Ohio USA

I guess we can use DUAL as a pure DUMMY :) and just use 2 bind variables:

BEGIN
FOR rec IN ( select '1' as X from dual ) LOOP
execute immediate '
INSERT ALL INTO FIELD (FIELD) VALUES (:bv)
INTO FIELD (FIELD) VALUES (:bv)
select dummy from dual'
using rec.x, rec.x;
END LOOP;
END;

Where is the doc ?

June 26, 2006 - 10:11 am UTC

Reviewer: Robert from CT

Tom, this (MTI) does not seem to be in the 9i SQL Ref (A96624-01).
Do you know where it is?
thanks

oops

June 26, 2006 - 10:14 am UTC

Reviewer: Robert from CT

oops never mind Tom,
I searched on "multi-table" instead of "multi_table"
Thanks

Multi-Row Insert with Parallel hint

August 01, 2006 - 11:16 am UTC

Reviewer: A Reader from Boston,MA

Hi Tom,

If I am not mistaken, to give the parallel hint you have to specify it in terms of a table alias;

however, in a multi-table insert statement, if I give an alias for the target tables I get ORA-00928.

Is there a way to specify a parallel hint for multi-table inserts? I have a large table to build using a multi-table insert, and any parallelism, I think, will greatly aid performance.

thanks in advance

create table t1 ( x int );
create table t2 ( x int );

insert /*+ APPEND parallel(a1,5) parallel (a2,5) */
when (dummy='X') then into t1 a1 (x) values (y)
when (dummy='Y') then into t2 a2 (x) values (y)
select dummy, 1 y from dual;


ORA-00928: missing SELECT keyword



Tom Kyte

Followup  

August 01, 2006 - 7:02 pm UTC

the parallel hint would go down into the select.

you would use APPEND on the insert part.



August 02, 2006 - 5:50 am UTC

Reviewer: Alberto Dell'Era from Milan, Italy

Tom, you scanned too fast - the reader was asking how to activate parallelism insert-side :)

The answer is that you can use the table names directly, it's not necessary to give an alias to the table (10.2.0.1):

SQL> alter session enable parallel dml;

Session altered.

SQL> explain plan for
  2  insert  /*+ APPEND */
  3  when (dummy='X') then into t1 (x) values (y)
  4  when (dummy='Y') then into t2 (x) values (y)
  5  select dummy, 1 y from dual;

Explained.

SQL> select * from table (dbms_xplan.display);
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | INSERT STATEMENT    |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  MULTI-TABLE INSERT |      |       |       |            |          |
|   2 |   DIRECT LOAD INTO  | T1   |       |       |            |          |
|   3 |   DIRECT LOAD INTO  | T2   |       |       |            |          |
|   4 |    TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

11 rows selected.

SQL> explain plan for
  2  insert  /*+ APPEND parallel(t1,4) parallel(t2,4) */
  3  when (dummy='X') then into t1 (x) values (y)
  4  when (dummy='Y') then into t2 (x) values (y)
  5  select dummy, 1 y from dual;

Explained.

SQL> select * from table (dbms_xplan.display);
------------------------------------------------
| Id  | Operation                 | Name     |  (cut)
------------------------------------------------
|   0 | INSERT STATEMENT          |          |
|   1 |  PX COORDINATOR           |          |
|   2 |   PX SEND QC (RANDOM)     | :TQ10001 |
|   3 |    MULTI-TABLE INSERT     |          |
|   4 |     DIRECT LOAD INTO      | T1       |
|   5 |     DIRECT LOAD INTO      | T2       |
|   6 |      BUFFER SORT          |          |
|   7 |       PX RECEIVE          |          |
|   8 |        PX SEND ROUND-ROBIN| :TQ10000 |
|   9 |         TABLE ACCESS FULL | DUAL     |
------------------------------------------------

-------------------------------
 |    TQ  |IN-OUT| PQ Distrib |
-------------------------------
 |        |      |            |
 |        |      |            |
 |  Q1,01 | P->S | QC (RAND)  |
 |  Q1,01 | PCWP |            |
 |  Q1,01 |      |            |
 |  Q1,01 |      |            |
 |  Q1,01 | PCWC |            |
 |  Q1,01 | PCWP |            |
 |        | S->P | RND-ROBIN  |
 |        |      |            |
------------------------------- 

Tom Kyte

Followup  

August 02, 2006 - 11:41 am UTC

thanks :)

alias and multitable INSERT

August 16, 2006 - 4:05 am UTC

Reviewer: abz

9i Release 2, 9.2.0.5

Can't I use table alias in multi table INSERT?

INSERT ALL
INTO t1 a (a.c1) values (c.cc1)
INTO t2 b (b.c1) values (c.cc2)
SELECT c.cc1, c.cc2
FROM t3 c

doesnt work and gives ORA-00928.

Accoding to SQL reference manual this is correct syntax.


Tom Kyte

Followup  

August 16, 2006 - 8:33 am UTC

It is entirely unambiguous which columns you are referring to. The correlation names are not even remotely necessary.

According to the SQL reference manual:

</code> http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#i2095116 <code>

it is explicitly prohibited to use correlation names:

<quote>
Table aliases are not defined by the select list of the subquery. Therefore, they are not visible in the clauses dependent on the select list.
</quote>

still not clear

August 16, 2006 - 8:57 am UTC

Reviewer: abz

1)
OK, it is not ambiguous, but in a single-table insert I have used aliases lots of times and it worked fine; it is not ambiguous there either, yet Oracle allows it there.

2)
Does that paragraph say that aliases cannot be used in the SELECT clause? What if I don't use aliases in the subquery but do use aliases in the INSERT,
like INSERT INTO t1 a (....) VALUES (...) SELECT ... FROM t2?



Also, from the 9i Rel. 2 documentation:

</code> http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_913a.htm#2133161

there is nothing mentioned about it.

Also the syntax diagram in the 9i rel 2 documentation
shows that it is allowed to use aliases. See the following
links for this.

http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_913a.htm#2079820


and for the into clause

http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_913a.htm#2079870 <code>


So, in view of the above, can I file it as a documentation bug on Metalink?


Tom Kyte

Followup  

August 16, 2006 - 9:02 am UTC

1) it is documented that the correlation names are not available. don't know what else to say?

2) that is correct.


Are you and I reading the same stuff??? I clicked on your link:

</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_913a.htm#2133161 <code>

and it is sitting right there, big and "noted" (in a note)

<quote>
Note:

Table aliases are not defined by the select list of the subquery. Therefore, they are not visible in the clauses dependent on the select list.
</quote>

direct cut and paste.....

I would not bother filing a bug - it is there, big and bold.

multi table insert

August 16, 2006 - 9:17 am UTC

Reviewer: abz

Sorry I missed the note.

But if (2) is correct, then what's wrong with this query? It was because this query gives an error that I sent you the other two links showing the multi-table insert syntax. You can see in this query that aliases are only used in the INSERT, not in the SELECT ... FROM.


  1  INSERT /*+ APPEND */
  2  ALL
  3  INTO customer_base a
  4  (
  5  a.unit_code,
  6  a.customer_no,
  7  a.customer_name,
  8  a.address1,
  9  a.meterno,
 10  a.customer_type,
 11  a.customer_active,
 12  a.due_date,
 13  a.last_reading_date,
 14  a.last_index,
 15  a.date_connection,
 16  a.osc_billing_month,
 17  a.osc_amount
 18  )
 19  VALUES
 20  (
 21  unit_code,
 22  customer_no,
 23  customer_name,
 24  address1,
 25  meterno,
 26  customer_type,
 27  customer_active,
 28  due_date,
 29  last_reading_date,
 30  last_index,
 31  data_connection,
 32  osc_billing_month,
 33  osc_amount
 34  )
 35  INTO customer_history b
 36  (
 37  b.unit_code,
 38  b.customer_no,
 39  b.meter_no,
 40  b.bill_amt,
 41  b.amt_pay_within_duedate,
 42  b.amt_pay_after_duedate,
 43  b.bill_month,
 44  b.customer_active,
 45  b.arrears,
 46  b.due_date,
 47  b.last_closing_date,
 48  b.arrears_month
 49  )
 50  VALUES
 51  (
 52  unit_code,
 53  customer_no,
 54  meterno,
 55  amount,
 56  osc_amount,
 57  osc_amount,
 58  osc_billing_month,
 59  customer_active,
 60  arrears,
 61  due_date,
 62  net_bal_dt,
 63  osmth
 64  )
 65  SELECT
 66   CASE WHEN TRIM(unit_cd) IN ('11', '12', '13', '14') THEN 'A'
 67    WHEN TRIM(unit_cd) IN ('21', '22', '23')  THEN 'B'
 68    WHEN TRIM(unit_cd) IN ('31', '32', '33')  THEN 'Q'
 69   END unit_code,
 70   con9 customer_no,
 71   nam customer_name,
 72   ad1||ad2||ad3||ad4||ad5 address1,
 73   mno meterno,
 74   CASE WHEN TRIM(unit_cd) IN ('11', '21', '31')  THEN 'D'
 75    WHEN TRIM(unit_cd) IN ('13', '23', '33')  THEN 'I'
 76    WHEN TRIM(unit_cd) IN ('12', '22', '32', '14') THEN 'C'
 77   END customer_type,
 78   '9' customer_active,
 79   TO_DATE(dis_dt, 'DDMMYYYY') due_date,
 80   TO_DATE(dis_dt, 'DDMMYYYY') last_reading_date,
 81   '0' last_index,
 82   ADD_MONTHS(TO_DATE(dis_dt, 'DDMMYYYY'), -1 * TO_NUMBER(TRIM(osmth))) data_connection,
 83   bmth osc_billing_month,
 84   TO_NUMBER(TRIM(net_bal)) osc_amount,
 85  -----------------------------------------------
 86   TRIM(amount) amount,
 87   TO_DATE(TRIM(net_bal_dt), 'DDMMYYYY') net_bal_dt,
 88   0 arrears,
 89   osmth
 90  FROM
 91*  ardata
SQL> /
INTO customer_base a
                   *
ERROR at line 3:
ORA-00928: missing SELECT keyword


SQL>  

Tom Kyte

Followup  

August 16, 2006 - 10:17 am UTC

no correlation name, drop it, lose it, get rid of it. It never worked there on a multi-table insert.
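To illustrate, here is a sketch of the corrected shape (column lists abbreviated; the full lists from the original statement would carry over unchanged): the statement parses once the "a" and "b" correlation names are removed from the INTO and VALUES clauses.

```sql
-- Sketch only: same statement, no table aliases anywhere in the
-- INTO or VALUES clauses of the multi-table insert.
INSERT /*+ APPEND */ ALL
  INTO customer_base    (unit_code, customer_no, customer_name /* , ... */)
  VALUES                (unit_code, customer_no, customer_name /* , ... */)
  INTO customer_history (unit_code, customer_no, meter_no      /* , ... */)
  VALUES                (unit_code, customer_no, meterno       /* , ... */)
SELECT /* same SELECT list as in the original statement, unchanged */
       unit_code, customer_no, customer_name, meterno /* , ... */
  FROM ardata;
```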

BUG?

August 16, 2006 - 1:22 pm UTC

Reviewer: abz

So, can this be filed as a documentation bug?

Tom Kyte

Followup  

August 16, 2006 - 3:51 pm UTC

It says "no correlation names"

You may feel free to file as a documentation bug if you would like, it would be a clarification at most.

software bug?

August 16, 2006 - 1:22 pm UTC

Reviewer: abz

or should it be filed as software bug?

rows created

August 22, 2006 - 7:26 am UTC

Reviewer: abz

I used a multitable INSERT to insert rows into 3 different tables from a subquery. It executed successfully, and the message "8448 rows created" appeared.

My question is: what is this 8448? Is it the total number of rows created across all the tables? Shouldn't the message be like this:

xxx rows created in table aaa
xxx rows created in table bbb
....


Tom Kyte

Followup  

August 27, 2006 - 2:14 pm UTC

no, a DML statement tells you simply the work performed, you get back a single number.

so whats this number then

August 28, 2006 - 2:34 am UTC

Reviewer: abz

Ok, but what number is this?
For example, if 10 rows are created in t1, 10 in t2, and 10 in t3,
will this number be 10 or 30?


Tom Kyte

Followup  

August 28, 2006 - 9:08 am UTC

ops$tkyte%ORA10GR2> insert ALL
  2  into t1 ( x ) values ( rn )
  3  into t2 ( x ) values ( rn )
  4  into t3 ( x ) values ( rn )
  5  select rownum rn from all_users where rownum <= 10;
 
30 rows created.
 

INSERT ALL

August 29, 2006 - 4:50 pm UTC

Reviewer: A reader

Hi Tom,
Since the APPEND hint does not work with the VALUES clause, can INSERT ALL be used along with the APPEND hint, i.e., without using a VALUES clause?
Regards,

Tom Kyte

Followup  

August 29, 2006 - 5:39 pm UTC

the multi-table insert is an "insert as select" statement (bulk).  The VALUES clause in there is sort of a "red herring" -- it follows the insert-as-select form:

ops$tkyte%ORA10GR2> insert /*+ APPEND */ ALL
  2  into t2  (username, user_id, created) values ( username, user_id, created )
  3  into t3  (username, user_id, created) values ( username, user_id, created )
  4  select * from t1;

60 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from t2;
select * from t2
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


ops$tkyte%ORA10GR2> select * from t3;
select * from t3
              *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


that shows the append hint was used. 

little confusing message

August 30, 2006 - 2:10 am UTC

Reviewer: abz

ORA-12838 is a little confusing, as it says PARALLEL, but we haven't specified anything parallel in the statement.


Tom Kyte

Followup  

August 30, 2006 - 8:13 am UTC

agreed, but it is what it is. It should really say "direct path" instead of parallel.
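For completeness, a small sketch (not from the original answer): ORA-12838 clears as soon as the transaction that did the direct-path load ends, so committing makes the tables readable again.

```sql
-- Direct-path load into two tables, as in the example above.
INSERT /*+ APPEND */ ALL
  INTO t2 (username, user_id, created) VALUES (username, user_id, created)
  INTO t3 (username, user_id, created) VALUES (username, user_id, created)
SELECT * FROM t1;

SELECT COUNT(*) FROM t2;   -- raises ORA-12838 in the same transaction

COMMIT;

SELECT COUNT(*) FROM t2;   -- succeeds after the commit
```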

INSERT ALL

August 30, 2006 - 7:03 am UTC

Reviewer: A reader

Hi Tom,
Thanks a lot!
Regards,

Really good stuff

September 01, 2006 - 2:59 pm UTC

Reviewer: Elaine H from Atlanta, GA

This is great stuff. I have a question though. What if the columns are being populated by multiple selects? Can you refer to them "AS new_col1"?

that is:
how would you code the following -- here in regular english, not necessarily SQL?

insert into table A (
col1,col2,col3,col4,col5,col6)
values
constant_col1,
new_col2,
new_col3

where new_col2 is derived from select x from table b where some condition.
where new_col3 is derived from select y from table c where
some condition
where new_col4 and new_col5 are derived from select c1, c2 from table d where some condition
and finally
new_col6 is new_col3 divided by new_col5

Is it possible to code this in the insert, or do you do the first insert of the data and then go back and keep updating the table from the other queries?

Tom Kyte

Followup  

September 01, 2006 - 3:35 pm UTC

you just refer to the aliases:

ops$tkyte%ORA10GR2> insert all
  2  into t1(x,y,z) values (my_x,my_y,my_z)
  3  into t2(x,y,z) values (my_x,my_y,my_z)
  4  select object_id/data_object_id my_x,
  5         substr(object_name,1,15)||substr(object_type,1,15) my_y,
  6             created-object_id+25 my_z
  7    from all_objects
  8   where rownum <= 5;

10 rows created.

is that what you mean? 

RE: really good stuff

September 01, 2006 - 4:54 pm UTC

Reviewer: Duke Ganote from Batavia Twp, Clermont Cty, Ohio USA

Is Elaine referring to scalar subqueries in the INSERT, perhaps something like this?

INSERT INTO t
SELECT tbl_cnt, col_cnt, tbl_cnt+col_cnt AS OBJ_CNT
from (
SELECT
(select count(*) from user_tables) as TBL_CNT
, (select count(*) from user_tab_columns) as COL_CNT
FROM dual
)
/


Explicit vs Implicit %ROWCOUNT: rows AFFECTED vs FETCHED

December 13, 2006 - 10:51 am UTC

Reviewer: Duke Ganote from http://maps.google.com/maps?q=39.056690,-84.245720+(Near+here)

Just tripped over this when working with multi-table insert
http://download-east.oracle.com/docs/cd/B14117_01/appdev.101/b10807/06_ora.htm

Overview of Explicit Cursor Attributes
...
%ROWCOUNT Attribute: How Many Rows Fetched So Far?
When its cursor or cursor variable is opened, %ROWCOUNT is zeroed. Before the first fetch, %ROWCOUNT yields 0. Thereafter, it yields the number of rows fetched so far. The number is incremented if the last fetch returned a row. The following example uses %ROWCOUNT to test if more than ten rows have been fetched:

DECLARE
  CURSOR c1 IS SELECT last_name FROM employees WHERE ROWNUM < 11;
  name employees.last_name%TYPE;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 INTO name;
    EXIT WHEN c1%NOTFOUND;
    dbms_output.put_line(c1%ROWCOUNT || '. ' || name);
    IF c1%ROWCOUNT = 5 THEN
      dbms_output.put_line('--- Fetched 5th record ---');
    END IF;
  END LOOP;
  CLOSE c1;
END;
/

Overview of Implicit Cursor Attributes
...
%ROWCOUNT Attribute: How Many Rows Affected So Far?
%ROWCOUNT yields the number of rows affected by an INSERT, UPDATE, or DELETE statement, or returned by a SELECT INTO statement. %ROWCOUNT yields 0 if an INSERT, UPDATE, or DELETE statement affected no rows, or a SELECT INTO statement returned no rows. In the following example, you use %ROWCOUNT to take action if more than ten rows have been deleted:

DELETE FROM emp WHERE ...
IF SQL%ROWCOUNT > 10 THEN -- more than 10 rows were deleted
...
END IF;

******************************************
This is important for multi-table inserts:

create table t as select * from user_objects where 1=0;

Table created.

create table u as select * from user_objects where 1=0;

Table created.

BEGIN
  INSERT ALL
    when 0 < 1
    then into t
    when mod(object_id,1) = 0
    then into u
  select *
    from user_objects
   where rownum = 1;
  DBMS_OUTPUT.PUT_LINE('# of rows = '||SQL%ROWCOUNT);
END;
/
# of rows = 2

PL/SQL procedure successfully completed.


Multi Table Insert for Different Group By columns

March 31, 2008 - 5:02 pm UTC

Reviewer: Harikrishnan from Philadelphia, PA USA

Tom,

When I use the INSERT ALL statement to insert data into multiple tables with "n" GROUP BY clauses using the GROUPING SETS method, am I reading the base tables "n" times to aggregate and populate the different tables?
I have a scenario in which I need to aggregate the data using the key columns below.

INSERT ALL
WHEN (condition) THEN
INTO TABLE_A
.....
WHEN (condition) THEN
INTO TABLE_B
SELECT col1, col2, col3, sum(col3)
from fact
group by grouping sets((col1, col2), (col2, col3));

When I read through some documentation, it says the GROUPING SETS method reads the base tables as many times as there are different GROUP BY clauses. If I use CUBE/ROLLUP, I would avoid reading multiple times. CUBE might help me resolve the issue, but it calculates all the different combinations, which are not necessary for me.

Is there any way to resolve this issue?
Thanks
Tom Kyte

Followup  

March 31, 2008 - 5:44 pm UTC

... am I reading the base tables "n" times to aggregate and populate different tables? ..

no the query is executed once and the results are inserted into 0, 1 or more tables.


you misread the documentation I believe - grouping sets is more efficient than group by cube for example (group by cube creates ALL aggregates, grouping sets - just the ones you need). Grouping sets was added as an enhancement to make a better group by cube when you didn't need EVERY combo.
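A small illustration of the difference (a sketch using the familiar SCOTT.EMP table): GROUPING SETS computes only the combinations you list, while CUBE computes every subset of its arguments.

```sql
-- Two groupings only: (deptno) and (job)
SELECT deptno, job, SUM(sal)
  FROM emp
 GROUP BY GROUPING SETS ((deptno), (job));

-- Four groupings: (), (deptno), (job), (deptno, job)
SELECT deptno, job, SUM(sal)
  FROM emp
 GROUP BY CUBE (deptno, job);
```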

Multi Table Insert for Different Group By columns

March 31, 2008 - 6:13 pm UTC

Reviewer: Harikrishnan from Philadelphia, PA USA

Tom,

Thanks a lot for your quick response. When I run the individual INSERT statements with GROUP BY clauses, one statement completed in 3 minutes and 50 seconds and the other in 6 minutes and 40 seconds.
I ran the INSERT ALL with GROUPING SETS and it took almost 20 minutes to complete. I also checked the explain plan, and it shows the query executing in parallel, as the base table is defined in PARALLEL mode.
As I am scanning the base table only once (instead of twice with separate INSERT statements), I thought the query would run faster.

Can you please help me understand why the INSERT ALL runs longer than the individual INSERT statements?

Thanks
Tom Kyte

Followup  

April 01, 2008 - 7:17 am UTC

one would need to see a plan or something - else one would be "guessing"

Connection

April 01, 2008 - 8:59 am UTC

Reviewer: Alex Borges from Cabreuva, BRAZIL

I have Oracle 10g installed and want to connect with BDE 5.20. When I enter the SERVERNAME, this error appears:
Alias error ORA-03114 not connected to ORACLE...: corporeRM
Could you help me?
Tom Kyte

Followup  

April 01, 2008 - 9:41 am UTC

no clue what "bde 5.20" is. Although it looks like it might be a really old borland product, part of dBase maybe.

What version is the client "bde 5.20" using, it cannot be using 10g client (if you are trying to use 10g client, that is the problem).

rows created

April 01, 2008 - 10:23 am UTC

Reviewer: Srikanth Sathya from WI

Tom

In an earlier posting under this article, you mentioned that INSERT ALL will only return the number of rows affected by the statement, even though multiple tables were inserted into. Is there a workaround to get the number of rows inserted into each table?

Thanks
Srikanth
Tom Kyte

Followup  

April 01, 2008 - 12:01 pm UTC

no, the DML operation returns the number of rows processed by the DML statement.

Multi table Insert with Different Group by Columns

April 03, 2008 - 8:35 pm UTC

Reviewer: A reader

Tom,

Hereby I attached the explain plans that I got from two different queries one with GROUPING SETS and other with CUBE method.

GROUPING SETS Query and Plan:
SELECT /*+ NOPARALLEL(A) */ col1, col2, col3, col4,col5, col6,
            SUM (col7) col7, SUM (col8) col8, SUM (col9) col9,
            SUM (col10) col10, SUM (col11) col11, SUM (col12) col12,
            GROUPING(col2) gid_col2,
            GROUPING(col3) gid_col3
FROM t1 a, 
     t2 p
WHERE 
     a.col13 = p.col13
GROUP BY GROUPING SETS ((col1, col4, col5, col6, col2),
                        (col1, col4, col5, col6, col3))
                        
----------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                             |    20M|  3331M|  6876  (28)| 00:00:52 |
|   1 |  TEMP TABLE TRANSFORMATION |                             |       |       |            |          |
|   2 |   LOAD AS SELECT           |                             |       |       |            |          |
|*  3 |    HASH JOIN               |                             |    20M|  1032M|  6868  (28)| 00:00:52 |
|   4 |     TABLE ACCESS FULL      | T2             |    10 |   140 |     2   (0)| 00:00:01 |
|   5 |     PARTITION RANGE ALL    |                             |    20M|   759M|  6459  (24)| 00:00:49 |
|   6 |      TABLE ACCESS FULL     | T1                   |    20M|   759M|  6459  (24)| 00:00:49 |
|   7 |   LOAD AS SELECT           |                             |       |       |            |          |
|   8 |    HASH GROUP BY           |                             |     1 |   132 |     3  (34)| 00:00:01 |
|   9 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D673A_17034638 |     1 |   132 |     2   (0)| 00:00:01 |
|  10 |   LOAD AS SELECT           |                             |       |       |            |          |
|  11 |    HASH GROUP BY           |                             |     1 |   138 |     3  (34)| 00:00:01 |
|  12 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D673A_17034638 |     1 |   138 |     2   (0)| 00:00:01 |
|  13 |   VIEW                     |                             |     1 |   171 |     2   (0)| 00:00:01 |
|  14 |    TABLE ACCESS FULL       | SYS_TEMP_0FD9D673B_17034638 |     1 |   158 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------
                      


CUBE Query and Plan:
                        
SELECT /*+ NOPARALLEL(A) */ col1, col2, col3, col4,col5, col6,
            SUM (col7) col7, SUM (col8) col8, SUM (col9) col9,
            SUM (col10) col10, SUM (col11) col11, SUM (col12) col12,
            GROUPING(col2) gid_col2,
            GROUPING(col3) gid_col3
FROM t1 a, 
     t2 p
WHERE 
     a.col13 = p.col13
GROUP BY col1, col4, col5, col6, CUBE(col2, col3)

------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                    |    20M|  1032M|       |   103K  (7)| 00:12:51 |
|   1 |  SORT GROUP BY          |                    |    20M|  1032M|       |   103K  (7)| 00:12:51 |
|   2 |   GENERATE CUBE         |                    |    20M|  1032M|       |   103K  (7)| 00:12:51 |
|   3 |    SORT GROUP BY        |                    |    20M|  1032M|  3121M|   103K  (7)| 00:12:51 |
|*  4 |     HASH JOIN           |                    |    20M|  1032M|       |  6868  (28)| 00:00:52 |
|   5 |      TABLE ACCESS FULL  | T2        |    10 |   140 |       |     2   (0)| 00:00:01 |
|   6 |      PARTITION RANGE ALL|                    |    20M|   759M|       |  6459  (24)| 00:00:49 |
|   7 |       TABLE ACCESS FULL | T1              |    20M|   759M|       |  6459  (24)| 00:00:49 |
------------------------------------------------------------------------------------------------------


If you look at the GROUPING SETS query's explain plan, Oracle uses the temporary tables SYS_TEMP_0FD9D673A_17034638 and SYS_TEMP_0FD9D673B_17034638. But the same query with CUBE doesn't reference any tables other than the ones I mentioned in the query. What are those tables?

Please help me understand the plan differences. When I run the two queries with separate GROUP BY clauses, they run faster than the GROUPING SETS query.

Tom Kyte

Followup  

April 04, 2008 - 10:16 am UTC

the temporary tables are in memory things (until they exceed the workarea and spill to disk - just like a hash join will)

you have a special case here - usually you are comparing

group by cube(a,b,c,d,e,f)

vs

group by grouping sets((a,b,c,d,e),(a,b,c,d,f))


In your case:
GROUP BY col1, col4, col5, col6, CUBE(col2, col3)

I would guess that col1,col4,col5,col6 generate the bulk of the combinations and the col2/col3 'cube' is relatively small.

Multitable insert when using CURSOR

July 25, 2008 - 5:11 pm UTC

Reviewer: Jayan from CA US

Hi, Tom
Hi, Tom
I want to use the multi-table insert feature when the data is derived from a CURSOR. How can I use a cursor variable in the VALUES clause along with a WHEN clause? Please give an example with CURSOR syntax. Thanks

Tom Kyte

Followup  

July 29, 2008 - 10:19 am UTC

that does not make sense.

You cannot even "insert into a single table" from a cursor


a select cursor is used to retrieve data from the database to a client, if your intention was to insert the data into a table, your statement would have to start with insert, not select.

If it starts with select, you need to have a program fetch and then insert the data.
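In other words, the cursor-based version degenerates into a row-by-row program, something like this sketch (table names t1/t2 match the earlier examples; this is slow-by-slow processing, not a multi-table insert):

```sql
BEGIN
  FOR r IN (SELECT object_id, created FROM all_objects)
  LOOP
    IF MOD(r.object_id, 2) = 1 THEN
      INSERT INTO t1 (x, y) VALUES (r.object_id, r.object_id);
    ELSE
      INSERT INTO t2 (x, y) VALUES (r.object_id, r.created);
    END IF;
  END LOOP;
END;
/
```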

Enable parallel DML

August 04, 2009 - 2:29 am UTC

Reviewer: Viku from INDIA

Hi Tom,

Enabling parallel DML on the multi-table insert causes the whole query to run in serial mode.

If we look at the explain plan using dbms_xplan.display() we see PX activity, but when the query actually runs and we look at the real plan using display_cursor(), no PX activity is associated with any step.

The query runs in serial mode and the inserts into both tables also happen in serial mode. While this was running I also noticed the query didn't even spawn any parallel slaves for the coordinator.

Why is it behaving like this? Any ideas would be appreciated.

If (alter session enable parallel dml) = 'Y' then
Query runs in serial mode
we see parallel dml enabled ENABLED for coordinator
else
Query runs in parallel mode
we see parallel slaves with PDML_status as DISABLED.

We are on 10.2.0.4

If you need the queries/explain plans I can send them across.

Thanks for all your help

RE: Normalization ETL

August 11, 2009 - 1:40 am UTC

Reviewer: bakunian from OC, CA

Tom,

I have to load a flat structure extracted from Lotus Notes into normalized tables. However, since there are so many different date types, there is a dedicated date_types table, and I am not sure how to achieve this. Could you suggest how this should be done? I include pseudocode below, but I don't think it's correct. What do you think?

Thank you in advance

create flat_table
(
emp_key integer,
emp_id varchar2(10),
emp_name varchar2(10),
manager_name varchar2(10),
department varchar2(10),
date_hired date,
date_resigned date,
date_promoted date,
date_suspended date,
address varchar2(10),
city varchar2(10),
zip_code number
)

create table employee
(
emp_key integer,
emp_id varchar2(10),
address varchar2(10),
city varchar2(10),
zip_code number,
manager_key integer
)

create table manager
(
manager_key integer,
manager_name varchar2(10),
department varchar2(10)
)

create table date_types
(
date_key integer,
date_hired date,
date_resigned date,
date_promoted date,
date_suspended date,
emp_key integer
)

< pseudocode >
for i in (select * from flat_table)
loop

insert into manager
select manager_seq.nextval, manager_name, ...
from flat_table

insert into employees (emp_key, manager_key,date_key )
values ( emp_seq.nextval, manager_seq.currval, ... )

insert into date_types
select date_seq.nextval, emp_seq.currval, date_hired, ...
from flat_table

end loop;

Tom Kyte

Followup  

August 13, 2009 - 9:05 am UTC

I do not understand why there would be more than one table here????

I don't know what the purpose of the "data_types" table would be?

managers are employees, they would not be stored separately.

Normalization ETL

August 13, 2009 - 10:48 am UTC

Reviewer: bakunian from OC, CA

This is not the actual data model; I just gave it as an example. In reality we have one wide denormalized table, and we have to populate multiple normalized tables.

"I don't know what the purpose of the "data_types" table would be?"

It's not data_types but date_types, for types of dates such as date hired, date promoted, etc. There can be about 10 different types of dates. Do you think these date types should be stored inline with employee?

Say, for example, an employee has multiple skills, which will be stored in a separate table. I can do an INSERT ALL, but I don't know how to handle the case where a particular employee has no skills in the source table - then I don't want to create a record in skills. Can you please advise if there is a better approach?

INSERT ALL
INTO employee VALUES (emp_seq.nextval, name ...
INTO skills VALUES (skills_seq.nextval,
emp_seq.currval, skill ...
INTO ... VALUES (...
INTO ... VALUES (...
SELECT * FROM source_table
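For the no-skills case specifically, a conditional multi-table insert may be all that is needed. A sketch only, assuming a nullable skill column in a hypothetical source_table; per the head of this thread, a reference to emp_seq.NEXTVAL is incremented once per input row, so both INTO clauses see the same value for a given row - but note the cautions about sequences in multi-table inserts elsewhere in this thread and test carefully:

```sql
INSERT ALL
  WHEN 1 = 1 THEN
    INTO employee (emp_key, emp_name)
    VALUES (emp_seq.NEXTVAL, name)
  WHEN skill IS NOT NULL THEN          -- skip the skills row when null
    INTO skills (skill_key, emp_key, skill)
    VALUES (skills_seq.NEXTVAL, emp_seq.NEXTVAL, skill)
SELECT name, skill
  FROM source_table;
```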

Tom Kyte

Followup  

August 13, 2009 - 12:39 pm UTC

... This is not actuall data model I just gave this as an example. In reality we ....

examples need to "make sense", yours turned common sense upside down, that is a problem.

...
It's not data_types but date_types for types of days such as date hired, date
promoted and etc there can be about 10 different types of dates. Do you think
these date types should be stored inline with employee?...

I think you best give us a real world example. Sounds like you are maybe trying to populate a FACT table and maintain DIMENSION tables in one statement?

multitable insert sequence

August 17, 2009 - 10:07 pm UTC

Reviewer: Albert Nelson A from Singapore

Hi Tom,

In your followup long 5 years back you have mentioned:

<quote>
Followup   December 7, 2004 - 8pm US/Eastern:

confirmed, guess it is equally unsafe in this case.  I'll have to research this further to see if 
it is a bug or not - for me it took lots more rows and a quick trace showed it "should have 
worked".

thanks for the followup -- i'll have to play more with this one. 
</quote>

It seems that selecting from a sequence to create master-detail records cannot be guaranteed to run successfully.

See below my test case:
create table s
      (s_id number not null primary key,
       s_name varchar2(30) not null );

create sequence s_seq;
create sequence s_sub_seq;

create table s_sub
  (s_sub_id number not null primary key,
   s_id number not null references s(s_id),
   s_sub_name varchar2(30) not null );


SQL> insert all
  2  when mn = 1 then
  3    into s values (s_seq.nextval, nam)
  4  else
  5    into s_sub values (s_sub_seq.nextval,s_seq.nextval,sub_nam)
  6  with mas as
  7  (select 1 as s, 'x' as nam  from dual
  8  union all
  9  select 2 as s, 'y' as nam from dual),
 10  dtl as (
 11  select 1 as ds, 1 as s, 'ax' as sub_nam from dual
 12  union all
 13  select 2 as ds, 1 as s, 'bx' as sub_nam from dual
 14  union all
 15  select 3 as ds, 2 as s, 'ay' as sub_nam from dual
 16  union all
 17  select 4 as ds, 2 as s, 'by' as sub_nam from dual)
 18  select row_number() over (partition by m.s order by 1) as mn, m.nam, d.sub_nam from mas m join dtl d on m.s = d.s
 19  /
insert all
*
ERROR at line 1:
ORA-02291: integrity constraint (FMSCCPL.SYS_C0056781) violated - parent key
not found


Can we conclude that using a sequence to achieve a master-detail relationship in a multi-table insert is not possible?

Regards,

Albert Nelson A.

Tom Kyte

Followup  

August 24, 2009 - 4:33 pm UTC

I would say:

... Can we conclude that sequence selection to achieve master detail realtionship
in multi table insert is not possible?
...

is a true statement, yes.
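One alternative that avoids the problem entirely (a sketch, not from the thread): load the masters first, then join back to the loaded masters to pick up the generated parent keys, using two ordinary INSERT ... SELECT statements. The staging table here is hypothetical, and it assumes s_name uniquely identifies a master row.

```sql
-- Step 1: masters. NEXTVAL is legal in the top-level select list of
-- an INSERT ... SELECT; the DISTINCT is pushed into an inline view.
INSERT INTO s (s_id, s_name)
SELECT s_seq.NEXTVAL, nam
  FROM (SELECT DISTINCT nam FROM staging);

-- Step 2: details, joining back to the masters for the parent keys.
INSERT INTO s_sub (s_sub_id, s_id, s_sub_name)
SELECT s_sub_seq.NEXTVAL, s.s_id, st.sub_nam
  FROM staging st
  JOIN s ON s.s_name = st.nam;
```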

Dynamic Multi-table insert failing

August 22, 2009 - 2:07 pm UTC

Reviewer: Mani from Jersey City, NJ

Am in 10GR2.

select * from v$version;

Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
"CORE 10.2.0.4.0 Production"
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
NLSRTL Version 10.2.0.4.0 - Production

when trying to do a multi table using dynamic sql execute immediate am getting the following error.

Error report:
ORA-03001: unimplemented feature
ORA-06512: at line 158
03001. 00000 - "unimplemented feature"
*Cause: This feature is not implemented.
*Action: None.

In the chain I saw that you said using INSERT ALL in dynamic SQL is one of the ways. Not sure whether it was desupported recently or whether there is something else I am missing.

In my INSERT ALL I used one WHEN clause and an ELSE clause.

Thanks
Mani
Tom Kyte

Followup  

August 25, 2009 - 9:06 am UTC

you have some bug in your code, and we'd need to see your code (make it small, take out EVERYTHING that is not relevant to the problem at hand, it should be TINY by the time you are done - and you might recognize then what you have done incorrectly yourself)

It works in general:

ops$tkyte%ORA10GR2> create table t1 ( x int primary key, y int );

Table created.

ops$tkyte%ORA10GR2> create table t2 ( x int primary key, y date );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> begin
  2  execute immediate '
  3  insert
  4   when mod( object_id, 2 ) = :x then
  5      into t1 ( x, y ) values ( SS, object_id )
  6   when mod( object_id, 2 ) = :y then
  7      into t2 ( x, y ) values ( SS, created )
  8  select object_id SS, object_id, created from all_objects
  9  ' using 1, 0;
 10  end;
 11  /

PL/SQL procedure successfully completed.


insert all when

September 14, 2009 - 10:01 am UTC

Reviewer: reader from UK

What's the maximum number of WHEN clauses you can have in a multitable insert? When I get to 129 WHEN clauses I get ORA-00913: too many values ...
Tom Kyte

Followup  

September 14, 2009 - 2:00 pm UTC

http://docs.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_9014.htm#sthref9018


it would be when you get to 128

Multi Table insert

September 14, 2009 - 6:16 pm UTC

Reviewer: anon from UK

Hi Tom - cheers for that! One quick question - we're on Oracle 10.2 and I am doing a multi-table insert into a single table with about 200 WHEN clauses (split into two INSERT ALL WHEN statements). It's created dynamically and parses fine (using dbms_sql.parse). When I come to execute it I get:

ORA-24335: cannot support more than 1000 columns

Is this a known bug in Oracle 10.2, or is there a fix/workaround? Thanks once again
Tom Kyte

Followup  

September 15, 2009 - 9:25 am UTC

... nd I am
doing a multi table insert into a single table with about 200 when clauses ...

no, you are not, read right above. We just discussed how the limit on when clauses is....

127


help me reproduce please, modify my example which uses 2,000 total columns to match your experience.


ops$tkyte%ORA10GR2> drop table t1;

Table dropped.

ops$tkyte%ORA10GR2> drop table t2;

Table dropped.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_sql long := 'create table t1 ( c1 int';
  3  begin
  4          for i in 2 .. 1000
  5          loop
  6                  l_sql := l_sql || ', c' || i || ' int';
  7          end loop;
  8          execute immediate l_sql || ')';
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> create table t2 as select * from t1;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_sql long := 'insert when (1=1) then into t1';
  3          l_cols long := ' values ( c1';
  4  begin
  5          for i in 2 .. 1000
  6          loop
  7                  l_cols := l_cols || ', c' || i;
  8          end loop;
  9          l_cols := l_cols || ')';
 10          l_sql := l_sql || l_cols ||  ' when (1=1) then into t1 ' || l_cols || ' select 1 c1';
 11          for i in 2 .. 1000
 12          loop
 13                  l_sql := l_sql || ', 1 c' || i;
 14          end loop;
 15          l_sql := l_sql || ' from dual';
 16          execute immediate l_sql;
 17  end;
 18  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select c1 from t1;

        C1
----------
         1
         1

ops$tkyte%ORA10GR2> select c2 from t2;

no rows selected

ops$tkyte%ORA10GR2> select table_name, count(*) from user_tab_columns where table_name in ( 'T1', 'T2' ) group by table_name;

TABLE_NAME                       COUNT(*)
------------------------------ ----------
T1                                   1000
T2                                   1000


Bhagat Singh

February 26, 2010 - 11:25 pm UTC

Reviewer: Bhagat Singh from India, Delhi

Your input would be valuable in this case. Say, for example:

drop table emp1;
drop table emp2;
drop table emp3;

create table emp1 as select * from scott.emp where 1=0;
create table emp2 as select * from scott.emp where 1=0;
create table emp3 as select deptno,sum(sal) sale from scott.emp where 1=0 group by deptno;


insert first
when ( job='CLERK')
then
INTO emp1
(EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
when ( job='MANAGER')
then
INTO emp2
(EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
ELSE
INTO
EMP3 --But emp3 should have summary of sal on deptno (select deptno,sum(sal) from emp.scott group by deptno)
(deptno,sum(sal) group by deptno )
select (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO from scott.emp )


Is there any way out on this? In a single read I want to conditionally insert the data and also summarize it into one table.

Regards
Tom Kyte

Followup  

March 01, 2010 - 11:38 am UTC

then emp3 should be a materialized view of emp1 and emp2 OR of scott.emp.

you wouldn't be inserting into emp3, you'd have to be updating it by the way - but, use a materialized view if you need it.
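A minimal sketch of that approach (the refresh options shown are illustrative; choose what fits the load schedule):

```sql
-- emp3 maintained as an aggregate materialized view over scott.emp
-- instead of being a target of the multi-table insert.
CREATE MATERIALIZED VIEW emp3
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT deptno, SUM(sal) AS sale
  FROM scott.emp
 GROUP BY deptno;
```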

this might help

March 01, 2010 - 12:20 pm UTC

Reviewer: VS

Tom,
the below might help for the previous post.
insert first 
when ( job='CLERK')
then
INTO emp1
(EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
values
(EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
when ( job='MANAGER')
then
INTO emp2
(EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values
(EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
INTO 
EMP3 (deptno,sale) values(deptno,sale)
select EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO,sum(sal) over(partition by deptno) sale from emp

Tom Kyte

Followup  

March 02, 2010 - 6:36 am UTC

how does that aggregate values into emp3?

... --But emp3 should have summary of sal on deptno (select deptno,sum(sal) from emp.scott group
by deptno)
.....



using analytical function

March 02, 2010 - 11:30 am UTC

Reviewer: VS

Tom,
good day to you. I may be wrong, but using the analytic function we get the sum of salary for each department.

select EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO,sum(sal) over(partition by deptno) sale from emp


If I am missing something, please point it out.


Tom Kyte

Followup  

March 02, 2010 - 12:50 pm UTC

they want a table with ONE ROW per department, with the sum of salaries for that deptno.

read the example again - or just re-read my last posting where I put it all in italics.

test case for above

March 03, 2010 - 9:22 pm UTC

Reviewer: VS

Dear Tom,
good day to you, please find below the test case.

user@orcl>drop table emp1;

Table dropped.

user@orcl>drop table emp2;

Table dropped.

user@orcl>drop table emp3;

Table dropped.

user@orcl>create table emp1 as select * from emp where 1=0;

Table created.

user@orcl>create table emp2 as select * from emp where 1=0;

Table created.

user@orcl>create table emp3 as select deptno,sal from emp where 1=0 ;

Table created.

user@orcl>select count(*) from emp1;

COUNT(*)
----------
0

user@orcl>select count(*) from emp2;

COUNT(*)
----------
0

user@orcl>select count(*) from emp3;

COUNT(*)
----------
0

user@orcl>insert first
2 when ( job='CLERK')
3 then
4 INTO emp1
5 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
6 values
7 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
8 when ( job='MANAGER')
9 then
10 INTO emp2
11 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values
12 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
13 INTO
14 EMP3 (deptno,sal) values(deptno,sum_sal)
15 select EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO,sum(sal) over(partition by deptno) sum_sal from emp;

10 rows created.

user@orcl>select * from emp3;

DEPTNO SAL
---------- ----------
10 8750
20 10875
30 9400

user@orcl>select empno,job from emp1;

EMPNO JOB
---------- ---------
7934 CLERK
7876 CLERK
7369 CLERK
7900 CLERK

user@orcl>select empno,job from emp2;

EMPNO JOB
---------- ---------
7782 MANAGER
7566 MANAGER
7698 MANAGER

Tom Kyte

Followup  

March 04, 2010 - 9:30 am UTC

No, that doesn't work, you got LUCKY because of the data (however, you did point me to an obvious solution - similar but different)

Let me rewrite your insert using indentation:

ops$tkyte%ORA11GR2> insert first
  2    when ( job='CLERK') then INTO emp1 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  3    values (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)<b>
  4    when ( job='MANAGER') then INTO emp2 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  5    values (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  6                             INTO EMP3 (deptno,sal) values(deptno,sum_sal)</b>
  7  select EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO,
  8         sum(sal) over(partition by deptno) sum_sal
  9    from emp;

10 rows created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select * from emp3;

    DEPTNO        SAL
---------- ----------
        10       8750
        20      10875
        30       9400



Looks good so far - right? Well, that is purely by ACCIDENT - because there happens to be ONE MANAGER per department!

Watch (i truncated table emp3 first):

ops$tkyte%ORA11GR2> insert first
  2    when ( job='MANAGER') then INTO emp2 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  3    values (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)<b>
  4    when ( job='CLERK') then INTO emp1 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  5    values (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  6                             INTO EMP3 (deptno,sal) values(deptno,sum_sal)</b>
  7  select EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO,
  8         sum(sal) over(partition by deptno) sum_sal
  9    from emp;

11 rows created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select * from emp3;

    DEPTNO        SAL
---------- ----------
        10       8750
        20      10875
        20      10875
        30       9400




see how the values for 20 are duplicated? there are TWO CLERKS in deptno=20.

If you take away a manager record from your first example, you'll lose a deptno. If you have more than one manager per deptno, you'll get duplicates. Your insert does not work - the second INTO is covered under the preceding WHEN clause.

However, this will work:

ops$tkyte%ORA11GR2> insert <b>ALL</b>
  2    when ( job='MANAGER') then INTO emp2 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  3    values (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  4    when ( job='CLERK') then INTO emp1 (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  5    values (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO)
  6    <b>when ( rn = 1 ) then </b> INTO EMP3 (deptno,sal) values(deptno,sum_sal)
  7  select EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO,
  8         sum(sal) over(partition by deptno) sum_sal ,<b>
  9             row_number() over (partition by deptno order by empno) rn</b>
 10    from emp;

10 rows created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select * from emp3;

    DEPTNO        SAL
---------- ----------
        10       8750
        20      10875
        30       9400


thanks for taking your time on this and giving more details

March 04, 2010 - 11:22 am UTC

Reviewer: VS

Dear Tom,

thanks for your time on this and also for giving more details on the problem. I am happy that I am learning from your site and was able to frame some part of the solution.

11G no longer allows sequences in multi-table inserts?

October 28, 2010 - 10:58 am UTC

Reviewer: Stew Ashton from Paris, France


Tom, the 11g R2 documentation has thrown me for a loop.

The 10g R2 documentation says: "The subquery of the multitable insert statement cannot use a sequence."

The 11g R2 documentation says: "You cannot specify a sequence in any part of a multitable insert statement. A multitable insert is considered a single SQL statement. Therefore, the first reference to NEXTVAL generates the next number, and all subsequent references in the statement return the same number."

Wait a minute: this is exactly the behavior I want, and that I get in 10g and 11g! If this is what I want and it works, why did it become illegal in 11g?

Would I really be taking a chance if I did this in production code?

Thanks in advance for any light you can shed on this. Here are the links to the documentation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#i2080134
http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_9014.htm#i2080134
Tom Kyte

Followup  

October 28, 2010 - 12:25 pm UTC

that would be a documentation bug.

ops$tkyte%ORA11GR2> create table t1 ( x int primary key, y int );

Table created.

ops$tkyte%ORA11GR2> create table t2 ( x int primary key, y date );

Table created.

ops$tkyte%ORA11GR2> create sequence s;
create sequence s
                *
ERROR at line 1:
ORA-00955: name is already used by an existing object


ops$tkyte%ORA11GR2> insert
  2   when mod( object_id, 2 ) = 1 then
  3      into t1 ( x, y ) values ( SS, object_id )
  4   when mod( object_id, 2 ) = 0 then
  5      into t2 ( x, y ) values ( SS, created )
  6  select s.nextval SS, object_id, created from all_objects
  7  /
select s.nextval SS, object_id, created from all_objects
         *
ERROR at line 6:
ORA-02287: sequence number not allowed here


ops$tkyte%ORA11GR2> insert
  2   when mod( object_id, 2 ) = 1 then
  3      into t1 ( x, y ) values ( s.nextval, object_id )
  4   when mod( object_id, 2 ) = 0 then
  5      into t2 ( x, y ) values ( s.nextval, created )
  6  select object_id, created from all_objects
  7  /

71783 rows created.


I posted a comment to the doc writer about it.
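(Editor's note: the documented behavior Stew quotes - one NEXTVAL per source row, with every other reference in the statement returning the same number - is exactly what makes parent/child loads work. A minimal sketch along the lines of the example above; `parent` and `child` are hypothetical tables:)

```sql
-- each source row draws ONE value from sequence s, so child.parent_id
-- matches parent.id for that row
insert all
  into parent ( id, name )      values ( s.nextval, object_name )
  into child  ( parent_id, dt ) values ( s.nextval, created )
select object_name, created
  from all_objects
 where rownum <= 10;
```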

Great! Thanks!

October 28, 2010 - 12:46 pm UTC

Reviewer: Stew Ashton from Paris, France


This will be a big help at work. Just another example of the unique added value of asktom. What would we all do without you?

Multi-inserts & sequences

November 01, 2010 - 10:34 am UTC

Reviewer: Mihail Bratu from Romania

Hello Tom,
I followed this thread about multi-insert and I thought I’d share, for situations when the referred values have a small cardinality, as in the following test case:

drop table objects
/
drop table owners
/
drop table object_types
/
create table owners
(ID NUMBER PRIMARY KEY
,OWNER VARCHAR2(30) NOT NULL
)
/
create table object_types
(ID NUMBER PRIMARY KEY
,OBJECT_TYPE VARCHAR2(19) NOT NULL
)
/
create table objects
(ID NUMBER NOT NULL
,OWNER_ID NUMBER NOT NULL --REFERENCES owners(ID)
,OBJECT_NAME VARCHAR2(30) NOT NULL
,SUBOBJECT_NAME VARCHAR2(30)
,OBJECT_ID NUMBER NOT NULL
,DATA_OBJECT_ID NUMBER
,OBJECT_TYPE_ID NUMBER --REFERENCES object_types(ID)
)
/
alter table objects
add constraint objects__OWNER_ID__fk FOREIGN KEY (OWNER_ID)
REFERENCES owners(ID) DEFERRABLE INITIALLY DEFERRED
/
alter table objects
add constraint objects__OBJECT_TYPE_ID__fk FOREIGN KEY (OBJECT_TYPE_ID)
REFERENCES object_types(ID) DEFERRABLE INITIALLY DEFERRED
/
drop sequence objects__ID__seq
/
drop sequence owners__ID__seq
/
drop sequence object_types__ID__seq
/
create sequence objects__ID__seq
/
create sequence owners__ID__seq
/
create sequence object_types__ID__seq
/
create or replace
function get_owners__ID__seq(p_dummy varchar2) return number
is
begin
return owners__ID__seq.nextval;
end;
/
create or replace
function get_object_types__ID__seq(p_dummy varchar2) return number
is
begin
if p_dummy is null
then
return null;
else
return object_types__ID__seq.nextval;
end if;
end;
/
insert all
when (owner_rn = 1)
then into owners
values
(owner_id
,OWNER
)
when (object_type_rn = 1 and object_type is not null)
then into object_types
values
(object_type_id
,OBJECT_TYPE
)
when (1 = 1)
then into objects
values
(objects__ID__seq.nextval
,OWNER_ID
,OBJECT_NAME
,SUBOBJECT_NAME
,OBJECT_ID
,DATA_OBJECT_ID
,OBJECT_TYPE_ID
)
select OWNER
,row_number() over(partition by OWNER order by OBJECT_NAME) owner_rn
,(select get_owners__ID__seq(OWNER) from dual) owner_id
,OBJECT_NAME
,SUBOBJECT_NAME
,OBJECT_ID
,DATA_OBJECT_ID
,OBJECT_TYPE
,row_number() over(partition by OBJECT_TYPE order by OBJECT_NAME) object_type_rn
,(select get_object_types__ID__seq(OBJECT_TYPE) from dual) object_type_id
from all_objects
/
commit
/
select * from owners order by id
/
select * from object_types order by id
/
This query is based on Oracle’s caching of correlated subqueries feature ! ?

Regards
Tom Kyte

Followup  

November 01, 2010 - 3:47 pm UTC

.... This query is based on Oracle’s caching of correlated subqueries feature ! ?
....

hugely bad idea - don't you think?

I sure do - what happens with collisions - the hash table is very small - collisions can and will happen - what then?

November 02, 2010 - 8:58 am UTC

Reviewer: Mihail Bratu from Romania

I do not understand; please elaborate.

Thank you
Tom Kyte

Followup  

November 02, 2010 - 11:32 am UTC

you are assuming we'll cache your data.

we might
we might not

what happens when we DON'T

you get the wrong "nextval" values.


Your code is assuming a certain specific assured order of operation. You never have that with SQL. A change in plan, a change in data values - anything can and will defeat the 'trick' you are trying to employ.

Your code is very unsafe. Hugely so.


consider:

ops$tkyte%ORA11GR2> create table t
  2  as
  3  select OWNER
  4     ,row_number() over(partition by OWNER order by OBJECT_NAME) owner_rn
  5    ,(select get_owners__ID__seq(OWNER) from dual) owner_id
  6     ,OBJECT_NAME
  7     ,SUBOBJECT_NAME
  8     ,OBJECT_ID
  9     ,DATA_OBJECT_ID
 10     ,OBJECT_TYPE
 11     ,row_number() over(partition by OBJECT_TYPE order by OBJECT_NAME) object_type_rn
 12    ,(select get_object_types__ID__seq(OBJECT_TYPE) from dual) object_type_id
 13  from all_objects
 14  /

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select owner, count(distinct owner_id) from t group by owner order by owner;

OWNER                          COUNT(DISTINCTOWNER_ID)
------------------------------ -----------------------
APEX_030200                                          1
APPQOSSYS                                            1
...
WMSYS                                                1
XDB                                                  1

34 rows selected.



ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select object_type, count(distinct object_type_id) from t group by object_type order by object_type;

OBJECT_TYPE         COUNT(DISTINCTOBJECT_TYPE_ID)
------------------- -----------------------------
CLUSTER                                         1
CONSUMER GROUP                                  1
...
PACKAGE                                         1
PACKAGE BODY                                    1
PROCEDURE                                       1<b>
PROGRAM                                        11</b>
RULE                                            1
RULE SET                                        1
..
WINDOW                                          1
XML SCHEMA                                      1

41 rows selected.




I got 'lucky' with my owner data - we were able to cache it nicely, however, my object_type data ran into some hash collisions - I got 11 different values for one of my object types :(


very unsafe - anytime you have sql that calls PLSQL and you RELY ON THE number of times the function is called (you cannot, we might call it many more or less times than you anticipate or think we should), or the ORDER in which it is called (you cannot again) or some internal undocumented optimization (scalar subquery caching) for correct results - you will get severely burned

Multi-inserts & sequences

November 03, 2010 - 3:32 am UTC

Reviewer: Mihail Bratu from Romania

Thank you very much, now it’s crystal clear. It was a very good lesson about caching.

Regards

"Eppur si muove"

November 05, 2010 - 3:04 am UTC

Reviewer: Mihail Bratu from Romania

Hello Tom,

We have to add one safety net:

create or replace
type varchar2_number AS OBJECT
(string VARCHAR2(30)
,seq_number NUMBER
)
/
create or replace
type list_varchar2_number AS TABLE OF varchar2_number
/

create or replace
package pac_hashing_sequences
as
/* VARIABLES */
g_ListOwnerSeq list_varchar2_number;
g_ListObjectTypeSeq list_varchar2_number;

/* FUNCTIONS */
function get_owners__ID__seq(p_dummy varchar2) return number;
function get_object_types__ID__seq(p_dummy varchar2) return number;
end pac_hashing_sequences;
/


create or replace
package body pac_hashing_sequences
as
function get_owners__ID__seq(p_dummy varchar2) return number
is
l_SeqNumber NUMBER;
begin
begin
select t.seq_number
into l_SeqNumber
from TABLE(g_ListOwnerSeq) t
where t.string = p_dummy;
exception
when no_data_found
then
l_SeqNumber := owners__ID__seq.nextval;
g_ListOwnerSeq.EXTEND(1);
g_ListOwnerSeq(g_ListOwnerSeq.COUNT) := varchar2_number(p_dummy,l_SeqNumber);
end;
return l_SeqNumber;
end get_owners__ID__seq;

function get_object_types__ID__seq(p_dummy varchar2) return number
is
l_SeqNumber NUMBER;
begin
dbms_application_info.set_client_info(userenv('client_info')+1);
if p_dummy is null
then
return null;
else
begin
select t.seq_number
into l_SeqNumber
from TABLE(g_ListObjectTypeSeq) t
where t.string = p_dummy;
exception
when no_data_found
then
l_SeqNumber := object_types__ID__seq.nextval;
g_ListObjectTypeSeq.EXTEND(1);
g_ListObjectTypeSeq(g_ListObjectTypeSeq.COUNT) := varchar2_number(p_dummy,l_SeqNumber);
end;
return l_SeqNumber;
end if;
end get_object_types__ID__seq;

end pac_hashing_sequences;
/

drop sequence objects__ID__seq
/
drop sequence owners__ID__seq
/
drop sequence object_types__ID__seq
/
create sequence objects__ID__seq
/
create sequence owners__ID__seq
/
create sequence object_types__ID__seq
/

drop table all__objects;
begin
dbms_application_info.set_client_info(0);
pac_hashing_sequences.g_ListOwnerSeq := list_varchar2_number();
pac_hashing_sequences.g_ListObjectTypeSeq := list_varchar2_number();
end;
/
create table all__objects
as
select OWNER
,row_number() over(partition by OWNER order by OBJECT_NAME) owner_rn
,(select pac_hashing_sequences.get_owners__ID__seq(OWNER) from dual) owner_id
,OBJECT_NAME
,SUBOBJECT_NAME
,OBJECT_ID
,DATA_OBJECT_ID
,OBJECT_TYPE
,row_number() over(partition by OBJECT_TYPE order by OBJECT_NAME) object_type_rn
,(select pac_hashing_sequences.get_object_types__ID__seq(OBJECT_TYPE) from dual) object_type_id
from all_objects
/

select userenv('client_info') ncall from dual
/
select OBJECT_TYPE, count(distinct object_type_id) n from all__objects group by OBJECT_TYPE
/
select OWNER, count(distinct owner_id) n from all__objects group by OWNER
/

set serveroutput on
begin
dbms_output.put_line('OWNERS:');
dbms_output.put_line('*******');
for q in (
select * from table(pac_hashing_sequences.g_ListOwnerSeq)
)
loop
dbms_output.put_line(q.string || ': ' || q.seq_number);
end loop;
dbms_output.put_line(' ');
dbms_output.put_line('OBJECT TYPES:');
dbms_output.put_line('*************');
for q in (
select * from table(pac_hashing_sequences.g_ListObjectTypeSeq)
)
loop
dbms_output.put_line(q.string || ': ' || q.seq_number);
end loop;
end;
/

Regards
Tom Kyte

Followup  

November 05, 2010 - 9:15 am UTC

safety net for what precisely ?

I have no idea what you are referring to?
I have no idea what you are trying to solve?

less code, more explanation

Multi-insert & sequences

November 05, 2010 - 9:40 am UTC

Reviewer: Mihail Bratu from Romania

The safety net (g_ListOwnerSeq and g_ListObjectTypeSeq) is there to avoid generating a new call to NEXTVAL when collisions happen!
When caching does not happen, the functions get_owners__ID__seq and get_object_types__ID__seq get their values from the two global variables, g_ListOwnerSeq and g_ListObjectTypeSeq.

Many thanks for your patience and Regards
Tom Kyte

Followup  

November 06, 2010 - 7:12 pm UTC

so - you want it to be SUPER slow and inefficient???

this would be a bad idea. (even if you optimized it to use an associative array (index by table of varchar2) this would be a bad idea)

Multi-insert & sequences

November 10, 2010 - 4:44 am UTC

Reviewer: Mihail Bratu from Romania

Hello Tom,
Let's see how slow is the "SUPER slow and inefficient" code in the context of multi-insert with keys generated based on sequences!
For this, we will measure the time of the entire process and the time of the "SUPER slow and inefficient" code and then we will calculate the percentage of the "SUPER slow" code from the entire process.

mihai@ORCL11G> drop sequence objects__ID__seq
  2  /

Sequence dropped.

Elapsed: 00:00:00.03
mihai@ORCL11G> drop sequence owners__ID__seq
  2  /

Sequence dropped.

Elapsed: 00:00:00.01
mihai@ORCL11G> create sequence objects__ID__seq
  2  /

Sequence created.

Elapsed: 00:00:00.01
mihai@ORCL11G> create sequence owners__ID__seq
  2  /

Sequence created.

Elapsed: 00:00:00.01
mihai@ORCL11G> drop table objects
  2  /

Table dropped.

Elapsed: 00:00:00.06
mihai@ORCL11G> drop table owners
  2  /

Table dropped.

Elapsed: 00:00:00.07
mihai@ORCL11G> create table owners
  2     (ID    NUMBER        PRIMARY KEY
  3     ,OWNER VARCHAR2(30)  NOT NULL UNIQUE
  4     )
  5  /

Table created.

Elapsed: 00:00:00.06
mihai@ORCL11G> create table objects
  2     (ID             NUMBER       NOT NULL
  3     ,OWNER_ID       NUMBER       NOT NULL
  4     ,OBJECT_NAME    VARCHAR2(30) NOT NULL
  5  )
  6  /

Table created.

Elapsed: 00:00:00.01
mihai@ORCL11G> alter table objects
  2  add constraint objects__OWNER_ID__fk FOREIGN KEY (OWNER_ID)
  3      REFERENCES owners(ID) DEFERRABLE INITIALLY DEFERRED
  4  /

Table altered.

Elapsed: 00:00:00.01
mihai@ORCL11G> create or replace
  2  package pac_hashing_sequences
  3  as
  4  /*  VARIABLES  */
  5      g_ListOwnerSeq list_varchar2_number;
  6
  7  /*  FUNCTIONS  */
  8      function get_owners__ID__seq(p_dummy varchar2) return number;
  9  end pac_hashing_sequences;
 10  /

Package created.

Elapsed: 00:00:00.00
mihai@ORCL11G> create or replace
  2  package body pac_hashing_sequences
  3  as
  4      function get_owners__ID__seq(p_dummy varchar2) return number
  5      is
  6          l_time NUMBER;
  7     l_SeqNumber NUMBER;
  8      begin
  9  /*
 10  "SUPER slow and inefficient" code is executed few times due
 11  to the scalar subquery caching feature. This SUPER slow and
 12  inefficient code runs in only two situations: when collisions
 13  happen in the hash table (and the code acts as a safety net)
 14  and when an owner is selected from the source table for the
 15  first time.
 16  */
 17          l_time := dbms_utility.get_time;
 18     begin
 19              select t.seq_number
 20              into l_SeqNumber
 21              from TABLE(g_ListOwnerSeq) t
 22              where t.string = p_dummy;
 23          exception
 24              when no_data_found
 25              then
 26                  l_SeqNumber := owners__ID__seq.nextval;
 27                  g_ListOwnerSeq.EXTEND(1);
 28                  g_ListOwnerSeq(g_ListOwnerSeq.COUNT) := varchar2_number(p_dummy,l_SeqNumber);
 29          end;
 30          dbms_application_info.set_client_info(userenv('client_info')+dbms_utility.get_time-l_time);
 31          return l_SeqNumber;
 32      end get_owners__ID__seq;
 33
 34  end pac_hashing_sequences;
 35  /

Package body created.

Elapsed: 00:00:00.06
mihai@ORCL11G> begin
  2      dbms_application_info.set_client_info(0);
  3      pac_hashing_sequences.g_ListOwnerSeq := list_varchar2_number();
  4  end;
  5  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
mihai@ORCL11G> variable n number
mihai@ORCL11G> exec :n := dbms_utility.get_time

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
mihai@ORCL11G>
mihai@ORCL11G> insert all
  2      when (owner_rn = 1)
  3      then into owners
  4      values
  5         (owner_id
  6         ,OWNER
  7         )
  8      when (1 = 1)
  9      then into objects
 10      values
 11         (objects__ID__seq.nextval
 12         ,OWNER_ID
 13         ,OBJECT_NAME
 14         )
 15  select OWNER
 16     ,row_number() over(partition by OWNER order by OBJECT_NAME) owner_rn
 17    ,(select get_owners__ID__seq(OWNER) from dual) owner_id
 18     ,OBJECT_NAME
 19  from all_objects
 20  /

66525 rows created.

Elapsed: 00:00:13.40
mihai@ORCL11G> commit
  2  /

Commit complete.

Elapsed: 00:00:00.01
mihai@ORCL11G> exec :n := dbms_utility.get_time - :n

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
mihai@ORCL11G>
mihai@ORCL11G> select TO_NUMBER(userenv('client_info')) t, :n n, round(TO_NUMBER(userenv('client_info'))/ :n * 100, 2) percent from dual
  2  /

         T          N    PERCENT
---------- ---------- ----------
        39       1344        2.9

Elapsed: 00:00:00.00

I hope you know the hare and the tortoise fable; the hare is very fast but eventually loses the race. So the "SUPER slow and inefficient" code does not execute for each row from all_objects, thanks to the scalar subquery caching feature. I think that code that represents < 5% of the entire process duration does not need to be optimized.
For me this thread is very useful because I learned from you that scalar subquery caching may not happen even though theoretically the hash table has enough space. I added the "SUPER slow" code to prevent the generation of new NEXTVAL values for the same owner when collisions happen in the hash table, as this code rarely runs.

Regards

Tom Kyte

Followup  

November 11, 2010 - 1:47 pm UTC

do it for something of some size please.

lots of values - not a tiny number of values.

That is - make it real world.

If you have a handful of owners, perhaps.

Go for thousands or (more realistically in the year 2010) millions of distinct owners.

then let us know - when you blow that cache sizewise - see what happens.


to make the code faster, you should just use an associative array by the way.


type myarray is table of number index by varchar2(30);


lookup becomes:

seq_number := your_table( owner_value );


put away becomes


your_table(owner_value) := seq_number;
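(Editor's note: assembled into a compilable unit, the lookup/put-away fragments above might look like the sketch below. It assumes a matching package spec declaring just this one function, and reuses the owners__ID__seq sequence from the earlier posts:)

```sql
create or replace package body pac_hashing_sequences
as
    -- associative array: O(1) lookup keyed directly by the owner string
    type myarray is table of number index by varchar2(30);
    g_owner_ids myarray;

    function get_owners__ID__seq( p_owner varchar2 ) return number
    is
    begin
        if not g_owner_ids.exists( p_owner )
        then
            -- "put away": first time we see this owner, draw a new key
            g_owner_ids(p_owner) := owners__ID__seq.nextval;
        end if;
        -- "lookup": later calls for the same owner return the same key
        return g_owner_ids(p_owner);
    end get_owners__ID__seq;
end pac_hashing_sequences;
/
```

Unlike a TABLE() scan of a nested table, this lookup does not degrade as the number of distinct owners grows.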




Multi-inserts & sequences

November 12, 2010 - 8:50 am UTC

Reviewer: Mihail Bratu from Romania

Hello Tom,

You’re talking about a scope that I excluded in my initial post in this thread:

.. for situations when the referred values have a small cardinality ..

In the real world there are also situations with a tiny number of values. The focus was not on the PL/SQL code optimization, but on packing the sequences in PL/SQL functions, called from scalar subqueries on dual, in order to generate a single NEXTVAL for the same referenced value. You are right in terms of the optimization of the function code. Clearly your intention is for this site to be useful to as many readers as possible.

Thank you and Regards


P.S.
I made an error in my previous post; I wrote
(select get_owners__ID__seq(OWNER) from dual) owner_id

but I meant
(select pac_hashing_sequences.get_owners__ID__seq(OWNER) from dual) owner_id

December 07, 2010 - 4:04 pm UTC

Reviewer: Oleksandr Alesinskyy

It's funny, but the bug <bug:2891576> MULTI TABLE INSERT (INSERT ALL) FAILS WITH ORA-2291
mentioned in this discussion is still not fixed (after more than 7 years!). I hit it today - on 11.1.0.7.

bulk insert with scalar subquery

March 17, 2011 - 1:52 pm UTC

Reviewer: volker from germany

Hi Tom,

searching for "scalar subquery caching" on your site, I came here and I think this is a bug related to this topic - or do you have another opinion?

(Version 11.1.0.7.0)
create table a$tst
(grp_id integer not null
,seq_no integer not null
,text varchar2(100)
)
/
alter table a$tst
add constraint a$tst_uk
unique (grp_id
,seq_no )
/
create or replace trigger a$tst_trg
before insert on a$tst for each row
begin
dbms_output.put_line(:new.grp_id||' '||:new.seq_no||' '||:new.text);
end;
/
create or replace
package a$pck
is
type tp_arr_tst is
table of a$tst%rowtype
index by binary_integer;
procedure ins_bulk
(pa_tst in tp_arr_tst)
;
procedure ins_norm
(pa_tst in tp_arr_tst)
;
end a$pck;
/

create or replace
package body a$pck
is
procedure ins_bulk
(pa_tst in tp_arr_tst)
is
begin
forall tst in pa_tst.first..pa_tst.last
insert
into a$tst
(select pa_tst(tst).grp_id
,nvl(max(seq_no),0) + 1
,pa_tst(tst).text
from a$tst
where grp_id = pa_tst(tst).grp_id
);
end ins_bulk;
procedure ins_norm
(pa_tst in tp_arr_tst)
is
begin
for tst in pa_tst.first..pa_tst.last loop
insert
into a$tst
(select pa_tst(tst).grp_id
,nvl(max(seq_no),0) + 1
,pa_tst(tst).text
from a$tst
where grp_id = pa_tst(tst).grp_id
);
end loop;
end ins_norm;
end a$pck;
/

You see: two procs, one inserts with FORALL, the other inside a loop, both with the same scalar subquery (I think it is one; if not, please tell me why).

With empty table I insert 3 rows for grp_id 1 "normal":
declare
a_tst a$pck.tp_arr_tst;
begin
a_tst(1).grp_id := 1;
a_tst(1).text := 'N1';
a_tst(2).grp_id := 1;
a_tst(2).text := 'N3';
a_tst(3).grp_id := 1;
a_tst(3).text := 'N3';

a$pck.ins_norm(a_tst);
end;
/
and trigger output looks fine:
1 1 N1
1 2 N3
1 3 N3
so scalar subquery works on row-level, producing values 1,2,3.

Then 3 additional rows for grp_id 1 "bulk":
declare
a_tst a$pck.tp_arr_tst;
begin
a_tst(1).grp_id := 1;
a_tst(1).text := 'B1';
a_tst(2).grp_id := 1;
a_tst(2).text := 'B3';
a_tst(3).grp_id := 1;
a_tst(3).text := 'B3';

a$pck.ins_bulk(a_tst);
end;
/
looks fine too:
1 4 B1
1 5 B3
1 6 B3
so scalar subquery works on row-level too, producing values 4,5,6.

Now
rollback;

to get an empty table again and restart the test but this time "bulk" first:
declare
a_tst a$pck.tp_arr_tst;
begin
a_tst(1).grp_id := 1;
a_tst(1).text := 'B1';
a_tst(2).grp_id := 1;
a_tst(2).text := 'B3';
a_tst(3).grp_id := 1;
a_tst(3).text := 'B3';

a$pck.ins_bulk(a_tst);
end;
/
ERROR at line 1:
ORA-00001: unique constraint (XXX.A$TST_UK) violated
ORA-06512: at "XXX.A$PCK", line 7
ORA-06512: at line 11

and output
1 1 B1
1 1 B3
shows that the scalar subquery now works at statement level.

Why? And can I avoid this?

Tom Kyte

Followup  

March 17, 2011 - 3:34 pm UTC

It is not a scalar subquery. A scalar subquery is a query that returns either zero or one records and a single column - and it is invoked in the context of some other query.


select dname, (select count(*) from emp where emp.deptno = dept.deptno) DCNT
from dept;


the column DCNT is a scalar subquery.


You just have an insert as select - a regular 'select'.


And I would say it is a product issue that is causing this, I cannot reproduce in 11.2.0.2 at all:

ops$tkyte%ORA11GR2> rollback;

Rollback complete.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2    a_tst a$pck.tp_arr_tst;
  3  begin
  4    a_tst(1).grp_id := 1;
  5    a_tst(1).text := 'B1';
  6    a_tst(2).grp_id := 1;
  7    a_tst(2).text := 'B3';
  8    a_tst(3).grp_id := 1;
  9    a_tst(3).text := 'B3';
 10  
 11    a$pck.ins_bulk(a_tst);
 12  end;
 13  /
1 1 B1
1 2 B3
1 3 B3

PL/SQL procedure successfully completed.




scalar subquery

March 21, 2011 - 1:11 pm UTC

Reviewer: volker from germany

Thank you for your explanation; obviously it is not a scalar subquery.
And I have to wait for 11gR2...

INSERT ALL , BULK COLLECT

May 25, 2011 - 7:58 am UTC

Reviewer: A reader

INSERT ALL
INTO emp1
(empno, ename, job, mgr, hiredate, sal, comm, deptno
)
VALUES (seq_emp.NEXTVAL, ename, job, mgr, hiredate, sal, comm,deptno
)
INTO scott.emp2
( old_emp_id, new_emp_id
)
VALUES ( empno, seq_emp.currval
)
SELECT e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm, e.deptno
FROM scott.emp e;

Even if the emp table contains millions of records, will this statement not create any problem?

When should BULK COLLECT be used? If BULK COLLECT were used in this case, how would the columns be referenced for each table?

Tom Kyte

Followup  

May 25, 2011 - 11:34 am UTC

even if the emp table has billions of records, that statement will work fine.

You'd have to define what your concept of a problem is here - we don't care if there is 1, 100, 1 billion, 100 billion, ... rows.


I'm not sure what the reference to bulk collect is about?

INSERT FIRST with PARALLEL hint on 2 node RAC

June 03, 2011 - 11:25 am UTC

Reviewer: Mahesh from Chennai, INDIA

Hi Tom,

A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I used is given below (as used before in this thread).

create table t1 ( x int );
create table t2 ( x int );

insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
when (dummy='X') then into t1(x) values (y)
when (dummy='Y') then into t2(x) values (y)
select dummy, 1 y from dual;

I can see multiple sessions using the query below, but on one instance only. This happens not only for the above statement but also for statements where real tables (tables with more than 20 million records) are used.

select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
sql.sql_text
from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
WHERE p.sid = s.sid
and p.serial# = s.serial#
and p.sid = ps.sid
and p.serial# = ps.serial#
and s.sql_address = sql.address
and s.sql_hash_value = sql.hash_value
and qcsid=945

Could you please clarify if "parallel servers won't be spawned across instances for multi-table insert with parallelism on RAC"?

Thanks,
Mahesh

Have things changed with regard to use of sequencer?

June 23, 2011 - 12:40 pm UTC

Reviewer: Charlie from UT USA

Doesn't look like the sequence can be relied upon to change its value based on the number of rows in the source table in a multi-table insert.

From http://www.oracle-developer.net/display.php?id=209:
SQL> INSERT ALL
  2      --<>--
  3      INTO t1 (owner, object_id)
  4       VALUES  (owner, multi_table_seq.NEXTVAL)
  5       --<>--
  6       INTO t1 (owner, object_id)
  7       VALUES  (owner, multi_table_seq.NEXTVAL)
  8      --<>--
  9      INTO t1 (owner, object_id)
 10      VALUES  (owner, multi_table_seq.NEXTVAL)
 11      --<>--
 12       INTO t1 (owner, object_id)
 13       VALUES  (owner, multi_table_seq.NEXTVAL)
 14       --<>--
 15    SELECT owner
 16     ,      object_type
 17     ,      object_name
 18     ,      object_id
 19     ,      created
 20     FROM   all_objects
 21     WHERE  ROWNUM <= 50;

200 rows created.

SQL> SELECT COUNT(*)                   AS record_count
  2  ,      COUNT(DISTINCT(object_id)) AS sequence_numbers_assigned
  3  FROM   t1;
RECORD_COUNT SEQUENCE_NUMBERS_ASSIGNED
------------ -------------------------
         200                        50

Tom Kyte

Followup  

June 23, 2011 - 12:52 pm UTC

big page - which part of it exactly are you referring to? Post a "bookmark" link or give us enough text to find the bit you are pointing to.

BTW - Thanks for the great advice

June 23, 2011 - 12:55 pm UTC

Reviewer: Charlie Smith

Just wanted thank you for all you do. I've found your posts and reviews to be very useful.

Very clear answers and comments.

Thanks,
Charlie

Have things changed with regards to use of sequencer?

June 23, 2011 - 1:02 pm UTC

Reviewer: Charlie Smith

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6915127515933#7871897954847

How could the s_sub_id values be different using this example?

Have things changed regarding use of sequencers?

June 23, 2011 - 1:13 pm UTC

Reviewer: Charlie Smith

Not sure how to use the bookmark. Sorry. Reference I am referring to is " So I've discovered February 16, 2003 - 4pm Central time zone TopReviewer: Daryl "

Have things changed regarding use of sequencers?

June 23, 2011 - 2:15 pm UTC

Reviewer: Charlie from UT USA

Oops. Sorry, I was not reading the query carefully enough. I was thinking that the test query was returning only one distinct sequence value, not 50. So, never mind. :)

Charlie

multi table insert performance

August 02, 2011 - 11:29 pm UTC

Reviewer: A reader

With an INSERT ALL statement using the WHEN clause, is it advisable to also filter out the rows with a WHERE clause?

insert all
when cond1 then insert...
when cond2 then insert...
when cond3 then insert...
select *
from big_tab
where cond1
or cond2
or cond3?

The reason I ask is that the "conditions" make calls to PL/SQL functions, so I'm concerned about doing the context switching twice.
Tom Kyte

Followup  

August 03, 2011 - 7:32 am UTC

can you not remove the need for plsql there?

it would benefit you greatly - even if you think it makes the sql "ugly"...


the ultimate answer is "it depends". If the use of the conditions allows for an access path that was not otherwise available - it could be useful to include them.
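In the spirit of that "it depends", one way to avoid evaluating the PL/SQL functions twice is to compute them once in the subquery and test the precomputed column in both the WHEN and WHERE clauses. A sketch only — cond_fn, big_tab, t_a and t_b are hypothetical names, and the optimizer may still merge the inline view and re-evaluate the function, so verify with your own plan:

```sql
INSERT ALL
  WHEN flag = 1 THEN INTO t_a (id) VALUES (id)
  WHEN flag = 2 THEN INTO t_b (id) VALUES (id)
SELECT id, flag
FROM  (SELECT id, cond_fn(id) AS flag   -- PL/SQL evaluated here, once per row
       FROM   big_tab)
WHERE flag IN (1, 2);                   -- prefilter reuses the computed column
```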

Multitable insert with compression

June 05, 2013 - 11:37 am UTC

Reviewer: Zilvinas Vidmantas from Lithuania

I have a table, Table1, with basic compression on.
In order to compress the data, the table must be direct-path loaded.
The load looks like this:

INSERT --+ append
WHEN 1=1 THEN
INTO Table1 VALUES (field1, Seq1.NEXTVAL)
INTO Table2 (field1, field2, ... , field7)
VALUES (Seq1.NEXTVAL, field2, ... , field7)
SELECT field1, ... , field7
FROM TableSrc1

After load:

CREATE TABLE Test COMPRESS AS
SELECT * FROM Table1;

SELECT SUM(Bytes)
FROM User_Segments
WHERE Segment_Name IN ('TABLE1', 'TEST')

The result is that TEST takes half the space of TABLE1.
What is wrong?
I just noticed that "WHEN 1=1 THEN" can be replaced by "ALL". But can this make any difference?

And one more question.
Is it possible to direct-path load into one table while doing conventional inserts into another using a multi-table insert? Something like:

INSERT ALL
/*append*/ INTO Table1 ...
/*noappend*/ INTO Table2 ...
...

Thanks

Tom Kyte

Followup  

June 05, 2013 - 2:23 pm UTC

I cannot reproduce your findings, please take my test case and make it reproduce your findings:

ops$tkyte%ORA11GR2> create table t1 as select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA11GR2> create table t2 as select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> alter table t1 compress;

Table altered.

<b>only T1 is compressed - t2 is 'normal'</b>

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert /*+ append */
  2          all
  3      into t1 (
  4  OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
  5  OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
  6  GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
  7  ) values (
  8  OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
  9  OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
 10  GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
 11   )
 12      into t2 (
 13  OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
 14  OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
 15  GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
 16  ) values (
 17  OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
 18  OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
 19  GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
 20   )
 21  select * from all_objects
 22  /

145880 rows created.

<b>now we'll verify direct path was used, if we get this error - it shows direct path was used:</b>

ops$tkyte%ORA11GR2> select count(*) from t1;
select count(*) from t1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


ops$tkyte%ORA11GR2> select count(*) from t2;
select count(*) from t2
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

<b>definitely direct path - now we'll create a compressed copy of T1</b>

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table test compress as select * from t1;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select segment_name, sum(bytes)
  2    from user_segments
  3   where segment_name in ( 'TEST', 'T2', 'T1' )
  4   group by segment_name
  5   order by segment_name;

SEGMENT_NAME                   SUM(BYTES)
------------------------------ ----------
T1                                3145728
T2                                9437184
TEST                              3145728



and the sizes there show that T1 and TEST are the same size, both are compressed and 1/3 the size of the uncompressed image.


So, you'll need to supply all of the steps to reproduce your issue for us. My guess is that your T1 is not getting direct path loaded - maybe it has a trigger, maybe it has a referential integrity constraint - those would prevent direct paths (the query after the load before the commit will verify that for you...)

Sorry. I've found out myself what makes the difference in size

June 05, 2013 - 12:11 pm UTC

Reviewer: Zilvinas Vidmantas from Lithuania

It was
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4';
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DML PARALLEL 4';
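For anyone hitting the same thing: a parallel direct-path load gives each parallel server its own extents above the high-water mark, which can leave partially filled extents and inflate the segment. A sketch of the standard session settings to switch forced parallelism off around the load (reversing the two ALTER SESSION commands above):

```sql
-- Sketch: revert the forced-parallel settings for the duration of the load
ALTER SESSION DISABLE PARALLEL DML;
ALTER SESSION DISABLE PARALLEL QUERY;

-- ... run the direct-path multi-table insert here ...

ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION ENABLE PARALLEL QUERY;
```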


Difference between Insert when and Insert All when

July 10, 2013 - 8:41 pm UTC

Reviewer: Raghu from NY

Hi Tom,

Is there a difference between INSERT WHEN and INSERT ALL WHEN? I looked online and ran a couple of tests, but both inserts seem to behave the same, whether it's a multi-table insert or checking the conditions in the WHEN clause.

If there is a difference, would one be preferable over the other performance wise while inserting large amounts of data (about 500K rows every 30 mins)

Thanks,
Raghu
Tom Kyte

Followup  

July 16, 2013 - 3:43 pm UTC

it is the same as the difference between:


select ALL * from t;
select * from t;


All is just the default. No difference between one statement with ALL and one without.


http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#sthref9017

insert when and insert select

August 19, 2013 - 8:21 pm UTC

Reviewer: Raghu from NY

Tom,

Would

insert
when a=b
then into t values ('error1'..)
when c=d
then into t values ('error2'..)
...
select columns from some_table_join;

AND

insert into t
select
case
when a=b then 'error1'
when c=d then 'error2'
....
end
from some_table_join;

perform the same? The query is the same, the table into which the data is being inserted is the same (although I'm using the INSERT WHEN clause), and there is no ORDER BY clause.

Thanks again for all the help.
Raghu
Tom Kyte

Followup  

August 28, 2013 - 5:28 pm UTC

one would insert a record per error and the other inserts one record for all errors.

I would presume that, in general, the more records you generate, the more resources you will require.


any difference in performance would likely be very small.
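The row-count difference is worth spelling out: if more than one condition is true for a source row, the multi-table form inserts one row per matching WHEN clause, while the CASE form always inserts exactly one row, from the first matching branch. A sketch (the table t and the always-true conditions are hypothetical):

```sql
-- Multi-table form: one row per matching WHEN clause
INSERT ALL
  WHEN 1 = 1 THEN INTO t (msg) VALUES ('error1')
  WHEN 2 = 2 THEN INTO t (msg) VALUES ('error2')
SELECT dummy FROM dual;          -- both conditions match: 2 rows inserted

-- CASE form: exactly one row per source row; first matching branch wins
INSERT INTO t (msg)
SELECT CASE WHEN 1 = 1 THEN 'error1'
            WHEN 2 = 2 THEN 'error2'
       END
FROM dual;                       -- 1 row inserted ('error1')
```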

Is it a bug in multitable insert

January 15, 2014 - 2:04 pm UTC

Reviewer: Alexander from Moscow

Hello Tom.
I tried to read the whole conversation as carefully as I could, and it seems this particular question has not been highlighted yet.
To me it looks like a bug in the multi-table insert implementation, though I agree that SQL is not a procedural language.
Please have a look at the example below:

CREATE TABLE t1 (id number);
CREATE TABLE t2 (id number);
CREATE SEQUENCE s START WITH 1 INCREMENT BY 1;
-- A function with a parameter which is not used
CREATE OR REPLACE FUNCTION f(n NUMBER) RETURN NUMBER
IS
BEGIN
RETURN s.NEXTVAL;
END;

-- Calling the function 2 times passing same parameter values
INSERT ALL
INTO t1 (id) VALUES (id1)
INTO t2 (id) VALUES (id2)
SELECT f(1) id1, f(1) id2
FROM dual
CONNECT BY level<4;
--
SELECT * FROM t1;
1
3
5
--
SELECT * FROM t2;
1
3
5

ROLLBACK;

-- Calling the function 2 times passing different parameter values
INSERT ALL
INTO t1 (id) VALUES (id1)
INTO t2 (id) VALUES (id2)
SELECT f(1) id1, f(2) id2
FROM dual
CONNECT BY level<4;
--
SELECT * FROM t1
7
9
11
--
SELECT * FROM t2
8
10
12

In each case the function was called twice per row - I can see that from the gaps in the sequence values. But in the first case it inserted the same values into both tables, whereas in the second case it inserted different values.
I'd appreciate your feedback.
Tom Kyte

Followup  

January 15, 2014 - 9:04 pm UTC

first and foremost - calling a statement level non-deterministic function (a function that does not return the same outputs given the same inputs) is tricky. A change in plan will change the outcome.

I see no bug here. You have no control, no promise of any control, over how often and in what order F() will be called.

It is an issue with non-deterministic functions (imagine what would happen if SYSDATE was not statement level deterministic!) in general.


I see no bugs here - and the behavior you see might change from release to release. It is not multi-table inserts, it is just "sql" in general. invoking a function that is not deterministic for the duration of a statement is "dangerous" (unpredictable, not deterministic itself)


just for fun :) add a rollback to your script and run this:

ops$tkyte%ORA11GR2> INSERT ALL
  2    INTO t1 (id) VALUES (id1)
  3    INTO t2 (id) VALUES (id2)
  4    SELECT (select f(1) from dual) id1, (select f(2) from dual) id2
  5      FROM dual
  6      CONNECT BY level<4;

6 rows created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> SELECT * FROM t1;

        ID
----------
        13
        13
        13

ops$tkyte%ORA11GR2> SELECT * FROM t2;

        ID
----------
        14
        14
        14

ops$tkyte%ORA11GR2> ROLLBACK;

Rollback complete.




http://www.oracle.com/technetwork/issue-archive/2011/11-sep/o51asktom-453438.html

Is it a bug in multitable insert

January 16, 2014 - 8:16 am UTC

Reviewer: Alexander from Moscow

Tom, thank you for reply.
Just a small comment to add. It seemed to me a bug in multi-table insert because I didn't notice such behaviour with a regular insert (running on 11.2.0.3):

CREATE TABLE t(id1 NUMBER, id2 NUMBER);
--
INSERT INTO t SELECT f(1), f(1) FROM dual CONNECT BY level<4;
--
SELECT * FROM t;
ID1 ID2
----- -----
23 24
25 26
27 28

ROLLBACK;
--
INSERT INTO t SELECT f(1), f(2) FROM dual CONNECT BY level<4;
--
SELECT * FROM t;
ID1 ID2
----- -----
29 30
31 32
33 34

But I agree that in real life one should avoid calling non-deterministic functions from within SQL.
Tom Kyte

Followup  

January 16, 2014 - 5:50 pm UTC

doesn't matter if you don't observe it, that proves nothing. You cannot "prove" something is entirely true with a positive test (you have shown that under a very specific set of circumstances, something might be true); you can, however, disprove something by showing a case where it is not true.

which is what we have here, we have a test case that shows that "how you think it should work" is not correct.



In sql, if you have a function that is not deterministic for the duration of the statement executing it, all bets are off as to the results. A simple change in plan can and will change the number of times the function is called, the order in which it is called and therefore the entire outcome of the query.


Imagine if SYSDATE were not statement level deterministic. What result would:

select * from t where dt = sysdate;

return??? If it were a full scan with a filter, it would likely return increasing dates (dt would not be a constant in the output) since the database would likely evaluate SYSDATE once per row.

If it were an index range scan, it would likely return a constant DT output, since the database would likely evaluate sysdate once and use that as the start/stop key for an index range scan.


same with your plsql function. all you showed was "you cannot predict when/where/why/how many times/in what order/etc SQL will invoke your function".

You showed that sometimes it appears you can - but other times you cannot - meaning - you cannot in general.



And it is "statement level non-deterministic" functions that are the things to be wary of. Non-deterministic is fine (SYSDATE is not deterministic, but it is statement level deterministic).
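As an aside on declarations: there is no syntax to mark a user-defined function "statement level deterministic" — the only related declaration available is DETERMINISTIC, which asserts (but does not verify) that the result depends solely on the inputs. A minimal sketch, with a hypothetical function name; declaring a sequence-based function DETERMINISTIC would be a lie and makes the results even less predictable:

```sql
-- Sketch: DETERMINISTIC lets Oracle cache and reuse results within a
-- statement, because the function promises the same output for the
-- same input.
CREATE OR REPLACE FUNCTION double_it (n NUMBER)
  RETURN NUMBER
  DETERMINISTIC
IS
BEGIN
  RETURN n * 2;
END;
/
```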

"statement level deterministic"?

January 17, 2014 - 7:30 am UTC

Reviewer: Kim Berg Hansen from Middelfart, Denmark

Tom

Is it only Oracle built-in functions that can have this "statement level deterministic" characteristic?

User-defined functions, as far as I know, we can declare DETERMINISTIC or not. We developers cannot declare that a function is "statement level deterministic", right?

If we create a non-deterministic function which is non-deterministic only because we use SYSDATE within it, will Oracle recognize that such a function is in fact "statement level deterministic" because SYSDATE is?

Thanks

Is it a bug in multitable insert

January 17, 2014 - 7:49 am UTC

Reviewer: Alexander

But Tom, from the above examples we don't see a problem with the number of times the function is called or the calling order. There is no issue with the SELECT part of the statement; it returns what it returns, considering all the danger and unpredictability of using non-statement-level-deterministic functions within SQL.
The problem I'm trying to highlight is that Oracle inserts different values than those returned by the underlying SELECT statement. It happens with multi-table insert only, and doesn't happen with a regular insert.
When the function call in the SELECT part looks the same (...f(1) as id1, f(1) as id2... from the example above), it inserts the same values into both columns, even though we can see from the gaps in the sequence numbers that the function WAS called separately for both columns and returned different values. That looks inconsistent: Oracle did execute the function but ignored its return value for the second column and inserted the same value it got for the first column.
When I change the function call just slightly to make it look different for the two columns (...f(1) as id1, f(2) as id2...), it starts inserting exactly the values returned by each function call. It also works like this for a regular INSERT ... SELECT. Why should it be different in the multi-table case?

Guarantee of execution of conditions in a WHEN clause?

March 11, 2014 - 11:34 am UTC

Reviewer: Chadders from UK

Hi Tom,

I know that, when using INSERT ALL, you cannot guarantee the order of execution of individual WHEN clauses, but is there a guarantee of order of conditions within a WHEN clause?

For example (pseudo-code alert!), can we guarantee that "a" is executed before "b" etc.:

INSERT ALL
WHEN a AND b AND c THEN
...

The reason I ask is that I've just experienced a problem when dealing with NULLs, where the developer assumed that the order matters, and I can't make up my mind whether it's a bug in that code or in Oracle...

The 11.2 documentation (as far as I can see) makes no statement about this.

Thanks in advance