Question and Answer

Tom Kyte

Thanks for the question, Angelo.

Asked: February 24, 2003 - 8:52 am UTC

Last updated: August 24, 2007 - 1:51 pm UTC

Version: 8.1.7.4

Viewed 10K+ times

You Asked

Tom,

I want to be able to capture a DUP_VAL_ON_INDEX exception but it seems I might be having a scope problem.

I want to be able to set a counter (rollup) in a row when duplicate entries are attempted.

I have coded a DUP_VAL_ON_INDEX handler in the exception block of a before-insert trigger, hoping to capture this exception. The insert occurs in a Java application. During testing the exception occurs, but it seems that the exception block is not executing.

I then thought that if I built a function-based index for this table I could capture the exception there, but that is not happening either.

What's my problem? Does it have to do with the scope of the exception?

Is there a workaround or another way of doing this?

I believe I've been able to capture other exceptions in a trigger. What makes this one different?

Thanks

and Tom said...

The Java program is the one getting the dup val error -- the trigger won't get that exception UNLESS the trigger itself does some modification that throws this error.

You cannot catch a caller's error -- as there is no error UNTIL AFTER the statement has completed. You've never been able to do this or anything similar -- ever.


The exception block in the trigger is useful to catch errors the TRIGGER itself encounters.

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1629338325252
you can use a view and an INSTEAD OF TRIGGER. The instead of trigger will attempt the insert and upon dup_val_on_index -- do an update.

The java programmers will only see the view -- not the table. Just rename the table, create a view by the same name using "select * from table" and they will not even know you did this.

(in general, it will be more performant to attempt the update and then do the insert if zero rows were updated if you can)
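A concrete sketch of that rename-and-view approach (the table name t and columns x, y are illustrative, not from the original question; x is the column carrying the unique constraint):

```sql
-- hide the real table behind a view of the same name
rename t to t_base;
create or replace view t as select * from t_base;

-- the INSTEAD OF trigger turns a duplicate insert into an update
create or replace trigger t_ioi
instead of insert on t
for each row
begin
    insert into t_base ( x, y ) values ( :new.x, :new.y );
exception
    when dup_val_on_index then
        update t_base set y = :new.y where x = :new.x;
end;
/
```

The Java code keeps issuing INSERT INTO T and never sees the difference.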



Comments

A reader, February 24, 2003 - 10:14 am UTC

Thanks,

I thought about the instead-of trigger; they work well, but the table is partitioned, which creates other problems. I'll talk to the developer and see if they can handle it in their code.

Tom Kyte
February 24, 2003 - 10:43 am UTC

it is all about stored procedures -- revoke insert and make them use a stored procedure.

Not that I can foresee what the problem with partitions would be?
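That "revoke and use a stored procedure" lockdown is just privileges plus an API package -- something like this, with all names hypothetical:

```sql
-- the application account loses direct DML on the table...
revoke insert, update on emp from app_user;

-- ...and gets an API package instead; the upsert logic lives inside it
grant execute on emp_api to app_user;
```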

what about MERGE ...

J.Nemec, February 24, 2003 - 5:32 pm UTC

Good point to know that it is better first to update and then to insert!
What is the main difference from "first insert and then update"? Is it the additional overhead of the unique constraint exception?
What about using the MERGE statement -- will the performance be better compared to the instead-of trigger or the PL/SQL solutions?

Thanks

Jaromir

http://www.db-nemec.com


Tom Kyte
February 25, 2003 - 8:58 am UTC

it is the handling of the unique constraint (expensive) if it happens a lot. The insert does a bit of work to find out "it is already there".

so you would see more recursive sql, more redo generated.

MERGE is basically doing in a single sql statement what the procedural code would be doing -- and that is good ;)
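For reference (the original poster's 8.1.7 predates it, but from 9i on), the single-statement MERGE against a hypothetical t(x, y) would look like:

```sql
merge into t
using ( select :x as x, :y as y from dual ) src
on ( t.x = src.x )
when matched then
    update set t.y = src.y
when not matched then
    insert ( x, y ) values ( src.x, src.y );
```

One statement, one pass -- the database decides insert versus update row by row, with no exception handling in the client code.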

Ugly exception handling

Tk, April 12, 2004 - 6:50 pm UTC

Tom -- I found that one of my developers wrote this inside a package.
I think it is ugly. Is there a way to do it better, and is there a way to show that it is a bad coding practice?
Cannot use MERGE (Oracle 8.1.7).

INSERT … INTO t …
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
update t
set …

Thanks

Tom Kyte
April 12, 2004 - 7:23 pm UTC

why is it "ugly"?  it is very common.

I would say, if you think the dup val will be raised "fairly often", that:


  update t set ...
  if (sql%rowcount = 0)
  then
      insert into t ...
  end if;


is more efficient -- the insert of a dup actually INSERTS (does work), then upon discovering the dup -- rolls back -- and then queries up cons$ to find the name of the violated constraint.  Tons of work.

Whereas updating zero rows is fairly "work deintensive".


You can work out your break even using a simple test.  Consider two tables:


ops$tkyte@ORA9IR2> create table t1 ( x int primary key, y char(80) );
 
Table created.
 
ops$tkyte@ORA9IR2> create table t2 ( x int primary key, y char(80) );
 
Table created.

<b>now procedure p1 will insert and if dup, update.  procedure p2 will update and if nothing happened -- then insert:</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure p1
  2  as
  3  begin
  4      for i in 1 .. 1000
  5      loop
  6          begin
  7              insert into t1 values ( i, i );
  8          exception
  9              when dup_val_on_index
 10              then
 11                  update t1 set y = i+1 where x = i;
 12          end;
 13      end loop;
 14  end;
 15  /
 
Procedure created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure p2
  2  as
  3  begin
  4      for i in 1 .. 1000
  5      loop
  6          update t2 set y = i+1 where x = i;
  7          if ( sql%rowcount = 0 )
  8          then
  9              insert into t2 values (i,i);
 10          end if;
 11      end loop;
 12  end;
 13  /
 
Procedure created.

<b>we'll benchmark p1 vs p2 when the table is empty (eg: all of the inserts succeed):</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_start;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_middle;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p2
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_stop(1000);
Run1 ran in 8 hsecs
Run2 ran in 15 hsecs
run 1 ran in 53.33% of the time
 
Name                                  Run1        Run2        Diff
STAT...recursive calls               1,024       2,025       1,001
LATCH.shared pool                    1,148       2,165       1,017
LATCH.cache buffers chains          15,546      16,952       1,406
STAT...consistent gets - exami          16       1,434       1,418
STAT...session logical reads         3,683       5,101       1,418
STAT...consistent gets                  24       1,443       1,419
LATCH.library cache pin              2,096       4,107       2,011
LATCH.library cache                  2,177       4,199       2,022
 
Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
23,877      30,503       6,626     78.28%
 
PL/SQL procedure successfully completed.

<b>there the extra work of trying to update 1,000 times almost doubled the wall clock time and added measurably to the latching we did.... BUT, let's reverse the roles here.  We'll fail inserting 1,000 times and have to resort to an update:</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_start;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_middle;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p2
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_stop(1000);
Run1 ran in 66 hsecs
Run2 ran in 11 hsecs
run 1 ran in 600% of the time
 
Name                                  Run1        Run2        Diff
STAT...opened cursors cumulati       1,005           4      -1,001
STAT...parse count (total)           1,005           4      -1,001
LATCH.simulator hash latch           1,280         128      -1,152
LATCH.session allocation             2,000           0      -2,000
STAT...calls to get snapshot s       3,001       1,001      -2,000
STAT...no work - consistent re       2,000           0      -2,000
STAT...index fetch by key            3,000       1,000      -2,000
STAT...execute count                 3,005       1,005      -2,000
STAT...table fetch by rowid          2,000           0      -2,000
STAT...redo entries                  3,508       1,507      -2,001
LATCH.redo allocation                3,518       1,511      -2,007
LATCH.row cache enqueue latch        2,010           0      -2,010
LATCH.row cache objects              2,010           0      -2,010
STAT...db block changes              7,051       3,046      -4,005
STAT...consistent gets - exami       7,004       2,005      -4,999
STAT...buffer is not pinned co       5,000           0      -5,000
LATCH.library cache pin alloca       6,028          22      -6,006
LATCH.shared pool                    8,083       1,068      -7,015
STAT...consistent gets              10,004       2,006      -7,998
STAT...db block gets                10,578       1,566      -9,012
STAT...recursive calls              11,002       1,001     -10,001
LATCH.library cache pin             12,061       2,054     -10,007
LATCH.library cache                 18,104       2,086     -16,018
STAT...session logical reads        20,582       3,572     -17,010
LATCH.cache buffers chains          47,703       9,692     -38,011
STAT...redo size                   915,700     462,856    -452,844
 
Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
104,895      16,701     -88,194    628.08%
 
PL/SQL procedure successfully completed.

<b>having 1,000 failed inserts was killer: 6x the runtime, 6x the latching, 2x the redo (for the attempted insert + rollback and then the update)....

So, where is the break even? In this case, if we expect more than 10% of the rows to "fail", update-then-insert will beat insert-then-update:</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from t1 where x <= 900;
 
900 rows deleted.
 
ops$tkyte@ORA9IR2> delete from t2 where x <= 900;
 
900 rows deleted.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_start;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_middle;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p2
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_stop(1000);
Run1 ran in 18 hsecs
Run2 ran in 17 hsecs
run 1 ran in 105.88% of the time
 
Name                                  Run1        Run2        Diff
STAT...consistent gets               1,011       2,014       1,003
STAT...consistent gets - exami         611       1,910       1,299
LATCH.cache buffers chains          19,025      17,146      -1,879
STAT...redo size                   622,260     576,452     -45,808
 
Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
31,917      29,576      -2,341    107.92%
 
PL/SQL procedure successfully completed.
 
 

Just speechless

Tk, April 12, 2004 - 8:11 pm UTC

Thank you. By the way, runStats_pkg -- is it a wrapper around
DBMS_PROFILER or something homegrown?
And can you share?


Tom Kyte
April 13, 2004 - 7:10 am UTC

best way to insert data into table

A reader, December 22, 2005 - 1:02 am UTC

Tom,
We have a few programs as follows:
Suppose I have a table EMP, which has a primary key on EMPNO.
My requirement is to insert data into the table EMP.
Before inserting data into the table, those programs check whether there is already a record by selecting COUNT(*) from the table for the given EMPNO. If there is no record, they insert; if the count(*) > 0, they update.

Now we have the MERGE command to do the same. However, if we have to achieve the same, which one of the following is the better approach:

1)
a) INSERT INTO EMP ....
b) Catch DUP_VAL_ON_INDEX and update the record if
this exception is raised.
c) If there is no exception, the INSERT was successful.
2)
The approach that is currently implemented:
a) select count(*) from the table for the given account
number
b) If count(*) is 0, insert the record.
c) If count(*) > 0, update the record.

Thanks Tom



Tom Kyte
December 22, 2005 - 10:31 am UTC

wow, I would never have thought to COUNT to decide whether to insert/update.

I would have thought to


update
if (sql%rowcount = 0)
then
insert
end if;


the update "does the query" to find the record and - if found, updates it - else does nothing.


You should try updating and then inserting, in that order - unless you expect the insert to ALMOST ALWAYS succeed. The reason: the insert will do work, hit a duplicate record, and have to roll back (expensive to do work and then undo work). The update will just search; only when the row is not found does the insert then do the work.


Try to remove all occurrences of counting from your code - it is almost NEVER needed and is just a waste of resources/time/energy/effort...
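Written out for the EMP example (assuming a SAL column stands in for whatever is really being updated), the count-free version sketches out as:

```sql
create or replace procedure upsert_emp
(
    p_empno in emp.empno%type,
    p_sal   in emp.sal%type
)
as
begin
    -- try the update first; sql%rowcount tells us whether the row existed
    update emp set sal = p_sal where empno = p_empno;
    if ( sql%rowcount = 0 )
    then
        insert into emp ( empno, sal ) values ( p_empno, p_sal );
    end if;
end;
/
```

One round trip, no COUNT(*), and no exception machinery in the common case.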

sorry... please ignore my earlier question

A reader, December 22, 2005 - 1:42 am UTC

Tom,
I did not read your above followup before raising this question. After posting my question, I found that it was already answered. Thanks again. Please ignore my earlier question.

DUP_VAL_ON_INDEX with a single statement

Ramchandra Joshi, August 03, 2007 - 3:42 am UTC

Hi,

I have understood the fact that UPDATE-then-INSERT consumes fewer resources and is more practical to use.
However, I have a requirement in which I am trying to use a similar concept.
Basically, I have two tables being updated through one single screen, and I have to capture the OLD and NEW values for AUDIT purposes, and a SINGLE record is to be inserted into the log file.
To do this I have written two triggers on the individual tables, and the logic is something like this:

TRIGGER 1: Call a procedure in a package to INSERT into a TEMP table (having a PRIMARY key on a column) the values of the OLD and NEW fields of TABLE 1, in two separate columns.

TRIGGER 2:
1. Call a second procedure in the same
package to INSERT a record for the same PK column.

EXCEPTION block of the procedure:
2. On DUP_VAL_ON_INDEX, UPDATE the table
with the OLD and NEW field values of
TABLE2 in the other two columns.
3. Merge all the columns for the same PK.
4. INSERT into the main LOG file.

The same code is written in the PROCEDURE 1 exception block as well.

This was due to the fact that the order of trigger firing is unknown and either table can get updated first, but we have to capture both sets of data as a single record.
And a TEMP table was used because there is no PK on the main LOG table.

So this gives rise to 2 questions:
1.) There will always be ONE INSERT and ONE UPDATE on the table, so which approach is preferable: INSERT then UPDATE, or UPDATE then INSERT?
I have used INSERT then UPDATE -- is that a BAD coding practice?
2.) Is using TEMPORARY tables in your code not a good design?

Thank You in Advance,
Ram

Good or Bad??

Ramchandra Joshi, August 21, 2007 - 2:23 am UTC

Hi,

I understood the logic of using ARRAYS to avoid the mutating table error; however, that is not actually my problem.
When two different triggers fire with the sequence of firing unknown, I still need to combine the :OLD and :NEW values from both triggers into a single value.
I understand that I can extend the concept of using a package and a sequence of BEFORE and AFTER triggers to achieve the same. But I guess that concept is useful in the case of a single table, whereas if I have to use the :OLD and :NEW values of TABLE1 in the trigger of TABLE2, I definitely need to store them somewhere -- and given the concept of emptying the array in the BEFORE trigger, how do I achieve that?
Also, my basic question is: "Is using a TEMPORARY table NOT good design?" This is because I have been told that using a TEMP table in ANY design is WRONG!
And that INSERT - WHEN DUP_VAL_ON_INDEX - UPDATE is a BAD way of coding, since I am coding in the EXCEPTION block.
So can you please respond: isn't that a fairly common practice in PL/SQL coding?
Kindly answer these.

Thanks in Advance once again,
Ram.
Tom Kyte
August 22, 2007 - 11:21 am UTC

you need to describe the problem better, eg - with an example.

Example

Ramchandra Joshi, August 23, 2007 - 6:04 am UTC

Hi,

Let me explain this with an example.

Consider that there are two tables EMP and DEPT with the following data
EMP : 
Emp_no Emp_name Salary emp_location
001    Tom      10000  USA

DEPT : 
Emp_no Emp_dept Emp_dep_head
001    Finance  John

Now I have a Java screen which displays all the information of the two tables combined, and I have to track all the events that happen on this screen in an audit table, with OLD and NEW values (if it is an update).
Consider that through the Java screen the salary is changed to 15000 and the department head is changed from John to Larry. My audit log should then look something like this:
Emp_NO  DATA_BEFORE_CHANGE   DATA_AFTER_CHANGE
001     Emp_Name=>Tom,       Emp_Name=>Tom,
        Salary=>10000,       Salary=>15000,
        Location=>USA,       Location=>USA,
        Dept=>Finance,       Dept=>Finance,
        Dep Head=>John       Dep Head=>Larry

The basic problem is that in the Java screen the SQL queries are created dynamically and fired at the database, hence either of the above two tables can get updated first.
Now that I have individual triggers on both tables, I need to capture the OLD and NEW values of both and INSERT them as a single DATA_BEFORE_CHANGE (combination of the :OLD values of both tables) and DATA_AFTER_CHANGE (combination of the :NEW values of both tables) into the audit table. Since the sequence of trigger firing is unknown, I could not combine both sets of data as a single entity.

Hence what I did was create a TEMP table as follows:

Emp_NO(PK) EMP_DATA_BEFORE EMP_DATA_AFTER DEPT_DATA_BEFORE DEPT_DATA_AFTER


Now assume that the trigger on EMP fires first. I'll capture its :OLD and :NEW values and insert them into the respective EMP_DATA_BEFORE and EMP_DATA_AFTER columns of the TEMP table.
The data in the TEMP table now looks like this:
Emp_NO(PK)  EMP_DATA_BEFORE   EMP_DATA_AFTER   DEPT_DATA_BEFORE  DEPT_DATA_AFTER
001         Emp_Name=>Tom,    Emp_Name=>Tom,   NULL              NULL
            Salary=>10000,    Salary=>15000,
            Location=>USA,    Location=>USA,
After this, the trigger on DEPT fires and tries to INSERT for the same EMP_NO (PK) in the TEMP table.
Hence, on DUP_VAL_ON_INDEX of the TEMP table, I UPDATE the TEMP table and fill in the remaining two DEPT columns (DEPT_DATA_BEFORE, DEPT_DATA_AFTER).
The data in the TEMP table is now like this:

Emp_NO(PK)  EMP_DATA_BEFORE   EMP_DATA_AFTER   DEPT_DATA_BEFORE  DEPT_DATA_AFTER
001         Emp_Name=>Tom,    Emp_Name=>Tom,   Dept=>Finance,    Dept=>Finance,
            Salary=>10000,    Salary=>15000,   Dep Head=>John    Dep Head=>Larry
            Location=>USA,    Location=>USA,
In the same DUP_VAL_ON_INDEX exception block, after updating the TEMP table, I merge the EMP_DATA_BEFORE and DEPT_DATA_BEFORE columns and INSERT them as a single DATA_BEFORE_CHANGE entry in my audit log table; similarly for the DATA_AFTER_CHANGE data.

Hence the final Audit entry would look like this :

Emp_NO  DATA_BEFORE_CHANGE   DATA_AFTER_CHANGE
001     Emp_Name=>Tom,       Emp_Name=>Tom,
        Salary=>10000,       Salary=>15000,
        Location=>USA,       Location=>USA,
        Dept=>Finance,       Dept=>Finance,
        Dep Head=>John       Dep Head=>Larry

The same exception-block coding is done in the trigger on the DEPT table.
Hence, whatever order the triggers fire in, I'll always end up with the final audit table entry shown above, which is my requirement.

Now my questions are:
1) Since I used a TEMP table, I have been told that this is a WRONG practice and that we should not use a TEMP table in ANY design. Is that so?
2) INSERT - WHEN DUP_VAL_ON_INDEX - UPDATE: isn't that a fairly common practice in PL/SQL coding? Is it a BAD way of coding?

Kindly answer these, as my above design has been termed an improper design.

Regards
Ram

Still....

A reader, August 23, 2007 - 12:48 pm UTC

Hi,

I got your point about using Workspace Manager as well, but that was not possible within the time frame that I had.
Also, in the thread, for option 4 suggested by you, there was a comment from you saying "I would definitely not use a temporary table". Does this mean you too are AGAINST using a TEMP table?
I mean, I am really very eager to get the answer to this question: is using a temp table in ANY design a WRONG practice?
I really understood the various ways you suggested to implement and achieve my requirement, and all were remarkable; I would definitely use them going forward. But I just wanted to know: did I do something terribly wrong, and was whatever I did a waste altogether?

Hope you are getting my point now.

Thanks again for your patience
Ram

Tom Kyte
August 24, 2007 - 1:51 pm UTC

let's see...

workspace manager, time to implementation vs do it yourself.

We must be using different mathematical systems.

