Question and Answer

Tom Kyte

Thanks for the question.

Asked: July 18, 2007 - 12:40 pm UTC

Last updated: April 29, 2011 - 8:28 am UTC

Version: 9.2.0

Viewed 50K+ times

You Asked

Hi,
I have a few transactional tables.
Tab_M is the master transactional table.
All the other concerned tables are also transactional and are children (or children-of-children)
of the table Tab_M.

I want to put a trigger in place to hold some business logic.

In a regular scenario, a record is inserted into Tab_M, then into all the other
transactional tables, and finally the commit takes place.

I want this trigger to fire only when the transaction is committed.

Thanks

and Tom said...

there is no "on commit" trigger.

I hate triggers, and I especially hate the concept of this one. Likely you are going to try to enforce some entity integrity constraints - but in general you cannot do that in triggers without LOCKING tables (and I doubt you would do that)

My recommendation:

create a stored procedure that does the transaction from start to finish, performing whatever checks you find necessary. DO NOT mess around with triggers, DO NOT implement logic in triggers. Use a transactional API in the form of stored procedures - do not allow applications to directly modify the tables - have them call these well formed, correct procedures.
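The transactional-API idea can be sketched as follows. This is a hypothetical illustration only: the tab_m/tab_a column names, the procedure name, and the use of sys.odcivarchar2list to carry the child data are all invented for the example.

```sql
create or replace procedure create_master
( p_master_id   in number,
  p_master_data in varchar2,
  p_child_rows  in sys.odcivarchar2list )
as
begin
  -- refuse to start the transaction unless the child data is present
  if p_child_rows is null or p_child_rows.count = 0 then
    raise_application_error( -20001,
      'a master row must be created with at least one child row' );
  end if;

  insert into tab_m ( id, data )
  values ( p_master_id, p_master_data );

  for i in 1 .. p_child_rows.count loop
    insert into tab_a ( master_id, data )
    values ( p_master_id, p_child_rows(i) );
  end loop;
end;
/
```

Applications would get EXECUTE on the procedure while INSERT/UPDATE/DELETE on the tables themselves is revoked - so the "parent with no children" state simply cannot be created.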


http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2986214404007

Rating

  (45 ratings)

Is this answer out of date? If it is, please let us know via a Comment

Comments

A reader, July 18, 2007 - 3:50 pm UTC


Trigger on a table Fire at commit.

S Sarkar, July 18, 2007 - 4:02 pm UTC

Hi,
Thanks for the response so quickly.

It is not possibel to to make a one procedure call for
insert/modify record in those many tables.
And moreover these tables can get data from different way.

I am NOT interested in checking the FK or any constraint
checking. we saw some data is missing in the lowest
tables and we want to trace it ASAP.

Let suppose the Tab_M is the parent table.
Tab_A, tab_B is child of tab_M.
Tab_X and Tab_Y is child of Tab_A and Tab_B respectively.
All these are 1:m relation.
In a one transaction data will insert int Tab_M, then
tab_A, tab_B then Tab_x and tab_y. it is happening 99.999%
time.
Some time we saw data where tab_x or tab_Y (lowest leve)
table has no records - which is not acceptable from
Bussiness point /application of view.
We want to trace while Tab_M is created/commted.

As you said there is no ON_COMMIT trigger or trigger
can not be deffered as table constrains- we may run a
process which will fire each 5 mins and check the new
entried in thr tab_M.

Thanks

Tom Kyte
July 19, 2007 - 10:33 am UTC

...
It is not possible to make a single procedure call to
insert/modify records in that many tables
...

you are wrong, it is not only possible - but practical, correct, the best way ever.


if "some data is missing", then you are - well - enforcing entity integrity constraints across entities in some fashion, and unless you are well versed in the LOCK TABLE command, I seriously doubt your triggers would be coded correctly.


you know what, if you wrote a procedure - you would be able to ENSURE that tab_x and tab_y had the right stuff - for you wouldn't perform the transaction UNLESS AND UNTIL you had them as inputs!!!


do you see what the fundamental problem here is? You have client applications that do not "do the right thing". You will continue to have these problems as long as you develop your applications the way you do. You *need* to centralize your transactional processing - using stored procedures - to *get the right stuff*.

This makes me so sad. "we have a really buggy program we wrote that corrupts our database and we are going to use a hunt and peck method to try and find out why"

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:42170695313022

add columns to the parent tables that represent the count of child records.

if these columns "bother" you - add them anyway, rename the tables - create VIEWS that select out everything BUT these new columns. No one will know they are there.

use the deferred check constraint.
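A minimal sketch of the count-column approach just described, assuming invented column names (tab_m.id, tab_a.master_id):

```sql
alter table tab_m add ( child_count number default 0 not null );

-- backfill the count for existing rows before adding the constraint
update tab_m m
   set child_count = ( select count(*) from tab_a a
                        where a.master_id = m.id );

-- checked at COMMIT time, not per statement
alter table tab_m add constraint tab_m_must_have_children
  check ( child_count > 0 ) deferrable initially deferred;

-- keep the count current (the transactional API could do this instead)
create or replace trigger tab_a_ai
after insert on tab_a
for each row
begin
  update tab_m
     set child_count = child_count + 1
   where id = :new.master_id;
end;
/

-- hide the bookkeeping column: expose everything BUT child_count
create or replace view tab_m_v as
  select id, data from tab_m;
```

A transaction that inserts into tab_m but never inserts a child now fails at COMMIT with ORA-02091 (transaction rolled back).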

Ever heard of "After Insert Trigger"

A reader, July 18, 2007 - 4:11 pm UTC

All you need is an "After Insert Trigger". Hope it helps!!



Tom Kyte
July 19, 2007 - 10:36 am UTC

so, go ahead, give us the example - i don't think you will be able to :)


Chandra, July 18, 2007 - 5:32 pm UTC

Quite often I ask people why they need an on-commit trigger. Invariably they didn't know about 'deferrable initially deferred'.

Trigger on a table Fire at commit.

S Sarkar, July 18, 2007 - 5:47 pm UTC

Hi,

"After Insert Trigger" - will NOT help.

In one single transaction the parent and child tables
get data inserted. There can be one or multiple child records;
there should be at least one record in the child tables.

My question is: is there any other way to get this
info/notification ASAP?
The alternative option I have is to run a process that wakes up
every 5 minutes, say, and checks for new records
in Tab_M.

Thanks




Tom Kyte
July 19, 2007 - 10:44 am UTC

and all that waking up would do is....


well, nothing. It would not help you fix the problem; it would just confirm, over and over, that you have a problem.

see above...

AUDIT

Phil H, July 18, 2007 - 8:34 pm UTC

I think the AUDIT statement is your best option at this point.

And if I may: while I agree that an API stored in the DB is the best way to design an application, there are many real-world situations where that just isn't going to happen. In such cases, a DBA just has to take whatever steps are needed to get an outcome.
Tom Kyte
July 19, 2007 - 10:46 am UTC

hmm, but they want to discover WHEN a record is "NOT" inserted.

auditing detects things that happen, it would not be useful for detecting when something did not happen.


I hate the attitude that "we just have to roll over because doing it right is not an option"

I will forever fight against that attitude.

Alerting

Martin, July 19, 2007 - 2:32 am UTC

Feature overview: Review the functionality of the DBMS_ALERT package - you could use it to "trigger" functionality only AFTER the commit has been done.
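A sketch of that behaviour (the alert name and messages are invented). DBMS_ALERT signals are transactional: the waiting session is only notified once the signalling session commits.

```sql
-- signalling session: called as part of the transaction (or from a trigger)
begin
  dbms_alert.signal( 'TAB_M_CHANGED', 'new master row on its way' );
  commit;   -- only now is the alert actually delivered
end;
/

-- monitoring session: blocks until an alert arrives or the timeout expires
declare
  l_message varchar2(1800);
  l_status  integer;          -- 0 = alert received, 1 = timed out
begin
  dbms_alert.register( 'TAB_M_CHANGED' );
  dbms_alert.waitone( 'TAB_M_CHANGED', l_message, l_status, timeout => 60 );
  if l_status = 0 then
    dbms_output.put_line( 'committed change seen: ' || l_message );
  end if;
  dbms_alert.remove( 'TAB_M_CHANGED' );
end;
/
```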

simulating ON COMMIT trigger behaviour

ZSirotic, July 19, 2007 - 3:21 am UTC

See

Solving "COMMIT business rules" on the Database Server
http://www.quest-pipelines.com/pipelines/plsql/archives/COMMIT_Rules.htm

(original link https://asktom.oracle.com/magazine-archive.htm is removed)


Regards,
Zlatko Sirotic

Need for COMMIT TRIGGER

Marc Blum, July 19, 2007 - 8:02 am UTC

Me too, I'm a strong supporter of the development guideline "Don't use table triggers to implement business logic."

Build up a PL/SQL-layer above the tables and put in as much business logic as possible. This way you construct an application server layer hosted by the RDBMS.

--

But I am very interested in having the possibility of a BEFORE COMMIT trigger. I even found a very detailed enhancement request in Metalink. Several years ago someone described in great detail the required behaviour of such a thing.

My business case, which demands a BEFORE COMMIT trigger, comes from a system which requires:

- all data is logically versioned and all versions can be reconstructed; that is, I need full reproducibility of all committed states of the database at runtime

- concurrent writes of new data versions must be fully supported; writers don't block writers

- the moment of transactional COMMIT dictates the sequence of data versioning. This is natural, as the COMMIT makes data visible to other sessions

- for COMMITs at the very same point in time, serialization for a very short moment is acceptable

A COMMIT TRIGGER would come very handy in here.

Oh my Gosh, we do the opposite!

Chuck Jolley, July 19, 2007 - 10:44 am UTC

[quote]
...do not allow applications to directly modify the tables - have them call these well formed, correct procedures.
[/quote]

Oh my Gosh, we do the opposite!

eg. The only way to post a payment in our system is to update the "posted" column in a single staging table from 'N' to 'Y'

Logic in a "before update of posted ... for each row" trigger then creates records all over the place, or throws back an error if something doesn't pass database or programmed validation. We post them in batches, which are rolled back in their entirety if a row throws an error.

Makes no difference which of maybe 5 or 6 systems populates that row or set of rows in the staging table. One is even a pure manual data entry screen.

Also makes no difference how the posted column gets changed. They have a single window for that now, but it would be pretty easy to change to a web interface since the client does so little of the work.

We leave the code in the trigger itself instead of having the trigger call a procedure because we want there to be only one way for the posting to happen. And access to :old.* and :new.* makes some of the code simpler.

This system has worked perfectly for 8 years now surviving much user abuse. Including the time a tech hit the power button on our server by mistake while we were posting a $40,000,000 batch of electronically loaded payments. The system came back up in perfect shape as if the post button had never been pressed.

We use triggers for posting, rollup totals, just all kinds of stuff. Hundreds of them.
They have always seemed like a good way to break the logic up into bite sized chunks.



Tom Kyte
July 19, 2007 - 12:14 pm UTC

you have just invented a wacky weird way to....

have a transactional API in triggers.

Fundamentally "the same" as doing it - more efficiently, modularly, correctly - in proper packaged procedures, but not as efficient, as easy to maintain, or as easy to verify correct.

Glad it worked for you, I would never recommend it as a sound approach.



Rahul, July 19, 2007 - 10:51 am UTC

This reminds me of another example of a vendor trying to provide a database-independent solution. We have products like these where the performance is horrible and maintenance is a nightmare.


Ummm...

A reader, July 19, 2007 - 3:13 pm UTC

[quote]
Fundamentally "the same" as doing it - more efficiently, modularly, correctly - in proper packaged procedures, but not as efficient, as easy to maintain, or as easy to verify correct.
[/quote]

1, Is procedure code more efficient than trigger code? Aren't they compiled to the same thing?

2, What could possibly be more modular than a row level trigger in a database? ;)

3, Why would trigger code be more difficult to maintain than procedure code? (I could have the trigger call one or more procedures. Would that make a difference?)

4, The code is correct irrespective of where it is or it doesn't belong in the database. Not so?

I'm always open to change, so I'm really asking, not arguing.




Tom Kyte
July 19, 2007 - 7:20 pm UTC

1) the procedure is a compiled stored object; it caches SQL cursors much more efficiently. A trigger causes additional redo and other work to be performed.

Stored compiled units (procedures, packages) are more efficient than triggers.

2) massive triggers and all of their unimaginable dependencies lead to run-on spaghetti code. Code fires as a "side effect" of something else; there are no direct calls to the code; it is hard to understand.

A package allows you to write structured, modular, easy-to-maintain, easy-to-understand code. Triggers do not.

3) it is the magic side effects. So, you read a piece of code and see an insert - now you have to query the dictionary to see what magic bit of code is executed there - that code does an update - and so on.

Linear code in packages is a tad bit easier to follow, to maintain, to understand, to document.

4) nope, triggers have side effects. Hey - how many times would a BEFORE UPDATE trigger fire if we did this:

update t set x = 5 where y = 10;
5 rows updated

?? Once? Twice?

Now, how about a row trigger on the same - it updated the 5 rows - how many times was your row trigger invoked? 5? 6? 7? do you know :)

It is hard to debug an insert that fires a trigger that causes an update that fires a trigger that does a delete that fires a trigger that........

Worse - it is impossible to understand - coming in fresh (I know, I always come in fresh, burned every time by a trigger)
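The "do you know?" is not rhetorical: Oracle can roll back and restart a DML statement mid-flight (write consistency), re-firing the BEFORE triggers. A sketch of how one might observe it; the restart itself has to be provoked from a second session, so that part is only described in comments.

```sql
-- a package variable survives across trigger firings within a session
create or replace package fire_count as
  g_calls number := 0;
end fire_count;
/

create or replace trigger t_bu_row
before update on t
for each row
begin
  fire_count.g_calls := fire_count.g_calls + 1;
end;
/

-- session 1:  update t set x = 5 where y = 10;  -- blocks on a locked row
-- session 2:  updates one of those rows (changing y) and commits
-- session 1's statement restarts; afterwards fire_count.g_calls
-- can be larger than the reported "5 rows updated"
```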

probably correct

Sokrates, July 20, 2007 - 1:53 am UTC

you are most likely correct in what you are saying about
the side effects of triggers and the difficulty to understand them.

But that's the way we were educated by Oracle.
(At least me, for example.)

I learnt PL/SQL with Oracle Forms 3.0 and it really took me some months to learn that tool - it was very hard to learn which (Forms) trigger to use to achieve a goal.
But I learnt it and became a great Forms programmer.

Then came Oracle 7 and you had triggers in the database.
Great! I enjoyed them, learnt to love them, learnt to know their caveats!

And now: it is hard to switch back to "no-trigger programming",
even when I see your arguments (and can't really say I don't see them).

I'm sure there are some people out there sharing my experiences


Triggers

Will, July 20, 2007 - 3:40 am UTC

Sokrates,

you seem to think that triggers in Forms are the same as triggers in the database, but they are very different. Forms triggers are more akin to events in an application.

I'd agree with Tom: you get burned by triggers sooner or later. If you set up the sort of trigger-driven logic that has been discussed, you will find that you are committed to maintaining and updating it in ever more complex ways. And that's before you start hitting mutating tables, or concurrency and locking problems. Up-front PL/SQL stored procedures are the way to go, even if it means lots of rewriting.
Tom Kyte
July 20, 2007 - 8:27 am UTC

Well, I sort of agree with Sokrates' analogy.

I loved forms, I was really good at it.

then, I had to understand a form I had written a couple of months earlier. With that experience, I lost my appreciation for Forms. Everything happened as a side effect of something else; you could not "print out" a form and understand it; everything was a magical side effect - there was no linear flow.

it is a gripe I have with many 'object' implementations - unless you write the class hierarchy - you likely do not have a good appreciation of the side effects - the performance implications - the external effects it has by accident.

Toon, July 20, 2007 - 11:09 am UTC

"there was no linear flow"

Kinda sounds like a java program...

:-)

I hate triggers

A reader, July 20, 2007 - 3:26 pm UTC

I am with Tom. I hate triggers. I only use them when absolutely necessary.

S Sarkar,

Why not just add an additional procedure call before you commit? That procedure can do the check you want and call raise_application_error when there is an issue.
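One possible shape of that check procedure, using the question's table names (the master_id join column is invented, and a production version would restrict the check to the rows the current transaction created rather than scanning every master):

```sql
create or replace procedure assert_masters_have_children
as
  l_orphans number;
begin
  -- count masters that have no child rows at all
  select count(*)
    into l_orphans
    from tab_m m
   where not exists ( select null
                        from tab_a a
                       where a.master_id = m.id );

  if l_orphans > 0 then
    raise_application_error( -20001,
      l_orphans || ' master row(s) have no child rows' );
  end if;
end;
/

-- the application then ends every transaction with:
--   assert_masters_have_children;
--   commit;
```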

Magnific

Andras Gabor, July 22, 2007 - 6:19 am UTC

I loved this one:

"Why not just add an additional procedure call before you commit? That procedure can do the check you want and call raise_application_error when there is an issue."

Reading through this thread, I was wondering if anyone would come up with something so simple and perfect after the whole discussion got derailed and the original question was almost absorbed by the black hole of "love-or-hate triggers".

Bravo-bravo to the one who could come back to Earth :) and answer the question elegantly.

Will this help you?

Toon, July 23, 2007 - 6:03 am UTC

As an answer to the original question.
Will this help you?

Two assumptions I've made are:
- Tables parent and child are only inserted into (i.e. no updates, no deletes).
- You want to prevent a faulty transaction from successfully committing.

Tables parent and child are your application tables (the ones that have the requirement 'a child must exist for every parent')
Table parent_with_no_child is my auxiliary table to implement your requirement.

At the bottom, the first transaction will succeed. The second one will fail with ORA-02091.


drop table child;
drop table parent;
drop table parent_with_no_child;

create table parent
(parent_id     number not null primary key
,parent_data   varchar2(20) not null)
/

create table child
(child_id     number not null primary key
,parent_id    number not null
,child_data   varchar2(20)
,foreign key (parent_id) references parent)
/

create table parent_with_no_child
(parent_id    number not null
,check(0=1) deferrable initially deferred)
/

create or replace trigger parent_air
after insert on parent
for each row
begin
  --
  insert into parent_with_no_child values(:new.parent_id);
  --
end;
/

create or replace trigger child_air
after insert on child
for each row
begin
  --
  delete from parent_with_no_child where parent_id = :new.parent_id;
  --
end;
/


Rem
Rem Test correct situation.
Rem

insert into parent values(1,'hello world');
insert into child values(100,1,'hello world');
commit;


Rem
Rem Test incorrect situation.
Rem

insert into parent values(2,'hello world');
commit;


On COMMIT Triggers

Coder the Barbarian, August 29, 2007 - 2:28 am UTC

On insert, update, and delete triggers are effectively "on commit" triggers... they get fired on COMMITting a transaction... is this not so?

But don't use triggers... apart from the performance issues, when they go wrong you may be looking for a new job!!
Tom Kyte
September 04, 2007 - 3:46 pm UTC

this is not so

insert, update, delete triggers are fired either

a) immediately before the DML
b) for each row, before the row is modified during the DML processing
c) for each row, after the row is modified during the DML processing
d) immediately after the DML completes

so, if you:

insert into t1...
insert into t2...
commit;

and t1 and t2 had each kind of trigger (before, before for each row, after for each row, after) - then we would have had the trigger fire well before the commit ever happened.

Toon, August 31, 2007 - 6:55 am UTC

Database triggers (insert, update, and delete) do *not* fire on commit. They fire on execution of the DML statement.
The code above employs a trick (involving a deferrable check constraint) to get the original issue from this post done.

I oppose the black/white statement that triggers are a no-no.

Yes, I know it's complex. But triggers are a perfect vehicle for validating data integrity constraints (the ones we cannot deal with declaratively).

(Maintaining data integrity has never made me lose a job)

I agree with Toon

Zlatko Sirotic, September 05, 2007 - 12:31 pm UTC

I agree with Toon.

To quote C.J.Date (Database In Depth, O'Reilly 2005, page 130-131):

"The emphasis in the commercial world always seems to be on performance, performance, performance;
other objectives, such as ease of use, data independence, and in particular integrity,
seem so often to be sacrificed to, or at best to take a back seat to, that overriding goal.
But what's the point of a system performing well if we can't be sure the information we're getting
from it is correct? Frankly, I don't care how fast a system runs if I don't feel
I can trust it to give me the right answers to my questions."


Of course, declarative solutions are always preferred over procedural ones.
To quote C.J.Date again (page 130):

"Of course, declarative solutions are always preferred over procedural ones, if they're available.
Also, declarative constraints in particular open the door to the possibility of doing semantic
optimization, which triggers don't."


To quote Toon Koppelaars (Applied Mathematics for Database Professionals, Apress 2007, page 242):

"Most notably, the CREATE ASSERTION command - which has been in the official standard of SQL
since 1992 - is not supported in most SQL DBMSes (including Oracle's)."


And to quote Thomas Kyte ("Expert Oracle Database Architecture", Apress 2005, page 47) :

"We build applications that use data, and that data will be used by many applications over time.
It is not about the application - it is about the data."


Regards,
Zlatko Sirotic

Tom Kyte
September 05, 2007 - 5:36 pm UTC

I agree that data integrity is key, paramount, #1, the end.


I disagree that triggers should be used frequently or by most people EVER (they just don't know enough about transactions, concurrency, race conditions, and how the database works).

Eg: they are not capable of using the tool; they would be better off ignoring its existence and looking for more straightforward ways to do things.

The problem I have with triggers is that most/much of the time they are DONE WRONG, and people never realize it - and data integrity is compromised BECAUSE THEY USED TRIGGERS to enforce constraints (and didn't know what they needed to do to do it properly).

so, triggers - path of LAST RESORT and only to be used by people that understand how this thing called the database actually WORKS.

triggers

Zlatko Sirotic, September 08, 2007 - 5:26 am UTC

> so, triggers - path of LAST RESORT and only to be used by people that understand how this thing called the database actually WORKS.

I agree.

And, I must be more precise:
"Applied Mathematics for Database Professionals" - Lex de Haan and Toon Koppelaars


Regards,
Zlatko Sirotic

Tom Kyte
September 12, 2007 - 10:24 am UTC

but the problem is ultimately - have you ever met a developer that felt they didn't know/understand something :)

that is why I just say "NO", especially since I cannot evaluate someone over the internet like that...

Original question

Mhaelt, September 12, 2007 - 11:58 am UTC

About the original question:

Create an ON COMMIT aggregate materialized view that joins the parent and child tables, groups by the parent ID, and does a COUNT() of the child id(s). Put a check constraint on the COUNT() column. There.
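A hedged sketch of that idea, reusing the parent/child tables from Toon's example above. The fast-refresh-on-commit prerequisites are notoriously picky, so verify with DBMS_MVIEW.EXPLAIN_MVIEW before relying on this:

```sql
-- materialized view logs are required for fast refresh
create materialized view log on parent
  with rowid, sequence ( parent_id ) including new values;
create materialized view log on child
  with rowid, sequence ( parent_id ) including new values;

create materialized view parent_child_mv
  refresh fast on commit
as
select p.parent_id,
       count(*)           cnt_all,
       count(c.parent_id) cnt_children
  from parent p, child c
 where p.parent_id = c.parent_id (+)
 group by p.parent_id
/

-- evaluated during the on-commit refresh: a parent committed
-- without children fails the transaction
alter table parent_child_mv
  add constraint parent_needs_child check ( cnt_children > 0 )
/
```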

triggers

amit, January 12, 2009 - 1:26 pm UTC

I have a previously defined AFTER trigger. I want to open its code - how can I open a previously defined trigger???
Tom Kyte
January 12, 2009 - 9:20 pm UTC

good question.

what does it mean to "open a trigger"

sounds like you might want to edit the current trigger that is in the dictionary? Go to otn.oracle.com and download SQL Developer - it'll do that.

When I like triggers

Galen Boyer, January 15, 2009 - 10:49 am UTC

The only time I like triggers is for firing versioning/history records into another table. The reason I like them is that that operation is not part of the "logic" anyway; and more to the point, those triggers can be generated - either by writing PL/SQL to generate the trigger code, or from your design tool - so you get your history/versioning for a pretty cheap price as far as coding goes. It's the only time I think about triggers. To me, if the trigger can be stupid, and if you are consistent in that you do the same thing for all tables, then it's probably okay.

But I have also been bitten by triggers that actually make decisions and take different paths based on what's in the before/after slots. Ouch! Hate those things!

About Trigger

wajid, January 28, 2009 - 2:50 am UTC

hi
i want to Ask that when create_trigger will fire.
plz give me the answer

Tom Kyte
January 28, 2009 - 3:28 pm UTC

plz? German postal codes? I'm totally lost as to what you are asking.

If you mean to ask 'when will a database BEFORE CREATE trigger fire' - it will fire before you create something (before the statement that begins with 'CREATE' is executed by the database)

if you mean to ask 'when will a database AFTER CREATE trigger fire'- just change before to after in the above

If you are in forms - and are asking about a forms trigger - please use otn.oracle.com -> forms forums, I haven't used a client server development tool like forms since 1995, I don't know.

Trigger

wajid, January 28, 2009 - 2:55 am UTC

hi
I want to ask whether the trigger when_create_record will fire at save time, or give me the specific solution for it.

Thanks
Tom Kyte
January 28, 2009 - 3:37 pm UTC

apparently you do mean forms....

please use the forums on otn.oracle.com for forms...

On commit would be useful

Marius, September 09, 2009 - 10:14 pm UTC

Hello Tom,

I am trying to create a generic table that stores ordered lists (lists managed by users), and for maintaining the positions of the items I was thinking about using triggers; see below.
Unfortunately there is a problem I can't really solve with triggers. If I try to insert two entries with the same position from different sessions at the same time, it fails, because the after-statement trigger in one session doesn't see the entry the other session put in.

Session1: INSERT INTO list(id, name, p) VALUES(1, 'name1_1', 1);
Session2: INSERT INTO list(id, name, p) VALUES(1, 'name1_2', 1);
Session1:
COMMIT;
Commit complete
Session2:
COMMIT;
ORA-02091: transaction rolled back
ORA-00001: unique constraint (LIST_UNQ) violated

Any suggestions how to make it right?

Thank you,
Marius

CREATE TABLE list
(
 id   INTEGER NOT NULL,
 name VARCHAR2(30),
 p    INTEGER NOT NULL,
 seq  INTEGER NOT NULL PRIMARY KEY
);
ALTER TABLE list ADD CONSTRAINT list_unq UNIQUE(id, p) INITIALLY DEFERRED DEFERRABLE;

CREATE SEQUENCE listSeq_;
CREATE SEQUENCE listNewPosOffset_ MINVALUE 1 MAXVALUE 99999 CYCLE;

CREATE OR REPLACE PACKAGE pkg
AS
 TYPE intArray IS TABLE OF INTEGER INDEX BY BINARY_INTEGER;

 aChangedIds intArray;
 aEmpty  intArray;
 upd BOOLEAN DEFAULT FALSE;
END;
/

CREATE OR REPLACE TRIGGER list_before_iud
 BEFORE INSERT OR DELETE OR UPDATE OF p ON list
BEGIN
 IF NOT pkg.upd THEN
  pkg.aChangedIds := pkg.aEmpty;
 END IF;
END;
/

CREATE OR REPLACE TRIGGER list_before_iudfer
 BEFORE INSERT OR UPDATE OF p ON list FOR EACH ROW
DECLARE
 posOffset INTEGER;
 invalidPos EXCEPTION;
BEGIN
 IF :NEW.p >= 100000 THEN
  RAISE invalidPos;
 END IF;

 IF inserting THEN
    SELECT listSeq_.nextVal INTO :NEW.seq FROM dual;
   END IF;

 IF NOT pkg.upd THEN
  IF inserting OR (updating AND (:OLD.p != :NEW.p))
  THEN
   IF :NEW.p = 1 THEN
    :NEW.p := 0;
   END IF;

   SELECT listNewPosOffset_.nextVal INTO posOffset FROM dual;

   IF updating AND (:NEW.p > :OLD.p) THEN
    :NEW.p := (:NEW.p * 100000) + posOffset;
   ELSE
    :NEW.p := (:NEW.p * 100000) - posOffset;
   END IF;

  END IF;
 END IF;
END;
/

CREATE OR REPLACE TRIGGER list_after_iudfer
 AFTER INSERT OR DELETE OR UPDATE OF p ON list FOR EACH ROW
BEGIN
 IF NOT pkg.upd THEN
  IF (inserting OR (updating AND :NEW.p != :OLD.p)) THEN
   pkg.aChangedIds(pkg.aChangedIds.count + 1) := :NEW.ID;
  ELSIF deleting THEN
   pkg.aChangedIds(pkg.aChangedIds.count + 1) := :OLD.ID;
  END IF;
 END IF;
END;
/

CREATE OR REPLACE TRIGGER list_after_iud
 AFTER INSERT OR DELETE OR UPDATE OF p ON list
DECLARE
 aProcessedIds pkg.intArray;
 curId  INTEGER;
BEGIN
 IF NOT pkg.upd THEN
  pkg.upd := true;
  FOR i IN 1 .. pkg.aChangedIds.count LOOP
   curId := pkg.aChangedIds(i);
   IF NOT aProcessedIds.exists(curId) THEN
    FOR c IN (SELECT seq, p, rownum AS rn FROM (SELECT seq, p FROM list WHERE id = pkg.aChangedIds(i) ORDER BY (CASE WHEN p < 100000 THEN p * 100000 ELSE p END)))
    LOOP
     IF c.p != c.rn THEN
      UPDATE list SET p = c.rn WHERE seq = c.seq;
     END IF;
    END LOOP;
    aProcessedIds(curid) := 0;
   END IF;
  END LOOP;
  pkg.upd := false;
 END IF;

 EXCEPTION
  WHEN OTHERS THEN
   pkg.aChangedIds := pkg.aEmpty;
   pkg.upd := false;
   RAISE;
END;
/

INSERT INTO list(id, name, p) VALUES(1, 'name3', 1);
INSERT INTO list(id, name, p) VALUES(1, 'name2', 1);
INSERT INTO list(id, name, p) VALUES(1, 'name1', 1);
INSERT INTO list(id, name, p) VALUES(1, 'name0', 100);
UPDATE list SET p = 1 WHERE id = 1 AND name = 'name0';
DELETE FROM list WHERE id = 1 AND name = 'name0';
COMMIT;


Tom Kyte
September 14, 2009 - 10:28 am UTC

... If I try to insert two entries with the same position from different sessions at the same time, i....

but if you ask me, I cannot even fathom allowing that situation to occur! I'm reminded of Abbott and Costello's "Who's on First", except this would be "who is first, who wins, and why"

...
Any suggestions how to make it right?

....


do not use triggers

understand that serialization will be necessary.

read about lost updates (you have a case of them) and read why serialization will be necessary and why the application would have to let the end user know "hey, you cannot do that, would you like to do this instead" and retry the transaction

Serializing

Marius, September 14, 2009 - 1:18 pm UTC

Right.
The only thing that I can think of to do it right is
locking the table in exclusive mode, and that is a problem
when we're dealing with lots of lists - say 10k or maybe
even more.

Is there any way to lock a table for a column value (list
id)? I can't think of any, so what remains is locking the
whole table.

p = position

List append code without triggers:
LOCK TABLE list IN EXCLUSIVE MODE;
INSERT INTO list(id, name, p, seq)
VALUES
(listId,
name_,
(SELECT NVL(MAX(p), 0) + 1 FROM list WHERE id = listId),
listSeq_.nextVal);

Insert at pos p_:
LOCK TABLE list IN EXCLUSIVE MODE;
-- update only when the requested pos not available
UPDATE list
SET p = p + 1
WHERE id = listId
AND p >= p_
AND EXISTS (SELECT null FROM list WHERE id = listId AND p = p_);
INSERT INTO list(id, name, p, seq) VALUES(listId, name_, p_, listSeq_.nextVal);

Delete pos by seq:
LOCK TABLE list IN EXCLUSIVE MODE;
SELECT p, id INTO p_, listId_ FROM list WHERE seq = seq_;
DELETE FROM list WHERE seq = seq_;
UPDATE list
SET p = p - 1
WHERE id = listId_
AND p > p_;

Of course, the table will be locked when beginning the work
on the table, not for every single operation.

Now should it be possible to do something like:
LOCK TABLE list WHERE id = listId IN EXCLUSIVE MODE;
that would be a whole lot better, it will make it possible
to work on all other lists.
Is there any way to simulate something like this?
Tom Kyte
September 14, 2009 - 2:03 pm UTC

you need a parent child table here

you have a table of lists
then you have a table of list values


the code would select for update on the table of lists to "lock a list"

and then it can work on all of the child records for that parent it just locked.

encapsulate your logic in a stored procedure
revoke insert/update/delete from all on the list values table
grant execute on the procedure

now, they will serialize on the parent record only.
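Tom's recipe, sketched against Marius's list table and listSeq_ sequence (the lists parent table and the procedure itself are invented for the example):

```sql
-- one row per list: this is the lock handle for each list
create table lists
( list_id number not null primary key );

create or replace procedure append_to_list
( p_list_id in number,
  p_name    in varchar2 )
as
  l_lock_id number;
begin
  -- serialize per list: only sessions appending to the SAME list queue here
  select list_id into l_lock_id
    from lists
   where list_id = p_list_id
     for update;

  -- safe to compute the next position now; no other session can interleave
  insert into list ( id, name, p, seq )
  values ( p_list_id, p_name,
           ( select nvl( max(p), 0 ) + 1 from list where id = p_list_id ),
           listSeq_.nextval );
end;
/
```

Commit or rollback stays with the caller, and the row lock on lists is released with it - so two sessions appending to different lists never block each other.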

Marius, September 14, 2009 - 5:28 pm UTC

Putting the functionality on stored procedures is of course
the better approach.

In this case, the lists table is something that will be
used internally(inserting/deleting from this table will be
done completely from these stored procedures) for locking,
with the added benefit of normalizing(a list id could be
formed by multiple columns).

I'm not going to use this as an excuse, it would be
lame. I'll just state it as a fact: this (web-based)
application must run on multiple RDBMSs (fortunately, not on
MySQL, which is a very particular case of an RDBMS).

Anyway, this seems to be the only simple way to do it right in
Oracle, so it will be done this way where possible (FOR
UPDATE, REPEATABLE READ, whatever else) and where not
possible, LOCK TABLE and that's it.

Thanks for your help Tom, as always, a pleasure.
Tom Kyte
September 14, 2009 - 5:55 pm UTC

what I described is actually 100% database independent

You have a problem with your data model.

You need a parent table

That parent table will - in every (relational) database I've ever seen - solve the problem (when coupled with a stored procedure to ensure everyone gains access to this resource correctly, consistently)


Marius, September 14, 2009 - 7:34 pm UTC

Of course, every RDBMS that supports row level locking is able to do this.
Yes, the problem is that there is no list table, but we don't really need one, except for solving this problem.

Just to give you an example of where we use such "lists"...
We have for instance a many to many relation table with a structure similar to this:
o1_id    INT
language CHAR(2)
o2_id    INT
o2_type  INT
p INT
attr_1
attr_2
...
attr_[n]


In this case the id of the list is (o1_id, language).
o1_id is an id of specified object type.
(o2_id, o2_type) represent an object of o2_type type(each object type has its own table(s)).
attr* are various relation attributes.
Think of o1 as being a container that contains lots of objects of various types, each language having different objects.

There really isn't any need for a separate "lists table" in this case, except of course for properly locking.

Another example of list, think about a tree of objects, each node being of arbitrary type, all children of any node being ordered manually(a "list" contains all children of a given node).
Tom Kyte
September 15, 2009 - 9:45 am UTC

... Of course, every RDBMS that supports row level locking is able to do this. ...

nope, every RDBMS - regardless of row, page or table level locking - is able to do this. The degree of concurrency will be affected, but the ALGORITHM, the APPROACH, is not only sound, it is the correct way to do this.


Yes, the problem is that there is no list table, but we don't really need one, except for solving this problem.

do you realize how circular that is? ouch - my head spins.

"we don't really need one, except we really need one, but we don't really need one, except we really sort of need one"

There really isn't any need for a separate "lists table" in this case, except of course for properly locking.


apparently, you need one.


If you want an Oracle specific implementation - dbms_lock, allocate a named lock, the name of the lock is the name of the list. Done.

But that won't work anywhere else.
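
A sketch of that Oracle-specific route (procedure and lock names are mine; DBMS_LOCK needs an explicit EXECUTE grant from SYS):

```sql
CREATE OR REPLACE PROCEDURE lock_list(p_list_name IN VARCHAR2)
AS
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- map the list name to a lock handle (allocated on first use)
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname   => 'LIST#' || p_list_name,
                            lockhandle => l_handle);

  -- take the named lock exclusively; have it released at commit
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => DBMS_LOCK.MAXWAIT,
                                release_on_commit => TRUE);

  IF l_status NOT IN (0, 4) THEN   -- 0 = success, 4 = we already own it
    RAISE_APPLICATION_ERROR(-20001,
      'could not lock list ' || p_list_name || ', status=' || l_status);
  END IF;
END;
/
```

Every session working on a given list calls lock_list first; two sessions on different lists never block each other, two on the same list serialize until the holder commits.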

Generic Data Model

A reader, September 15, 2009 - 8:12 am UTC

This sounds like a generic data model, which is a bad idea, in my opinion ;)
Tom Kyte
September 15, 2009 - 10:05 am UTC

it is an incorrect data model, yes - we are working on that...

Incorrect data model

Marius, September 17, 2009 - 1:13 am UTC

> apparently, you need one.
Yes, we do need this lists table.

> If you want an Oracle specific implementation -
> dbms_lock, allocate a named lock, the name of the lock is
> the name of the list. Done.
Gotta do some testing to see whether it's worth using this
Oracle-specific method.

> it is an incorrect data model, yes - we are working
> on that...
I didn't reply till now because I wanted to sit on it a bit.
I'm not sure whether you imply more than pointing out the
missing parent/child table as in the original design.

Considering we add the lists table we have now:


lists:
id       INT PK
o1_id    INT
language CHAR(2)


list_values:
seq      INT PK
list_id  INT FK lists(id)
p        INT
o2_id    INT
o2_type  INT
attr_1
attr_2
...
attr_[n]
UNIQUE(list_id, p) deferrable initially deferred


Beside adding the lists table, I wouldn't change a thing.
Whether it sounds generic or not, we need it to work with
arbitrary object types.

The attr* columns are also needed and I highly doubt using
another table just for them would help(it would be a one to
one relationship between list_values and list_values_attr
table, what's the point?)

Is the data model as presented above still incorrect?


As for the objects tree, yes, it's generic but in the end
that is what we need (a node's type is not limited to a
single possible type). Should we implement it using the
Adjacency List Model, the children of a node can be seen as
values in an ordered list (their position is important to
us). Should we implement it as Nested Sets (optimized for
frequent insertions), that is indeed something else.
Tom Kyte
September 17, 2009 - 9:09 am UTC

I'm never a fan of any table that has this in it:

attr_1
attr_2
...
attr_[n]


that is a one to many relationship, looks like yet another missing table.

One to one vs 1-n

Marius, September 17, 2009 - 9:39 am UTC

attr_1
attr_2
...
attr_[n]


I realize now I expressed this the wrong way.

Actually those [n] attributes are in total 6 columns of different types (one char, two varchar and three numeric),
all six being used for each and every relation.

The alternatives I see:
- an attributes table containing all 6 columns + PK (a one to one relationship)
- an attribute table with 1 (varchar column, implying lots of casting), 2 (varchar + numeric)
or 3 (char + varchar + numeric) value columns + PK.
Even with 3 value columns, it's dirty, it's hard to implement constraint checks, it's just
something I wouldn't feel good about. Yes, we would have a 1-n relationship, but what's the point?
- multiple attribute tables. Pure horror I'd say.
Tom Kyte
September 17, 2009 - 1:08 pm UTC

I didn't follow that, not sure what you mean.

when I see columns named:

c1, c2, c3, c4, .....


I know we are missing a table, it is the child table that would just have the column C in it. And a row for each of c1, c2, c3, c4, ....

Marius

A reader, September 17, 2009 - 5:13 pm UTC

Their name is not attr_1, attr_2 .. attr_6.
I used these names just to denote they are some sort of relation attributes/properties. All have check constraints on them.

Their type is not the same. One of them is char, two are varchar and the rest(three) are numeric.

There are no benefits I can see from using a separate table in this case.
Tom Kyte
September 17, 2009 - 6:20 pm UTC

Ok, I'll have to believe you - since we cannot actually see the schema :)

I can only comment on what I actually *see*

Trigger on commit is useful and Oracle bug

Josh, October 03, 2009 - 12:19 am UTC

here is my case

DML actions on table A trigger DML actions on table B. For example, updates on A cause inserts on B. With a row-level trigger, a bulk operation on A results in multiple singleton operations on B. For efficiency, I devised a setup in which a before row-level trigger calls a PL/SQL package and stores the data going to B in associative arrays, and an after-statement trigger bulk-operates the arrays into B. This works for a bulk bind insert, where the after-statement trigger is called only once, but for a bulk bind update the after-statement trigger behaves like a row-level trigger.
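
The buffering setup described above might look like this (the shapes of tables A and B - columns id and payload - are assumed, not taken from the post):

```sql
-- package-level buffer shared by the two triggers on A
CREATE OR REPLACE PACKAGE a_to_b_pkg AS
  TYPE t_rows IS TABLE OF b%ROWTYPE INDEX BY PLS_INTEGER;
  g_rows t_rows;
END a_to_b_pkg;
/

-- row-level trigger: only buffers, does no DML itself
CREATE OR REPLACE TRIGGER a_buffer_trg
BEFORE UPDATE ON a
FOR EACH ROW
DECLARE
  i PLS_INTEGER := a_to_b_pkg.g_rows.COUNT + 1;
BEGIN
  a_to_b_pkg.g_rows(i).id      := :NEW.id;
  a_to_b_pkg.g_rows(i).payload := :NEW.payload;
END;
/

-- statement-level trigger: flushes the buffer in one bulk insert
CREATE OR REPLACE TRIGGER a_flush_trg
AFTER UPDATE ON a
BEGIN
  FORALL i IN 1 .. a_to_b_pkg.g_rows.COUNT
    INSERT INTO b VALUES a_to_b_pkg.g_rows(i);
  a_to_b_pkg.g_rows.DELETE;   -- reset for the next statement
END;
/
```

On 11g and later the same idea is expressed more safely as a single compound trigger, which also avoids stale package state being left behind when a statement fails mid-way.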

Oracle bug: in regard to statement level trigger, bulk bind update behaves differently (and wrongly) than bulk bind insert.
Tom Kyte
October 07, 2009 - 3:56 pm UTC

Oracle bug: in regard to statement level trigger, bulk bind update behaves
differently (and wrongly) than bulk bind insert.


give us a for example to see this supposed bug.

how to insert bulk of recrods in parent and child tables using procedure

sasi, November 18, 2010 - 2:57 am UTC

Could you please reply. I have a transaction table parent A and child B. I want to insert one record into the parent table and more than 10 records corresponding to the parent record into the child table. Is it possible to insert them through a procedure at one time? If possible, then how?
Tom Kyte
November 18, 2010 - 3:39 am UTC

sure, the procedure would just have two inserts in it - one for parent and one for child - very straightforward.

It is what stored procedures do - encapsulate a series of DML statements, punctuated by procedural code if necessary in a single call.
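
A minimal sketch of such a procedure (table shapes assumed - parent A with an id, child B with parent_id and name; the child rows arrive as a collection):

```sql
CREATE OR REPLACE PACKAGE parent_child_api AS
  TYPE t_child_names IS TABLE OF VARCHAR2(100);
  PROCEDURE add_parent_with_children(p_parent_id IN NUMBER,
                                     p_children  IN t_child_names);
END parent_child_api;
/

CREATE OR REPLACE PACKAGE BODY parent_child_api AS
  PROCEDURE add_parent_with_children(p_parent_id IN NUMBER,
                                     p_children  IN t_child_names)
  AS
  BEGIN
    -- insert the parent first so the FK on the child table is satisfied
    INSERT INTO a (id) VALUES (p_parent_id);

    -- one bulk insert for all child rows of this parent
    FORALL i IN 1 .. p_children.COUNT
      INSERT INTO b (parent_id, name)
      VALUES (p_parent_id, p_children(i));
  END;
END parent_child_api;
/
```

The caller invokes the procedure once and commits once - the parent and its ten (or ten thousand) children succeed or fail as a unit.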

Example of how not to code using triggers

Vikram, April 13, 2011 - 11:54 pm UTC

Hello Tom,

Have just started work on an existing application that uses a versioning concept to maintain data, i.e. an UPDATE / DELETE operation actually creates a new version of the record being updated/deleted. Therefore every entity has 2 tables xxxx_I (Identity) and xxxx_V (version).
There are many business constraints that need to be applied ON COMMIT, as is being discussed in this thread.

To make matters worse, the way the GUI / middle tier accesses (READ and WRITE) this data is using views and INSTEAD OF triggers on views.

Now we have hit an issue that some business constraint checks need to be done ON COMMIT. In one of your other threads I have seen that you have discussed how this could possibly be done using an MV with REFRESH ON COMMIT and having a trigger on the MV.

You have clearly mentioned that it is not recommended, and I too agree that in our application context this is making our lives hell.

For general understanding I have done some tests and wanted to share the results:

CREATE TABLE "TEST1M"
   (
      "G_DATE" DATE,
      "COUNTRY" varchar2(30 byte) primary key
   );
   
CREATE TABLE "TEST1C"
   (
      "COUNTRY" varchar2(30 byte),
      "CITY"    varchar2(30 byte),
      constraint "TFK1" foreign key ("COUNTRY") references test1m(country) enable
   );   

create materialized view test1md_mv
refresh on commit
as
select g_date,test1m.country,city 
from test1m, test1c
where test1m.country=test1c.country;
 
 create or replace trigger test1md_mv_trigger
 after insert or update or delete
 on test1md_mv
 REFERENCING NEW AS NEW OLD AS OLD
for each row
 begin
         if ( inserting )
         then
                 dbms_output.put_line( 'Hey, I fired!! for insert ' ||:new.g_date);
                 dbms_output.put_line( 'Hey, I fired!! for insert ' ||:new.city);
         end if;
         if ( deleting )
         then
                 dbms_output.put_line( 'Hey, I fired!! for deleting' );
         end if;
         if ( updating )
         then
                 dbms_output.put_line( 'Hey, I fired!! for updating' );
         end if;
 end;
 /
 
SQL> insert into test1m(g_date,country) values (to_date('13-APR-11','DD-MON-RR')
,'UNITED_STATES');

1 row created.

SQL> insert into test1c(country,city) values ('UNITED_STATES','NEW YORK');

1 row created.

SQL> insert into test1c(country,city) values ('UNITED_STATES','CHICAGO');

1 row created.

SQL> insert into test1c(country,city) values ('UNITED_STATES','TEXAS');

1 row created.

SQL> commit;
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert NEW YORK
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert CHICAGO
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert TEXAS

Commit complete.

SQL> UPDATE test1c set country = 'USA' where country = 'UNITED STATES';

0 rows updated.

SQL> UPDATE test1c set country = 'USA' where country = 'UNITED_STATES';
UPDATE test1c set country = 'USA' where country = 'UNITED_STATES'
*
ERROR at line 1:
ORA-02291: integrity constraint (HECR.TFK1) violated - parent key not found


SQL> UPDATE test1m set country = 'USA' where country = 'UNITED_STATES';
UPDATE test1m set country = 'USA' where country = 'UNITED_STATES'
*
ERROR at line 1:
ORA-02292: integrity constraint (HECR.TFK1) violated - child record found


SQL> delete from test1c where country = 'UNITED_STATES';

3 rows deleted.

SQL> commit;
Hey, I fired!! for deleting
Hey, I fired!! for deleting
Hey, I fired!! for deleting

Commit complete.

SQL> UPDATE test1m set country = 'USA' where country = 'UNITED_STATES';

1 row updated.

SQL> commit;

Commit complete.

SQL> insert into test1c(country,city) values ('USA','NEW YORK');

1 row created.

SQL> insert into test1c(country,city) values ('USA','CHICAGO');

1 row created.

SQL> commit;
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert NEW YORK
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert CHICAGO

Commit complete.

SQL> UPDATE TEST1C set city = 'XXXX' where city = 'CHICAGO';

1 row updated.

SQL> commit;
Hey, I fired!! for deleting
Hey, I fired!! for deleting
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert NEW YORK
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert XXXX

Commit complete.

SQL> insert into test1c(country,city) values ('USA','NEWCITY');

1 row created.

SQL> update test1m set country = country where country = 'USA';

1 row updated.

SQL> commit;
Hey, I fired!! for deleting
Hey, I fired!! for deleting
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert NEWCITY
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert NEW YORK
Hey, I fired!! for insert 13-APR-11
Hey, I fired!! for insert XXXX

Commit complete.

--********************************************************************************************
--Conclusion: Results are confusing
--            1/ INSERT seems to work ok.
--            2/ DELETE seems to work ok.
--            3/ UPDATE on the master did not fire the trigger.
--            4/ In the Bi-Temp implementation every UPDATE is executed as INSERT (of data into _V) and
--               UPDATE of TCNT in _I.
--            5/ UPDATE on the child table shows a series of DELETE - INSERT for other non-affected records as well.
--********************************************************************************************


As the application is already 7 months old and I don't see any way to make the senior designers agree to an API-based approach, is there something you can recommend?

BTW, the recommended design that is being discussed is to have some temporary table and then try to replicate what REFRESH ON COMMIT in an MV does. Haven't had a full view of that yet.

Is it time to look for a new job?
Tom Kyte
April 14, 2011 - 9:42 am UTC

Is it time to look for a new job?


laughed at that, but - it could be.

Don't like complex triggers doing magic.

The thought of instead of triggers on views - that gets really scary now.

Would deferrable initially deferred constraints apply? They are checked upon commit.
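
For example, a deferred FK is not evaluated until commit, so out-of-order changes within one transaction are fine (table names are illustrative):

```sql
CREATE TABLE dept (deptno NUMBER PRIMARY KEY);

CREATE TABLE emp  (empno  NUMBER PRIMARY KEY,
                   deptno NUMBER
                          CONSTRAINT emp_dept_fk REFERENCES dept(deptno)
                          DEFERRABLE INITIALLY DEFERRED);

INSERT INTO emp  VALUES (1, 10);  -- parent doesn't exist yet: allowed for now
INSERT INTO dept VALUES (10);     -- parent arrives later in the same transaction
COMMIT;                           -- constraint checked here: succeeds

INSERT INTO emp VALUES (2, 20);   -- no parent row is ever created
COMMIT;                           -- fails here: ORA-02091 transaction rolled back,
                                  -- citing ORA-02291 parent key not found
```

The same DEFERRABLE INITIALLY DEFERRED clause works on unique, primary key and check constraints, which covers many "must be consistent at commit" rules without any trigger.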

Martin Rose, April 15, 2011 - 9:01 am UTC

> i.e. and UPDATE / DELETE operation actualy create's a new version of the record being updated/deleted.

Can't you use flashback ?
Tom Kyte
April 15, 2011 - 11:15 am UTC

you could use flashback data archive in 11g - but not just flashback query, flashback query doesn't go back in time "infinitely"

triggers

A reader, April 15, 2011 - 9:25 am UTC


Override the COMMIT in Java

Vikram Rathour, April 27, 2011 - 10:42 pm UTC

Hi Tom,

In continuation of the question I had asked above, wanted your view on the following:

What if we override the COMMIT in Java (my project uses JPA ORM)? i.e. when the Java code does a connection.commit(), it internally fires a call to a DB proc named custom_commit. This DB proc can then have stubs to help programmers write logic that they want executed before committing.

I am aware that we will have to have a mechanism to know which program/page/user has fired the commit, but I was thinking of using a GTT there.

So, in the framework that I have, the UI fires an INSERT on a view. The instead-of trigger on the view fires the INSERT on a base table. At this point we also insert a record into the GTT to identify which identity fired the commit.

Sample GTT structure:

CREATE GLOBAL TEMPORARY TABLE ACCESS_OP_CONS_TEMP2
(
  ENTITY_NAME    VARCHAR2(30),
  MODIF_ACTION VARCHAR2(10)
)
ON COMMIT DELETE ROWS
NOCACHE

In the DB proc we can query this table and take action accordingly.

I know it's not a neat solution, but given that I cannot go back and change the complete architecture, and also that I would want to give it my best before I really start thinking of a new job ;) ... does this look better than using an MV?
Tom Kyte
April 28, 2011 - 7:34 am UTC

I cannot validate this approach. I don't like it either.

Who is to say they (the programmers) utilize connection.commit() all of the time? You can easily commit without calling that anytime you want.


The problem here is - we have NO IDEA what you are trying to do, what problem you are trying to solve. All you wrote is:

Now we have hit an issue that some bussiness constraint checks need to be done ON COMMIT.

You never answered me whether a simple deferrable constraint wouldn't apply. Give us an example of the type of business constraint you are trying to satisfy.

ON-COMMIT Lifecycle check scenario

Vikram Rathour, April 28, 2011 - 11:49 pm UTC

Here is an example of the scenario:

As mentioned above, every entity is stored as a combination of xxx_I and xxx_V tables. Each entity in the system has a lifecycle. So the table structure is like

EMP_I
ENTITY_NAME,
ENTITY_DESC,
DEPT_PK,
ENTITY_PK

EMP_V
ENTITY_MASTER_PK,
VERSION_ID (sequence generated PK along with the ENTITY_MASTER_PK column),
IDENTITY_START_DATE,
IDENTITY_TERMINATION_DATE,
VERSION_VALID_FROM

DEPT_I
DEPT_NO,
DNAME,
DEPT_PK

DEPT_V
DEPT_PK,
VERSION_ID (sequence generated PK along with the ENTITY_MASTER_PK column),
IDENTITY_START_DATE,
IDENTITY_TERMINATION_DATE,
VERSION_VALID_FROM


Now let's assume that the EMP-DEPT relation is maintained in this table structure. So we will have EMP_I, EMP_V and DEPT_I and DEPT_V.

Some sample data

EMP_I & EMP_V
ENTITY_NAME ENTITY_DESC ENTITY_PK DEPT_PK VERSION_ID IDENTITY_START_DATE IDENTITY_TERMINATION_DATE VERSION_VALID_FROM
Abs   Employee 1 1234  999  7890  01-JAN-2011   31-12-2011     01-JAN-2011
Abs   Empno 1  1234  999  6785  01-JAN-2011   31-12-2011     05-JAN-2011 ---> New version created as name is changed from 5th Jan 2011

DEPT_I & DEPT_V
DEPT_NO DNAME DEPT_PK VERSION_ID IDENTITY_START_DATE IDENTITY_TERMINATION_DATE VERSION_VALID_FROM
10  Dep1 999  8970  01-JAN-2011   31-12-2020     01-JAN-2011
10  D10  999  7856  01-JAN-2011   31-12-2020     05-JAN-2011 ---> New version created as name is changed from 5th Jan 2011


Scenario 1
-------------
The business rule says that whenever a new employee version is created there should be a check to make sure that the lifecycle of the employee falls within the lifecycle of the department that it belongs to.

So, let's say the user is trying to create a new version like:
ENTITY_NAME ENTITY_DESC ENTITY_PK DEPT_PK VERSION_ID IDENTITY_START_DATE IDENTITY_TERMINATION_DATE VERSION_VALID_FROM
Abs   Empno 1  1234  999  34789  01-JAN-2011   31-12-2025     05-JAN-2013


At the moment the check proc is called in the INSTEAD OF trigger, right after the INSERT statement for the base table fires.
In this scenario the approach works fine and it correctly results in failure. (This one is NOT ON-COMMIT.)

Scenario 2
-------------
The user does not commit from UI after changing each record. He/She can modify n number of records and then commit.

So let's say the user creates a new DEPT version and a new EMP version:

EMP_I & EMP_V
ENTITY_NAME ENTITY_DESC ENTITY_PK DEPT_PK VERSION_ID IDENTITY_START_DATE IDENTITY_TERMINATION_DATE VERSION_VALID_FROM
Abs   Empno 1  1234  999  99761  01-JAN-2011   31-12-2025     05-JAN-2013

DEPT_I & DEPT_V
DEPT_NO DNAME DEPT_PK VERSION_ID IDENTITY_START_DATE IDENTITY_TERMINATION_DATE VERSION_VALID_FROM
10  D10  999  45823  01-JAN-2011   31-12-9999     17-FEB-2011 


In this case as the check runs straight after the EMP insert it fails, which is wrong. If we fired the check ON-COMMIT i.e. after both the DEPT and EMP inserts have happened then it would succeed.

One of the things I have tried convincing the designers of is that, as the Java programs control the INSERTs, it is best for them to make a call to the check procedure, but they do not agree, saying they do not want one more DB call.

This is where we are stuck at the moment.

Hope I have been able to explain.

Regards,
Vikram
Tom Kyte
April 29, 2011 - 8:28 am UTC

hah, do you lock these tables (probably not)... If not, you cannot really do referential integrity (RI) checking in your application (your logic is flawed even when it "appears" to work). Reads don't block writes, writes don't block reads. Doing RI in an application is almost always done wrong.

It sounds like you want a lot of magic to happen - triggers firing, doing stuff automagically in the background.

I urge you to NOT make this overly complex as you are doing now. Unnecessary complexity is evil. I was just reading this this morning:

http://neugierig.org/software/blog/2011/04/complexity.html

One of the things I have tried convincing the designers of is that, as the Java programs control the INSERTs, it is best for them to make a call to the check procedure, but they do not agree, saying they do not want one more DB call.

You are close, but still far away. What should probably be done in this case is - you revoke select, insert, update, delete from the java developers. You grant them EXECUTE on a set of transactional APIs written in PL/SQL. These transactional APIs know what to do, how to do it - correctly - in the database.
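
In DDL terms, the sketch is (the schema user app_user and package emp_txn_api are hypothetical names, not from the post):

```sql
-- the application account loses all direct access to the base tables
REVOKE SELECT, INSERT, UPDATE, DELETE ON emp_i FROM app_user;
REVOKE SELECT, INSERT, UPDATE, DELETE ON emp_v FROM app_user;

-- it can only go through the PL/SQL API, which owns the integrity rules
GRANT EXECUTE ON emp_txn_api TO app_user;
```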


Sorry, I'm not going to contribute to a bunch of fragile spaghetti code. I've been there and I'm not going there ever again...

http://www.oracle.com/technetwork/issue-archive/2008/08-sep/o58asktom-101055.html



triggers

A reader, December 22, 2011 - 11:59 pm UTC

