
Thursday, September 01, 2005

Part III Why Is a Restart Important to Us?


The first thing that pops out should be “Our trigger fired twice!” We had a one-row table with a BEFORE FOR EACH ROW trigger on it. We updated one row, yet the trigger fired two times.

Think of the potential implications of this. If you have a trigger that does anything nontransactional, this could be a fairly serious issue. For example, consider a trigger that sends an e-mail whose body is “This is what the data used to look like. It has been modified to look like this now.” If you sent the e-mail directly from the trigger, using UTL_SMTP in Oracle9i or UTL_MAIL in Oracle 10g and above, then the user would receive two e-mails, with one of them reporting an update that never actually happened.

Anything you do in a trigger that is nontransactional will be impacted by a restart. Consider the following implications:
  • Consider a trigger that maintains some PL/SQL global variables, such as the number of rows processed. When a statement that restarts rolls back, the modifications to PL/SQL variables won’t “roll back.”
  • Virtually any function that starts with UTL_ (UTL_FILE, UTL_HTTP, UTL_SMTP, and so on) should be considered susceptible to a statement restart. When the statement restarts, UTL_FILE won’t “un-write” to the file it was writing to.
  • Any trigger that is part of an autonomous transaction must be suspect. When the statement restarts and rolls back, the autonomous transaction cannot be rolled back.
All of these consequences must be handled with care: your triggers should be coded in the knowledge that they may fire more than once per row, or may fire for a row that won’t be updated by the statement after all.
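A tiny sketch of the first bullet above (the package, trigger, and table names are hypothetical): a counter kept in a PL/SQL package variable is incremented on every firing, and a restart’s rollback will not undo the extra increments.

```sql
-- Hypothetical package holding a session-level counter.
create or replace package stats_pkg
as
    g_rows_seen number := 0;
end;
/
create or replace trigger t_count
before update on t
for each row
begin
    -- Not transactional: if the UPDATE restarts, this runs again for
    -- the same row, and the rollback does not reset g_rows_seen, so
    -- the counter ends up higher than the number of rows updated.
    stats_pkg.g_rows_seen := stats_pkg.g_rows_seen + 1;
end;
/
```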

The second reason you should care about potential restarts is performance related. We have been using a single-row example, but what happens if you start a large batch update and it is restarted after processing the first 100,000 records? It will roll back the 100,000 row changes, restart in SELECT FOR UPDATE mode, and do the 100,000 row changes again after that.

You might notice, after putting in that simple audit trail trigger (the one that reads the :NEW and :OLD values), that performance is much worse than you can explain, even though nothing else has changed except the new triggers. It could be that you are restarting queries you never used to in the past. Or the addition of a tiny program that updates just a single row here and there makes a batch process that used to run in an hour suddenly run in many hours due to restarts that never used to take place.

This is not a new feature of Oracle—it has been in the database since version 4.0, when read consistency was introduced. I myself was not totally aware of how it worked until the summer of 2003 and, after I discovered what it implied, I was able to answer a lot of “How could that have happened?” questions from my own past. It has made me swear off using autonomous transactions in triggers almost entirely, and it has made me rethink the way some of my applications have been implemented. For example, I’ll never send e-mail from a trigger directly; rather, I’ll always use DBMS_JOB or the new Oracle 10g scheduler facility to send the e-mail after my transaction commits. This makes the sending of the e-mail “transactional”—that is, if the statement that caused the trigger to fire and send the e-mail is restarted, the rollback it performs will roll back the DBMS_JOB request. Most everything nontransactional that I did in triggers was modified to be done in a job after the fact, making it all transactionally consistent.
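A minimal sketch of that DBMS_JOB approach (the trigger, table, and SEND_MAIL_PROC procedure names are made up; SEND_MAIL_PROC stands in for a stored procedure that does the actual UTL_MAIL call):

```sql
create or replace trigger t_notify
after update on t
for each row
declare
    l_job number;
begin
    -- Queue the mail rather than send it. DBMS_JOB.SUBMIT is
    -- transactional: if the statement restarts and rolls back,
    -- the job submission rolls back with it and no mail goes out.
    -- The job runs only after the transaction commits.
    dbms_job.submit( job  => l_job,
                     what => 'send_mail_proc;' );
end;
/
```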

The end…

17 Comments:

Anonymous Anonymous said....

wow! just wow! i never knew that... argh... need to revisit quite a lot of code i have written

Thu Sep 01, 05:36:00 PM EDT  

Blogger Alberto Dell'Era said....

It could be that you are restarting queries you never used to in the past

(unleashing the nitpicker in me) - shouldn't "queries" be "DML statements excluding inserts" and/or "SELECT FOR UPDATE queries"?

Thu Sep 01, 06:23:00 PM EDT  

Blogger David Aldridge said....

Couple of things ...

i) Does this mechanism have a formal name?

ii) I suppose this means that there is an internal savepoint at the beginning of each query, to which Oracle will rollback in the event of a [insert formal name here]?

iii) Lastly, are there any internal counters/events maintained on [insert formal name here] that would allow us to count occurrences of it?

Thu Sep 01, 06:39:00 PM EDT  

Blogger David Aldridge said....

That was the expanded definition of "Couple", btw.

Thu Sep 01, 06:41:00 PM EDT  

Blogger Robert said....

that's it ?!

oh right the book....

;)

Thu Sep 01, 07:16:00 PM EDT  

Blogger Thomas Kyte said....


i) Does this mechanism have a formal name?


it is all part of read consistency, but I've taken to calling it "write" consistency since it differs a bit from what we are used to with a straight read only select.

ii) I suppose this means that there is an internal savepoint at the beginning of each query, to which Oracle will rollback in the event of a [insert formal name here]?


there is an internal savepoint before any and every statement in Oracle pretty much, from PLSQL to modifications - yes.

iii) Lastly, are there any internal counters/events maintained on [insert formal name here] that would allow us to count occurrences of it?


None that I am aware of. You could set trace events to monitor internal rollbacks to savepoints, but frankly, those rollbacks happen for other reasons as well, so that would be inconclusive.

Fri Sep 02, 12:38:00 AM EDT  

Anonymous Anonymous said....

Tom,
This three-part article is a gem. Can you please post it on asktom.com so we can access it for a long time?

Thanks...

Fri Sep 02, 09:40:00 AM EDT  

Blogger Michael Dinh said....

This is amazing!

As always, I enjoy reading anything and everything you publish.

Thank you for all the knowledge you share.

Fri Sep 02, 11:43:00 AM EDT  

Blogger Scott Swank said....

Here's an interaction I find interesting -- Oracle only blocks when it absolutely needs to...

--------------------------------
Session 1 Create a two-row table
and update one row
--------------------------------
SQL> CREATE TABLE junk (id number primary key, value number);

Table created.

SQL>
SQL> INSERT INTO junk VALUES (1,1);

1 row created.

SQL> INSERT INTO junk VALUES (2,1);

1 row created.

SQL>
SQL> COMMIT;

Commit complete.

SQL>
SQL> SELECT * FROM junk;

        ID      VALUE
---------- ----------
         1          1
         2          1

SQL> update junk set value = value + 1
2 where id = 1;

1 row updated.


--------------------------------
Session 2 Update both rows
--------------------------------
SQL> update junk
2 set value = 2*value;

-- update is blocked by session 1


--------------------------------
Session 3 Update the 2nd row
--------------------------------
SQL> update junk
2 set value = value+1
3 where id = 2;

1 row updated.

-- Holy cow, it went right through. It wasn't blocked by Session 2!



--------------------------------
Session 1 Commit
--------------------------------
SQL> commit;

Commit complete.

-- Session 2 is still blocked...


--------------------------------
Session 3 Commit
--------------------------------
SQL> commit;

Commit complete.

-- Session 2 is no longer blocked


--------------------------------
Session 2 Update proceeds, commit
--------------------------------
2 rows updated.

SQL> commit;

Commit complete.



Now what do we see, but

SQL> select * from junk;

        ID      VALUE
---------- ----------
         1          4
         2          4



Which clearly indicates that the updates from sessions 1 & 3 incremented value by 1 before session 2 was able to double value.

I would have expected session 2 to block session 3, but Oracle cleverly uses a strategy whereby session 2 blocks no one until
it is in a position to do its work. Very nice.

Fri Sep 02, 04:09:00 PM EDT  

Blogger Thomas Kyte said....

Can you please post it on asktom.com so we can access it for a long time.


It'll be here for a long time and it is from the *book* that is coming out, so it'll be around for a long long time ;)

Fri Sep 02, 07:44:00 PM EDT  

Anonymous Senthil Kumar B. said....

Respected sir,
This is not a comment but a doubt. Sorry for posting it in the comments. I don't know how to post my doubt to you on asktom.oracle.com. Could you please tell me? I am very much interested in Oracle and I have several doubts I would like to get cleared by you. So I kindly request you to assist me in this regard. Could I post my doubts here, sir? Thanks, Senthil Kumar B.

Tue Sep 12, 07:02:00 AM EDT  

Blogger Thomas Kyte said....

Senthil Kumar B. said...

if you have a question about this particular page (write consistency), sure you may post it here.

On asktom, if you have a question about a particular page - you may use the link at the bottom of each page to post it - please keep it relevant to the page you are looking at however.

when I have time for new questions - there is a button on the home page "submit a new question". I took 20 new questions yesterday. I might take zero today, or I might take more - depends on my schedule.

When I'm not taking new questions - there is a link on asktom for "other resources" - that'll be something to consider using as well!

Tue Sep 12, 07:09:00 AM EDT  

Anonymous Senthil Kumar B. said....

Thanks very much, Tom. Now I understand!

Tue Sep 12, 07:57:00 AM EDT  

Blogger RAJESH said....

Tom:

Looking at the above scenario said by "Scott Swank"

1) The transaction started in SESSION-1 locks the row with id = 1.
2) The transaction started in SESSION-2 tries to lock both id = 1 and id = 2. Since id = 1 is already locked by SESSION-1, it is blocked.
3) In the meanwhile, the transaction started in SESSION-3 locks id = 2.

COMMIT
1) The commit in SESSION-1 releases the lock on the row with id = 1.
2) Since SESSION-2 requires both rows (id = 1 and id = 2) to complete its transaction, it is now blocked by SESSION-3.
3) Now SESSION-3 commits, and at this point SESSION-2 proceeds.

Please correct me if I am wrong.

Mon Oct 25, 12:41:00 PM EDT  

Blogger Thomas Kyte said....

@RAJESH

that is what the example demonstrated, yes. You have just restated what Scott demonstrated.

Mon Nov 01, 04:45:00 AM EDT  

Anonymous Anonymous said....

Hi Tom,
You mentioned "it is restarted after processing the first 100,000 records? "

Where does this number of 100k come from? As I understand it, it will not happen for 10k records. Why so?

Thanks,
Kunwar

Thu Jan 31, 02:59:00 AM EST  

Blogger Thomas Kyte said....

@Kunwar

it was a hypothetical

The second reason you should care about potential restarts is performance related. We have been using a single-row example, but what happens if you start a large batch update and it is restarted after processing the first 100,000 records? It will roll back the 100,000 row changes, restart in SELECT FOR UPDATE mode, and do the 100,000 row changes again after that.


I was saying "what IF after 100,000" - driving home the performance aspect.

Thu Jan 31, 06:06:00 AM EST  
