Skip to Main Content

Breadcrumb

Question and Answer

Tom Kyte

Thanks for the question, Alberto.

Asked: August 02, 2003 - 8:22 am UTC

Last updated: December 19, 2011 - 4:50 pm UTC

Version: 8.1.7/9.0.1

Viewed 10K+ times! This question is

You Asked

Hi Tom,

I have always worked under the assumption that a statement like

update t set ... where ...

is evaluated as

1) take a consistent snapshot of the table at statement start time
2) for every row in the snapshot,
wait on the row if the row is locked by another transaction,
and then modify the row if the where clause is satisfied.

I.e. any modification made by transactions started and committed
after the statement start time (SCN) are not seen by the update statement, since they are not part of the "consistent snapshot".

The following lab experiment (that develops from an actual business case) has made me rethink about it.

We will update a big table with an update statement (transaction labeled LONG),but we will make it block in the middle of the table waiting for the transaction BLOCKER to release the lock. While LONG is waiting, we will modify the first and last row with another transaction FIRSTLAST, commit it, and then rollback BLOCKER to free LONG.
Then, we will repeat all of the above but committing BLOCKER instead of rollbacking it.

create table t (id int, y int, filler char(10) default ' ') initrans 2 nologging storage (initial 1M next 1m pctincrease 0 maxextents unlimited);

insert /*+ APPEND */ into t select rownum, 0, ' ' from all_objects,all_objects,all_objects where rownum <=1000000;
commit;

BLOCKER> update t set y = -1 where id = 500000;

LONG> update t set y = rownum where y = -1 or id=500000;

-- it blocks on row 500000

FIRSTLAST> update t set y = -1 where id in (1,1000000);
FIRSTLAST> commit;

BLOCKER> rollback;

LONG > select * from t where y <> 0;

ID Y FILLER
---------- ---------- ----------
1 -1
500000 1
1000000 -1

This works as I expected; since FIRSTLAST started and committed after LONG, its modifications are not part of the "where clause snapshot" of LONG, and so the first and last rows do not get modified by LONG [ie LONG sees the old value (0) of y on the first and last row, even if the new value (-1) would satisfy the where clause of LONG]. So far, so good.

-----------------

Now we will perfom the very same experiment (resetting the table with
all the y set to zero), but now committing BLOCKER instead of rollbacking it:

update t set y = 0 where y <> 0;
commit;

BLOCKER> update t set y = -1 where id = 500000;

LONG> update t set y = rownum where y = -1 or id=500000;

-- it blocks on row 500000

FIRSTLAST> update t set y = -1 where id in (1,1000000);
FIRSTLAST> commit;

BLOCKER> commit; -- this is the only difference

LONG > select * from t where y <> 0;

ID Y FILLER
---------- ---------- ----------
1 1
500000 2
1000000 3

Now, even if FIRSTLAST started and committed after LONG, and so its modifications are not part of the "where clause snapshot" of LONG
shot at start time, LONG actually has seen the modifications of FIRSTLAST (in addition to those of BLOCKER, of course).

This has been reproduced in both 8.1.7 and 9.0.1.

If the update statement of LONG had been rollbacked and then restarted
after detecting that BLOCKER had modified a row, we would have been observed this result.

Question a) Has Oracle really restarted the statement ? Or better, may I develop my software taking the stmt restart as consistency/concurrency rule, even if Oracle has made something different, but equivalent to a restart ?

Question b) What's the rationale for this behaviour ? I intuitively understand that it's in order to see all-or-none of the modifications made by BLOCKER, that is, to preserve the atomicity of BLOCKER ... but I would like to have this intuition confirmed, or not.

[Side note: LONG updates y with the rownum, and assuming that the rownum is assigned to each row as the stmt scans them, it seems that LONG has updated the rows starting with id=1, then 500000, then 1000000, even if it was blocked on id=500000 and had already visited id=1 and had discarded it (my assumption of the actual Oracle's work in this case, i know that in general the order of scanning is absolutely neither predictable nor reproducibile). Just food for thought, no practical implications whatsoever.]

--

The lab experiment performed above was actually motivated by a real business case, since we need to modify, with an update stmt similar to that of LONG, a big table subject to (a lot of) concurrent modifications similar to the one of BLOCKER and FIRSTLAST. I may easily imagine other business cases in which we need to batch update a big OLTP-type table, subject to a lot of updates (or inserts, deletes) per seconds, 24x7. Say, "raise the salary of all employees around midnight, where ...". "around" means "best-effort", it's ok if we update as per the table image of 00:05 AM, but of course not 01:00 PM.

Assuming that the answer of question (a) is "yes", I can't simply fire the batch update around midnight, since the row-to-modify-set snapshot is not simply taken at stmt start, but actually re-evaluated each time some OLTP transactions modifies a row.
In extreme, the stmt may be restarted forever ... and not consistently with the "around midnight" snapshot anyway.

Probably the best way would be to, in a nutshell (the real implementation should be more sophisticated, using temp tables, etc):

create table snap (row_id rowid) as select rowid row_id from t where ...

And then, update t using snap as the row-to-modify-set.

Question c) Am i being too cautious, or plainly wrong, considering such a strategy ?


Thanks for your time and attention
Alberto


and Tom said...

oh geez -- i'm almost afraid to publish this one. It is going to cause quite a bit of followup I believe.

The result set is consistent -- but it may well be consistent as of a restart time. We can see this with a much much smaller example -- I just used 5 rows in a controlled manner, very similar to your example.

we'll use a row trigger and an autonomous transactions to see whats going on under the covers though. Here is the setup:

ops$tkyte@ORA920> create sequence s;
Sequence created.

ops$tkyte@ORA920> create table msg1
2 ( seq int primary key, sid int,
3 old_id int, old_y int,
4 new_id int, new_y int );
Table created.

ops$tkyte@ORA920> create table msg2
2 ( seq int primary key, sid int,
3 old_id int, old_y int,
4 new_id int, new_y int );
Table created.

ops$tkyte@ORA920> create table t
2 as
3 select rownum id, 0 y
4 from all_users
5 where rownum <= 5;
Table created.

ops$tkyte@ORA920> select count(*)
2 from t;

COUNT(*)
----------
5

so, t is our 5 row table we'll do concurrent things on. msg1 will be a table we'll log the before/after image row by row transactionally. msg2 will be a table we'll do the same but using an autonomous transaction:

ops$tkyte@ORA920> create or replace procedure log_msg2( p_seq in int,
2 p_sid in int,
3 p_old_id in int,
4 p_old_y in int,
5 p_new_id in int,
6 p_new_y in int )
7 as
8 pragma autonomous_transaction;

9 begin
10 insert into msg2
11 (seq, sid, old_id, old_y, new_id, new_y )
12 values
13 (p_seq, p_sid, p_old_id, p_old_y, p_new_id, p_new_y );
14 commit;
15 end;
16 /
Procedure created.


ops$tkyte@ORA920> create or replace trigger t_trigger before update on t for each row
2 declare
3 l_seq number;
4 begin
5 select s.nextval into l_seq from dual;
6
7 insert into msg1
8 (seq, sid, old_id, old_y, new_id, new_y )
9 values
10 (l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y );
11
12 log_msg2
13 (l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y );
14 end;
15 /

Trigger created.

that trigger will log entries in both tables -- we'll be able to see the atrans entry immediately, we'll only see committed entries in msg1 however

ops$tkyte@ORA920> update t set y = -1 where id = 3;
1 row updated.

set the "middle" row to -1, this locks it as well of course

ops$tkyte@ORA920> set echo off
in another session issue:
update t set y = 1 where y = -1 or id = 2 or id = 3;
then come back and hit enter
ops$tkyte@ORA920> pause

I did that, it blocked of course. it updated a single row -- where id=2 and blocked on id=3

ops$tkyte@ORA920> select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
2 from msg2, msg1
3 where msg2.seq = msg1.seq(+)
4 order by msg2.seq;

DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
1 12565 3 0 3 -1
NOT VISIBLE 2 12566 2 0 2 1

we can see it updated row id=2 -- the trigger fired. It is waiting on id=3...


ops$tkyte@ORA920> set echo off
in another session issue:
update t set y = -1 where id in ( 1,5 );
commit;
ops$tkyte@ORA920> pause

that is your "update the first and last row". I did that, that transaction went right off -- no problem:

ops$tkyte@ORA920> select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
2 from msg2, msg1
3 where msg2.seq = msg1.seq(+)
4 order by msg2.seq;

DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
1 12565 3 0 3 -1
NOT VISIBLE 2 12566 2 0 2 1
3 12567 1 0 1 -1
4 12567 5 0 5 -1

we can see now 4 rows in our log table. seq=1 is the first update that we did and is blocking session2 over there. seq=2 is the one row that blocked session updated so far. seq=3/4 are the entries for the committed "first/last" row update

ops$tkyte@ORA920> commit;
Commit complete.

here is where it gets *very* interesting

ops$tkyte@ORA920> select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
2 from msg2, msg1
3 where msg2.seq = msg1.seq(+)
4 order by msg2.seq;

DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
1 12565 3 0 3 -1
NOT VISIBLE 2 12566 2 0 2 1
3 12567 1 0 1 -1
4 12567 5 0 5 -1

NOT VISIBLE 5 12566 3 0 3 1

NOT VISIBLE 6 12566 1 -1 1 1
NOT VISIBLE 7 12566 2 0 2 1
NOT VISIBLE 8 12566 3 -1 3 1
NOT VISIBLE 9 12566 5 -1 5 1

9 rows selected.

hmmm -- when we committed the first thing to happen was the row with id=3 was updated -- we can see that -- seq=5 is that update. HOWEVER, that was actually rolled back (as was the update to id=2 as we'll see below further) and the update itself was restarted from the beginning. It "reupdated" id=2 and id=3 and added ids 1 and 5 to the mix (y=-1 part of the predicate picked them up).


ops$tkyte@ORA920> set echo off
commit in the formerly blocked session
ops$tkyte@ORA920> pause

ops$tkyte@ORA920> select * from t;

ID Y
---------- ----------
1 1
2 1
3 1
4 0
5 1

so we can clearly see it updated 4 rows and

ops$tkyte@ORA920> select decode(msg1.seq,null,'UNDONE'), msg2.*
2 from msg2, msg1
3 where msg2.seq = msg1.seq(+)
4 order by msg2.seq;

DECODE SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
------ ---------- ---------- ---------- ---------- ---------- ----------
1 12565 3 0 3 -1
UNDONE 2 12566 2 0 2 1
3 12567 1 0 1 -1
4 12567 5 0 5 -1
UNDONE 5 12566 3 0 3 1
6 12566 1 -1 1 1
7 12566 2 0 2 1
8 12566 3 -1 3 1
9 12566 5 -1 5 1

9 rows selected.

now that everything is "committed" we can see the seq=2 and seq=5 were "undone" -- rolled back. It was worked performed that was "unperformed" in order to allow us to "redo it" (don't want to give that 10% raise 2 times, or 3 times or N times do you?)



so, Oracle just change the time at which your update started on you -- slid it a bit if you will.



The statement will should never be restarted forever as others will queue, blocked, behind this long running statement -- they (the ones that get blocked) will stay blocked until this long running statement completes. the mini-rollbacks will not unblock them.


If I had a large bulk update to perform like this, I would be sorely temped to lock the table to prevent interference from the mini-transactions. You have a conflict of methods here -- and what would happen is most of your users would end up getting blocked after multiple restarts of the long running statement (eg: the cumulative apparent down time for them would be larger).

If you think about it -- it must restart in order to "do the right thing", it really needs to see the data as of "now" in order to safely update it (what if the other updates REMOVED rows from consideration?)

I would say your situation sounds shaky - very very excessively prone to the lost update issues. locking the table for this mass update sounds prudent and most efficient - or letting the database do what it does.

Rating

  (104 ratings)

Is this answer out of date? If it is, please let us know via a Comment

Comments

very interesting thread..

pasko, August 03, 2003 - 8:36 am UTC

Hi Tom,
Thanks for a great answer above...
but what do you mean by saying :

"or letting the database do what it does"

Thanks in advance.


Tom Kyte
August 03, 2003 - 8:50 am UTC

the answer demonstrated what the database "does"

I said

o either lock the table, do the batch or....
o let it do what it does as described above.

many many thanks

Alberto Dell'Era, August 04, 2003 - 6:00 am UTC

As always, your *superb* answer not only has cleared my doubts, but has opened the way for more investigations ... i couldn't ask for more!

So, many many thanks, again.

Really, really thought-provoking ... i have to think about it deeply.

Thanks!
Alberto





final review

Alberto Dell'Era, August 04, 2003 - 4:53 pm UTC

Ok, since the "followup flooding" has not happened ;-), would you mind checking my understanding ? I will understand if you ignore this long followup, of course, and enqueue myself after the "sorry I have a large backlog" semaphore.

a) if a "long" transaction blocks on a locked row, it will wait, check the new values and then

if (the new values satisfy the where clause) {
mini-rollback the changes, and restart the update (or delete) [case A]
} else {
continue processing [case B]
}

b) while mini-rollbacking, the "long" tx will not release its TX lock, so txs enqueued on it will stay blocked, helping preventing the "starvation" (ie "restart ad infinitum") of "long";

c) the reason for mini-rollbacking and restarting in [case A] is that "long" has seen a change (has been affected) by another tx "tx-other", but "tx-other" may have as well changed a row already scanned by "long", and so if "long" didn't restart, it would have seen only a part of the changes of "tx-other", breaking the "tx-other" atomicity;

d) the reason for continuing processing in [case B] is that "long" has not seen (yet) any changes made by "tx-other", so it doesn't matter whether "tx-other" has changed a row already scanned by "long" or not: atomicity means "see all or nothing", and here we are on the "nothing" side;

e) if we were still at the college, c+d may be expressed as "serialize long completely after tx-other";

f) i still think that "long" may be restarted forever, because e.g. it may happen that
- "long" is blocked on a row;
- a "brand-new-tx" modifies a not-locked row;
- "long" restarts, blocks again on that row, and restarts again.
But [my speculation], while mini-rollbacking, "long" will not give away the locks on the rows it has already modified, so the probability that a "brand-new-tx" is able to step into decreases as time progresses, so in real life, we may say that the starvation is near-to-impossible.

g) if you don't mind, a small curiosity: how long did it take to read and answer my question ?

Thanks, again ...
Alberto

Tom Kyte
August 04, 2003 - 6:36 pm UTC

a) yes...

b) yes...

c) that AND what if the row had been changed in such a way so as to no make it meet the criteria anymore? sure the before image did -- but the current value does not. Suppose you had a rule that said "when a = 5, B must be between 10 and 15, else B must be 20". you issue an update "update t set b = 11 where a = 5". while processing you got blocked. it was a block due to someone updating a single sole row -- "update t set a = 6 where id = 1234". Your update will fail -- EVEN THOUGH you said clearly "only when a=5 do this update" -- because a is now 6.

so, it restarts, and skips that row.

d) i think I follow you -- yes.

e) in a manner

f) it is possible i suppose, but highly highly improbable.

g) reading it took longer then answering it. It was hard to digest, but it was a very interesting one. I might make it the subject of a presentation actually. I was surprised at the lack of followup -- I did think it would generate so "oh, look at that" ;)



Thank you for a very good explanation

Helena Marková, August 05, 2003 - 3:47 am UTC


thanks

Alberto Dell'Era, August 05, 2003 - 5:54 am UTC

Many thanks for your re-answering ... and I'm happy to see that the question got your interest, as got mine in first place.

And now, armed with your scaffolding and example, let's sail towards new adventures in the Land of Consistency & Concurrency ;-)


Why different processing for Rollback/Commit ?

Eugen, August 05, 2003 - 7:56 am UTC

During study of this treat I was wondering, why Oracle used a different processing when the blocking session issues a rollback instead of a commit.
Sure, the data of the blocked row isn't changed with a rollback, but even tough it seems to me, it would be more consistent (processing wise) when the rollback of a blocked row would also lead to mini-rollbacks in the LONG transaction.

What do I miss?


Tom Kyte
August 05, 2003 - 8:27 am UTC

it is "if the data changed values".

take the original example and change:

update t set y = -1 where id = 3;

to

update t set y = y where id = 3;


the blocking events will be the same. session 1 will have id=3 locked. session 2 will update id=2 and then block on id=3. session 3 will then update id=1 and id=5 and commit. when session=1 COMMITS this time -- session 2 will unblock and find that "hey, y didn't change -- no worries here, lets continue onward..." and will update two rows only.

Here is the script I used minus the sqlplus prompts and all -- this one shows that even when you commit -- if you didn't change the value -- it continues on like it would have for a rollback.

It would not be any more or less consistent either way -- this is "just as" consistent AND much more performant...

drop table msg1;
drop table msg2;
drop sequence s;
drop table t;

set echo on

create sequence s;

create table msg1
( seq int primary key, sid int,
old_id int, old_y int,
new_id int, new_y int );

create table msg2
( seq int primary key, sid int,
old_id int, old_y int,
new_id int, new_y int );

create table t
as
select rownum id, 0 y
from all_users
where rownum <= 5;

select count(*)
from t;

create or replace procedure log_msg2( p_seq in int,
p_sid in int,
p_old_id in int,
p_old_y in int,
p_new_id in int,
p_new_y in int )
as
pragma autonomous_transaction;
begin
insert into msg2
(seq, sid, old_id, old_y, new_id, new_y )
values
(p_seq, p_sid, p_old_id, p_old_y, p_new_id, p_new_y );
commit;
end;
/


create or replace trigger t_trigger before update on t for each row
declare
l_seq number;
begin
select s.nextval into l_seq from dual;

insert into msg1
(seq, sid, old_id, old_y, new_id, new_y )
values
(l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y );

log_msg2
(l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y );
end;
/
update t set y = y where id = 3;

set echo off
prompt in another session issue:
prompt update t set y = 1 where y = -1 or id = 2 or id = 3;;
prompt then come back and hit enter
pause

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

set echo off
prompt in another session issue:
prompt update t set y = -1 where id in ( 1,5 );;
prompt commit;;
pause

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

commit;

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

set echo off
prompt commit in the formerly blocked session
pause

select * from t;

select decode(msg1.seq,null,'UNDONE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;


Will take me about 6 months...

sPh, August 05, 2003 - 11:27 am UTC

I think it will take me about six months to figure this one out. I always tell people one of the strengths of Oracle is that you can understand what goes on "under the hood" and this is a great example.

sPh

Tom Kyte
August 05, 2003 - 1:13 pm UTC

interesting variations

a) the one where I changed the update t set y = -1 where id =3; to
update t set y = y where id = 3;

only 2 rows updated instead of 4...

b) add a column Z with all zeroes to the table t. make the first update

update t set y = y, z = 1 where id = 3;

Now see how many (try to guess)

c) keeping that column Z with all zeros, add this trigger:

create or replace trigger t after update on t for each row
begin
dbms_output.put_line( :new.z );
end;
/

and use this update:

update t set y = y, z = 1 where id = 3;

now see how many rows updated.....




Excellent example

shan, August 05, 2003 - 2:06 pm UTC

Not that I followed everything what you said here...I am reading it again and again and am realizing now what an awesome complex beast is Oracle. I don't think if anyone can explain this more lucidly than you do with your examples. Thank you somuch for this great forum!!

best regards,
Shan.

interesting variation of the interesting variation

Alberto Dell'Era, August 05, 2003 - 4:44 pm UTC

> a) the one where I changed the update t set y = -1 where id =3; to
> update t set y = y where id = 3;
>
> only 2 rows updated instead of 4...

Yes! That's because the update "update t set y = 1 where y = -1 or id = 2 or id = 3" (which i labeled "long") didn't get restarted, so it didn't see the update to -1 made by "first/last". Ie it was consistent as the image before first/last".

Here's your script, which i've modified using client_info as a logging device, and its final results:

drop table msg1;
drop table msg2;
drop sequence s;
drop table t;

set echo on
set lines 123
col client_info format a15
set pages 9999

create sequence s;

create table msg1
( seq int primary key, sid int,
old_id int, old_y int,
new_id int, new_y int );

create table msg2
( seq int primary key, sid int,
old_id int, old_y int,
new_id int, new_y int,
client_info varchar2(64) );

create table t
as
select rownum id, 0 y
from all_users
where rownum <= 5;

select count(*)
from t;

create or replace procedure log_msg2( p_seq in int,
p_sid in int,
p_old_id in int,
p_old_y in int,
p_new_id in int,
p_new_y in int )
as
pragma autonomous_transaction;
client_info varchar2(64);
begin
dbms_application_info.read_client_info (client_info);

insert into msg2
(seq, sid, old_id, old_y, new_id, new_y, client_info )
values
(p_seq, p_sid, p_old_id, p_old_y, p_new_id, p_new_y, client_info);
commit;
end;
/
show errors;


create or replace trigger t_trigger before update on t for each row
declare
l_seq number;

begin
select s.nextval into l_seq from dual;

insert into msg1
(seq, sid, old_id, old_y, new_id, new_y )
values
(l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y );

log_msg2
(l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y);
end;
/
exec dbms_application_info.set_client_info ('blocker');
update t set y = y where id = 3;
--update t set y = -1 where id = 3;

set echo off
prompt in another session issue:
prompt exec dbms_application_info.set_client_info ('long');;
prompt update t set y = 1 where y = -1 or id = 2 or id = 3;;
prompt then come back and hit enter
pause

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

set echo off
prompt in another session issue:
prompt exec dbms_application_info.set_client_info ('first/last');;
prompt update t set y = -1 where id in ( 1,5 );;
prompt commit;;
prompt then come back and hit enter
pause

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

commit;

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

set echo off
prompt commit in the formerly blocked session (long)
prompt then come back and hit enter
pause

select * from t;

select decode(msg1.seq,null,'UNDONE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

Final Results:

ID Y
---------- ----------
1 -1
2 1
3 1
4 0
5 -1


DECODE(MSG1.SEQ,NU SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y CLIENT_INFO
------------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------------
1 3967 3 0 3 0 blocker
2 3968 2 0 2 1 long
3 3969 1 0 1 -1 first/last
4 3969 5 0 5 -1 first/last
5 3968 3 0 3 1 long

But now, another interesting variation of the interesting variation: what if id=5 satisfied the where condition
right from the beginning, and it gets later modified by "first/last" to a non-satisfying value (here i've chosen 99) ?
So let's start with this table image:

ID Y
---------- ----------
1 0
2 0
3 0
4 0
5 -1

Here's the setup:

drop table msg1;
drop table msg2;
drop sequence s;
drop table t;

set echo on
set lines 123
col client_info format a15
set pages 9999

create sequence s;

create table msg1
( seq int primary key, sid int,
old_id int, old_y int,
new_id int, new_y int );

create table msg2
( seq int primary key, sid int,
old_id int, old_y int,
new_id int, new_y int,
client_info varchar2(64) );

create table t
as
select rownum id, 0 y
from all_users
where rownum <= 5;

update t set y = -1 where id = 5;

select * from t;

create or replace procedure log_msg2( p_seq in int,
p_sid in int,
p_old_id in int,
p_old_y in int,
p_new_id in int,
p_new_y in int )
as
pragma autonomous_transaction;
client_info varchar2(64);
begin
dbms_application_info.read_client_info (client_info);

insert into msg2
(seq, sid, old_id, old_y, new_id, new_y, client_info )
values
(p_seq, p_sid, p_old_id, p_old_y, p_new_id, p_new_y, client_info);
commit;
end;
/


create or replace trigger t_trigger before update on t for each row
declare
l_seq number;

begin
select s.nextval into l_seq from dual;

insert into msg1
(seq, sid, old_id, old_y, new_id, new_y )
values
(l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y );

log_msg2
(l_seq, userenv('sessionid'), :old.id, :old.y, :new.id, :new.y);
end;
/
exec dbms_application_info.set_client_info ('blocker');
--update t set y = -1 where id = 3;
update t set y = y where id = 3;


set echo off
prompt in another session issue:
prompt exec dbms_application_info.set_client_info ('long');;
prompt update t set y = 1 where y = -1 or id = 2 or id = 3;;
prompt then come back and hit enter
pause

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

set echo off
prompt in another session issue:
prompt exec dbms_application_info.set_client_info ('first/last');;
prompt update t set y = 99 where id in ( 1,5 );;
prompt commit;;
prompt then come back and hit enter
pause

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

commit;

select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

set echo off
prompt commit in the formerly blocked session (long)
prompt then come back and hit enter
pause

select * from t;

select decode(msg1.seq,null,'UNDONE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;

Final result:

ID Y
---------- ----------
1 99
2 1
3 1
4 0
5 99


DECODE(MSG1.SEQ,NU SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y CLIENT_INFO
------------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------------
1 3971 3 0 3 0 blocker
UNDONE 2 3968 2 0 2 1 long
3 3969 1 0 1 99 first/last
4 3969 5 -1 5 99 first/last
UNDONE 5 3968 3 0 3 1 long
UNDONE 6 3968 5 -1 5 1 long
7 3968 2 0 2 1 long
8 3968 3 0 3 1 long

seq=6 made the update (old_y=-1 comes from the snapshot taken at statement start, i believe), but then "long" saw that the row was modified, so the statement restarted.
Do you agree ?
My subconscious mind is shouting "block SCN ... block SCN .. serializable .. mixup of consistent get and current mode get" but i'm too sleepy to shape that intuition ;-), perhaps tomorrow ...

how many dirty buffers

amit poddar, August 19, 2003 - 3:11 pm UTC

Hi,

If each update gets a read consistent block image that means there would be multiple dirty buffers for the same block. How are theses changes merged since finally only one block is written to the disk. (If all the dirty images are written to the disk all the changes would not be on the disk.


Tom Kyte
August 19, 2003 - 6:15 pm UTC

there is only one dirty buffer, the last one. the rest are just read consistent versions of the same block at different points in time.

last block version is the only true one

followup

amit poddar, August 20, 2003 - 6:42 am UTC

Hi,

So am i correct in assuming that:

while updating oracle creates a read consistent view just for reading purpose and makes the change in the orignal block and not the read consistent view of the block created

thanks
amit

Tom Kyte
August 21, 2003 - 7:47 am UTC

we have to update the last version, yes.

consider:

update emp set sal = sal + 1;


if two people did that at more or less the same time and we didn't modify the LAST version for both -- we'd lose updates.

correct statement

Freek, August 20, 2003 - 7:52 am UTC

Is following statement correct:

the update/delete statement will see the values as they exist on the moment that the statement could be performed without being blocked by prior update/delete statements of other sessions, and not as they existed on the moment the statement started

Tom Kyte
August 21, 2003 - 7:52 am UTC

not really. sort of.

say the update statement starts at time T1.

as it is processing, it encounters a locked row and blocks. time is now T1+x.

it unblocks and sees a changed row.

two cases:

1) row is effectively unchanged from time T1 (we can compare them by getting
read consistent image and checking against current). Then we proceed as
normal.

by effectively unchanged -- i mean the set of relevant columns to us is
unchanged, the ones we used in the predicate, that we are changing with
the set

2) row is DIFFERENT. statement rolls back and restarts. Time is now T1
all over again. We moved the goal posts!


so, the update always sees the rows as of the time the statement began, we just move the concept of "when the statement began"

Most usefull

Christo Kutrovsky, August 20, 2003 - 10:46 am UTC

I would like to see the answer to Freek's statement.

Amazing .. this is a very important point.


Christo

On a plate, August 20, 2003 - 1:26 pm UTC

Rather than having someone feed you the answer why not work it out yourself with two sql*plus sessions ?

to: Christo

Freek, August 21, 2003 - 4:03 am UTC

What I tried to do is summarize Tom's answer, and just wanted to know if I didn't miss some nuance.

(I did in fact did a quick test by letting one session multiply a value in a table but not committing and then let a second session multiply the same value. The result supported the statement)

Different results

Rob, August 21, 2003 - 4:54 am UTC

Hi Tom,

After execution of your script, I get different results.
It seems that Oracle didn't "slid a bit" in my case. We use Oracle 9.2.0. Can you explain why this is happening ?


The output was:

rob@DEV> update t set y = y where id = 3;

1 rij is bijgewerkt.

rob@DEV>
rob@DEV> set echo off
in another session issue:
update t set y = 1 where y = -1 or id = 2 or id = 3;
then come back and hit enter


DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
1 25494 3 0 3 0
NOT VISIBLE 2 25508 2 0 2 1

in another session issue:
update t set y = -1 where id in ( 1,5 );
commit;


DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
1 25494 3 0 3 0
NOT VISIBLE 2 25508 2 0 2 1
3 25510 1 0 1 -1
4 25510 5 0 5 -1


Commit is voltooid.


DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
1 25494 3 0 3 0
NOT VISIBLE 2 25508 2 0 2 1
3 25510 1 0 1 -1
4 25510 5 0 5 -1
NOT VISIBLE 5 25508 3 0 3 1

commit in the formerly blocked session


ID Y
---------- ----------
1 -1
2 1
3 1
4 0
5 -1


DECODE SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
------ ---------- ---------- ---------- ---------- ---------- ----------
1 25494 3 0 3 0
2 25508 2 0 2 1
3 25510 1 0 1 -1
4 25510 5 0 5 -1
5 25508 3 0 3 1

Tom Kyte
August 21, 2003 - 6:06 pm UTC

update t set y = y where id = 3;

you did the y=y test case, not the y=-1 test. that is why

changing statement

freek, August 21, 2003 - 7:59 am UTC

more complicated than I thought:

"the update/delete statement will see the values as they exist on the moment that the statement could be performed without being blocked by prior update/delete statements of other sessions that change a value used by the query, and not as they existed on the moment the statement started."

create table test
( field1 number(10,0),
field2 number(10,0),
field3 number(10,0)
)
/

insert into test
values(5,5,1);

insert into test
values(4,5,1);

commit;

FIELD1 FIELD2 FIELD3
---------- ---------- ----------
5 5 1
4 5 1

(before each test the table will be reset to this state)
TEST1
-----
session 1:

update test
set field1 = field1 * 1
where field1 = 5;

session 2:
-- this session will be blocked
update test
set field2 = field2 * 2
where field1 = 5;

session 3:

update test
set field1 = 5
where field1 = 4;

commit;

session 1: commit; -- will unblock session 2
session 2: commit;

result:

FIELD1 FIELD2 FIELD3
---------- ---------- ----------
5 10 1
5 5 1

TEST 2:
session 1:

update test
set field2 = field2 * 1
where field1 = 5;

session 2:
-- this session will be blocked
update test
set field2 = field2 * 2
where field1 = 5;

session 3:

update test
set field1 = 5
where field1 = 4;

commit;

session 1: commit; -- will unblock session 2
session 2: commit;

result:

FIELD1 FIELD2 FIELD3
---------- ---------- ----------
5 10 1
5 5 1

TEST 3
------
session 1:

update test
set field3 = field3 * 2
where field1 = 5;

session 2:
-- this session will be blocked
update test
set field2 = field2 * 2
where field1 = 5;

session 3:

update test
set field1 = 5
where field1 = 4;

commit;

session 1: commit; -- will unblock session 2
session 2: commit;

result:

FIELD1 FIELD2 FIELD3
---------- ---------- ----------
5 10 2
5 5 1

TEST 4
------
session 1:

update test
set field2 = field2 * 2
where field1 = 5;

session 2:
-- this session will be blocked
update test
set field2 = field2 * 2
where field1 = 5;

session 3:

update test
set field1 = 5
where field1 = 4;

commit;

session 1: commit; -- will unblock session 2
session 2: commit;

result:

FIELD1 FIELD2 FIELD3
---------- ---------- ----------
5 20 1
5 5 1

but if I understand Tom's answer correctly I should get:

FIELD1 FIELD2 FIELD3
---------- ---------- ----------
5 20 1
5 10 1

?

freek@dev01> select banner from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production


Answer ! :)

Christo Kutrovsky, August 21, 2003 - 12:05 pm UTC

To freek ,

Test4 did not do as you expected, because session 1 did not change a field which is part of the where predicates in session 2.

Change
session 2:
-- this session will be blocked
update test
set field2 = field2 * 2
where field1 = 5;

TO
session 2:
-- this session will be blocked
update test
set field2 = field2 * 2
where field1 = 5 and field2 > 0;

And you will observe the result expected.



Thanks

freek, August 21, 2003 - 4:20 pm UTC

christo,

thank you for showing me the mistake in my logic.
It is not the change in the value that we want to update that is controlling the "restarting" of the statement but the change of the value that is in the predicate.

I have now reread Tom's answer to my first post and it becomes clear now.

greetings

Freek


Thank you

rob, August 22, 2003 - 5:02 am UTC


Thanks

Pieraldo Antonello, September 23, 2003 - 4:22 pm UTC

so much!

A reader, June 20, 2004 - 8:23 pm UTC

The Google thread was very interesting too, but it did leave me with an unanswered question

--x--
> > 1. As I mentioned earlier, why bother with the restart at all since the READ COMMITTED IL
> > promise would not be broken, restart or no restart. What's the rationale for the restarts ?
>
> Because the alternative is to process rows that no longer meet the criteria.

So what ? The READ COMMITTED IL is not violated -- it did not promise repeatable reads. Why bother about this specific anomaly whilst ignoring others, like ignoring new rows satisfying the predicate? They would not be processed either
--x--

Do you have an answer for this? In other words, what are the exact cases where the update would restart? This thread focuses on changed columns appearing in the WHERE clause of the UPDATE statement, but new rows inserted are ALWAYS ignored by the update regardless of the predicate.

In any case, could you comment on how some of the other RDBMS vendors handle this issue?

Thanks

Tom Kyte
June 21, 2004 - 8:12 am UTC


> Because the alternative is to process rows that no longer meet the criteria.



that, to me, seems as clear as day.


new rows inserted are not always ignored by the update. If the update of "x=5" starts at noon, and at 12:01 you insert a new row with x=5, and at 12:02 you commit it, and at 12:03 the update is forced to restart -- that update will in fact see it. Read consistency is at the root of all of this: everything you see is consistent with respect to a given point in time -- you do not process rows that do not meet your criteria. Imagine your surprise if you had a rule that says "y can be null IF and ONLY if x=5" and you did "update t set y = null where x=5", and suppose further something modified x=5 to x=6 -- if we did not restart, what then? That is processing rows that no longer meet the criteria. When we see that x WAS 5 but now x IS 6, we restart, and that update will see all committed rows as of the point in time of the restart.


Other vendors use locks such that reads block writes and writes block reads -- whereby read-only queries can deadlock with updates and vice versa. Their updates are/can be consistent with respect to the point in time the query FINISHES, ours with respect to the point in time it starts. In most cases, however, people run the other databases in their read committed mode, which means queries do not return consistent results. If you are interested in all of the details -- I wrote on this in detail in my book "Expert One on One Oracle" and compared read consistency to a read locking model.

A reader, June 21, 2004 - 11:54 pm UTC

"new rows inserted are not always ignored by the update. if the update of "x=5" starts at noon. and at 12:01 you insert a new row with x=5 and at 12:02 you commit it. And at 12:03 the update is forced to restart -- that update will in fact see it"

yes, if the update restarts at 12:03, it will pick up the new row, but my question was what causes it to restart at 12:03? How does it "know" that a new row has been inserted that meets the WHERE clause for the update it is doing?

Thanks

Tom Kyte
June 22, 2004 - 7:53 am UTC

the insert will not trip the restart. an update by some other session of a row that was "x=5" at noon to "x=<something else>" would have tripped the restart.



A reader, June 22, 2004 - 9:43 am UTC

"the insert will not trip the restart. an update by some other session of a row that was "x=5" at noon to "x=<something else>" would have tripped the restart"

Hm, regardless, my question remains.

Suppose the update is chugging along, locking each row that meets its WHERE clause and modifying it. Suppose it is already "past" a rowid i.e. it has already updated it and some other session comes along and changes "x=5" to "x=6", how would the session doing the UPDATE know that this has happened?

In other words, suppose when the UPDATE originally starts, the WHERE clause is met by 10 rows, it starts with row 1 and proceeds. After row 9 is updated, the other session changes row 1 from x=5 to x=6. What makes Oracle look back at row 1?

Thanks

Tom Kyte
June 22, 2004 - 9:48 am UTC

the row is locked, the other session cannot change x=5 to x=6



Row locked

A reader, June 22, 2004 - 9:55 am UTC

1. "the insert will not trip the restart. an update by some other session of a row that was "x=5" at noon to "x=<something else>" would have tripped the restart"

2. "the row is locked, the other session cannot change x=5 to x=6"

Hm, but then dont your above 2 statements contradict each other? If the row is locked, how would the other session you mention in (1) be able to update the row?

Tom Kyte
June 22, 2004 - 12:10 pm UTC

at noon, you start "update t set y = 0 where x = 5"

you start doing consistent reads of t to identify rows where "x=5"

every row you find using a consistent read (as of NOON) must be gotten in current mode (as of "right now" - say 12:01).

if the current read of the row differs from the consistent read, we must consider whether we can update this row or not. If at 12:00 -- x was 5 for a given row but at 12:01 when we get around to reading it -- x is now 6 -- we cannot update that row. We restart the update.



It is that the other session modifies a row we have NOT YET gotten to. All of the rows we got to are locked, it is the rows we have not yet visited that will trigger restarts.
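The visited/not-yet-visited distinction above can be sketched with a toy model (hypothetical Python, not Oracle internals; row locks modeled as a plain set):

```python
# Hypothetical toy model: rows the long-running update has already visited
# are locked, so another session can only change rows the update has NOT
# yet reached -- and those are exactly the rows that can trigger a restart.

class Table:
    def __init__(self, rows):
        self.rows = rows          # row id -> column dict
        self.locks = set()        # ids locked by the long-running update

    def other_session_update(self, rid, col, val):
        if rid in self.locks:
            return False          # would block: row already locked
        self.rows[rid][col] = val # row not yet visited: change succeeds
        return True

t = Table({1: {"x": 5}, 2: {"x": 5}})
t.locks.add(1)                    # long update already processed row 1

print(t.other_session_update(1, "x", 6))  # False: blocked, row is locked
print(t.other_session_update(2, "x", 6))  # True: unvisited row changed, so the
                                          # long update will see 5 vs 6 there
```

When the long update later reaches row 2, its consistent read (x=5) differs from the current read (x=6), which is what trips the restart.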

A reader, June 22, 2004 - 12:22 pm UTC

Ah, that explains it, thanks a lot for your patient answers. Really appreciate it.

Think about it.....

Mark J. Bobak, June 22, 2004 - 12:39 pm UTC

If x=5 at 12:00 and you start a massive update that will
take lots of time, then the other process will do one of:

1.) Beat the massive update to the row where x=5 and change it
to x=6, because the row IS NOT YET LOCKED. That succeeds;
when the massive update hits that row, it sees the change and
triggers a restart.

or

2.) The massive update got to the row where x=5 first and
updated it. Now the row IS LOCKED, and the other process
can't get it. It will wait on a TX mode 6 enqueue until
the massive update either commits (at which point the
waiter will return saying "0 rows updated") or the massive
update rolls back (at which point the waiter will return
"1 row updated").

The two statements are not contradictory because they describe
the two possible outcomes.

A reader, June 22, 2004 - 1:32 pm UTC

Thanks, Mark, that was a really good explanation

update V/S Select

Js, June 24, 2004 - 5:27 am UTC

Hi..

This thread is very useful, and it has made me more doubtful :)

You described the consistent result set for a select statement. Is there any difference in the consistent result set for an update statement?

here is a test ..

SQL> select count(*) from t;

COUNT(*)
----------
384170

SQL> select max(rowid) from t;

MAX(ROWID)
------------------
AAAHg+AABAAAMyRAE7

SQL> select * from t where rowid = 'AAAHg+AABAAAMyRAE7';

A
----------
38416

SQL> update t set a = 2 where a != -1;

Leave it ..Now go to another session and do this...
/*****************/

SQL> update t set a = -1 where rowid = 'AAAHg+AABAAAMyRAE7';

1 row updated.

SQL> commit;  -- main update transaction is still running.

Commit complete.

SQL>

/*************************************/

In the first session you will see..

384169 rows updated  -- not all.. 

SQL> select count(*) from t;

COUNT(*)
----------
384170

SQL> select * from t where rowid = 'AAAHg+AABAAAMyRAE7';

A
----------
-1

Why did the first session see the update?

And it is not possible if I issue the select command.. 

[ As you described in Expert one * ]

Thanks,
Js
 

Tom Kyte
June 24, 2004 - 9:43 am UTC

don't understand by what you mean when you say "why did first session see the update"?



The select statement, without a for update, doesn't lock any data -- it isn't relevant to the example at all and could not change the outcome (unless you were in a serializable transaction -- then it would have been the point in time that all statements would be "as of").


not sure where you are referencing "Expert one on one Oracle" from either?

Fabulous explanation!!

A reader, June 24, 2004 - 2:57 pm UTC

This is by far my favorite thread on asktom (based on little that I've read on this website so far). I look forward to spending the next few years poring over this site!

Tom - just curious. Based on your response - it seems like you were aware of this mini-rollback 'phenomenon' well before the initial post. When and where did you learn this fundamental concept in the first place? It doesn't appear in the concepts manual for sure!

Tom Kyte
June 24, 2004 - 3:35 pm UTC

I've known that something like it must have been happening (had to, you cannot update a row where x=6 if you said where x=5).

Before Aug 2003 I never actually researched it to this degree.

That month, I smacked my forehead and said "ah-hah, that explains June of 2000". Why? I was using external procedures in 8i with my own "utl_tcp" implementation to send email using smtp. I was sending email from a trigger. People would tell me sometimes "hey, you sent me this email twice". I'd say "no way -- cannot happen". It was always the same people (the people at the "top" of the table), I figured -- problem must be on your end. Later, we started using dbms_job to send mails in the background after the commit (made the updates much much faster, less locking going on). Problem disappeared -- I figured they fixed their email system.


Later that month, I said "ah-hah, this explains the summer of 1999". We were working on a very early portal implementation. We had a trigger that would create sequences using autonomous transactions. The "sequence already exists" error could not happen -- but every now and again, someone would say "got an error, sequence exists". Well, it was the restart again: the trigger fired, the update rolled back (but not the DDL -- it was committed), the trigger fired again and failed. It was "impossible" to be in this situation -- but we were...


So, I was vaguely aware that something was happening, just never looked in depth into it. This gave me the excuse to go scuba diving and really look into it.

Thanks!

A reader, June 24, 2004 - 4:50 pm UTC

Thank you for taking us 'behind the scenes'.

> This gave me the excuse to go scuba diving and really look into it.

So how long did it take you to figure the whole thing out from scratch, once you set out? (Conceptualize, set up the test case and then scream eureka, that is.) Was it just a matter of an hour, or longer? Just trying to benchmark your impressive analytical skills here..:)

Tom Kyte
June 24, 2004 - 8:56 pm UTC

it didn't take too long as I recall - their example had more than enough information to set up the simulation. So, setting up the initial example -- just a couple of minutes. Digesting the results -- that was a couple of hours and a couple more trials (different cases) and some consultation, just to make sure I was "saying the right stuff"

I need more stars for this thread.....Fantastic analysis

A reader, June 24, 2004 - 6:11 pm UTC


Superb! You are the ultimate Oracle Guru

A reader, June 25, 2004 - 12:11 am UTC


Further review

QA, June 25, 2004 - 1:05 am UTC

Additional scenario: why, upon the restart, would only 1 of 2 newly inserted rows (by two separate sessions) which meet the predicate requirement be updated, and not both?

Example:
T1 (Session 1) - UPDATE x SET col2 = -1 WHERE col2 = 1;

The update will eventually touch 5000 rows.

T2 (Session 2) - UPDATE x SET col2 = 0 WHERE col1 = 'A' AND col2 = 1;
-- 1 row updated.
(Session 2) - INSERT INTO x (col1, col2) VALUES ('B',1);
(Session 2) - COMMIT;

T3 (Session 3) - UPDATE x SET col2 = 1 WHERE col1 = 'A' AND col2 = 0; /* Setting the row back to its original value */
(Session 3) - INSERT INTO x (col1, col2) VALUES ('C',1);
(Session 3) - COMMIT;

T4 (Session 1) - /* Update finally completes */ COMMIT;

SELECT * FROM x WHERE col1 = 'A';
col1 col2
---- ----
A -1

SELECT * FROM x WHERE col1 IN ('B','C');
col1 col2
---- ----
B -1
C 1

If the restart must start over from the current moment after detecting the updated row, then why wouldn't the second pass of the update detect each newly inserted row? I would have expected both inserts to have the value -1, since the restart had to have been triggered after session 3 (we know this because the row which was set back to 1, is now -1).

Thanks for any insight. Apologies for not providing a test scenario; this was reproduced on a .5 million row table, where it was easy to update a row low in the stack prior to the first session updating it (avoiding the block).

Tom Kyte
June 25, 2004 - 7:55 am UTC

session 1 started at SCN "1"

I'm assuming that session 2 blocked session 1. When session 2 committed, session 1 restarted at SCN "2". Session 1 went into select for update mode "as of scn=2"

session 3 would not have blocked session 1, since the consistent read would have seen "col2=0" for that row -- as of SCN=2, col2=0 -- that row would not be processed by session 1.


T3 and T4 could have happened in different orders -- there is nothing but luck saying that T4 happened after T3 since session3 was not modifying any data that session1 was interested in.

best thread on asktom

Matthias Rogel, June 25, 2004 - 3:35 am UTC

also my favourite thread on this site

thx, Tom, for the explanations - and most of all thanks
to the developers of this "Land of Consistency & Concurrency" !!

(I have been using this land for 12 years and have been astonished from the
very first day at how failure-free consistency, concurrency, row-level locking and all this stuff works.
This is my main reason for liking Oracle, and it's nice to
look behind the scenes a bit)

matthias


Oracle's concurrency model is mind boggling... Thanks for the ride to the uncharted bottoms of it

A reader, June 25, 2004 - 11:43 am UTC


Further review - cont.

QA, June 25, 2004 - 11:44 am UTC

Additional clarification to the scenario:

"I'm assuming that session 2 blocked session 1"

No blocking occurring, since session 2 is able to update the row and commit prior to session 1 getting that far through the table.

"session 3 would not have blocked session 1 since the consistent read would have see "col2=0" for that row -- as of SCN=2, col2=0 -- that row would not be processed by session 1."

Correct. Since session 3 is reading the row as of SCN=2, no block, but session 3 is setting the value of col2 (0), back to 1 which does meet the predicate of session 1's update statement.

"T3 and T4 could have happened in different orders -- there is nothing but luck saying that T4 happened after T3 since session3 was not modifying any data that session1 was interested in."

Not luck, it's a somewhat controlled test (timing would be very difficult in a production environment, but possible).

But, T3 does happen prior to T1 completing and T3 commits.

The reason we know T3's session commits prior to T1 is that T1 touched what T3 updated (remember, setting the value of col2 back to 1). Here's what we see after session 1 commits -

SELECT * FROM x WHERE col1 = 'A';
col1 col2
---- ----
A -1

So, for some reason session 1 after the restart, detects session 2's insert (sets col2 to -1 where col1 = 'B'), and it detected session 3's update where col1 = 'A', but it does not update session 3's insert where col1 = 'C'.

For consistency, one would have expected session 3's insert to also be updated after the restart (col2 = -1 where col1 = 'C').

Something to think about.


Tom Kyte
June 25, 2004 - 4:17 pm UTC

the blocking assumption wasn't necessary -- it could have blocked, or not; the effect would be the same.

Session 1 would perform a consistent read on that data, see that it "was supposed to have seen that row", then it would do a current read (after getting unblocked, if blocked) and see "I should no longer see that row". That would cause the restart.

I've actually changed my mind -- I don't see any scenario in which A would be -1 at the end. We'll need to see a test case somehow (I spent a while trying to emulate this and could not get A to be -1 like that).

Ravi, November 15, 2004 - 6:20 am UTC

Tom,
Sorry for the late catchup with this thread, was re-directed really. Please refer your original feedback example.

Now, I am going to call the current SCN as QENV1 when I do the first update in Session 1 ie "Update t set y = -1 where id =3;" and its still QENV1

when I try in Session 2 "Update t set y = 1 where y = -1 or id = 2 or id =3;"

If I were to commit the first session NOW, I still get
DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
56 13599817 3 0 3 -1
NOT VISIBLE 57 13599826 2 0 2 1
NOT VISIBLE 58 13599826 3 0 3 1
59 13599826 2 0 2 1
60 13599826 3 -1 3 1

The output suggests that the work done by Session 2 (SEQ 57,58) has been ROLLED BACK anyway and re-done (SEQ 59,60).


After I commit Session 1 we become QENV2.

That is because Oracle realises, as it HUNG (or blocked), that the QENV has moved, and hence re-does the work by rolling back first!

1) Is that correct?

2) I suppose locking (or blocking) is the ONLY time when the read consistency part of DML (the query bit) can be told its QENV has changed and it needs to re-do the query?

This is as opposed to, let's say, doing the processing in a cursor loop: when I open the cursor, it will always be QENV1 till I finish, no matter whether it takes hours or days?

3) Can you confirm that autonomous transactions are an elegant way of demonstrating the otherwise invisible rollbacks that Oracle does? (as the example suggests)

Thanks

Ravi

Tom Kyte
November 15, 2004 - 6:53 am UTC

well Ravi, i don't see your name in any prior followups -- so, seems to me it is ok you were re-directed, I don't think we were talking about this in the past ;)

I'm not really following your "qenv1" "qenv2" stuff.... but basically:

the second update used read consistency to find the rows "where y = -1 or id = 2 or id = 3". It got blocked. When it got unblocked, it found that the data had changed -- the data read in consistent mode for y and id had been modified. Hence it restarts.



this does not happen ONLY during blocking/locking -- that is when it is easiest for us humans to see it, since things "stop".

the way it works is:

a) the predicate is done in read consistent mode.
b) when read consistency says "here is a row we are interested in", we then try to do a current mode read on the block.
c) that is when we can get blocked -- but even if we don't..... after we got the block in current mode, we compare the consistent read version of the columns used in the predicate to locate the row with the current read version -- if any are different, rollback, restart.


autonomous transactions -- in fact anything that doesn't rollback (utl_file, utl_http, a select seq.nextval, anything that does not rollback) are ways to see this.
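The (a)-(c) steps above can be sketched as a loop (a hypothetical Python model of the behaviour, not Oracle's actual code; `snapshots` stands in for read-consistent images at successive points in time):

```python
# Hypothetical model of steps (a)-(c): a read-consistent scan finds candidate
# rows, each is re-read in current mode, and a predicate mismatch rolls the
# statement back and restarts it at a later point in time.

def run_update(snapshots, predicate, apply_change, max_restarts=10):
    """snapshots[t] maps row id -> row dict as of time t; the last snapshot
    plays the role of "right now". Returns (updated rows, restart count)."""
    t, restarts = 0, 0
    while restarts <= max_restarts:
        consistent = snapshots[t]                     # (a) consistent reads
        current = snapshots[len(snapshots) - 1]       # (b) current-mode reads
        candidates = [r for r in consistent if predicate(consistent[r])]
        if any(predicate(consistent[r]) != predicate(current[r])
               for r in candidates):                  # (c) mismatch detected
            t, restarts = len(snapshots) - 1, restarts + 1  # rollback, restart
            continue
        return [apply_change(current[r]) for r in candidates], restarts
    raise RuntimeError("too many restarts")

# time 0: both rows have x=5; by "now", someone changed row 1 to x=6
snaps = [{1: {"x": 5}, 2: {"x": 5}}, {1: {"x": 6}, 2: {"x": 5}}]
updated, n = run_update(snaps, lambda row: row["x"] == 5,
                        lambda row: {**row, "y": 0})
print(n)        # 1 restart
print(updated)  # only row 2 still matches after the restart
```

After the restart the statement re-identifies its rows as of the new point in time, which is why it skips the row whose predicate column changed.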

Ravi, November 15, 2004 - 7:48 am UTC

Got redirected from here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:275215756923#28431043297893

Reading your reply, most of it is clear, except for point c)


1) Attempting to get the block in current mode is when Oracle sees there's an uncommitted change and hence is blocked. But what if the block contains records that are not connected with the current update but are locked by another session's update? Would you still be blocked?

Say Session 1 is trying to update empno=50, which shares a block with empno=51, which is being modified and locked by Session 2. Would Session 1 be blocked?


2) Point c

Although this makes sense, worth checking this with you:

If you refer to my earlier posting, SID 13599826, the record with old id 2 has exactly the SAME change done BEFORE the rollback and after the restart.

So is it safe to assume that Oracle will look at ALL the records in current mode, and even if just ONE of them has a change, it ROLLS BACK ALL the previous changes and RESTARTS the work?

Thanks

Tom Kyte
November 15, 2004 - 1:25 pm UTC

1) no, getting a block in current mode is what oracle does before performing an update to the block.

only if the row you are interested in on the block is locked by another transaction would you get blocked. Other rows don't count.

2) correct -- everything is atomic at the STATEMENT level.

Ravi, November 15, 2004 - 10:13 am UTC

Tom,

I don't know if you are going to reply to my earlier post, but looking at it again, I can PROVE that your statement
"When it got unblocked, it found that the data
changed -- the data read in consistent mode for y, id was modified. hence it
restarts."
is correct by modifying my first update to
"Update t set y = y where id =3;"

In the above case there is neither a ROLLBACK nor a RESTART, because the value of Y is not changing for the record id = 3, although we do have LOCKING and BLOCKING.

But what is surprising is in the second update, if I were to revert to the original update "Update t set y = -1 where id =3",
"Update t set y = 1 where y = -1 or id = 2 or id =3;"

Am I correct in saying Oracle CLEVERLY disregards the OR condition on (y=-1) when CROSS-CHECKING whether any of the data in columns Y, ID has changed?

What I mean is, Oracle doesn't seem to go: "I have blocked the record not by using the condition y=-1 but by id=3. Nevertheless, I USE y=-1 when CROSS-CHECKING for changes!"

That's the process of DECIDING to ROLLBACK and RESTART!

Tom Kyte
November 15, 2004 - 1:29 pm UTC

what did your testing show in regards to the last question :)

(nice way of me throwing the ball back into your court ...

concise mathematical recap

Alberto Dell'Era, November 15, 2004 - 4:29 pm UTC

Thesis: when performing

update t set w = g(x1, x2 .. xn) where f(y1, y2 .. yn);

Any change in the vector [w, x1, x2 .. xn, y1, y2 .. yn] will trigger a restart.

Note that

o w is included in the vector
o the actual expressions of g(.) and f(.) are not important.
I.e. even if the before and after values of f(.) and/or g(.) are the
same, if at least one argument changed, it restarts.

Correct ?

Actually i happen to have a (concise and nicely formatted) demonstration of the theorem (that i deem correct). I can share if you like ..

Alberto (the OP)

Tom Kyte
November 15, 2004 - 9:16 pm UTC

(triggers modify that -- you have to include in F() the :new/:old referenced columns...)
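Combining Alberto's vector with the trigger amendment, the set of restart-relevant columns can be sketched like this (a hypothetical Python model; the column names are purely illustrative):

```python
# Hypothetical sketch of the restart rule for
#   update t set w = g(x1 .. xn) where f(y1 .. yn);
# A restart is tripped when any column in the union of the SET target,
# the g() arguments, the f() arguments, and any :new/:old columns
# referenced by triggers changes between the consistent and current reads.

def restart_columns(set_target, g_args, f_args, trigger_refs=()):
    return {set_target} | set(g_args) | set(f_args) | set(trigger_refs)

cols = restart_columns("w", ["x1", "x2"], ["y1"], trigger_refs=["z"])
print(sorted(cols))           # ['w', 'x1', 'x2', 'y1', 'z']

# a concurrent change to any of these columns forces the restart:
changed = {"z"}
print(bool(changed & cols))   # True -- even though z is not in the statement
```

The last line is the Heisenberg effect discussed below: merely referencing a column in a trigger widens the set of changes that can trip a restart.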

Ravi, November 15, 2004 - 5:20 pm UTC

That's OK.

What I have done is to add a new column called D, which simply stores a varchar2 value, so that any change to it is UNCONNECTED with modifications to column Y.

Everything circles around the statement:
"When it got unblocked, it found that the data changed -- the data read in consistent mode for y, id was modified. hence it restarts."

It appears that if column Y is not INVOLVED in the update, then we don't get a ROLLBACK and a subsequent RESTART.

I have modified the statement "Update t set y = y where id =3;" to

"update t set D='DUMMY' where id = 3;"

Now I run just two statements

"update t set D='DUMMY' where id = 3;"

in the first session (Session 1) and


"update t set y = 1 where y = -1 or id = 2 or id = 3;"

which is the Blocked session (Session 2).

I commit the Session 1 and then commit Session 2 and the result from the query is

DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
68 18619 3 0 3 0
69 18618 2 0 2 1
70 18618 3 0 3 1

There is no evidence of Rollback or Restart there.

--------------------------

Instead of doing the above two Updates in session 1 and Session 2 if I do the following in the same sequence:

""Update t set y = y where id =3;"

"update t set y = 1 where y = -1 or id = 2 or id = 3;"

DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
71 18619 3 0 3 -1
NOT VISIBLE 72 18618 2 0 2 1
NOT VISIBLE 73 18618 3 0 3 1
74 18618 2 0 2 1
75 18618 3 -1 3 1

Sequences 72 and 73 have been rolled back, while 74 and 75 are the restarted, re-done work.
-----------------

So the question is: it's not the record being updated, but the columns being updated that must be involved in the where clause predicate, or be updated themselves, to trigger a Rollback and Restart?

Tom Kyte
November 15, 2004 - 9:22 pm UTC

the columns in the where clause (used to identify, to find the rows) are the relevant columns.

Ravi, November 16, 2004 - 7:39 am UTC

I might be misunderstanding what you've said (sorry in that case), but here's evidence that it's BOTH the updated columns and the columns in the where clause that are interrogated for changes across sessions to trigger a Rollback and Restart.

If I do "Update t set y = -1 where id = 3;" in Session 1 and "Update t set y = 1 where id = 2 or id = 3;" in Session 2, here's the output:


DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
141 13616145 3 0 3 -1
NOT VISIBLE 142 13613313 2 0 2 1
NOT VISIBLE 143 13613313 3 0 3 1
144 13613313 2 0 2 1
145 13613313 3 -1 3 1

5 rows selected.

Column Y's value has been changed by Session 1 and Session 2, and this seems to trigger the Rollback and Restart.
The difference from your original example is that "the OR condition which checked y=-1 in Session 2 has been removed".

But in my previous post, when I updated column D, which is not being updated by BOTH sessions, Oracle seems to think there is no necessity for a Rollback -- and quite rightly so.

Am I wrong or am I wrong?


Tom Kyte
November 16, 2004 - 12:49 pm UTC

(it is the TRIGGER..... the TRIGGER changes everything -- the references to :old, :new.....)

the observer observing is changing the results !

Alberto Dell'Era, November 16, 2004 - 4:43 pm UTC

>(triggers modify that -- you have to include in F() the
>:new/:old referenced columns...)

Good ole Heisenberg back again ... well, thanks, I've verified it with a Heisenberg-effect-free trigger, and yes, it's true for both BEFORE and AFTER triggers.

It's true also for *internal* triggers; referencing a column in a MATERIALIZED VIEW LOG triggers a restart:

create materialized view log on t with rowid, (y);

Fascinating - Oracle is a truly scientifically engineered system indeed (sqlplus = matlab) !

thanks again
Alberto

Tom Kyte
November 16, 2004 - 11:36 pm UTC

it is more complex than I can understand -- a lot of thought went into many things.

A reader, November 17, 2004 - 1:25 am UTC

"it is more complex than I can understand"

Ah, but you are being too modest! Take this thread for instance. How in the world did you know that the triggers were influencing the outcome of the update? i.e. the observer was affecting the observation, as you put it?

Thanks

Tom Kyte
November 17, 2004 - 9:25 am UTC

from another question :)

I have a slide with the Heisenberg uncertainty principle referenced on it. Someone asked "why is this happening?", so I quickly set up the example and added triggers to it -- and the behaviour changed. I took the triggers away and saw what they saw (no restart after a block); put the triggers back and it was different (restart after a block).

So, I didn't know that until after this thread was written -- it was learned knowledge after the fact.


I learn something new about Oracle every day.

Tom did you....

Gee, November 17, 2004 - 9:39 am UTC

..... do a lecture on this topic yet?

Did you do it at UKOUG?

Do you have a link so we can read it?

Tom Kyte
November 17, 2004 - 1:59 pm UTC

i've done it many times yes...

at the UKOUG too

http://asktom.oracle.com/~tkyte/ukoug.tar.gz



Was the variation ( c )......

Gee, November 17, 2004 - 2:55 pm UTC

...... where you used the trigger to show ( :new.z )

was it to show that during an update Oracle updates columns even if you do not explicitly update them?

Or did I miss what that was supposed to show?

Tom Kyte
November 17, 2004 - 4:18 pm UTC

it was to show that the columns in the PREDICATE plus the columns referenced in the trigger are compared... it does not have to update :new.z -- just referencing it can cause the restart
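For reference, a minimal sketch of such a trigger (assuming a table t with columns id, y and z, as used earlier in this thread) -- it never modifies :new.z, it only references it, and that reference alone is enough to make z part of the restart comparison:

```sql
-- Hypothetical trigger on a t(id, y, z) table: merely printing :new.z
-- means column z is included in the consistent-read vs. current-mode
-- comparison, so a concurrent change to z can trigger a statement restart.
create or replace trigger t_trigger
before update on t
for each row
begin
    dbms_output.put_line( :new.z );
end;
/
```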

( :new.z )

me again, November 17, 2004 - 4:29 pm UTC

OK, if it is to show that it is comparing the predicate columns plus the :new.z references, how come I get this:

cgregory@SAMS9I> update t set y = 1 where y = -1 or id = 2 or id = 3;
0
0
0
1
0

4 rows updated.

so :new.z is NOT actually new. :new.z in the above statement is the same as ":current.z"?

I say this because z is not a predicate column in the above statement. So the trigger was not meant to show the comparison with this particular statement, but with the "session 1" statement that referenced z.

Ravi, November 18, 2004 - 6:28 am UTC

Tom,

To recap

Upd1 = "Update t set y = -1 where id =3;"
Upd2 = "Update t set y = 1 where y = -1 or id = 2 or id =3;"


It appears the COMMIT in the autonomous transaction is involved; for instance, if I were to do it non-autonomously, there is no restart.

Or even if I comment out the call to log_msg2, there is no restart.

As each update, behind the scenes, starts and finishes a transaction (an autonomous transaction), the change in SCN (system change number) appears to have a say here.

Let me assume that I have commented out the call to log_msg2 in the trigger and assume when the first update happens, its associated with a SCN (System Change Number) at that time lets say SCN50 and assume, QENV 50 is the query environment with SCN 50.

Now, without the insert and commit onto msg2 (via log_msg2), when I start my second update, we are still in SCN50/QENV 50 and upd2 is blocked. When upd1 commits, upd2 sees no change in SCN and does not Rollback and Restart.

But, if I were logging onto log_msg2, it gets pretty complicated.

The first update might start in QENV50/SCN50, but the insert onto msg2 causes QENV51/SCN51.

The second update does NOT start at QENV50/SCN50 (in order to be blocked), but starts at QENV51/SCN51 and it creates its own SCN52/QENV52 due to its own insert onto msg2.

When Upd1 commits, it's created SCN53/QENV53 and removes its lock.

But now UPD2 finds the SCN to have moved from 51 to 53 and my guess is because of this move, Oracle tells UPD2 to Re-start query at SCN53?

1) The question is: are SCNs involved here?

Cheers

Ravi






Tom Kyte
November 18, 2004 - 10:48 am UTC

SCNs are not really "involved" -- the SCN is constantly increasing (just log in and use dbms_flashback to query it, wait, do it again... you'll see)
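A quick way to watch the SCN advance (a sketch; it assumes execute privilege on DBMS_FLASHBACK and DBMS_LOCK, which may need to be granted):

```sql
-- Query the current SCN twice with a pause in between; the second value
-- will be higher even though this session did no work, because the SCN
-- advances database-wide.
select dbms_flashback.get_system_change_number as scn from dual;

exec dbms_lock.sleep(5);

select dbms_flashback.get_system_change_number as scn from dual;
```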

more clarification

amit poddar, December 31, 2004 - 1:07 pm UTC

Hi,

From Mark Bobak's response:

"If x=5 at 12:00, and you start a massive update that will
take lots of time, then the other process will do one of:

1.) Beat the massive update to the row where x=5, change it
to x=6 cause the row IS NOT YET LOCKED. So, that succeeds,
when the massive update hits that row, it sees the change
triggers a restart."

Let's say the other process had started before the massive update; will the massive update still wait for the other process's lock to be released when it tries to get the block in current mode?

thanks


correction

amit poddar, December 31, 2004 - 1:26 pm UTC

Hi,

Please ignore my previous update.
Following is the corrected one

From Mark Bobak's response:

"If x=5 at 12:00, and you start a massive update that will
take lots of time, then the other process will do one of:

1.) Beat the massive update to the row where x=5, change it
to x=6 cause the row IS NOT YET LOCKED. So, that succeeds,
when the massive update hits that row, it sees the change
triggers a restart."

Let's say the other process had started after the massive update; will the massive update still wait for the other process's lock to be released when it tries to get the block in current mode?

thanks

Tom Kyte
December 31, 2004 - 1:37 pm UTC

doesn't matter if the processes started before, at the same time or after "mass update"

only matters who got to the row first. first guy there wins.

more clarification

amit poddar, December 31, 2004 - 2:49 pm UTC

So
Let's say Transaction A starts at 12:00
and Transaction B starts at 12:10.

Transaction A will not see any changes made by Transaction B, but if Transaction B gets a row lock, Transaction A still has to wait for B to release the lock. So that means a transaction obtains a lock when it gets the block in current mode, i.e. a transaction tries to get the row lock just before changing the data. Am I correct?

thanks

Tom Kyte
December 31, 2004 - 4:07 pm UTC

a transaction obtains a lock when it

a) hits the data. gets to the data it wants to lock.


if transaction A was:

update huge_table set y = 10 where x = 5;

and it was full scanning huge_table using CONSISTENT reads to find "x=5" and then upon finding each one getting a row lock on that row and doing the update. Say it would take 1/2 hour to full scan this table.

Meanwhile, in another room.... some other transaction B started at 12:10 and using a primary key did "update huge_table set x = 6 where primary_key = 42" -- and suppose that record just happened to also have "x=5" before the update.....

Well, transaction A plods along -- doing consistent reads -- looking for x=5. Eventually, it hits this row with primary_key = 42 and sees "ah-hah, x=5 is true -- or at least it was when I began". It'll then do a current mode read and

a) get this row, transaction B has committed. It will lock this row and find out "bummer, x was 5, x is now 6 -- must restart".




Footnote of sorts...

Padders, February 03, 2005 - 5:19 am UTC

From a thread entitled 'Strange behaviour of UPDATE RETURNING BULK COLLECT INTO array' on the PL/SQL Pipelines (http://pipetalk.quest-pipelines.com).

9.2.0.6 (and no doubt prior versions) appears to return duplicates in the collection for the RETURNING clause when the update is restarted. Presumably the collection is not cleared out on restart. This appears to be fixed in 10g.

(based on examples from Tom Kyte)

==== 9.2.0.6 =====

--- session 1 ---

Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
JServer Release 9.2.0.6.0 - Production

SQL> CREATE TABLE t
  2  AS
  3    SELECT ROWNUM ID, 0 y
  4    FROM all_users
  5    WHERE ROWNUM <= 5;
  
Table created.

SQL> UPDATE t
  2  SET    y = -1
  3  WHERE  id = 3;

1 row updated.

--- session 2 ---

Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
JServer Release 9.2.0.6.0 - Production

SQL> SET SERVEROUTPUT ON;
SQL> DECLARE
  2     TYPE tt_t IS TABLE OF t%ROWTYPE;
  3  
  4     t_t tt_t;
  5  BEGIN
  6     UPDATE    t
  7           SET y = 1
  8         WHERE y = -1 OR ID = 2 OR ID = 3
  9     RETURNING         ID,
 10                       y
 11     BULK COLLECT INTO t_t;
 12     
 13     DBMS_OUTPUT.put_line ('SQL%ROWCOUNT: ' || SQL%ROWCOUNT);
 14     
 15     FOR i IN 1 .. t_t.COUNT LOOP
 16       DBMS_OUTPUT.put_line ('t_t (' || i || '): ' || t_t (i).id);
 17     END LOOP;
 18  END;
 19  /
 
(hangs) 

--- session 3 ---

Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
JServer Release 9.2.0.6.0 - Production
 
SQL> UPDATE t
  2  SET    y = -1
  3  WHERE  id IN (1, 5);
 
2 rows updated.

SQL> COMMIT;
 
Commit complete.
 
SQL> 

--- session 1 ---

SQL> COMMIT;

Commit complete.

SQL>

--- session 2 ---

(continues)

SQL%ROWCOUNT: 4
t_t (1): 2   <--- oops!
t_t (2): 1
t_t (3): 2
t_t (4): 3
t_t (5): 5

PL/SQL procedure successfully completed.

SQL> 


==== 10.1.0.2 =====

--- session 1 ---

Personal Oracle Database 10g Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> CREATE TABLE t
  2  AS
  3    SELECT ROWNUM ID, 0 y
  4    FROM all_users
  5    WHERE ROWNUM <= 5;
  
Table created.

SQL> UPDATE t
  2  SET    y = -1
  3  WHERE  id = 3;

1 row updated.

--- session 2 ---

Personal Oracle Database 10g Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> SET SERVEROUTPUT ON;
SQL> DECLARE
  2     TYPE tt_t IS TABLE OF t%ROWTYPE;
  3  
  4     t_t tt_t;
  5  BEGIN
  6     UPDATE    t
  7           SET y = 1
  8         WHERE y = -1 OR ID = 2 OR ID = 3
  9     RETURNING         ID,
 10                       y
 11     BULK COLLECT INTO t_t;
 12     
 13     DBMS_OUTPUT.put_line ('SQL%ROWCOUNT: ' || SQL%ROWCOUNT);
 14     
 15     FOR i IN 1 .. t_t.COUNT LOOP
 16       DBMS_OUTPUT.put_line ('t_t (' || i || '): ' || t_t (i).id);
 17     END LOOP;
 18  END;
 19  /
 
(hangs) 

--- session 3 ---

Personal Oracle Database 10g Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
 
SQL> UPDATE t
  2  SET    y = -1
  3  WHERE  id IN (1, 5);
 
2 rows updated.

SQL> COMMIT;
 
Commit complete.
 
SQL> 

--- session 1 ---

SQL> COMMIT;

Commit complete.

SQL>

--- session 2 ---

(continues)

SQL%ROWCOUNT: 4
t_t (2): 1
t_t (3): 2
t_t (4): 3
t_t (5): 5

PL/SQL procedure successfully completed.

SQL>  

A reader, April 24, 2005 - 11:00 am UTC

DECODE        SEQ        SID     OLD_ID      OLD_Y     NEW_ID      NEW_Y
------ ---------- ---------- ---------- ---------- ---------- ----------
                1      12565          3          0          3         -1
UNDONE          2      12566          2          0          2          1
                3      12567          1          0          1         -1
                4      12567          5          0          5         -1
UNDONE          5      12566          3          0          3          1
                6      12566          1         -1          1          1
                7      12566          2          0          2          1
                8      12566          3         -1          3          1
                9      12566          5         -1          5          1


Tom,
You were right, this post is going to keep bouncing to you.

My question is: why did it even have to do SEQ 5 here? It's wasting resources by doing the work at SEQ 5, then undoing it, and then doing it 'correctly' at SEQ 8.

1) When SID 12566 became unblocked at SEQ 5, how does Oracle detect the need for a restart internally -- does it really have to do SEQ 5 to detect it?
2) Am I correct that although the value of Y changed to -1 in SEQ 3, when doing SEQ 5, due to read consistency (12566 sees the data as of the point at which its statement started, i.e. before 12565 committed, so both 12565 and 12566 would have seen Y = 0), at SEQ 5 12566 actually reads the value of Y from the rollback segment?


Tom Kyte
April 24, 2005 - 11:41 am UTC

the restart is done when:

a) consistent read (the search component of your modification) says "this row, when we started, was one you were interested in"

b) the current read (get the row as of right now, we have to modify the CURRENT value) is done

c) a comparison of the current read to the consistent read says "the row was modified". If not using triggers -- only the columns in the predicate are considered. If using triggers, the columns referenced in :new/:old are considered as well.


So, if you issue something like:

update t set x = 5 where y = 6;

and when that update started, the "last row" in the table was "y=6" -- that is what your consistent read would see.

Then, when you hit that row, you do a current mode read and discover "hey, someone changed "y=6" to "y=7", meaning you are no longer interested in that row. That triggers the restart.
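The three steps above can be observed with a small two-session sketch (table name and values are illustrative, adapted from the style of the examples earlier in this thread):

```sql
-- Session 1: set up, then lock the last interesting row without committing.
create table t ( x int, y int );
insert into t select rownum, 6 from all_users where rownum <= 5;
commit;

update t set y = 7 where x = 5;    -- do not commit yet

-- Session 2: this blocks on the row x = 5, whose consistent-read image
-- still shows y = 6 (the predicate column).
update t set x = 10 where y = 6;

-- Session 1: commit.  Session 2 wakes up, does a current-mode read of the
-- blocking row, sees y is now 7 (no longer satisfying "y = 6"), restarts,
-- and updates only the four rows that still match -- the changed row is
-- silently excluded.
commit;
```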

very interesting

A reader, August 04, 2005 - 11:17 am UTC

"a) consistent read (the search component of your modification) says "this row,
when we started, was one you were interested in"

b) the current read (get the row as of right now, we have to modify the CURRENT
value) is done"

So does that mean that for an update that affected 20 rows, for each row we do two reads:
1. The consistent read as of the point the update started, resulting in 20 logical IOs.
2. The current get of each of the rows again to update them, resulting in 20 more logical IOs.
3. A restart if there is a difference between the two, as you say.

Does that make sense?

Tom Kyte
August 04, 2005 - 12:12 pm UTC

1) resulting in SOME NUMBER of logical IO's (it only takes one block to hold 14 rows in EMP for example, it would be 1 LIO to read them all)

2) at least 20 current mode gets, maybe more (indexes and all)

3) yes.

also..

A reader, August 04, 2005 - 11:47 am UTC

""a) consistent read (the search component of your modification) says "this row,
when we started, was one you were interested in"

b) the current read (get the row as of right now, we have to modify the CURRENT
value) is done"

Q 1.
Does this mean that all the rows affected by an update are not locked instantaneously? As we get to a row during our consistent read, we lock it. I think this is true from the discussion above but want to confirm. This is also due to the fact that we store our lock info in the data block, and we need to get to it first to either lock it or block on it, correct?
Q 2.
Also, if there is an insert/commit that also satisfies the "where clause" of a prior blocked update, it should not normally be seen unless the update was "restarted" due to some other updates, in which case the inserted row might now become visible. Is this correct?


Tom Kyte
August 04, 2005 - 12:19 pm UTC

q1) correct, nothing (until quantum computing anyway) is "instantaneous".

we lock as we go.

q2) The rows affected by a modification are those rows that were committed in the database AS OF the time of the last restart (counting the first start as a restart itself).

Unless you are in serializable, in which case you get ORA-08177.
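A sketch of the serializable case (two sessions assumed; t is any table with a row both sessions touch):

```sql
-- Session 1:
update t set y = 1 where x = 5;    -- lock the row, do not commit

-- Session 2:
set transaction isolation level serializable;
update t set y = 2 where x = 5;    -- blocks on session 1's lock

-- Session 1:
commit;

-- Session 2 now fails instead of restarting:
--   ORA-08177: can't serialize access for this transaction
-- because the row was changed by a transaction that committed after
-- the serializable transaction began.
```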

thanx!

A reader, August 04, 2005 - 3:13 pm UTC


write consistency and deadlock

Menon, August 15, 2005 - 9:44 am UTC

Tom
Let us say we have an update statement with a where clause, and the same update statement is being executed in two sessions. During the part of the update where all the rows are being obtained in a "read consistent" manner and locked, is it possible that the two sessions obtain the same set of rows but in a different order? For example, consider table t (x number) containing the numbers 1 through 10.

Is it possible that session 1 starts locking the rows in the order 10 down to 1 whereas session 2 starts locking them in the order 1 up to 10? I can't think of a situation like this (given the update does not have an order by clause). What I am looking for is a scenario where this can lead to deadlock because of the way the locking is done.



Tom Kyte
August 15, 2005 - 3:48 pm UTC

nothing to do with write consistency really, but yes - the updates may have different plans for numerous reasons. Different plans -> different order of locking.
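A sketch of how two different locking orders deadlock (two sessions assumed, statements interleaved exactly as shown):

```sql
-- Session 1: lock row 1 first.
update t set x = x where x = 1;
-- Session 2: lock row 10 first.
update t set x = x where x = 10;

-- Session 1: now try row 10 -- blocks on session 2.
update t set x = x where x = 10;
-- Session 2: now try row 1 -- blocks on session 1, closing the cycle;
-- within a few seconds one session receives
--   ORA-00060: deadlock detected while waiting for resource
update t set x = x where x = 1;
```

A full-scan update plan and an index-driven update plan visiting the same rows in opposite orders can produce this interleaving without any explicit row-by-row locking in the application.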

ok...

Menon, August 15, 2005 - 4:09 pm UTC

Updates having different plans - sounds very much like
selects having different plans.

I guess one scenario would be two updates, both
trying to update a set of overlapping rows...I
take in some such cases, you will have deadlocks.

Tom Kyte
August 15, 2005 - 10:38 pm UTC

you may have deadlocks.

but -- think about it, you have probable LOST UPDATES the other way around (if they do not deadlock)

yup , of course

Menon, August 16, 2005 - 11:39 am UTC

"you may have deadlocks.

but -- think about it, you have probable LOST UPDATES the other way around (if
they do not deadlock) "

Did not mean to say that it was some kind of a problem with Oracle design...

Tom Kyte
August 17, 2005 - 10:50 am UTC

I wasn't taking it that way - I was pointing out that two "blind" updates (that aren't working on already retrieved and locked data) are probably "lost update" candidates (eg: I question the design of the application that issues overlapping "blind" updates like that)

A reader, August 18, 2005 - 12:18 pm UTC

create or replace trigger t_trigger after update on t for each row
begin
dbms_output.put_line( :new.z );
end;

I can understand (I think) if any columns in the predicate affect the restart.
But exactly, why does a Trigger affect it? Are we saying it 'MAY BE' that the value referenced could have been modified in the trigger text?

Tom Kyte
August 18, 2005 - 4:39 pm UTC

the before trigger is called with the read consistent data, before the block is gotten in current mode. Hence :new.z could be "wrong" and the update must be restarted.

One of the best threads !

K, August 18, 2005 - 4:31 pm UTC


A reader, August 19, 2005 - 4:33 am UTC

create or replace trigger t_trigger after update on t for each row
begin
dbms_output.put_line( ' OLD ID ' ||:old.id );
if :old.id = 2 then
dbms_output.put_line( ' NEW z ' ||:new.z );
end if;
end;

Tom,

Here's a statement I make: all it takes is for the trigger to merely mention -- or in PL/SQL terms, refer to -- :new.z to be selected for restart. I changed the code above so that only the blocked session (update t set y = 1 where y = -1 or id = 2 or id = 3;) and not the blocking session (update t set y = y, z = 1 where id = 3;) references it conditionally (if :old.id = 2 then).

But yes, both the blocking and the blocked session run the trigger; the blocking session just doesn't take the IF branch and hence doesn't need to access the :new.z value at all.

And yet, the blocked session updated 4 rows (did a restart).

Am I right in saying Oracle doesn't check the IF condition when it checks for a restart?

Tom Kyte
August 20, 2005 - 4:00 pm UTC

because IT COULD HAVE, it doesn't care that you "didn't actually look at it", it just realizes -- "HEY, we just ran a trigger and fed it some read consistent values that didn't actually exist when we got the block in current mode - we don't know WHAT they did with them, but they could have looked at them so let's fix that and restart"

restart in SELECT FOR UPDATE mode

Alberto Dell'Era, September 01, 2005 - 6:13 pm UTC

You are going to say in "Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions":

"In READ COMMITTED mode, after the transaction rolls back your update, the database will restart the update (i.e., change the point in time at which the update is “as of”), and instead of updating the data again, it will go into SELECT FOR UPDATE mode and attempt to lock all of the rows WHERE Y=5 for your session. Once it does this, it will run the UPDATE against that locked set of data, thus ensuring this time that it can complete without restarting."

a) so the first restart will rollback the changes AND remove the locks on the rows ?

b) it switches into "SELECT FOR UPDATE" mode because
1 - minimal undo & redo is produced when locking rows in FOR UPDATE mode -- only the lock byte and the ITL are touched, not the other row bytes (or the indexes!)
2 - triggers are not fired
thus making for a faster "locking scan" and lessening the probability that other transactions will modify the rows and cause another restart, correct?

c) [please skip this if you like] have you a test case that shows this SELECT FOR UPDATE mechanism in action ?

Thanks-In-Advance :)
Alberto

Tom Kyte
September 02, 2005 - 1:10 am UTC

a) yes, but people that got blocked on you are still blocked - they (the other sessions) do not block on rows - they block and wait on your transaction. so others you have blocked would not become unblocked

b) correct.

c) I do not, only an empirical one that

a) creates a 10,000 row table with a primary key 1, 2, 3, .... 10,000
b) row 5000 is known to be in the "middle" of the table during a full scan, 10,000 at the "end" during a full scan
c) the table has an AFTER FOR EACH ROW trigger to audit changes

d) session A locks row 5000.
e) session B tries to update the entire table, every row.

at this point the audit trail has 5000 records in it, 1 from (d) and 4,999 from (e).

f) session A commits and immediately locks row 10,000
g) session B works a bit and then gets blocked again. <== this is the select for update. No new audit trail records are added by session B, there are 5001 audit trail records now.
h) session A commits and then session B really goes to town and 10,000 more records are added to the audit trail at this time.
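A sketch of that empirical setup (table names, trigger name and row counts here are illustrative, not from the original test):

```sql
-- A 10,000-row table with primary key 1..10000 and an audit trigger.
create table big_t ( id int primary key, y int );
insert into big_t select rownum, 0 from all_objects where rownum <= 10000;
commit;

create table audit_trail ( id int, old_y int, new_y int );

create or replace trigger big_t_aud
after update on big_t
for each row
begin
    insert into audit_trail values ( :old.id, :old.y, :new.y );
end;
/

-- Session A:  update big_t set y = -1 where id = 5000;   (no commit)
-- Session B:  update big_t set y = 1;  -- blocks at row 5000, having already
--                                      -- fired the trigger ~4,999 times
-- Session A:  commit, then immediately lock row 10000.
-- Session B restarts in SELECT FOR UPDATE mode: it re-scans and locks rows
-- WITHOUT firing the trigger, then blocks again on row 10000.  Only after
-- session A commits again does B do the actual update, firing the trigger
-- 10,000 times.
```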



Write Consistency Causing 'transaction rollbacks'?

Brandon Allen, September 12, 2005 - 12:42 pm UTC

Hi Tom,

I've been working on an I/O bound server for a while now trying to minimize or eliminate as many sources of I/O as I can (by tuning SQL, indexes, pctfree, etc.) and I recently noticed that v$sysstat is showing extremely high values (compared to any other server I've ever seen) for the 'transaction rollbacks' statistic. I've posted on the oracle-l freelists group and opened a TAR trying to figure out what is causing these and someone else referred me here to your discussion of write consistency as a possible cause.

Here is a sample of what I'm seeing from statspack (over a 60 min period):

Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
enqueue waits 295 0.1 0.0
transaction rollbacks 389,303 108.1 2.1
rollback changes - undo records a 394,309 109.5 2.1
user commits 189,364 52.6 1.0
user rollbacks 0 0.0 0.0

You can see that the 'transaction rollbacks' are very high while 'user rollbacks' are zero. My prior understanding was that 'transaction rollbacks' is a subset of 'user rollbacks', i.e. 'user rollbacks' is incremented every time a user issues 'rollback' even if there is nothing to actually be rolled back, but 'transaction rollbacks' is only incremented when actual rollback work is performed because an insert/update/delete has been issued previously. However, if these stats are correct, then my understanding must be incorrect or at least incomplete. Can you please shed some light?

Also, if these 'transaction rollbacks' are due to write consistency, should I see an equally large number for 'enqueue waits', or does Oracle not incur a wait when this write consistency rollback occurs?

Are there any other stats or anything else I can look at to confirm if these rollbacks are due to write consistency?

If not due to write consistency, any idea what else could be the cause?

Thanks!
Brandon

Tom Kyte
September 12, 2005 - 2:51 pm UTC

ops$tkyte@ORA10GR1> create table t ( x int primary key, y int );

Table created.

ops$tkyte@ORA10GR1> insert into t values ( 1, 1 );

1 row created.

ops$tkyte@ORA10GR1> insert into t values ( 2, 2 );

1 row created.

ops$tkyte@ORA10GR1> create or replace trigger t_trigger
  2  before update on t for each row
  3  begin
  4          dbms_output.put_line( 'old: ' || :old.x || ', ' || :old.y );
  5          dbms_output.put_line( 'new: ' || :new.x || ', ' || :new.y );
  6  end;
  7  /

Trigger created.

ops$tkyte@ORA10GR1> update t set y = 42 where x = 2;
old: 2, 2
new: 2, 42

1 row updated.

ops$tkyte@ORA10GR1> set echo off
in another session issue:
@mystat rollbacks
update t set y = 55;
@mystat rollbacks
ops$tkyte@ORA10GR1> pause

ops$tkyte@ORA10GR1> commit;

Commit complete.


<b>the other session showed:</b>

ops$tkyte@ORA10GR1> @mystat rollbacks
update t set y = 55;
@mystat rollbacks

NAME                                                    VALUE
-------------------------------------------------- ----------
user rollbacks                                              0
transaction tables consistent read rollbacks                0
rollbacks only - consistent read gets                       0
cleanouts and rollbacks - consistent read gets              0
transaction rollbacks                                       0
IMU CR rollbacks                                            0

6 rows selected.

ops$tkyte@ORA10GR1>
old: 1, 1
new: 1, 55
old: 2, 2
new: 2, 55
old: 1, 1
new: 1, 55
old: 2, 42
new: 2, 55

2 rows updated.

ops$tkyte@ORA10GR1> ops$tkyte@ORA10GR1> set echo off

NAME                                                    VALUE
-------------------------------------------------- ----------
user rollbacks                                              0
transaction tables consistent read rollbacks                0
rollbacks only - consistent read gets                       0
cleanouts and rollbacks - consistent read gets              2
transaction rollbacks                                       0
IMU CR rollbacks                                            4

6 rows selected.


<b>here is an example of a transaction rollback:</b>

ops$tkyte@ORA10GR1> create table t ( x int primary key deferrable initially deferred );

Table created.

ops$tkyte@ORA10GR1> insert into t values ( 1 ) ;

1 row created.

ops$tkyte@ORA10GR1> insert into t values ( 1 ) ;

1 row created.

ops$tkyte@ORA10GR1> @mystat rollbacks;

NAME                                                    VALUE
-------------------------------------------------- ----------
user rollbacks                                              0
transaction tables consistent read rollbacks                0
rollbacks only - consistent read gets                       4
cleanouts and rollbacks - consistent read gets              0
transaction rollbacks                                       0
IMU CR rollbacks                                            0

6 rows selected.

ops$tkyte@ORA10GR1> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-00001: unique constraint (OPS$TKYTE.SYS_C005801) violated


ops$tkyte@ORA10GR1> @mystat rollbacks;
ops$tkyte@ORA10GR1> set echo off

NAME                                                    VALUE
-------------------------------------------------- ----------
user rollbacks                                              0
transaction tables consistent read rollbacks                0
rollbacks only - consistent read gets                       4
cleanouts and rollbacks - consistent read gets              0
transaction rollbacks                                       1
IMU CR rollbacks                                            0

6 rows selected.

ops$tkyte@ORA10GR1> rollback;

Rollback complete.

ops$tkyte@ORA10GR1> @mystat rollbacks;
ops$tkyte@ORA10GR1> set echo off

NAME                                                    VALUE
-------------------------------------------------- ----------
user rollbacks                                              1
transaction tables consistent read rollbacks                0
rollbacks only - consistent read gets                       4
cleanouts and rollbacks - consistent read gets              0
transaction rollbacks                                       1
IMU CR rollbacks                                            0

6 rows selected.
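The @mystat script used in the transcripts above is not shown in the thread; a sketch of what such a script might look like (a guess at the convention, not necessarily Tom's actual script):

```sql
-- mystat.sql -- show this session's statistics whose name contains &1.
-- Usage from SQL*Plus:  @mystat rollbacks
set verify off
column name format a50
select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and lower(n.name) like '%' || lower('&1') || '%';
```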



 

Transaction rollbacks

Jonathan Lewis, September 12, 2005 - 3:23 pm UTC

One possibility that would explain Brandon's statistics is "single row" batch loads colliding on duplicate keys.

The enqueue waits could arise from two processes inserting the same PK and one waiting for the other to commit; the transaction rollbacks could be due to inserts failing on the duplicate key.

Silly sample of code to demonstrate the point:
create table t1 (n1 number(2), n2 number(2), n3 number(2), n4 number(2));
create unique index i1 on t1(n1);

begin
insert into t1 values (1,2,3,4);
commit;

for i in 1..1000 loop
begin
insert into t1 values (1,2,3,4);
exception
-- we expect dup key
when dup_val_on_index then null;
end;
end loop;

end;
/

Check the statistics, and you will see zero user rollbacks,
but 1,000 transaction rollbacks as Oracle does a rollback to savepoint on error, and 1,000 rollback changes - undo records applied.

Since the rollback to savepoint here also rolls back to the start of the transaction, it also terminates the transaction, so you will also see 2,000 enqueue requests and releases.


Tom Kyte
September 12, 2005 - 5:32 pm UTC

Thanks Jonathan, appreciate that.

I thought of that example too - but forget to commit after the first insert ;)

so mine didn't rollback the entire transaction.

A much better possible cause.


Alberto Dell'Era, September 12, 2005 - 4:56 pm UTC

Jonathan,

> Since the rollback to savepoint also rolls back to the start of the
> transaction, it will also terminate the transaction
> so you will also see 2,000 enqueue requests and releases.

Thanks, that throws light on a problem I had!

Here's the implementation of your test case, slightly modified and runstatted to show the difference when the savepoint is at the beginning of the tx vs after some modifications have been done.

Thanks!!

dellera@ORACLE10> create table t1 (n1 number(2), n2 number(2), n3 number(2), n4 number(2));

Table created.

dellera@ORACLE10> create unique index i1 on t1 (n1);

Index created.

dellera@ORACLE10>
dellera@ORACLE10> insert into t1 values (1,2,3,4);

1 row created.

dellera@ORACLE10> commit;

Commit complete.

dellera@ORACLE10>
dellera@ORACLE10> exec runstats_pkg.rs_start;

PL/SQL procedure successfully completed.

dellera@ORACLE10>
dellera@ORACLE10> begin
  2
  3     for i in 1..1000 loop
  4        begin
  5           insert into t1 values (1,2,3,4);
  6        exception
  7           -- we expect dup key
  8           when dup_val_on_index then null;
  9        end;
 10     end loop;
 11
 12  end;
 13  /

PL/SQL procedure successfully completed.

dellera@ORACLE10>
dellera@ORACLE10> exec runstats_pkg.rs_middle;

PL/SQL procedure successfully completed.

dellera@ORACLE10>
dellera@ORACLE10> begin
  2     -- only change:
  3     insert into t1 values (-1,-2,-3,-4);
  4
  5     -- the savepoint is set here now, it's not
  6     -- at the start of tx anymore
  7     for i in 1..1000 loop
  8        begin
  9           insert into t1 values (1,2,3,4);
 10        exception
 11           -- we expect dup key
 12           when dup_val_on_index then null;
 13        end;
 14     end loop;
 15
 16  end;
 17  /

PL/SQL procedure successfully completed.

dellera@ORACLE10>
dellera@ORACLE10> exec runstats_pkg.rs_stop (900);
Run1 ran in 510 hsecs
Run2 ran in 513 hsecs
run 1 ran in 99.42% of the time

Name                               Run1       Run2       Diff
STAT...enqueue requests           1,002          3       -999
STAT...enqueue releases           1,002          2     -1,000
LATCH.enqueue hash chains         2,083        132     -1,951
LATCH.dml lock allocation         2,000         27     -1,973
STAT...undo change vector size   98,496    106,640      8,144
STAT...redo size                545,240    558,632     13,392

Run1 latches total versus runs -- difference and pct
    Run1       Run2       Diff        Pct
 110,040    107,850     -2,190    102.03%

PL/SQL procedure successfully completed.

Side note: I wonder if that also modifies the behaviour of a "write consistency restart", i.e. if the restart is affected by the restarted statement being at the beginning vs. "in the middle" of the tx.

Tom Kyte
September 12, 2005 - 5:45 pm UTC

it won't affect the write consistency (mine was the first statement) since the transaction doesn't roll back -- just the statement does. But then it starts over; the transaction isn't "killed"

Write-consistency restarts do not increment 'transaction rollbacks'

Brandon Allen, September 12, 2005 - 6:38 pm UTC

Thanks Tom, Jonathan & Alberto for the prompt, helpful replies.  I also ran through some testing and verified the following:

1) 'user rollbacks' is incremented anytime a user issues the 'rollback' command, regardless of whether or not there is actually anything to be rolled back

2) 'transaction rollbacks' is incremented in (at least) the following 3 situations:
    a. user issues 'rollback' after performing some DML, so there is actually rollback to be applied
    b. a process is terminated abnormally, and PMON performs rollback
    c. DML encounters error, e.g. unique constraint violation (as you pointed out), and is automatically rolled back

3) Neither 'user rollbacks' nor 'transaction rollbacks' is incremented by the occurrence of a "write consistency" mini-rollback


Below is my testing to support the above conclusions:


## Ex. 1: Rollback w/o any DML to be rolled back increments 'user rollbacks', but not 'transaction rollbacks'

SQL>@stats

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                              1
user rollbacks                                                            1
transaction rollbacks                                                     0

SQL>rollback;

Rollback complete.

SQL>@stats

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                              1
user rollbacks                                                            2
transaction rollbacks                                                     0

## Ex. 2: Rollback after DML increments both 'user rollbacks' & 'transaction rollbacks'

SQL>update t set y=0;

5 rows updated.

SQL>@stats

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                              1
user rollbacks                                                            2
transaction rollbacks                                                     0

SQL>rollback;

Rollback complete.

SQL>@stats

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                              1
user rollbacks                                                            3
transaction rollbacks                                                     1

## Ex. 3: Abnormally terminated session with uncommitted DML increments 'transaction rollbacks', but not 'user rollbacks'

SQL>update t set y=77;

5 rows updated.

(started second session to monitor v$sysstat)

SQL>host

/mnt1/oracle ->ps
   PID TTY       TIME COMMAND
 12283 pts/tf    0:00 telnetd
 17443 pts/tf    0:00 sqlplus
 12297 pts/tf    0:00 sh
 21149 pts/tf    0:00 sh
 21150 pts/tf    0:00 ps

/mnt1/oracle ->kill 17443

/mnt1/oracle ->Terminated


## 2nd Session ##

/mnt1/oracle ->sqlplus /nolog

SQL*Plus: Release 9.2.0.6.0 - Production on Mon Sep 12 11:44:16 2005

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


SQL>conn / as sysdba

SQL>select n.name, s.value from v$sysstat s, v$statname n where n.statistic#=s.statistic# and n.name in ('transaction rollbacks', 'user rollbacks', 'user commits');

SQL>/  (Before kill)

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                              7
user rollbacks                                                            5
transaction rollbacks                                                     1

SQL>/  (After kill)

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                              7
user rollbacks                                                            5
transaction rollbacks                                                     2


## Ex. 4: mini-rollback due to write consistency does not increment 'user rollbacks' or 'transaction rollbacks'

# Check stats first

SQL>select n.name, s.value from v$sysstat s, v$statname n where n.statistic#=s.statistic# and n.name in ('transaction rollbacks', 'user rollbacks', 'user commits', 'enqueue waits');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                             13
user rollbacks                                                            5
enqueue waits                                                             0
transaction rollbacks                                                     2

(opened 3 other sessions and duplicated Tom's write-consistency-mini-rollback-and-restart sample code from the top of this thread - omitted here for brevity)

# Check stats again - commits and enqueue waits have incremented, but no rollbacks

SQL>/

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
user commits                                                             14
user rollbacks                                                            5
enqueue waits                                                             1
transaction rollbacks                                                     2


So, based on all this, it seems that I can safely conclude that "write consistency" is *not* the cause of my high 'transaction rollbacks' and they are more likely the result of unique constraint violations or some other error.

Thanks again.  As always, I'm amazed by the quick and excellent support I get here. 

Cause of 'transaction rollbacks' found

Brandon Allen, September 22, 2005 - 3:12 pm UTC

Just a follow-up - I did confirm that unique constraint violations are in fact the cause of my high 'transaction rollbacks'. I captured a 10046 trace on a session that was generating some transaction rollbacks and I found the following in the trace file:

=====================
PARSING IN CURSOR #1 len=96 dep=0 uid=48 oct=2 lid=48 tim=4021023302 hv=3246794128 ad='65b2b754'
insert into REPORT_INDEX_TEMP(REPORT_ID, RECORD_ID, PROJECT_ID, MODULE_ID) values (:1,:2,:3,:4)
END OF STMT
EXEC #1:c=0,e=2873,p=0,cr=9,cu=11,mis=0,r=0,dep=0,og=2,tim=4021026271
ERROR #1:err=1 tim=380526050
=====================

oerr ora 1
00001, 00000, "unique constraint (%s.%s) violated"

This same sequence appeared 229 times, which matched exactly with the increase I saw for the value of 'transaction rollbacks' in v$sesstat for this session.

So it appears the problem is due to a flaw in the application causing it to attempt 30 duplicate key inserts every second!

Thanks again,
Brandon





Update with selects in where clause

Roman Filipsky, December 29, 2005 - 5:07 am UTC

Suppose there is a table rf1_tb as

create table rf1_tb
(
process_type varchar2(10),
submit_tm_min date,
rec_cnt integer,
status varchar2(10)
)

insert into rf1_tb values ( null, to_date('2000-01-01','YYYY-MM-DD'), 4000, 'PACKED' );
insert into rf1_tb values ( null, to_date('2000-01-01','YYYY-MM-DD'), 4000, 'PACKED' );
insert into rf1_tb values ( null, to_date('2000-01-01','YYYY-MM-DD'), 4000, 'PACKED' );
insert into rf1_tb values ( null, to_date('2000-01-01','YYYY-MM-DD'), 4000, 'PACKED' );
commit;

Given the following statement:

update rf1_tb t0
set t0.process_type='PACK'
where t0.status='PACKED' and t0.process_type is null and t0.rec_cnt<=5000
trunc(t0.submit_tm_min)=
(
select t2.submit_tm
from
(
select trunc(t1.submit_tm_min) submit_tm, count(*) part_cnt, sum(rec_cnt) rec_cnt
from rf1_tb t1
where t1.status='PACKED' and t1.process_type is null and t1.rec_cnt<=5000
group by trunc(t1.submit_tm_min)
having count(*)>3
order by 2 desc, 3 asc
) t2
where rownum=1
)


The innermost select returns submit_tm='2000-01-01' as expected, so the update sets process_type to 'PACK', also as expected.
From the explain plan I understand that Oracle first performs the inner selects using consistent reads and then does the update.
Is it possible that, by some chance, some other concurrent transaction deletes some rows with submit_tm='2000-01-01' and commits just after the inner selects finish and just before the update part starts? That would cause the update statement to update the wrong rows, since they no longer satisfy the inner selects.
I suppose it does not work this way. At the same time, I do not think the inner selects lock the records, so I wonder whether Oracle is able to detect whether the records in the inner selects changed. Unfortunately I am not able to come up with a suitable test case; I'd just like to ask your opinion on this.

Tom Kyte
December 29, 2005 - 11:44 am UTC

well, the "reads" will be performed read consistently - so as long as the query (the update) does not 'restart', the innermost query will be done once in a read-consistent fashion and its results used to identify the rows to be modified in T0.


http://asktom.oracle.com/Misc/something-different-part-i-of-iii.html
http://asktom.oracle.com/Misc/part-ii-seeing-restart.html
http://asktom.oracle.com/Misc/part-iii-why-is-restart-important-to.html

Correction to SQL above

Roman Filipsky, December 29, 2005 - 5:09 am UTC

I am sorry, I missed the "and" keyword. Here's the correct one:

update rf1_tb t0
set t0.process_type='PACK'
where t0.status='PACKED' and t0.process_type is null and t0.rec_cnt<=5000 and
trunc(t0.submit_tm_min)=
(
select t2.submit_tm
from
(
select trunc(t1.submit_tm_min) submit_tm, count(*) part_cnt, sum(rec_cnt) rec_cnt
from rf1_tb t1
where t1.status='PACKED' and t1.process_type is null and t1.rec_cnt<=5000
group by trunc(t1.submit_tm_min)
having count(*)>3
order by 2 desc, 3 asc
) t2
where rownum=1
)


Power of Imagination

Yogesh Purabiya, February 24, 2006 - 12:02 pm UTC

I do not browse the internet regularly.
But whenever I get time, I do visit / read this site frequently
and learn a lot from it.

Since I use old versions of Oracle on small systems,
this site is more useful to learn new ideas
and be aware of them.
Though many of them may not be directly useful to me;
it gives me knowledge.

While going through these pages (this site),
I wonder how many factors do you think of
while analyzing the queries and replying them.
And, many replies come in a short while !
You even guess something not (in)-directly told.
It does require a lot of imagination.

Even while going through the reply
of the first / original query of this page,
I had to use a lot of imagination.
And, not that I could follow it
to any significant extent !

And, then, you say, every day you learn something new about Oracle !
Well, not everyday, but whenever I browse the net,
I also learn many new things about oracle ;)
Less effort, and more gain - isn't that a smart move ?!

I wonder whether you handle all the queries alone,
or you are heading a group.
More precisely, I would like to know
whether you are helping to develop a second line
to continue this for many many years.
Not that, it can actually be done by you
- but that it is to be done by others
- you can only give them guidance and not imagination.


Tom Kyte
February 24, 2006 - 1:17 pm UTC

unless the answer begins with "I ask XXXXX and they said:"

the answer is coming from me.

these two must be related for sure...

antonio vargas, July 04, 2006 - 4:38 am UTC

This thread has been really informative about why sometimes long processes can take much more time than expected...

Having reached this thread from your blog post http://tkyte.blogspot.com/2005/08/something-different-part-i-of-iii.html
I quote you:

---quote---
Oracle does do two types of block gets when processing a modification statement. It performs

* Consistent reads: When “finding” the rows to modify
* Current reads: When getting the block to actually update the row of interest
---quote---

So maybe this is related to the "how to update millions of rows" thread at
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6407993912330 where you take the CTAS approach instead of updates. Sure, you mention doing it so that we don't get too bogged down with indexes, but maybe it also helps by not having to do a current read for each block we update?


Tom Kyte
July 07, 2006 - 8:13 pm UTC

well, there will be current mode gets for the create table as select - fewer than for the update of every row (likely). But that would likely be a small part of it for an indexed table.

There is row migration to consider.
Index maintenance.
Undo generation.
Redo that can be avoided in some cases.

Those are the big things we are avoiding.
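For readers following along, the CTAS alternative being contrasted with a mass update looks roughly like this - the table and column names here are placeholders, not taken from the referenced thread:

```sql
-- Sketch of "rebuild instead of update" (placeholder names):
create table t_new nologging
as
select id, <new expression> as y, filler
  from t;
-- then: drop table t, rename t_new to t,
-- and re-create indexes, constraints, grants, triggers
```

The rebuild sidesteps the per-row current-mode work plus the items listed above: no row migration, no index maintenance during the change, minimal undo, and (with nologging, where permitted) much less redo.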

Where is current presentation?

Robert, September 14, 2006 - 12:20 pm UTC

Tom,

I tried to link to

http://asktom.oracle.com/~tkyte/ukoug.tar.gz

But it is a bad link.

Can you please direct us to your most recent presentation on the subject (the one that includes awareness of effect of trigger on results).

Thanks,

Robert.

Ravi, October 21, 2006 - 6:17 am UTC

Tom,

Just to clarify a few things here: is write consistency about ensuring that the data identified by a consistent read, as of the start of an operation, is still the same when each row is revisited in current mode to be locked and changed?

Let's take an example:

Time T1
<<invisible Savepoint>>

Update table a where columnA in (Select xxxx)

or

Delete table a where columnA in (Select xxxx)

or

Insert into table a select....


1) Now, if the select query is fetching data that takes a LONG time to arrive to be modified, Oracle does NOT lock a row until it has actually arrived for modification? Is an 'update x where y = 'blah' for update of y' not possible, just to lock every interested row down first?

2) This means, say, Oracle could be at T1+100 seconds, doing the 10,000th of a million rows, having modified 9,999 rows, but then finds the 10,000th row had been modified by another session; Oracle detects this, rolls back quietly to the invisible savepoint and restarts the change again?

Is this what is happening?

3) Can a practical situation be say, Sales table having an Award column flag, that needs to be updated to Yes for any dept that has all its Sales reps having sold 75% of their wares. Something like 'Update sales X set award_flag='Y' where not exists (Select null from sales y where x.dept = y.dept and nvl(y.sales_perct,0) < 75)' and then finding that after the update has started the 10000th salesman's figures had been corrected to be lower?

Thanks

Ravi

Tom Kyte
October 21, 2006 - 11:06 am UTC

1) locking a set of rows will never be atomic; you find one, lock it, move on - find another, lock it, move on....

even if you use "for update" on a select - it finds a row, locks it, moves on, repeat.


2) yes.

3) yes.

A reader, October 21, 2006 - 4:53 pm UTC

Tom,

I thought that Oracle locks ALL the rows before returning the first one if we do a FOR UPDATE OF - see:

http://asktom.oracle.com/pls/ask/f?p=4950:61:2914877876170358966::::P61_ID:44798631736844#46597348673575

1) Is that correct? If yes, how come in your discussion at

http://tkyte.blogspot.com/2005/08/something-different-part-i-of-iii.html

You had said that when a restart happens, it restarts with FOR UPDATE OF. In that case how would the y=5 value become y=11 as all the rows would have been locked with no update permitted?
"a row that was Y=5 when you started the SELECT FOR UPDATE is found to be Y=11 when you go to get the current version of it"

I can understand why, if we do a select for update of a million rows, it's not great to generate undo for a million records.


2) You said that on the restart Oracle always does a FOR UPDATE of. Is that a feature of restart, I mean, restart after locking rows?

Tom Kyte
October 21, 2006 - 5:19 pm UTC

sure it locks all of them before it returns.

It does this, however, by

a) finding a row to lock
b) locking it
c) going back to (a) until there are no more to lock - restarting for the same reasons an update would.


"for update" is not "instantaneous" - nothing is..



A reader, October 22, 2006 - 3:39 am UTC

Then again,
In

http://tkyte.blogspot.com/2005/08/something-different-part-i-of-iii.html

1) You had said that when a restart happens, it restarts with FOR UPDATE OF. In
that case how would the y=5 value become y=11 as all the rows would have been
locked with no update permitted?
"a row that was Y=5 when you started the SELECT FOR UPDATE is found to be Y=11
when you go to get the current version of it"

2) You said that on the restart Oracle always does a FOR UPDATE of. Is that a
feature of restart, I mean, restart after locking rows?

Tom Kyte
October 22, 2006 - 7:47 am UTC

1) a select * from t where ... for update is EXACTLY THE SAME as an update conceptually, does the same thing (but won't fire triggers)

when you started the update, the table had:

Y
-----
5
5
5
5
5
5
5
5
5

it was full of fives. You updated 2 records and found the "third" was updated. So, you restart. Now, when you restart, we reset the SCN and say the table looked like this:

Y
------
5
5
11
5
5
5
5
5
5

Now the select for update starts - but when it gets to record 4, someone modified it from 5 to 11.

so the select for update restarts (just like the UPDATE DID)



2) That is pretty much what the article says. First we try to update; if the update restarts, we reset the SCN, go into select-for-update mode, lock a stable set of records, then come back and actually do the update to them.
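The algorithm described above (consistent read to find rows, current-mode get to modify, mini-rollback and restart when a referenced column changed) can be sketched as a toy model. This is purely illustrative Python - the data structures and function are invented for the sketch and are not Oracle internals:

```python
# Toy model of a write-consistency restart (illustrative only -- the
# structures and names here are invented, not Oracle internals).

def restartable_update(snapshot, current, predicate, ref_cols, column, new_value):
    """snapshot: {rowid: {col: val}} consistent-read image at statement start.
    current:  {rowid: {col: val}} live image (what other sessions committed).
    ref_cols: columns the statement references in its WHERE clause (or, per
              the trigger discussion in this thread, in a row trigger body).
    Returns (rows_updated, restarts)."""
    restarts = 0
    while True:
        locked, must_restart = [], False
        for rowid, snap_row in snapshot.items():
            if not predicate(snap_row):
                continue                      # consistent read filters it out
            cur_row = current.get(rowid)      # the "current mode" get
            if cur_row is None or any(cur_row[c] != snap_row[c] for c in ref_cols):
                must_restart = True           # referenced column changed: restart
                break
            locked.append(rowid)              # real Oracle would lock the row here
        if not must_restart:
            for rowid in locked:              # stable set found: do the update
                current[rowid][column] = new_value
            return len(locked), restarts
        restarts += 1                         # mini-rollback; re-snapshot at new SCN
        snapshot = {rid: dict(r) for rid, r in current.items()}

# One row's referenced column was changed (5 -> 11) by another session:
snap = {1: {'y': 5}, 2: {'y': 5}, 3: {'y': 5}}
live = {1: {'y': 5}, 2: {'y': 11}, 3: {'y': 5}}
n, r = restartable_update(snap, live, lambda row: row['y'] == 5, ['y'], 'y', 100)
# n == 2 (rows 1 and 3 updated), r == 1 (one silent restart), row 2 untouched
```

In the example, the statement silently restarts once against a fresh snapshot, and the changed row is no longer part of the result set, mirroring the FIRSTLAST behaviour discussed at the top of this thread.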

A reader, December 27, 2006 - 6:31 am UTC

Tom,

It looks like restarts also occur because of rows that the second session was never blocked on; my test case is:

I have a table T2 with two columns, and a trigger on it with the following code:

create or replace trigger t_bufer
before update on t2 for each row
begin
dbms_output.put_line(' updated ');
end;
/


select * from t2;

X Y
---------- ----------
18 2
6 9

Session 1:

update t2 set x = x+1 where x > 10

dbms_output -> "updated"


Session 2, blocked when run:

update t2 set x = x+1 where y > 0


Session 3

delete t2 where x < 10;
commit;


Session 1:

Commit

Session 2 does:

updated
updated

1 row updated.

Session 2 appears to have come back, realised that a row it was NEVER blocked on by Session 1 had gone MISSING since the update began, and hence performed an implicit rollback and restart!


Questions:

1) Am I right about restart?

2) How does Oracle manage this? Say, in a multi-million-row table, does it visit ALL potential rows for a query environment (the timestamp at which the query began) and check whether their SCN has changed - and if yes, restart?


Thanks

Ravi




Tom Kyte
December 28, 2006 - 9:19 am UTC

if a row you were interested in (based on READ CONSISTENCY - all rows as of the time your statement began) was modified (including "made to disappear"), yes, you restart.

It does this using the same technique on a 1 row table as it would on a 1 billion row table.

A reader, January 02, 2007 - 11:19 am UTC

I have a table T with

Name Null? Type
----------------------------------------- -------- ----------------------------
X NUMBER
Y NUMBER

And data:

select * from t;

X Y
----- ----------
12 1

Now session 1

My trigger is from your Blog:

create or replace trigger t_bufer
2 before update on t for each row
3 begin
4 dbms_output.put_line( 'fired' );
5 end;
6 /


Now, Oracle does a restart if in Session 1 I do

update t set x= x+1;

And in Session 2 I do:

update t set y=y where x > 0;

But if in Session 2 I use a SCALAR subquery, Oracle DOES NOT restart - why?

update t set y= y where (select x from t) > 0;
Tom Kyte
January 04, 2007 - 9:51 am UTC

because the predicate only needs to be evaluated once per query - not once per row. You are not filtering on the current row's values, so it does not have to check, for each row modified, whether "there is an x > 0"

A reader, January 03, 2007 - 10:25 am UTC

Looks like a restart is done both for a subquery and for a correlated subquery

/* Formatted on 03/01/2007 15:18 (Formatter Plus v4.8.7) */
UPDATE t a1
SET y = y --where (select x from t a2 where a1.rowid = a2.rowid ) > 0
WHERE EXISTS ( SELECT NULL
FROM t a2
WHERE a1.ROWID = a2.ROWID
AND a1.x > 0 )


/* Formatted on 03/01/2007 15:21 (Formatter Plus v4.8.7) */
UPDATE t a1
SET y = y --where (select x from t a2 where a1.rowid = a2.rowid ) > 0
WHERE 1 = ( SELECT 1
FROM t a2
WHERE a1.ROWID = a2.ROWID
AND a1.x > 0 )

Same question as above - why not for a scalar subquery? To me it doesn't sound right!
Tom Kyte
January 05, 2007 - 8:12 am UTC

those are both correlated subqueries, they are both fired row by row.

A reader, January 04, 2007 - 5:05 pm UTC

Tom,
I do not follow, the following query:


UPDATE t a1
SET y = y
where (select x from t a2 where a1.rowid = a2.rowid ) > 0;

This is a correlated subquery; presumably every row from the parent table A1 has to be processed row by row and supplied to the inner query. I mean, it cannot be executed just once for the query; it needs to be executed per row.

But still, there is NO RESTART?

But surprisingly, the following 'similar' query does have a RESTART:

select * from t a1
WHERE 1 = ( SELECT 1
FROM t a2
WHERE a1.ROWID = a2.ROWID
AND a1.x > 1 )

Any explanation?
Tom Kyte
January 05, 2007 - 9:35 am UTC

ahh, read it too fast, let me research that over time.

Ravi, January 05, 2007 - 8:36 am UTC

Tom,
I guess the post above queue-jumped, so re-posting: why wouldn't the following correlated subquery RESTART?


UPDATE t a1
SET y = y
where (select x from t a2 where a1.rowid = a2.rowid ) > 0;

This is a correlated subquery; presumably every row from the parent table A1 has to be processed row by row and supplied to the inner query. I mean, it cannot be executed just once for the query; it needs to be executed per row.

But still, there is NO RESTART?

But surprisingly, the following 'similar' query does have a RESTART:

select * from t a1
WHERE 1 = ( SELECT 1
FROM t a2
WHERE a1.ROWID = a2.ROWID
AND a1.x > 1 )

Any explanation?

consistent gets

Balu, January 08, 2007 - 12:48 am UTC

Dear Tom,

SQL> select count(1) from big where not exists
2 (select 1 from small where big.object_id=small.object_id);

COUNT(1)
---------
15705


Statistics
----------------------------------------------------------
0 recursive calls
188715 db block gets
78850 consistent gets
221 physical reads
0 redo size
181 bytes sent via SQL*Net to client
280 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Table big has 344 blocks and small has only 18 blocks; size-wise, the big table is about 3 MB and the small table about 10 KB.


I just want to focus on these two.

188715 db block gets
78850 consistent gets

As I understand it, these two together make up the logical reads.

The table has only 344 blocks - how is it possible to get 188,715 db block gets and 78,850 consistent gets? I am still not clear about consistent gets and db block gets; can you please explain?

Regards

Balu.

Tom Kyte
January 08, 2007 - 12:20 pm UTC

(ouch - that count(1) which should be a count(*) and that select 1 that should be a select null hurts my head....)


trace it, probably small is being full scanned over and over.


consistent gets

Balu, January 08, 2007 - 10:24 pm UTC

Dear Tom,

This is the explain plan for the above one. Please clarify.

SQL> @?/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 17 | 43 |
| 1 | SORT AGGREGATE | | 1 | 17 | |
|* 2 | HASH JOIN ANTI | | 1 | 17 | 43 |
| 3 | TABLE ACCESS FULL | BIG | 23712 | 94848 | 35 |
| 4 | TABLE ACCESS FULL | SMALL | 327 | 4251 | 2 |
--------------------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("BIG"."OBJECT_ID"="SMALL"."OBJECT_ID")

Note: cpu costing is off




Tom Kyte
January 09, 2007 - 7:35 am UTC

let's see a tkprof of it

Very interesting

A reader, January 10, 2007 - 10:31 pm UTC

Tom,
I feel really bad that I'm not able to understand why seq=5, id=3 had old_y=0.
The blocked session (session 2) executed the update statement (after the mini-rollback) after the commit happened in session 1.
Why is old_y = 0? Shouldn't it be -1, since session 1 committed?
THank you,
V


ops$tkyte@ORA920> select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
2 from msg2, msg1
3 where msg2.seq = msg1.seq(+)
4 order by msg2.seq;

DECODE(MSG1 SEQ SID OLD_ID OLD_Y NEW_ID NEW_Y
----------- ---------- ---------- ---------- ---------- ---------- ----------
1 12565 3 0 3 -1
NOT VISIBLE 2 12566 2 0 2 1
3 12567 1 0 1 -1
4 12567 5 0 5 -1

NOT VISIBLE 5 12566 3 0 3 1

NOT VISIBLE 6 12566 1 -1 1 1
NOT VISIBLE 7 12566 2 0 2 1
NOT VISIBLE 8 12566 3 -1 3 1
NOT VISIBLE 9 12566 5 -1 5 1

A slightly different example....

D, May 31, 2007 - 4:51 pm UTC

Tom -

Maybe this is implied in some example above, but I wasn't sure, so I created a more explicit test.

I was doubtful that Oracle actually compared before and after values of columns used to qualify rows for update. My guess was that it compared the current transaction SCN with the current block SCN or the SCN of the last transaction that updated the row to determine whether or not an update statement should restart. Can you confirm if this is true?

I modified your original example to add an additional column to table "t" so that 2 sessions modify the same row but do not conflict in terms of which columns they change. It appears a restart still occurs in the second session which would suggest a comparison of SCNs.

***session 1***
SQL> create sequence s;

Sequence created.

SQL> 
SQL> create table msg1
  2  ( seq int primary key, sid int,
  3    old_id int, old_y int, old_z int,
  4    new_id int, new_y int, new_z int
  5   );

Table created.

SQL> create table msg2
  2  ( seq int primary key, sid int,
  3    old_id int, old_y int, old_z int,
  4    new_id int, new_y int, new_z int
  5   );

Table created.

SQL> create or replace procedure log_msg2( p_seq in int,
  2    p_sid in int,
  3    p_old_id in int,
  4    p_old_y  in int,
  5    p_old_z  in int,
  6    p_new_id in int,
  7    p_new_y  in int,
  8    p_new_z  in int
  9   )
 10  as
 11   pragma autonomous_transaction;
 12  begin
 13   insert into msg2
 14    (seq, sid, old_id, old_y, old_z, new_id, new_y, new_z )
 15   values
 16    (p_seq, p_sid, p_old_id, p_old_y, p_old_z, p_new_id, p_new_y, p_new_z );
 17  commit;
 18  end;
 19  /

Procedure created.

SQL> create table t
  2    as
  3    select rownum id, 0 y, 1 z
  4  from all_users
  5  where rownum <= 5;

Table created.

SQL> create or replace trigger t_trigger before update on t for each row
  2  declare
  3   l_seq number;
  4  begin
  5   select s.nextval into l_seq from dual;
  6   
  7  insert into msg1
  8   (seq, sid, old_id, old_y, old_z, new_id, new_y, new_z )
  9    values
 10   (l_seq, userenv('sessionid'), :old.id, :old.y, :old.z, :new.id, :new.y, :new.z );
 11   
 12   log_msg2
 13   (l_seq, userenv('sessionid'), :old.id, :old.y, :old.z, :new.id, :new.y, :new.z );
 14   end;
 15   /

Trigger created.

SQL> update t set y = -1 where id = 3;

1 row updated.

***session 2***
update t set z = -2 where id in ( 1,2,3,4,5 );
(blocks)

***session 1***
SQL> select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
  2   from msg2, msg1
  3   where msg2.seq = msg1.seq(+)
  4   order by msg2.seq;

DECODE(MSG1        SEQ        SID     OLD_ID      OLD_Y      OLD_Z     NEW_ID      NEW_Y      NEW_Z
----------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
                     1        655          3          0          1          3         -1          1
NOT VISIBLE          2        659          1          0          1          1          0         -2
NOT VISIBLE          3        659          2          0          1          2          0         -2

SQL> commit;

Commit complete.

SQL> select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
  2   from msg2, msg1
  3   where msg2.seq = msg1.seq(+)
  4   order by msg2.seq;

DECODE(MSG1        SEQ        SID     OLD_ID      OLD_Y      OLD_Z     NEW_ID      NEW_Y      NEW_Z
----------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
                     1        655          3          0          1          3         -1          1
NOT VISIBLE          2        659          1          0          1          1          0         -2
NOT VISIBLE          3        659          2          0          1          2          0         -2
NOT VISIBLE          4        659          3          0          1          3          0         -2
NOT VISIBLE          5        659          1          0          1          1          0         -2
NOT VISIBLE          6        659          2          0          1          2          0         -2
NOT VISIBLE          7        659          3         -1          1          3         -1         -2
NOT VISIBLE          8        659          4          0          1          4          0         -2
NOT VISIBLE          9        659          5          0          1          5          0         -2

9 rows selected.


Tom Kyte
June 01, 2007 - 8:23 am UTC

it is because of your trigger (the observing thing changes that which is observed!!)


If you reference the column in the trigger, as you are with your logging, it changes what causes a restart.

The columns referenced in the trigger become "part of the where clause" in effect - they are compared as well.

So, modify the example to not reference the columns (:new and :old) and you'll see it is "different"


All I can say is....Wow

D, June 01, 2007 - 11:05 am UTC

You are right. I just changed new_y / old_y to be populated with nulls instead of referencing :old and :new.

create or replace trigger t_trigger before update on t for each row
declare
l_seq number;
begin
select s.nextval into l_seq from dual;

insert into msg1
(seq, sid, old_id, old_y, old_z, new_id, new_y, new_z )
values
(l_seq, userenv('sessionid'), :old.id, null, :old.z, :new.id, null, :new.z );

log_msg2
(l_seq, userenv('sessionid'), :old.id, null, :old.z, :new.id, null, :new.z );
end;
/

update t set y = -1 where id = 3;

***session 2***
update t set z = -2 where id in ( 1,2,3,4,5 );
(blocks)

***session 1***
select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
from msg2, msg1
where msg2.seq = msg1.seq(+)
order by msg2.seq;


DECODE(MSG1        SEQ        SID     OLD_ID      OLD_Y      OLD_Z     NEW_ID      NEW_Y      NEW_Z
----------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
                    21        663          3                     1          3                     1
NOT VISIBLE         22        662          1                     1          1                    -2
NOT VISIBLE         23        662          2                     1          2                    -2

SQL> commit;

Commit complete.

SQL> 
SQL> 
SQL> select decode(msg1.seq,null,'NOT VISIBLE'), msg2.*
  2  from msg2, msg1
  3  where msg2.seq = msg1.seq(+)
  4  order by msg2.seq;

DECODE(MSG1        SEQ        SID     OLD_ID      OLD_Y      OLD_Z     NEW_ID      NEW_Y      NEW_Z
----------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
                    21        663          3                     1          3                     1
NOT VISIBLE         22        662          1                     1          1                    -2
NOT VISIBLE         23        662          2                     1          2                    -2
NOT VISIBLE         24        662          3                     1          3                    -2
NOT VISIBLE         25        662          4                     1          4                    -2
NOT VISIBLE         26        662          5                     1          5                    -2

6 rows selected.

SQL> 

Write consistency for ddl?

Nilanjan Ray, June 07, 2007 - 5:43 pm UTC

Hi Tom,

Just came across this in Dizwell.com http://dizwell.com/prod/node/810

A DDL statement performs two commits. Any answers why? I am sure he has proof of this.

I can understand the second commit for the data dictionary, but what is the use of the first?

Thanks & Regards
Ray

Paul, December 04, 2007 - 6:10 pm UTC

The discussions of write consistency here and elsewhere have been most useful and educational.

Following on with a twist regarding write-consistency restarts and v$sesstat: your example of a write-consistency restart showed that the statistic "cleanouts and rollbacks - consistent read gets" was incremented for the session that experienced the restart. That makes me wonder whether my long-running session, which has 270 of them, has therefore experienced some write-consistency restarts, or whether there might be other reasons that this statistic is incremented.

My session doesn't have any IMU CR rollbacks, though, so perhaps there hasn't been any write consistency activity after all.
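A session can read these counters for itself. A minimal sketch (assuming SELECT access to the v$ views; run it before and after the suspect DML and compare the deltas):

```sql
-- Sketch: current session's values for the two statistics discussed above.
-- Note: "cleanouts and rollbacks - consistent read gets" is also incremented
-- by ordinary consistent-read block cleanout, not only by restarts, so a
-- non-zero delta is suggestive rather than conclusive.
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name in ( 'cleanouts and rollbacks - consistent read gets',
                    'IMU CR rollbacks' );
```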

"Read committed" indeed!

Sorry if that was a new question..



restartable update fun

Chris, March 25, 2008 - 3:02 pm UTC

Tom,

First, thank you for this wonderful resource and all the time you spend helping out the Oracle community. We really appreciate it.


I am trying to get a firmer grasp on this re-startable update stuff, and have an interesting test case where predicates in the restartable session are not involved:


[session 1]

     create table t (x number primary key);

     insert into t select level from dual connect by level <= 3;

     create or replace trigger t_bufer
     before update on t for each row
     begin
         dbms_output.put_line('fired');
     end;
     /


     update t set x = 20 where x = 2;
     fired

     1 row updated.


[session 2]
     update t set x = x + 1;
     ...hangs...


[session 1]
     update t set x = 30 where x = 3;
     fired

     1 row updated.

     commit;

     Commit complete.


[session 2]

     fired
     fired
     fired

     3 rows updated.




This first test case seems to contradict this statement:
"if a row you were interested in (based on READ CONSISTENCY, all rows as of the time your statement began) was modified (include "made to disappear"), yes, you restart."

From session 2's perspective, ALL rows that I was interested in when my query began were modified (certainly the row where x = 3 was; if you change 'update t set x = 30 where x = 3' to 'update t set x = 10 where x = 1', it will deadlock). However, the restart did not happen.

Compare this to this test case: (notice session 2 is exactly the same):


[session 1]

     truncate table t;
     insert into t select level from dual connect by level <= 3;
     commit;

     update t set x = 20 where x = 2;
     fired

     1 row updated.


[session 2]

     update t set x = x + 1;
     ...hangs...


[session 1]

     DELETE from t where x = 3;

     1 row deleted.


     commit;

     Commit complete.


[session 2]

     fired
     fired
     fired
     fired

     2 rows updated.


This time we restarted.


I was also a little confused on this:

"only the columns in the predicate are considered"

as I had no predicate in session 2.


I wonder if a more accurate statement (for this case anyway) might be:

If it's possible during the current-mode read that *different rows, or fewer rows* would satisfy the predicate (compared to the consistent-mode read), the update will restart. More rows are OK (i.e. people inserting new rows while your update is processing, even if those rows match your predicate). The same row(s) being modified is OK (with no restart) if Oracle believes it doesn't change the result set.


It is also interesting to note that in the first case, if you add the predicate 'where x is not null' to session 2's query, it will restart the update. You and I both know this should not make a difference (as x is a primary key and cannot be null by definition), but apparently Oracle doesn't check down to this level (and it probably shouldn't -- this seems like overkill).

What do you think?
Tom Kyte
March 26, 2008 - 8:38 am UTC

you have no predicate on the query (the update)

the restart happens when you have:

.... where column = some_value;

for example, as the update using consistent read FINDS a row where column = some_value - but then in doing the CURRENT read (to update the row) finds that column = some_value is no longer true.


when the predicate is:


where 1=1;

as yours is, that'll never happen on the update - but the delete did cause it to happen because we used consistent read to find the row and then the current mode read said "row isn't there!!!".

Chris, March 27, 2008 - 10:40 pm UTC

Thank you Tom for your quick reply.

"...finds that column = some_value is no longer true."

However in this update case, the predicate is ALWAYS TRUE (for both consistent and current mode reads), and the restart still happens:

[session 1]

     create table t (x number primary key);

     insert into t select level from dual connect by level <= 3;

     create or replace trigger t_bufer
     before update on t for each row
     begin
         dbms_output.put_line('fired');
     end;
     /


     update t set x = 20 where x = 2;
     fired

     1 row updated.




[session 2]
      update t set x = x + 1 where x IS NOT NULL;
      ... hangs...



[session 1]
     update t set x = 30 where x = 3;
     fired

     1 row updated.

     commit;

     Commit complete.


[session 2]

     fired
     fired
     fired
     fired
     fired

     3 rows updated.


The predicate in this case is true during both the current and consistent mode reads (has to be, x is a primary key), and we get a restart.

Is this just another special case?



Tom Kyte
March 31, 2008 - 7:08 am UTC

the value *changed*, that is all it cares about - the value it used to find the row was modified.

Heck, if you just put in the trigger -

dbms_output.put_line( 'x was ' || :old.x || ' and is now ' || :new.x );

it'll restart even without a predicate. It cares that it was modified.
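Spelled out as a complete trigger on the table t from the test case above (a sketch of the point Tom is making: merely referencing the column in the trigger makes it a restart candidate):

```sql
-- Sketch: referencing :old.x / :new.x in a row trigger is enough to make
-- column X a "restart candidate" -- no predicate on X is needed.
create or replace trigger t_bufer
before update on t for each row
begin
    dbms_output.put_line( 'x was ' || :old.x || ' and is now ' || :new.x );
end;
/
```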

CSV_ORACLE_TABLE_UPLOAD

Maria, August 23, 2008 - 3:20 pm UTC

http://apex.oracle.com/pls/otn/f?p=28857:1:7803652371416264:::::
Hi, I just want to ask: how can I access my collections and save them into one table together with another item's value? Please see my site. I tried something similar to writing 'insert into '||P1_TableName||' values (get_records)' but I got errors: value not defined.
My question is: how can I put data from collections into one table? Please help me solve my problem, or give me some examples :-)
Thanks
Maria
Tom Kyte
August 26, 2008 - 8:06 pm UTC

the link could not be accessed - it is a protected URL.

Anyway, I'd need a snippet of your code to tell you what you are doing wrong (and versions always help)

Documentation Bug?

Anthony Wilson, August 28, 2008 - 3:34 am UTC

G'day Tom,

I notice in your blog post on the topic (and presumably in your book, which I don't have with me), you use the restart phenomenon to explain why AFTER ROW triggers are more efficient than BEFORE ROW triggers.

However, the documentation at http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28370/triggers.htm states otherwise:

"Note:
BEFORE row triggers are slightly more efficient than AFTER row triggers. With AFTER row triggers, affected data blocks must be read (logical read, not physical read) once for the trigger and then again for the triggering statement. Alternatively, with BEFORE row triggers, the data blocks must be read only once for both the triggering statement and the trigger."

Is this a documentation bug?

It is also interesting to note that the next paragraph in the documentation specifically describes the restart process (something which I have not found elsewhere in the docs).

cheers,
Anthony
Tom Kyte
August 29, 2008 - 3:45 pm UTC

after row triggers are not more efficient because of the restart

they are more 'efficient' because they can have no side effects: they cannot modify the :new values at all, they see the data AFTER it was fully modified, and they generate less redo.

looks to be a doc bug, yes. consider:

drop table t;

create table t ( x int );

create or replace trigger t_trigger
AFTER INSERT OR UPDATE OR DELETE ON T for each row
begin
    if (inserting or updating)
    then
        if (:new.x < 0)
        then
            dbms_output.put_line( 'hello' );
        end if;
    end if;
    if (deleting or updating)
    then
        if (:old.x < 0)
        then
            dbms_output.put_line( 'hello' );
        end if;
    end if;
end;
/

insert into t select 1 from dual;
update t set x = x;
delete from t;

@trace
insert into t after_trigger select object_id from all_objects;
update t after_trigger set x = x+1;
delete from t after_trigger ;


@tk "sys=no"



I did it with 'after' and 'before' and the results were:


insert into t before_trigger select object_id from all_objects

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1      2.69       2.72          0      64591      51462       49725
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.72       2.75          0      64591      51462       49725

insert into t after_trigger select object_id from all_objects

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.03          0          0          0           0
Execute      1      2.71       2.66          0      64577      51460       49733
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.75       2.69          0      64577      51460       49733



update t before_trigger set x = x+1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      1.59       1.86          0        368     102878       49725
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.60       1.86          0        369     102878       49725
********************************************************************************
update t after_trigger set x = x+1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      1.45       1.42          0         84      50879       49733
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.45       1.42          0         85      50879       49733



delete from t before_trigger

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0         70          0           0
Execute      1      1.45       1.74          0        114      53634       49725
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.47       1.76          0        184      53634       49725
********************************************************************************
delete from t after_trigger


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      1.44       2.53          0         84      51625       49733
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.44       2.53          0         85      51625       49733


Write consistency - Restart anomaly

joyan sil, September 26, 2011 - 2:51 pm UTC

This is the example you explained in your book "Expert Oracle Database Architecture"

In the section "Seeing a restart" you have said:
"Normally, the columns referenced in the WHERE clause of the UPDATE or DELETE statement are used to determine whether or not the modification needs to restart"

and given the following example:

Session -1

SQL> create table t (x int, y int);

Table created.

SQL> insert into t values (1,1);

1 row created.

SQL> commit;

Commit complete.

SQL> ed
Wrote file afiedt.buf

  1  create or replace trigger t_bufer
  2  before update on t for each row
  3  begin
  4        dbms_output.put_line('fired');
  5* end;
SQL> /

Trigger created.

SQL> update t set x=x+1;
fired

1 row updated.


Session - 2
SQL> update t set x=x+1 where y>0;


Session 2 is blocked by Session 1

Session - 1

SQL> commit;

Commit complete.


Session - 2

SQL> update t set x=x+1 where y>0;
fired

1 row updated.

SQL> select * from t;

         X          Y
---------- ----------
         3          1


My question is:

1). If the query never restarted in this case, how can Session 2 see the current value of 2 and not the consistent value of 1?

2). If it works like this, what is the use of a restart? When I use where x > 0 in Session 2, it could directly get the current value and complete the update instead of restarting.

I might sound totally foolish but I am really struggling with this concept.



Tom Kyte
September 26, 2011 - 7:02 pm UTC

1) You were updating X

You never used X in a where clause
You never referenced X in the trigger

So, we didn't need to restart. As written in the book - we use a consistent read to read the rows in the table (to find the rows to be affected by "update t set x=x+1" and "update t set x=x+1 where y>0") and when we find a row using a consistent read - we get the block in CURRENT MODE to do the update. When we do that - we look to see if something relevant has changed since the query began - and in this case - nothing has. Sure, X changed, but Y didn't and we used Y to find the rows. X's value wasn't ever referenced.

2) In that case, you were using X to identify the rows you wanted to update. If the value of X we used to find the row (the read consistent value) differs from the current value of X (the current mode value) - then we might not be wanting to update that row - so we restart. *We MIGHT not*, not "we are not", but we might not. We restart whenever a column we depend on changes.

Write consistency anomaly resolved

joyan, September 26, 2011 - 11:37 pm UTC

I realised my mistake. This was due to lack of understanding:
1). We use a consistent read to find the rows (the where clause), but get the block in current mode to do the update.
2). When the current value of x differs from the consistent value of x,
then there are two possibilities:

i). We might not want to update that row, but we cannot simply skip it: that could cause a data integrity issue, since the update would then be based on the physical location of the row. So we restart, and hold the rows in 'SELECT FOR UPDATE' mode.
ii). We might still end up updating the row, in case Session 1 rolls back.

Is that correct?

Thanks for listening !!! Thanks for explaining !!!
Tom Kyte
September 27, 2011 - 8:24 am UTC

are we commenting on your example? If so, the value of x is not relevant, as I stated above. We didn't use X to locate the row.



What about nested subqueries?

A reader, September 27, 2011 - 6:18 pm UTC

I'm clear on how this works when columns explicitly mentioned in the predicate of the outermost (or only) query change between a consistent and current read, affecting the predicate results (a restart occurs) - but we've been having a debate about what this means for nested sub-queries.

Here's a concrete example:

A table with PK id, and other columns state, temp and date.

UPDATE table SET state = DECODE(state, 'rainy', 'snowy', 'sunny', 'frosty') WHERE id IN (
SELECT id FROM (
SELECT id,state,temp from table WHERE date > 50
) WHERE (state='rainy' OR state='sunny') AND temp < 0
)

The real thing was more convoluted (in the innermost query), but this captures the essence.

We found that even though this query only selects 'rainy' and 'sunny' rows explicitly in the first inner query, the update can fail at the DECODE because the rows have already been updated by the same query running concurrently, and the DECODE evaluates to null (it happens that state is not nullable).

Should that be possible? Does the restart not occur because the nested query hides the predicate on 'state', so the restart check passes? This would seem to violate "point-in-time" consistency for the entire query since there is no possible single state which should cause the DECODE to fail.
Tom Kyte
September 27, 2011 - 7:56 pm UTC

the update will not fail. the where clause is done read consistent.

WHERE id IN (
  SELECT id FROM (
    SELECT id,state,temp from table WHERE date > 50      
  ) WHERE (state='rainy' OR state='sunny') AND temp < 0
)


is evaluated as of the time the update started (or restarted due to another transaction)

Travis Downs, September 27, 2011 - 9:42 pm UTC

Thanks for your quick response. We've observed the update failing, intermittently. I guess this is a bug then, perhaps along the lines of the one mentioned here:

http://jonathanlewis.wordpress.com/2011/03/20/upgrade-whoa/

... although we are using 10.x, not the 11.x mentioned there (I checked Metalink and found no indication of it occurring on 10.x).

I'm curious though: if the whole outer WHERE is done consistently with the update (via restarts), how does it really track which columns are used through all levels of nesting? Since the state column isn't explicitly used in the WHERE of the outermost query, it must somehow remember all columns that were used in any way to calculate the subquery results, and then check *all* of them again when it does the update, because there is not necessarily any simple mapping between the rows to be updated and the rows accessed in the nested subqueries.
Tom Kyte
September 28, 2011 - 5:36 am UTC

failing with what error? How does it fail?

define "fail"


and then check *all* of them
again when it does the update,


correct, all it has to do is remember what columns of the base table you referenced in the where clause or touched in a trigger using :new or :old.column references. If the consistent read version differs from the current mode value - then we need to restart.


But a restart would not result in "failure", so I don't know what you mean by "the update failing". Something might be happening in your situation, but I doubt it is related to this.

So, details, give more of them.

Travis Downs, September 28, 2011 - 1:18 pm UTC

Thanks again for your response, Tom.

By the update failing, I mean that the update is rejected because of an attempt to put NULL into the state column, which is not nullable. This only happens if the DECODE falls through to the default (i.e., the state column doesn't contain 'rainy' or 'sunny'), which is NULL.

The only way this could happen is if the query and update were not consistent with respect to one point in time.

Basically the inner WHERE is selecting only 'rainy' and 'sunny' rows, but the outer UPDATE somehow sees rows which are neither.
Tom Kyte
September 28, 2011 - 2:37 pm UTC

drop table t;

create table t
( state varchar2(20) not null,
  id    number,
  temp  number,
  dt    number
)
/
insert into t (state,id,temp,dt) values ('rainy',1,-1,51);
commit;

set echo off
prompt in another session issue:
prompt update t set state = 'xxx';;
prompt and hit enter back here
pause
prompt in another session issue:
prompt commit;;
prompt and hit enter back here
set echo on

UPDATE t SET state = DECODE(state, 'rainy', 'snowy', 'sunny', 'frosty') WHERE id IN (
  SELECT id FROM (
    SELECT id,state,temp from t WHERE dt > 50
  ) WHERE (state='rainy' OR state='sunny') AND temp < 0
)
/



I filed a bug with that test case in it, it reproduces.

I see what they did there - the update doesn't reference anything EXCEPT for ID in the where clause - the update is just:


update t set state = decode( ... )
where id in (another_set_gotten_with_read_consistency);

they are not considering anything other than ID to be a restart candidate - that is not good enough.


A workaround would be:


create or replace trigger t before update on t for each row
begin
    if (1=0)
    then
        dbms_output.put_line( :old.state );
    end if;
end;
/



that would cause the restart to happen (and possibly other restarts that are currently not happening since every update against T would have "state" in its where clause in effect)

Travis Downs, September 28, 2011 - 3:30 pm UTC

Thanks Tom, much appreciated. If you can share the metalink ID so I can track it, I would be a happy camper.

The workaround we went with was to change:

DECODE(state, 'rainy', 'snowy', 'sunny', 'freezy')

to

DECODE(state, 'rainy', 'snowy', 'sunny', 'freezy', state)

so it just does a no-op update for rows whose state changed in the race. I think moving (or duplicating) the WHERE clause referencing state to the outermost query would also work.

In our case adding triggers is generally frowned upon, and requires 10x the effort (DBAs, etc) versus a query change.
Tom Kyte
September 28, 2011 - 4:52 pm UTC

I filed Bug 13036440

You won't be able to follow it unless you open an SR and take it over (which you certainly can do). You would call in and say you are affected by that unpublished bug and would like to follow up on it.


technically, the query should not be written that way. It should just be

update t set state = decode( ... )
where dt > 50 and temp < 0 and state in ('rainy', 'sunny' );


that would solve it.

I frown on triggers as well - just offered it up as a "quick and dirty workaround"

A reader, September 28, 2011 - 8:35 pm UTC

Thanks, I'll pass the SR # along.

Agree that the query doesn't make sense as written - the actual query was more complex, in that the subqueries couldn't trivially be expanded out and added to the predicate of the outer query, since they aggregated over multiple rows of t, used some OLAP stuff, etc. The simplified query was more for illustration.

Original query:

UPDATE /*+ ORDERED USE_NL(sometable) */ sometable
   SET status = DECODE(status, 'e', 'r', 'n', 'i', 'u', 'i', 'm', 'i')
 WHERE id IN
       ( SELECT id
           FROM ( SELECT /*+ ORDERED USE_NL(sometable) */
                         id, some_id, status,
                         SUM(DECODE(status, 'f', 1, 'd', 1, 'u', 1, 'm', 1, 0))
                           OVER (PARTITION BY another_id
                                 ORDER BY some_id ROWS ? PRECEDING) some_olap_thing
                    FROM sometable
                   WHERE another_id IN (SELECT DISTINCT another_id
                                          FROM sometable
                                         WHERE some_id = ?
                                           AND (status = 'n' OR status = 'e')) )
          WHERE some_id = ?
            AND (status = 'n' OR status = 'e')
            AND some_olap_thing > 0 )

In this case it's not immediately obvious to me how to get rid of some of the subqueries. We could definitely expose status all the way to the outer query, though, and use WHERE (status = 'n' OR status = 'e') there too, but it's not clear whether the identity of the columns would get passed through from a restart-tracking perspective.

Disclaimer: I didn't write it :)

Thanks again for your time and help.

Odds to deadlock with single select for update

David T, December 16, 2011 - 8:57 am UTC

Hi Tom,

Considering this post :
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:11504247549852#46077549402890 (and 3 question/answers onward.)
and that you said later down that "locking a set of rows will never be atomic, you find one, lock it, move on - find another lock
it, move on...."

I would infer that a deadlock may occur if 2 sessions concurrently run :
select 1 from items where 
   (pk1,pk2) IN((SELECT *
                 FROM TABLE(CAST(:collection AS NUMBERX2TABLE)) t)) 

with pk1 and pk2 being NUMBER(22, 0) ; primary key declared as (pk1, pk2).
and :
 create or replace TYPE numberx2 AS OBJECT(a NUMBER, b NUMBER);
 create or replace TYPE numberx2table AS TABLE OF numberx2;

where passed collections have overlapping rows.

What do you think are the odds of actually getting a deadlock?
I'm asking because we are trying to strike a balance between robustness and speed:
- robustness would lead us to issue as many individual queries as there are items to lock, so that locks are obtained in a guaranteed order
- speed would lead us to issue only one query to lock all items "at once".
(I'm dealing with a JEE web application; locking between 100 and 100,000 items means as many queries -- slow!)

I'm not expecting concrete figures here of course ;-), but maybe you could list some factors that may (or may not) interfere with the order in which rows are locked:
- does it matter that the PKs given are always sorted in some well-defined order?
- would an ORDER BY clause change the locking order?
- does it help to use hints such as /*+ RULE */? (there are >1,000,000 items; we only lock a small fraction of them)
- does it help if parallel DML is disabled?
- does it change anything if table 'items' is an IOT? is partitioned?
- does it change anything if some blocks are in the cache while others are not?
- ??? any other?
- we are running on 10.2.0.4.0 - 64-bit; should we expect Oracle's behaviour to change in newer versions?

Thanks for your time and any thoughts on how remote the chance of a deadlock is in this case.

David

Tom Kyte
December 16, 2011 - 5:51 pm UTC

I would infer that a deadlock may occur if 2 sessions concurrently run :
select 1 from items where
(pk1,pk2) IN((SELECT *
FROM TABLE(CAST(:collection AS NUMBERX2TABLE)) t))


why would that deadlock?

I presume there must be a for update somewhere???


What do you think are the odds of actually getting a deadlock?


if you are getting resources in different orders - I get A and then B, you get B and then A, the odds are greater than zero in general. Meaning - it is going to happen.

The odds for a given pair of sqls deadlocking needs to have more inputs. If I run my sql before you start yours - the odds are zero. If you and I run at the same time, the odds are nearer to 100%. If you run your sql before I start mine, the odds are zero. So, it is a big old curve that depends on "when" things were run and the order the resources are gained in.

I'm asking because we are trying to strike a balance between robustness and speed:
- robustness would lead us to issue as many individual queries as there are items to lock, so that locks are obtained in a guaranteed order
- speed would lead us to issue only one query to lock all items "at once".
(I'm dealing with a JEE web application; locking between 100 and 100,000 items means as many queries -- slow!)



Why would a J2EE application lock 100,000 items??? what the heck? what is it doing? that seems a bit outrageous.


I'm not expecting concrete figures here of course ;-), but maybe you could list some factors that may (or may not) interfere with the order in which rows are locked:

it depends on the plan - entirely and totally.


If I run a for update select that uses a full scan to find 100,000 rows and you use an index to find 100 rows - you will gain the locks in "key order" and I'll gain the locks in the order the rows appear on disk - we'll use two different orders...



Locking 100,000 out of 1,000,000 is not a small number. Forget fractions - when you start talking big numbers, you have big numbers. To get 100,000 rows out of 1,000,000 I'd probably want to full scan.

It all depends on the plan - nothing else. and please - begging here - do not use hints to try to select a plan, a hint does NOT assure you of a given plan. Do not resort to tricks - they will burn you every.single.time.

Every.
Single.
Time.



why are you locking 100,000 things?

Odds to deadlock with single select for update -- edit

David T, December 16, 2011 - 9:03 am UTC

sorry -- forgot the "FOR UPDATE" clause on the select above.

Why we lock many items

David T, December 19, 2011 - 9:52 am UTC

Hi Tom,
thanks for the feedback.

So it all depends on the plan: I guess I could have labeled my question "how do you ensure the execution plan is always the same?" then.
OK -- got it loud and clear: no hints ;-)
Is there any other way available in our case? Or are you going to say that, by design, the plan depends on the very query being run, and locking Oracle into a pre-determined plan is simply not possible?
You did not explicitly answer about the ordering of the inbound collection being useful or not. I presume it is not?

I see you are asking for the reasons why we lock so many rows.
Items are things that are for sale; there may be millions of them. The application receives sale orders and quote requests from customers, who are generally interested in between 1 and 20,000 *individual* items.
100,000 is the biggest sale order that has been seen so far.

We also have sale managers: they keep "market demand" under scrutiny and react accordingly: they select some items and raise their price, mark them as "monitored", or establish relationships between them, etc. All these actions end up updating one or more columns of table 'items' and commonly touch 15,000-30,000 items, sometimes more.

Some of these usages involve complex computations, so we have resorted to a lock-read-compute-update-commit approach.
That's where the big select for update comes into play.

Note that for some scenarios, we could simply arrange to issue updates in the right order without prior locking; e.g.
update items set sale_req=sale_req+:x where pk1=:v1 and pk2=:v2

where 'x' is either 1 or -1.
Running one parameterized update with thousands different bound values is pretty quick (with toplink and jdbc doing a very good job with so-called "batch writing".)

But while these updates cannot deadlock with one another, nor can they deadlock with many individual select-for-updates issued in the right order, they could deadlock with a single select-for-update IN (:collection), couldn't they?

So here we are, dealing with large, often unpredictable, possibly overlapping sets of rows, sometimes to increment a column holding a count, sometimes to update 10 columns after complex computations.

Are you suggesting we'd rather lock the whole items table if we know that we'll have to touch more than [value-to-be-determined] rows or so ?

Thanks
David

Tom Kyte
December 19, 2011 - 4:50 pm UTC

oh, you can lock in a plan (see SQL plan baselines - and SQL plan management) - to a degree. But what if something changes - an index is dropped and recreated a bit differently - and the stored plan cannot be used anymore? It would be what I would call "fragile" to have the code RELY on a specific plan being used.
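For reference, loading the cached plan of a statement into a baseline is a single call. A minimal sketch (11g and later; the sql_id here is a placeholder you would look up in v$sql):

```sql
-- Sketch: capture the currently cached plan of one statement as a
-- SQL plan baseline. 'abcd1234efgh5' is a placeholder sql_id.
declare
   l_plans pls_integer;
begin
   l_plans := dbms_spm.load_plans_from_cursor_cache(
                 sql_id => 'abcd1234efgh5' );
   dbms_output.put_line( l_plans || ' plan(s) loaded' );
end;
/
```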


You did not explicitly answer about the ordering of the inbound collection being useful or not. I presume it is not?


it would not make a whit of difference.

ops$tkyte%ORA11GR2> select * from dual where dummy in ('a', 'b' );
Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DUMMY"='a' OR "DUMMY"='b')

ops$tkyte%ORA11GR2> select * from dual where dummy in ('b', 'a' );
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DUMMY"='a' OR "DUMMY"='b')


we rewrite, we sort, we distinct stuff - it would all be rather scrambled by the time the runtime query engine got it all.


100,000 is the biggest sale order that has been seen so far.


wow, keep that customer :)


what is the degree of concurrency here? How many of these order processors have to run at the same time, and how long does the LOCK phase take?


A simple, naive approach could be:

loop
use dbms_lock to get a named lock - everyone uses the same name - in X mode.
lock with NOWAIT
use dbms_lock to release the named lock
if locked - then exit
else dbms_lock.sleep for a bit
end loop
read
compute
update
commit


Then you wouldn't have any deadlocks in the lock phase - you'd want to use a SELECT FOR UPDATE NOWAIT so you could go to sleep for a bit and let someone else try to get their rows.
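Fleshing out that loop, a hedged PL/SQL sketch of the idea (the lock name 'ITEMS_LOCK_PHASE' and the cursor predicate are placeholders, and error handling is minimal):

```sql
-- Sketch: serialize the lock phase on a named dbms_lock lock, try to lock
-- the rows with NOWAIT, and back off and retry if someone holds one of them.
declare
   l_handle      varchar2(128);
   l_status      integer;
   l_locked      boolean := false;
   resource_busy exception;
   pragma exception_init( resource_busy, -54 );  -- ORA-00054 from NOWAIT
begin
   dbms_lock.allocate_unique( 'ITEMS_LOCK_PHASE', l_handle );
   loop
      -- every session serializes on the same named lock while locking rows
      l_status := dbms_lock.request( l_handle, dbms_lock.x_mode,
                                     release_on_commit => false );
      begin
         for r in ( select pk1, pk2
                      from items
                     where pk1 between 1 and 100   -- placeholder predicate
                       for update nowait )
         loop
            null;  -- rows are locked when the cursor opens
         end loop;
         l_locked := true;
      exception
         when resource_busy then
            l_locked := false;  -- someone holds one of our rows; back off
      end;
      l_status := dbms_lock.release( l_handle );
      exit when l_locked;
      dbms_lock.sleep( 0.5 );   -- give the other session time to finish
   end loop;
   -- read / compute / update / commit here
end;
/
```

Note the named lock is requested with release_on_commit => false and released explicitly, so the row locks (which do persist to commit) are held while the serialization lock is not.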


they could deadlock with a single select-for-update IN (:collection), couldn't they?

yes.




More to Explore

PL/SQL demos

Check out more PL/SQL tutorials on our LiveSQL tool.

PL/SQL docs

PL/SQL reference manual from the Oracle documentation library