
Question and Answer

Chris Saxon

Thanks for the question, Peter.

Asked: March 29, 2002 - 7:08 pm UTC

Last updated: September 27, 2021 - 1:41 pm UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Hi Tom,

On pages 130-131 of your book, "Expert One-on-One Oracle," you gave an example of two sessions querying and inserting into tables A and B. The result is that both tables contain a row with the value zero in them. My question is: wouldn't the result be the same even if the isolation_level is not serializable? I'm not sure what you're demonstrating by this example.

Thanks,

- peter

and Tom said...

Allow me to clarify. The text in question is as follows. The text in BOLD is what the example is trying to demonstrate. There is a common misconception that serializable MEANS serial execution -- that there is some serial ordering of transactions. If that were true, then the results we observe in my example would be impossible -- there is NO serial (one after the other) ordering of those two transactions that could result in both sessions inserting 0.

<quote>
Serializable

This is generally considered the most restrictive level of transaction isolation, but provides the highest degree of isolation. A serializable transaction operates in an environment that makes it appear as if there are no other users modifying data in the database, the database will be "frozen" at the point in time your query began. Your transaction sees the database consistently, at a single point in time. Side effects (changes) made by other transactions are not visible to it, regardless of how long it has been running. Serializable does not mean that all transactions executed by the users are the same as if they were executed one right after another in a serial fashion. It does not imply that there is some serial ordering of the transactions that would result in the same outcome. This last point is a frequently misunderstood concept and a small demonstration will clear it up. The following table represents two sessions performing work over time. The database tables A and B referenced in the example start out empty and are created as such:

tkyte@TKYTE816> create table a ( x int );
Table created.

tkyte@TKYTE816> create table b ( x int );
Table created.

Time   Session 1 Executes                                 Session 2 Executes
0:00   Alter session set isolation_level=serializable;
0:01                                                      Alter session set isolation_level=serializable;
0:02   Insert into a select count(*) from B;
0:03                                                      Insert into b select count(*) from A;
0:04   Commit;
0:05                                                      Commit;

Now, when this is all said and done - A and B will each have a row with the value of zero in it. If there was some "serial" ordering of the transactions - we could not possibly have both tables containing the value zero in them. If Session 1 executed before Session 2, then table B would have a count of 1. If Session 2 executed before Session 1, then table A would have a count of 1. Both tables however will have a count of ZERO. They just executed as if they were the only transaction in the database at that point in time. No matter how many times Session 1 queried table B - the count would be the count that was committed in the database at time 0:00. Likewise, no matter how many times Session 2 queries table A - it will be the same as it was at time 0:01.
</quote>

So, what I was trying to demonstrate is what serializable means -- in short:

A serializable transaction operates in an environment that makes it appear as if there are no other users modifying data in the database, the database will be "frozen" at the point in time your query began. Your transaction sees the database consistently, at a single point in time. Side effects (changes) made by other transactions are not visible to it, regardless of how long it has been running.
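If you want to reproduce the book example yourself, here is a minimal sketch of the same experiment as a script for two SQL*Plus sessions (tables A and B created as above; the comments mark which session issues each statement and the time from the table):

-- Session 1 (0:00 and 0:02):
alter session set isolation_level=serializable;
insert into a select count(*) from b;    -- inserts 0; b is empty as of this transaction

-- Session 2 (0:01 and 0:03):
alter session set isolation_level=serializable;
insert into b select count(*) from a;    -- also inserts 0; session 1's row is uncommitted and not visible

-- Session 1 (0:04):
commit;

-- Session 2 (0:05):
commit;

-- Either session afterwards: both tables hold a single row with the value 0
select * from a;
select * from b;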






Comments

Excellent explanation

Reader, March 30, 2002 - 12:34 pm UTC

Excellent explanation

For isolation_level Serializable

Sudhanshu Jain, March 31, 2002 - 1:41 pm UTC

I completely agree with Peter; in the experiment, what you are trying to demonstrate is just controlled by commit time.

If I commit in the 1st session before the 2nd session runs its scripts, I will see the data; otherwise, irrespective of isolation_level, it will be 0.



Tom Kyte
March 31, 2002 - 1:54 pm UTC

Sigh,

here are the facts:

1) i am using serializable.

2) 99.99% of the world seems to believe "serializable means there is some serial ordering of transactions that would achieve the same results". That is, when in serializable mode -- there must be some way to order the transactions CONSECUTIVELY such that the same outcome is achieved.
3) The premise in #2 is wrong.

The example is NOT demonstrating any wild new crazy feature of serializable. It is not intended to show how serializable is different from read committed in Oracle. The example is showing that serializable DOES NOT imply *serial ordering*. That is all it is doing (and it does it quite simply -- perhaps TOO simply)

I know, 100% know, that the same outcome in Oracle would be achieved in read committed. I know that -- it is not relevant to the concept being demonstrated -- that serializable DOES NOT imply there is some serial ordering.




Serializable

Sudhanshu Jain, March 31, 2002 - 5:03 pm UTC

I now understand the point of your experiment.

But I still could not understand the PURPOSE of serializable. What additional benefit am I achieving with serializable, other than what is already given by read consistency, etc.?

Tom Kyte
March 31, 2002 - 5:17 pm UTC

(for the whole story, you can read the chapter in my book that contains pages 130-131 -- I go into this in detail as it is a very important topic to understand).

It all stems from the isolation levels and how they are defined. ANSI defines them using specific "phenomena".

These phenomena are (excerpted from my book)

<quote>
The ANSI/ISO SQL92 standard adds to the concurrency control mix as well. In this standard, they define four levels of transaction isolation with different possible outcomes for the same transaction mixes. That is, the same work performed in the same fashion with the same inputs, may result in different answers given your isolation level. These isolation levels are defined in terms of three 'phenomena' that are either permitted or not at a given level. These phenomena are:

o Dirty read - The meaning of this is as bad as it sounds. You are permitted to read uncommitted 'dirty' data. This is the effect you would achieve by just opening an OS file someone else is writing, and reading whatever data happened to be there. Data integrity is compromised, foreign keys violated, unique constraints ignored.

o Non-repeatable read - This simply means that if you read a row at time T1, and attempt to re-read that row at time T2, the row may have changed. It may have disappeared, it may have been updated, and so on.

o Phantom read - This means that if you executed a query at time T1, and re-execute it at time T2, additional rows may have been added to the database, which affects your results. This differs from the non-repeatable read in that in this case, data you already read has not been changed but rather that more data satisfies your query criteria than before.

SQL92 takes these three phenomena and creates four isolation levels based on the existence, or lack thereof, of the above phenomena. They are:

Isolation Level    Dirty Read    Non-Repeatable Read    Phantom Read
Read Uncommitted   Permitted     Permitted              Permitted
Read Committed                   Permitted              Permitted
Repeatable Read                                         Permitted
Serializable

Oracle supports explicitly two of the above isolation levels as they are defined - read committed and serializable. This doesn't tell the whole story however. The SQL92 standard was attempting to set up isolation levels that would permit various degrees of consistency for queries performed in each level. Repeatable read is the isolation level they claim you must be at to get a read consistent result from a query. Read committed does not give you consistent results. Read uncommitted is the level to use to get non-blocking reads.
</quote>


So, serializable gives you the ability to prevent "phantom reads" -- a really really repeatable read if you will. To paraphrase:

A serializable
transaction operates in an environment that makes it appear as if there are no
other users modifying data in the database, the database will be "frozen" at the
point in time your query began. Your transaction sees the database consistently,
at a single point in time. Side effects (changes) made by other transactions are
not visible to it, regardless of how long it has been running.


that is what is given beyond read committed. In a serializable transaction if you do the following:


      Session 1 (serializable)                     Session 2 (another session)
t1    select count(*) from b; -- returns "0"
t2                                                 insert into b values(1);
t3                                                 commit;
t4                                                 select count(*) from b; -- returns "1"
t5    select count(*) from b; -- returns "0"
t6    commit;
t7    select count(*) from b; -- returns "1"

Using read committed, the query at time t5 would return 1, not 0. You get a "phantom read"



Serializable

Sudhanshu Jain, March 31, 2002 - 6:23 pm UTC

Fantastic Feature, and great explanation.

Thanks.

Serializable vs. Read Committed

Saravanakumar Natarajan, April 01, 2002 - 7:54 am UTC

In the follow-up to Sudhanshu Jain, I feel the tabular column is wrong... phantom reads and dirty reads are interchanged. 'Dirty Read' is not permitted except at the 'Read Uncommitted' isolation level...

Tom Kyte
April 01, 2002 - 10:06 am UTC

darn cut and paste from word. You are right the chart was like this:


dirty    non-repeat    phantom
x        x             x
x        x
x

and should have been this:

dirty    non-repeat    phantom
x        x             x
         x             x
                       x


the tabs messed it up. Sorry about that -- I corrected it (thanks!)


Serializable

sudhanshu jain, April 01, 2002 - 1:07 pm UTC

What happens to locking? What happens to redo entries? Does Oracle read the previous data from the rollback segment itself, which could again lead to a snapshot-too-old type of error?



Tom Kyte
April 01, 2002 - 1:10 pm UTC

Locking is done in exactly the same fashion as ever.

reads don't block writes
writes don't block reads
writes only block writes of the SAME exact data.

Oracle uses UNDO (not redo) to give each serializable transaction a "frozen" picture of the database at the point in time the transaction began.

You can lead to snapshot too old or cannot serialize access errors, yes. Serializable transactions are to be used for very short duration OLTP transactions where a snapshot too old should not ever be an issue.

serializable transaction

kit, April 02, 2002 - 10:28 am UTC

In your example, why did t7 then return 1, given that t3 also did a commit? What's the difference between t4 and t5?

Are serializable transactions similar to creating a savepoint?


Tom Kyte
April 02, 2002 - 10:41 am UTC

t3 was another session.

Until that first session commits -- it sees the database frozen at the point in time the transaction began. That is the entire purpose of "serializable"

see
http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76965/c23cnsis.htm#2778
for a more in depth discussion.

How does it use rollback segments?

Colin Davies, June 06, 2002 - 3:31 pm UTC

I read your excellent book, but it was unclear to me how Oracle achieves SERIALIZABLE, in that it was not explained how the rollback segments were affected. In normal operation Oracle uses the rollback information to maintain a consistent read of the data (sometimes resulting in the dreaded "ORA-01555 Snapshot too old" error). Is this how it does it for other isolation levels, such as SERIALIZABLE?

Tom Kyte
June 06, 2002 - 6:10 pm UTC

Exactly.

I alluded to this with this paragraph in chapter 3 when discussing serializable transactions:

...
In Oracle, serializability is achieved by extending the read consistency we get at the statement level to the transaction. Instead of results being consistent with respect to the start of a statement, they are preordained at the time you begin the transaction. Pretty deep thought there – the database already knows the answer to any question you might ask it, before you ask it.
......


We use the rollback (UNDO) information in the same way we would for a single statement -- it is just that the point in time your query is assumed to have started is the point in time the transaction began.



Starting point of a serializable transaction

D. Robinson, February 07, 2003 - 12:51 pm UTC

Obviously, a COMMIT marks the end of a transaction, but what marks the start of a transaction in serializable mode? Is it the first SELECT since the last commit, the first UPDATE/INSERT/DELETE since the last commit, the first SELECT or DML of any kind since the last commit, or something else not mentioned?

Tom Kyte
February 07, 2003 - 1:33 pm UTC

the first statement of the transaction.

A commit is the last statement -- so anything after that.

(next select would be enough)

Set Transaction

Mark J. Bobak, February 07, 2003 - 4:57 pm UTC

Actually, Tom, I think the answer to his question:
"...but what marks the start of a transaction in serializable mode?" would be "set transaction isolation level serializable".

If the transaction begins with anything else, the isolation
level is not serializable.

-Mark

PS Looking forward to seeing you in Dallas next week.

Tom Kyte
February 07, 2003 - 5:13 pm UTC

ops$tkyte@ORA920> alter session set isolation_level=serializable;

Session altered.

don't have to use set transaction.


 

Good Explanation

H S Anand, February 07, 2003 - 11:55 pm UTC

Even after working with Oracle Databases for about 2.5 years, I still had a muddy concept of this "isolation level serializable." I had this misunderstanding that there is some serial ordering of the transactions. I really appreciate the explanation.

How could this be solved ?

David, April 03, 2003 - 8:42 am UTC

Suppose something like:

procedure p (...)
is
...
begin
select count(*) into v_count from t1 where status=p_status;
if v_count <> 0 then raise e_status_exists; end if;
insert into t2 values (p_id, p_status);
...
commit;
exception
when e_status_exists then ...
end;

1) I would like to replace the "stinky" count(*) test with an exception test. I was thinking of something like (as you said in another thread):

begin
begin
select x into y from t where ...;
exception
when no_data_found then y := default_for_y;
end;
(continue processing)
end;

2) I would like to guarantee consistency along my code. Suppose someone else comes in between my select and my insert and inserts a row where status=p_status or updates a row to p_status. Then my database would be inconsistent because I can only insert into t2 as long as there is no p_status in t1.

3) Would locking be a solution ?

4) Would serializable be a solution ?

Tom Kyte
April 03, 2003 - 9:00 am UTC

1)

begin
for x in ( select 1 x from t1 where status = p_status )
loop
raise e_status_exists;
end loop;
.....

exception
....


2) an "anti foreign key". Basically -- you'll need to lock the table T1 to prevent modifications to it during this period of time. And inserts into T1 will need to do the same to T2. To force serialization.

It is a funny situation -- curious -- what is the business requirement that is driving this? how did you get here?



3) yes.

4) no.
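To make points 2 and 3 concrete, here is a minimal sketch of the locking approach, assuming T1, T2 and the p_id/p_status parameters from the question (a sketch only, not a recommendation; the code path that inserts into T1 would need to take the same kind of lock on T2):

create or replace procedure p( p_id in number, p_status in number )
is
    v_count         pls_integer;
    e_status_exists exception;
begin
    -- serialize against concurrent DML on t1 (reads are not blocked)
    lock table t1 in share mode;

    select count(*) into v_count from t1 where status = p_status;
    if v_count <> 0 then
        raise e_status_exists;
    end if;

    insert into t2 values ( p_id, p_status );
    commit;   -- the commit releases the table lock
exception
    when e_status_exists then
        rollback;
        raise_application_error( -20001, 'status already exists in t1' );
end;
/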

Business requirement

David, April 03, 2003 - 10:30 am UTC

Ok, thanks for the insight -- it was VERY useful.

The code I wrote above was an over-simplification of my application -- and I think in that process I confused us both.

I don't have "t1" and "t2". They are both one table "t". I did it so a unique key wouldn't be your answer.

Now, forget it all -- never mind. Start from zero here. What I really want to know is...

In fact, the business rule that is bugging me is:

I have table t(id, name, status). Status can be 0 or 1.

I have to implement: "I cannot insert a new (or update an existing) record in t if there's already a record with the same name as the new one, if status is 0. Any other combination is valid."

That is, I can have:

name status
==== ======
abc 0
abc 1
def 1
def 1

But never:

name status
==== ======
abc 0
abc 0

I thought of something like a unique key, but only when status is 0. What would be the best approach? Maybe a trigger?

Tom Kyte
April 03, 2003 - 11:30 am UTC

As pooh bear says "oh bother".... wish you would have asked me that question, that one is EASY...

create unique index t_idx on t( decode(status,0,name,null) );

done. there you go. just like in:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:8980833603650


you see - entirely null entries are not made into a b*tree index, hence, by decoding on status, we in effect only index NAME when status = 0, therefore NAME will be unique when status=0.
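A quick sketch to see the index in action (using the three columns described in the question):

create table t ( id int, name varchar2(30), status int );

create unique index t_idx on t( decode(status,0,name,null) );

insert into t values ( 1, 'abc', 0 );   -- ok
insert into t values ( 2, 'abc', 1 );   -- ok, status=1 rows are not indexed
insert into t values ( 3, 'def', 1 );   -- ok
insert into t values ( 4, 'def', 1 );   -- ok, duplicates allowed when status=1
insert into t values ( 5, 'abc', 0 );   -- fails with ORA-00001: unique constraint (T_IDX) violated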

Dave, April 03, 2003 - 1:09 pm UTC

Although the explanation of the serializable transaction tends to be a little complex, the implementation of it within Oracle is probably a doddle, I would think.

In a non-serializable transaction each query looks for data consistent as of the SCN when the individual query started. In a serializable transaction all queries look for data consistent as of the SCN when the FIRST query of the transaction started, no?

Tom Kyte
April 03, 2003 - 2:09 pm UTC

yes

A reader, April 03, 2003 - 2:29 pm UTC

Can you give us a business scenario where serializable transaction would be useful?
thanks

Tom Kyte
April 03, 2003 - 2:55 pm UTC

when you need to run the benchmarks provided at www.tpc.org.

when you need a point in time view of the database (a report with dozens of queries -- all of which must be consistent with regard to each other)

Point-in-time view

Glenn A. Santa Cruz, April 04, 2003 - 10:01 am UTC

With the new "Flashback query" in 9i, and the ability to flashback based on an SCN, couldn't you accomplish the same thing for a given set of queries (provided that each query were modified to include the "as of SCN" predicate)?

I can see the benefit of setting a serializable isolation level in that no queries would need to be modified (and these individual queries could be reused in other code that did not *need* flashback), but are there any performance considerations between using a serializable isolation level versus a flashback?

Tom Kyte
April 04, 2003 - 11:32 am UTC

not really -- you cannot update "as of"....

another side effect of serializable is that you cannot update data that was updated by someone else during your transaction. flashback won't provide that protection
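For completeness, a hedged sketch of the flashback approach Glenn describes (the table names here are made up; every query in the report must carry the AS OF clause, and, as noted, you cannot update "as of"):

-- capture one SCN up front (requires execute on dbms_flashback)
variable scn number
exec :scn := dbms_flashback.get_system_change_number;

-- every query in the report then reads as of that same point in time
select count(*)  from orders      as of scn :scn;
select sum(amt)  from order_lines as of scn :scn;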

COM+ and serializable transactions

Arun Gupta, May 27, 2003 - 8:12 pm UTC

Tom
We have a need where within a transaction, we need to update data in multiple physical databases. The design team wants to use COM+ as only COM+ supports connection to multiple databases (in Microsoft technology world). They also told me that COM+ creates only serializable transactions. Since we do not have any real business need for creating serializable transactions, is there a workaround? Are serializable transactions less efficient in performance? My thought would be that even though the transaction is serializable, the SQL statements that make up the transaction would have the same execution plan/statistics.
Thanks


Tom Kyte
May 28, 2003 - 6:29 am UTC

why would you not just use dblinks? doing "database things" outside of the database is just "not the right idea"

Arun Gupta, May 28, 2003 - 1:50 pm UTC

My initial reaction was the same. Use a dblink from one database to the other and let Oracle manage the transactions. That's what it is designed for. But the design team doesn't want to do this. My other thought was to put an
execute immediate 'set transaction isolation level read committed'; in every procedure. Not sure if it will work though...
Thanks

Tom Kyte
May 28, 2003 - 7:11 pm UTC

why do they call themselves a design team? maybe "redesign" or "as hard as we can make it design".

set transaction statements like that must be the first statement in a transaction, so it would be a really bad idea to stick it in each procedure.
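A small sketch of what happens when it is not the first statement (assuming some table t exists):

commit;                                              -- start with a clean transaction
set transaction isolation level read committed;      -- works, it is the first statement

commit;
insert into t values ( 1 );                          -- a transaction is now in progress
set transaction isolation level read committed;
-- ORA-01453: SET TRANSACTION must be first statement of transaction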

Do we know the reason...

Kamal Kishore, May 28, 2003 - 7:54 pm UTC

Do you know (or rather, did they tell you) the convincing argument from your "design" team as to why using DB-Links is such a bad idea?
Did you, and your design team, elaborate on and discuss the pros and cons of using/not using db-links?
I wonder what it is that makes them say NO to db-links?
Lack of knowledge, maybe...
If that is the case, should they be part of the "design team"???


Tom Kyte
May 28, 2003 - 8:36 pm UTC

that is actually a good point and shame on me really.

there could be a valid reason for using an external TPM to coordinate Oracle resources -- there could be non-oracle data sources not covered by a gateway for example.

there could be a valid reason.

serializable and discrete transactions

A reader, June 17, 2003 - 3:32 pm UTC

Tom,
In a serializable transaction the SCN is frozen at the first statement of the transaction, so that we get the same results until we commit in that transaction. Also, these same results are recreated from the rollback segments. I wish to know what happens when a discrete transaction makes changes to data which is being queried by a serializable transaction. A discrete transaction does not create any undo, so the serializable transaction would have no way to know what the data was. However, I am unable to reproduce this scenario. I am using dbms_transaction.begin_discrete_transaction before I update a row in one session, and then try to query the same row from another session which has been set to isolation_level = serializable.
Is the behavior I am expecting correct, or have I completely misunderstood the concept of serializable transactions?
Hope to hear from you.

Tom Kyte
June 17, 2003 - 4:39 pm UTC

discrete transactions are deprecated and require the setting of an "_" parameter

discrete transactions "don't work" without it.

Is the behaviour true for older versions, 7 and before??

A reader, June 17, 2003 - 5:49 pm UTC


Tom Kyte
June 18, 2003 - 1:47 pm UTC

well, serializable was not added until 7.3.3 so it was moot "before"

you had to enable discrete transactions in the init.ora then. you could have gotten:

ERROR:
ORA-08176: consistent read failure; rollback data not available

but it would not be assured, the blocks you need might be in the buffer cache - we might not need the UNDO, there might be a read consistent block sitting there for you already.

isolation level parameter ?

Yves Debizet, June 18, 2003 - 3:43 am UTC

Tom,

I think ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE is very useful (especially when doing reporting).

Question 1: why isn't there any init.ora parameter for the isolation level which would allow changing Oracle's default isolation level value (which is READ COMMITTED)?

Question 2: in a moderately transactional application (few updates, deletes, inserts) is it expensive to run serializable sessions (with an ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE inside an ON LOGON trigger)?

Thank you

Tom Kyte
June 18, 2003 - 6:09 pm UTC

q1) you can certainly file an enhancement request regarding that.

q2) no

Perhaps a better example would be:

Matt, June 18, 2003 - 7:20 pm UTC

Perhaps a better example for the book would have been (or for the NEXT book, which I am very much looking forward to):

Time   Session 1 Executes                                 Session 2 Executes
0:00   Alter session set isolation_level=serializable;
0:01                                                      Alter session set isolation_level=serializable;
0:02   Insert into A select count(*) from B;
0:03                                                      Insert into B select count(*) from A;
0:04                                                      Commit;
0:05   Insert into A select count(*) from B;
0:06   Select * from A;

         A
----------
         0
         0

Thus you can show the differences between read committed, serializable and the (ugh) read uncommitted.

Allan, June 16, 2004 - 4:58 pm UTC

I think some of the confusion regarding serializable isolation (many think it means that the transactions run one after the other) is reinforced by the Concepts Guide, which for 9iR2 on page 20-8 states:

"Serializable isolation permits concurrent transactions to make only those database changes they could have made if the transactions had been scheduled to execute
one after another."

That does not say that the transactions execute one after the other, but unless one reads carefully one could get that impression.

transaction beginning

A reader, June 18, 2004 - 11:02 am UTC

hi Tom
From one of the oracle docs (on LOBs) -
"If you begin a transaction and subsequently select a locator, then the locator
contains the transaction ID. Note that you can implicitly be in a transaction
without explicitly beginning one. For example, SELECT... FOR UPDATE
implicitly begins a transaction."

What denotes the beginning of a transaction? Does a
simple select begin a transaction?

1. I did a simple experiment - did a select and
v$transaction had no rows. did a select for update
and v$transaction had some rows. This seems to indicate
that we need to have either a modification statement
(insert/update/delete) or an intention to
modify for a transaction to begin implicitly.
Is this conclusion correct?

2. can we explicitly begin transaction in Oracle?
My thinking is no - what do you think?

thanx!



Tom Kyte
June 18, 2004 - 11:14 am UTC

a transaction is begun when you use

o set transaction with some options (other than read committed)....
o insert/update/delete/merge
o touch a dblink for the first time
o use "for update"





thank you!

A reader, June 18, 2004 - 11:29 am UTC


ok I ran some tests on "set transaction"

A reader, June 18, 2004 - 11:42 am UTC

--
scott@ORA10G> set echo on
scott@ORA10G> set transaction read only;

Transaction set.

scott@ORA10G> select * from v$transaction;

no rows selected

scott@ORA10G> rollback;

Rollback complete.

scott@ORA10G> set transaction read write;

Transaction set.

scott@ORA10G> select * from v$transaction;

no rows selected

scott@ORA10G> rollback;

Rollback complete.

scott@ORA10G> set transaction isolation level serializable;

Transaction set.

scott@ORA10G> select * from v$transaction;

no rows selected

scott@ORA10G> rollback;

Rollback complete.

scott@ORA10G> set transaction isolation level read committed;

Transaction set.

scott@ORA10G> select * from v$transaction;

no rows selected

scott@ORA10G> rollback;

Rollback complete.

scott@ORA10G> set transaction name 'test_trans';

Transaction set.

scott@ORA10G> select * from v$transaction;

no rows selected

scott@ORA10G> rollback;

Rollback complete.

scott@ORA10G> spool off
---

does this mean that selecting from v$transaction
and checking for existence of a row is not
an indicator of whether a transaction has begun or
not? or does it mean that "set transaction" does not
begin a transaction?

thanx!



Tom Kyte
June 18, 2004 - 11:51 am UTC

they implicitly begin a "transaction" regardless.

read only/serializable for example demark the point in time that your queries will be "as of" -- until you commit.


you have a de-facto transaction there.


should add another way (there are probably others)

trans_id := dbms_transaction.local_transaction_id( TRUE );
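And to merely detect whether a transaction is active, the same call can be used without the TRUE flag - it returns NULL when there is no transaction. A small sketch, assuming a table t(x) exists:

declare
    l_txn varchar2(100);
begin
    l_txn := dbms_transaction.local_transaction_id( false );
    dbms_output.put_line( 'before dml: ' || nvl( l_txn, '(no transaction)' ) );

    update t set x = x;    -- any DML starts a transaction

    l_txn := dbms_transaction.local_transaction_id( false );
    dbms_output.put_line( 'after dml : ' || nvl( l_txn, '(no transaction)' ) );

    rollback;
end;
/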



thanx!

A reader, June 18, 2004 - 12:05 pm UTC

I suppose selecting from v$transaction is not a foolproof way of detecting whether you are in a transaction or not. Is there any other test that can tell us if we are in a transaction?

thanx!

thanx!



Tom Kyte
June 18, 2004 - 12:11 pm UTC

well, what is the goal -- why do you need to know, what are you trying to accomplish?

not important tom

A reader, June 18, 2004 - 12:22 pm UTC

well, i just wanted to find a way of proving it that is
all:) Anyways thanx for the answers!

ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED

A reader, January 20, 2005 - 12:32 pm UTC

Tom,
we are examining a database monitoring system from a 3rd party on our production database. The vendor told me that they created the tool account, say WATCHER. WATCHER gets into the database every hour to collect v$ views. But I found the account doing:

ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED

all the times.

Could you tell me what the impacts are on the database, in particular on the SYS activities.

Thanks a lot.

Tom Kyte
January 20, 2005 - 7:19 pm UTC

it is dandy, it is the default. it is harmless

Is serializable required here

T Tupman, July 22, 2005 - 12:13 pm UTC

If I have a PL/SQL block like this:

declare
cursor c1 is select id from x;
BEGIN
delete x where userid=100;
FOR r1 in c1 LOOP
--process the cursor here
END LOOP;
end;

Is it possible that between the delete x and the FOR r1 in c1 LOOP that another session will have committed an insert into x with a userid =100 and therefore my cursor loop will process this record? If so, is the best way to deal with it (i.e. to prevent the cursor loop from dealing with such records) to put SET TRANSACTION ISOLATION LEVEL SERIALIZABLE before the delete?

Tom Kyte
July 22, 2005 - 12:57 pm UTC

serializable works best for small, short transactions.

Looking at this small fragment, if the 100 is "constant" like that, wouldn't just adding a "where userid <> 100" to the select simply solve it? (solve it simply..?)
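In the fragment from the question, that suggestion would look something like this (a sketch only):

declare
    cursor c1 is select id from x where userid <> 100;   -- filter such rows out in the cursor itself
begin
    delete from x where userid = 100;
    for r1 in c1 loop
        null;   -- process the cursor here
    end loop;
end;
/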

Serializable level of isolation

Tzonka Dimova, November 06, 2005 - 3:33 pm UTC

Fantastic explanation!

Serializable - Over database link?

Rahul, January 08, 2007 - 3:10 pm UTC

Hi Tom,

If my query contains a remote table (using a database link), I do see committed changes made to the remote table, even if the isolation level is set to SERIALIZABLE.

Example:

alter session set isolation_level=serializable;

select *
from   local_tab, 
       remote_tab@remote_db
where  ...;

--
in another session, insert into remote_tab and commit.
in another session, insert into local_tab and commit.
--

select *
from   local_tab, 
       remote_tab@remote_db
where  ...;


This will now show changes to remote_tab but not local_tab.

Is there a way to have data from the remote table appear frozen too, without explicitly making a copy of the remote data? I can't even imagine how Oracle would be able to do this, but does not hurt to ask :).

Thanks,
Rahul


Tom Kyte
January 08, 2007 - 4:42 pm UTC

I cannot reproduce, we send the alters over - I created a database link from my 9i to 10g database, joined emp@remote to dual locally, logged into other database, updated EMP to set ename=lower(ename), committed.

I do not see the changes over the dblink until I commit the session with the dblink.

Thanks Tom

Rahul, January 08, 2007 - 4:56 pm UTC

My fault. I figured it out. I had executed the:

alter session set isolation_level=serializable;

after I had run some queries in that session. I believe this would only take effect with the next transaction. And that never happened since I did not commit in the remote database.

So the first SQL starts a transaction, not necessarily DML, correct?

Thanks Tom

Rahul, January 08, 2007 - 5:00 pm UTC

And that never happened since I did not commit in the remote database.

Meant to say "I did not commit in the session where I set the isolation level to serializable since I was not doing any DML there"

Appropriate way to handle the transactions

Sunil, February 27, 2007 - 4:56 am UTC

Hi Tom,

What is the appropriate way to handle transactions inside a procedure?
Should it be like A) or B)?

A)
select @error_no = @@error
If @error_no != 0
Begin
ROLLBACK tran
Raiserror 20300 'Operation failed'
Return @error_no
End

B)
select @error_no = @@error
If @error_no != 0
Begin
Raiserror 20300 'Operation failed'
ROLLBACK tran
Return @error_no
End

Thanks & Regards,
Sunil.


Tom Kyte
February 27, 2007 - 10:46 am UTC

wow, I've been asked to comment on "best practices for t-sql programming"

neat.

time to find a sqlserver programmer I guess?

Update error for locked rows

Mohan, May 08, 2008 - 6:48 am UTC

I am writing a procedure for data load. I am doing a BULK UPDATE of records. Multiple users are using this procedure. User A is loading data and then user B starts the upload before user A finishes. Some rows will be locked. I want the locked rows to be caught in an exception. The "save exceptions" option in bulk update can do this. But I found that the procedure is waiting for the other user to commit.

I tried with
set transaction isolation level SERIALIZABLE. But this also does not give any solution.

I cannot use "for update nowait" due to some limitations.


Is there any way we can achieve this?
Tom Kyte
May 12, 2008 - 8:30 am UTC

this sounds like a nightmarish disaster.

what sort of "logic" is being employed here - I see *massive* lost update potential - I don't see how this sort of "update" can even begin to make sense.

You'll need to explain a lot better what is actually going on there.

error occurring when updating different databases.

ranjith Janardhan, May 19, 2008 - 10:22 am UTC

Hi Tom,
Starting with thanks in advance.
There are two databases, say X and Y. In database X, around 6 tables are continuously being updated online, and the data would be around 20KB. Database Y has the same structure with a different SID and contains the same 6 tables. The tables in database Y are updated one by one using a trigger that includes an autonomous transaction. When I shut down database X while Y is still open, an error occurs: ORA-06519 - active autonomous transaction detected and rolled back. Can you give a solution for it? It would be very helpful if you could give a proper solution without using an autonomous transaction.

Tom Kyte
May 19, 2008 - 6:10 pm UTC

... including autonomous
transaction ..

you have a bug, you have a bug, you have a big big bug.


you have a bug - you cannot use an autonomous transaction like this - do you understand what happens during "rollback"????


how, HOW, could I give you a solution to a completely UNKNOWN PROBLEM?????

serializable and locking

Amir Riaz, June 25, 2008 - 1:28 am UTC

Hi Tom

My understanding of the serializable and read committed modes is that serializable mode handles the SCN at the transaction level, while read committed handles the SCN at the statement level.

But as soon as the transaction begins, we must lock those tables. For example:

Alter session set isolation_level=serializable
Now
select * from emp where deptno =10; -- I think the transaction begins here

Now, if the transaction begins with the select statement, then we must lock the emp table. I think Oracle locks the emp table in share mode and the rows in RS (row share) mode, so that when the transaction starts no other session can obtain an RX lock on the same rows but can obtain an RS lock on them; the other session can select the same table and the same rows but cannot do DML, thus preventing the lost update or phantom read problem.

Do you think my concept is right?

Regards
Amir Riaz
Tom Kyte
June 25, 2008 - 8:41 am UTC

It does NOT HAVE TO LOCK ANYTHING

that is the beauty of multi-versioning and read consistency. We give you non-blocking reads that return a consistent set of results. You can query as if no one else is in the database.


Do not think "oracle is like informix, db2, sqlserver <......>"


https://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html

*NO LOCKS*

a lock is used for a modification, a read does not lock rows (except for update of course, but that makes the read a modification).

A read is not blocked by writes
Writes are not blocked by reads

for read consistency we need not lock A THING.


We use UNDO to put things back the way they were - we don't need locks for anything like that - we are *not* sqlserver.
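You can see the non-blocking, UNDO-based read for yourself with two sessions (using the standard EMP demo table as an example):

-- Session 1: modify a row, do NOT commit
update emp set sal = sal * 2 where empno = 7369;

-- Session 2: not blocked, returns immediately with the ORIGINAL salary,
-- reconstructed from UNDO - no uncommitted data is ever seen
select sal from emp where empno = 7369;

-- Session 1:
rollback;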

ORA-08177: can't serialize access for this transaction

Amir Riaz, June 29, 2008 - 6:31 am UTC

Hi Tom

Nice article, thanks for your help. In your article you mentioned the

ORA-08177: can't serialize access for this transaction

exception. I am trying to produce this exception but have failed. Can you provide me a test case?

thanks

Regards
Amir Riaz
Tom Kyte
July 01, 2008 - 7:00 am UTC

ops$tkyte%ORA11GR1> create table t ( x int );

Table created.

ops$tkyte%ORA11GR1> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA11GR1> commit;

Commit complete.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> set transaction isolation level serializable;

Transaction set.

ops$tkyte%ORA11GR1> select * from t;

         X
----------
         1

ops$tkyte%ORA11GR1> declare
  2          pragma autonomous_transaction;
  3  begin
  4          update t set x = x+1;
  5          commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR1> select * from t;

         X
----------
         1

ops$tkyte%ORA11GR1> update t set x = x+1;
update t set x = x+1
       *
ERROR at line 1:
ORA-08177: can't serialize access for this transaction


Great!

Valentin, July 08, 2008 - 4:07 pm UTC

Thanks Tom, I had been looking for a clear explanation of "serializable access" and this was it. Thanks a lot!

serializable level with dblink

jula, December 21, 2009 - 2:02 am UTC

Hi Tom,
There is a procedure that selects data from a remote table, puts it into our table, and then deletes all the selected data from the remote table. But it often happens that the count of deleted records is more than the count of selected records.
For example:

db1:
create table my_tab(a number);

db2:
create table remote_tab(a number);
insert into remote_tab values(1);
insert into remote_tab values(2);
insert into remote_tab values(3);
commit;

Session#1 in db1:
begin
rollback;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
--moment1
dbms_lock.sleep(15);
insert into my_tab select * from remote_tab@dlink;
dbms_output.put_line(sql%rowcount);
delete from remote_tab@dlink;
dbms_output.put_line(sql%rowcount);
commit;
end;

--moment1
Session#2 in db2:
insert into remote_tab values (100);
commit;

output:
3
4

No ORA-08177; I got 3 records in my_tab with a in (1,2,3) and nothing left in remote_tab.
Why did I lose my 100?

Thank you

Tom Kyte
December 21, 2009 - 2:49 pm UTC

do you have access to support - If so, please file this as a bug:

TEST CASE:

set echo on

/* create table t as select * from all_users where 1=0
   on both sites
*/

delete from t;
delete from t@ora10gr2;
insert into t@ora10gr2 select * from all_users@ora10gr2;
commit;

set transaction isolation level serializable;

set echo off
prompt when logged into remote database, issue:
prompt insert into t select * from all_users;;
prompt commit;;
set echo on
pause
insert into t select * from t@ora10gr2;
delete from t@ora10gr2;
pause



drop table t2;
create table t2 as select * from all_users;
delete from t;
commit;

set transaction isolation level serializable;

set echo off
prompt when logged into local database, issue:
prompt insert into t2 select * from all_users;;
prompt commit;;
set echo on
pause

insert into t select * from t2;
delete from t2;
select count(*) from t2;
commit;
select count(*) from t2;


If not, let me know please and I will.

isolation level with dblink

jula, December 22, 2009 - 8:01 am UTC

Better if you do. I found a workaround and modified the procedures so data should not be lost.
Thanks for help.

Extract losing data

A reader, January 05, 2010 - 2:45 pm UTC

I have a traditional harvest i.e. extract from a table into a file as part of a systems interface. The extract runs every X minutes and follows the following pattern
select ... from table where extracted='N' and <predicates>
Followed by
update table set extracted='Y' where extracted='N' and <predicates>


The problem is that if some transaction commits eligible data to the table right between these 2 statements, it never gets extracted.

What is the best practice to handle this sort of situation? Would a serializable transaction be useful here? Thanks for any tips.
Tom Kyte
January 06, 2010 - 7:31 am UTC

serializable would be one approach yes. another would be:


ops$tkyte%ORA10GR2> create table t as select a.*, 'N' extracted from all_users a;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2      type array is table of rowid index by binary_integer;
  3      l_data array;
  4      l_batch_size number := 100;
  5  begin
  6      for x in (select rowid rid, t.* from t where extracted='N')
  7      loop
  8          /* ... write it out ... */
  9          l_data(l_data.count+1) := x.rid;
 10          if (l_data.count=l_batch_size)
 11          then
 12              forall i in 1 .. l_data.count update t set extracted = 'Y' where rowid = l_data(i);
 13              l_data.delete;
 14          end if;
 15      end loop;
 16      forall i in 1 .. l_data.count update t set extracted = 'Y' where rowid = l_data(i);
 17  end;
 18  /

PL/SQL procedure successfully completed.



however every approach will of course be flawed and broken - there is no way to create your file "transactionally", if an error occurs - the file has to be rebuilt entirely. It will be corrupt otherwise. No way around that.

If you use serializable, the odds of hitting an ora-8177 sometime will be very high, you will be rebuilding the entire file since utl_file (in fact - since everything that just writes to the file system) will not participate in ROLLBACK.

I would question "why", "why are we doing, what is the situation that would be necessary for this to have to happen, let's find a way to NOT dump the data out of the database where it is useful, secure, protected, recoverable, transactional"



You call this a traditional harvest, I call it "a bad idea". It is not traditional where I come from.

Huh?

A reader, January 06, 2010 - 2:08 pm UTC

...let's find a way to NOT dump the data out of the database where it is useful, secure, protected, recoverable, transactional"...

Not sure what you mean. Extracting data from a database and writing it to a file for the purpose of sending it to another system is a common use-case, no? How else would one transfer data from one system/database to another system/database (using db links doesn't count since not everyone uses Oracle yet) ?
Tom Kyte
January 06, 2010 - 3:16 pm UTC

..
Not sure what you mean. Extracting data from a database and writing it to a
file for the purpose of sending it to another system is a common use-case, no?
....

in the manner you are doing it? No, not a chance, no way.

It would never be done in an incremental fashion like that - appending to a file unextracted records.

You would extract what the other site needs - at once, upon demand - if at all.


You would have a query that identifies the data to be extracted.


I know of no one trying to do replication via flat files as you seem to be trying.


And, as stated, your algorithms just won't work unless you say to the database 'STOP, STOP I SAY, no modifications - let me work'. You have to serialize to do what you want.


Extracts to another system - sure, done all of the time.

The way you are? No, haven't seen it. It is certainly not traditional

A reader, January 06, 2010 - 3:54 pm UTC

...You would extract what the other site needs - at once, upon demand ...

Let me understand what you are suggesting. You are saying that the best (read: robust, bulletproof) practice, in your opinion, for transferring data from one system to another on an ongoing basis (i.e. in an environment where data collection never stops) is to a) designate a transfer table, b) truncate the table to empty it, c) have some process extract and dump all the data into this table and then have an extract dump out this table to a file?
Tom Kyte
January 06, 2010 - 4:19 pm UTC

No, I'm saying - there are things that have been sent, there are things that have not been sent.

You cannot use append to send things - appending to a file doesn't work. I have to assume you are appending because you say:

... The extract runs every X minutes and follows the following pattern ...

so, I presume that every X minutes, you run a process to add more "unextracted records" to that file.

If so, that is your flaw.

What people do is say:

Hey, we need to send them the extract.

Great, run the extract process - it runs but it remembers in a table in the database the extract TIME. and instead of having an extract flag of Y or N, it has a timestamp - the time of the extract (sort of like a BATCH JOB ID).

The extract runs ONCE to create a single extract file - never appended to. The extract runs, updates the records it unloaded, writes to file, gets confirmation that file is OK, saves the timestamp along with other useful data in an extract table, and then commits.

If the extract file fails - gets erased, gets lost, gets corrupted, whatever - big deal, so what, just run it again - run it in a mode that says "find extract = :THIS_DATE and re-extract it (instead of 'is null' for the extract time, look for extract time = :this_date - repull those records).


It is not traditional to every X minutes try to maintain a file in sync with the database.

It is typical for systems that need to do some sort of EDI (electronic data interchange - been around since dirt was invented) to create an extra file in 'batch' - a one time creation of said file, not trying to replicate to file system like you are now.



Next month, week, day, whatever - you would "run the extract again" to extract whatever records it needs now. It is an ongoing process - but it needs to be batched up like that - so the records extracted at time T can be identified as a group - so you can rerun THAT extract for when (not if, when) that extract file fails or needs to be redone for whatever reason.

A reader, January 06, 2010 - 4:01 pm UTC

So in code that would be

truncate table extract_me;
insert into extract_me select ... from ...;
/* write out data from extract_me to a file */


I would still need to remember the last record extracted so that the next extract starts after that, right? Wouldn't this lead back to the same problem we started with, of losing data? I guess the crux of the problem is that eligible data is continuously being generated in one database and it needs to be extracted and sent to another system for processing (e.g. an external vendor).
Tom Kyte
January 06, 2010 - 4:19 pm UTC

see above.

A reader, January 06, 2010 - 5:52 pm UTC

...so, I presume that every X minutes, you run a process to add more "unextracted records" to that file....

I meant to say the job scheduling system runs the extract/harvest job every N minutes and creates a NEW file each time. I am not appending to the file nor am I trying to replicate a file system. Sorry for the confusion.

...a) The extract runs at time T1, b) updates the records it unloaded at time T2, writes to file, gets confirmation that file is OK, saves the timestamp along with other useful data in an extract table, and then commits...

But then we are back to my original question, what if some data is written to the table between steps (a) and (b) above? Won't it fall between the cracks and never get extracted?
Tom Kyte
January 07, 2010 - 7:34 am UTC

did you read my code?

As I said, extract will be a date/time or a BATCH_ID column (it will not be yes/no)

when you extract:

ops$tkyte%ORA10GR2> create table t as select a.*, cast( null as timestamp(6) ) extracted from all_users a;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure do_extract( p_which in timestamp, p_set_to in timestamp )
  2  as
  3      type array is table of rowid index by binary_integer;
  4      l_data array;
  5      l_batch_size number := 100;
  6      l_cnt number := 0;
  7  begin
  8      for x in (select rowid rid, t.* from t where (p_which is not null and extracted= p_which)
  9                union all
 10                select rowid rid, t.* from t where (p_which is null AND extracted is null ) )
 11      loop
 12          /* ... write it out ... */
 13          l_data(l_data.count+1) := x.rid;
 14          if (l_data.count=l_batch_size)
 15          then
 16              forall i in 1 .. l_data.count update t set extracted = p_set_to where rowid = l_data(i);
 17              l_cnt := l_cnt + l_data.count;
 18              l_data.delete;
 19          end if;
 20      end loop;
 21      forall i in 1 .. l_data.count update t set extracted = p_set_to where rowid = l_data(i);
 22      l_cnt := l_cnt + l_data.count;
 23      dbms_output.put_line( 'wrote ' || l_cnt || ' records' );
 24  end;
 25  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace package my_data
  2  as
  3      g_ts timestamp(6);
  4  end;
  5  /

Package created.

ops$tkyte%ORA10GR2> exec my_data.g_ts := systimestamp

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec do_extract( null, my_data.g_ts );
wrote 46 records

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec dbms_lock.sleep(3)

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec do_extract( null, systimestamp );
wrote 0 records

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec dbms_lock.sleep(3)

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec do_extract( my_data.g_ts, my_data.g_ts );
wrote 46 records

PL/SQL procedure successfully completed.



as you can see, you only delete that which you have read, you stamp it with a batch id, so WHEN (not if) you need to re-extract some records, you can.


Re: Extract losing data

Narendra Prabhudesai, January 07, 2010 - 3:57 am UTC

I have a traditional harvest i.e. extract from a table into a file as part of a systems interface. The extract runs every X minutes and follows the following pattern

select ... from table where extracted='N' and <predicates>

Followed by

update table set extracted='Y' where extracted='N' and <predicates>



The problem is that if some transaction commits eligible data to the table right between these 2 statements, it never gets extracted.


"Reader",

I might be missing something obvious here but how is it possible in the above situation that some data never gets extracted? If you are doing a
SELECT...WHERE extracted = 'N'
followed by
update table set extracted='Y' where extracted='N'
and some eligible rows get inserted in between the SELECT and UPDATE, they will not be extracted in the current cycle of the extract, which looks perfectly OK. But surely, these records will be picked up during the next cycle of the extract as they will have their extracted flag as 'N'. How is it possible to "miss" any records?
As Tom mentioned, it is, of course, possible that same record is extracted multiple times if the multiple extract processes are run simultaneously.

Thanks!

A reader, January 07, 2010 - 12:47 pm UTC

@Tom, thanks for your patience, yes, I get it now. I missed the part where you were saving up the rowids and only updating those rowids that were actually extracted/written.

@Narendra Prabhudesai from UK: That was the flaw in my script. If some new data (extracted=N) was added to the table in the instant after the SELECT starts, the UPDATE will (erroneously) mark it as extracted and so it will be missed in both the current cycle (SELECT already executed) and the next cycle (since it is already marked as extracted).

Tom - One last question, if you don't mind. Is there a way to use this technique with simple SQL*Plus scripts instead of a PL/SQL module, since using the latter forces us to use utl_file to write the file? SQL*Plus has lots of useful "set" options for formatting the data which would need to be re-invented by using the utl_file method. Some way to store the list of rowids in a SQL*Plus variable or something? Thanks
Tom Kyte
January 11, 2010 - 8:37 pm UTC

please do not use sqlplus to write a file for something like this, you want something you have a tad bit of control over - at least utl_file will RAISE AN ERROR when an error happens. what happens when your file system fills up?

re-invent for this, it ain't that hard, it should actually be very very very VERY easy. You have to_char and formats galore, creating a flat file would be *trivial* with utl_file. Explain how sqlplus would be "so much easier" - especially since you are not just printing a cursor.
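For what it's worth, a minimal sketch of what such a utl_file writer could look like (the query and procedure name here are placeholders; p_dir must name an Oracle directory object the session can write to):

create or replace procedure dump_extract( p_dir in varchar2, p_file in varchar2 )
as
    l_out utl_file.file_type;
begin
    l_out := utl_file.fopen( p_dir, p_file, 'w', 32767 );
    for x in ( select username, created from all_users order by username )
    loop
        -- to_char/rpad give full control over the layout of every field
        utl_file.put_line( l_out,
            rpad( x.username, 30 ) || to_char( x.created, 'yyyy-mm-dd hh24:mi:ss' ) );
    end loop;
    utl_file.fclose( l_out );
exception
    when others then
        -- unlike "whenever sqlerror exit" in sqlplus, we get to clean up and re-raise
        if utl_file.is_open( l_out ) then
            utl_file.fclose( l_out );
        end if;
        raise;
end;
/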


OK

Rama, January 11, 2010 - 5:08 am UTC

Hi Tom,
Does the data dictionary get updated when we set the transaction as READ ONLY/READ WRITE and also when we start autonomous transactions? As far as I know, I tried to get this information in v$transaction but it's not there. Where can we find this?
Thanks for your time.
Tom Kyte
January 18, 2010 - 12:03 pm UTC

it is not exposed to my knowledge - the session that did it "knows" it (because it did it) but it isn't exposed as an attribute for another session to see.

utl_file vs sqlplus

A reader, January 12, 2010 - 5:12 pm UTC

Well, sqlplus has whenever sqlerror/oserror exit rollback, which comes in handy when there is an ORA- or an OS error. As far as formats are concerned, I guess the only sqlplus settings that really apply here are things like colsep and space, which can be easily done in pure SQL/utl_file. The rest of the formatting stuff is really to_char and other SQL formatting functions.

...especially since you are not just printing a cursor...

Not sure what you mean by this. I am just printing a cursor, right? The data to be extracted is retrieved using a SELECT statement and the output needs to be dumped into a file.
Tom Kyte
January 18, 2010 - 3:53 pm UTC

so what, it exits? that is pretty useless, you have to have a human being involved.


you are NOT just printing a cursor, what have we been talking about this entire time? We've been talking about how actually complex this really is.

ops$tkyte%ORA10GR2> create or replace procedure do_extract( p_which in timestamp, p_set_to in timestamp )
  2  as
  3      type array is table of rowid index by binary_integer;
  4      l_data array;
  5      l_batch_size number := 100;
  6      l_cnt number := 0;
  7  begin
  8      for x in (select rowid rid, t.* from t where (p_which is not null and extracted= p_which)
  9                union all
 10                select rowid rid, t.* from t where (p_which is null AND extracted is null ) )
 11      loop
 12          /* ... write it out ... */
 13          l_data(l_data.count+1) := x.rid;
 14          if (l_data.count=l_batch_size)
 15          then
 16              forall i in 1 .. l_data.count update t set extracted = p_set_to where rowid = l_data(i);
 17              l_cnt := l_cnt + l_data.count;
 18              l_data.delete;
 19          end if;
 20      end loop;
 21      forall i in 1 .. l_data.count update t set extracted = p_set_to where rowid = l_data(i);
 22      l_cnt := l_cnt + l_data.count;
 23      dbms_output.put_line( 'wrote ' || l_cnt || ' records' );
 24  end;
 25  /



there is your "just printing a cursor" code - it does NOT just print a cursor, there is some logic there.


ANSI compliancy

A reader, March 01, 2010 - 2:50 pm UTC

Tom, this is from wikipedia article. Would like to hear your comments on the statement that MS SQL is more compliant with ANSI than Oracle :-)

SERIALIZABLE

This isolation level specifies that all transactions occur in a completely isolated fashion; i.e., as if all transactions in the system had executed serially, one after the other. The DBMS may execute two or more transactions at the same time only if the illusion of serial execution can be maintained. At this isolation level, phantom reads cannot occur.

However, many databases (e.g. Oracle [1], PostgreSQL [2]) do not guarantee that there is some serial ordering of the transactions that will result in the same outcome, instead implementing snapshot isolation. This explicitly contradicts the ANSI/ISO SQL 99 standard (see ISO/IEC9075-2:1999(E), page 83). Other databases (MS SQL Server [3]) support both serializable and snapshot isolation modes.
Tom Kyte
March 02, 2010 - 7:08 am UTC

read http://www.dbazine.com/oracle/or-articles/kyte3

the sql standard defines the isolation levels in terms of "phenomena", we support serializable mode transactions in the sense that is specified.

Reports with DDL

Ranjan Pathania, May 21, 2010 - 3:03 pm UTC

We use MicroStrategy for reporting and, very often, there are multiple passes (many SQLs run before the result comes out). MicroStrategy creates temp tables. During our weekly refresh of summary tables, if a user runs a report, he might see wrong data depending on how far along we are in the refresh.
We were looking into the option of using "serializable", but because of the multiple passes, the transaction ends as soon as a "create table" DDL is issued.
Is there any other option?
We are looking into a Logical Standby database with a bigger refresh interval over the weekend.

Thanks
Tom Kyte
May 24, 2010 - 12:45 pm UTC

microstrategy DOES NOT create temp tables (oh, that they would, life would be grand)

they create REAL tables and pretend to themselves that they are temporary.


Anyway...



I'm not sure what user you are talking about there - the microstrategy account or end users.

There is a chance that flashback query could be used - but I hesitate to suggest that because I don't want you to run flashback query ALL of the time - just when you need to, but that would involve query modification..

Anton, June 08, 2010 - 9:03 am UTC

Hello, Tom.

I wanted to ask, how did you make those parallel profiles of the execution of transactions? With what tools?

Thank you in advance,
Anton
Tom Kyte
June 09, 2010 - 9:00 am UTC

I just used two terminals and sqlplus, nothing fancy

Serializable

Ramesh, June 14, 2010 - 3:38 am UTC

Isn't the definition of serializable used in the article really a new level of isolation called snapshot isolation and not really serializable isolation as defined by ANSI?

Thanks
Tom Kyte
June 22, 2010 - 8:15 am UTC

it fit the description of ANSI as of the time of the implementation (version 7.3). At that time, ANSI described the isolation levels purely in terms of read phenomena - dirty reads, non-repeatable reads and phantom reads. Serializable was simply the level at which none of those phenomena, including phantom reads, may occur.

Serializable

Ramesh, June 26, 2010 - 6:57 am UTC

Thanks Tom for a crisp answer.

Oracle Docs

Jason Bucata, October 12, 2010 - 4:20 pm UTC

This paragraph from the "App. Dev. Guide--Fundamentals" probably explains much of the confusion regarding serializable transactions running one at a time:

"For most applications, this concurrency model is the appropriate one, because it provides higher concurrency and thus better performance. But some rare cases require transactions to be serializable. Serializable transactions must execute in such a way that they appear to be executing one at a time (serially), rather than concurrently. Concurrent transactions executing in serialized mode can only make database changes that they could have made if the transactions ran one after the other."

This also pertains to what the earlier poster from March said re MS SQL Server...

I see that exact text in the docs for 7.3.4, 8.1.7, 9.2, and 10.2. The 11.2 docs seem to be a lot better.

So the next time this comes up in a discussion somewhere, you might want to point out that the Oracle docs were flat-out wrong for over 10 years. It might help to reorient everybody.


read committed

A reader, January 30, 2011 - 4:32 am UTC

Hi Tom,

I have a small query regarding the read committed transaction isolation level.
Scenario: Suppose a query starts on a large table at SCN 12000 at time T1 in session 1. Then at a later time T2 (> T1), another session (session 2) commits a DML on the last row of that table at SCN 12500 (say the data in that last row was last modified at SCN 11990).
Question: If we set the isolation level to read committed, will the query read that committed data on the last row, even though it was committed at an SCN higher than the one at which query execution began? Under what conditions will it read the value from the undo segment, which should store the prior data as of SCN 11990? Do we have to specify the isolation level as serializable?
Tom Kyte
February 01, 2011 - 4:22 pm UTC

... will the query read that committed data on the last row at a SCN higher in value when the query execution began? ...

No, Oracle always implements read consistency. In read committed, when you OPEN the cursor - the result set is pre-ordained as of the point in time you opened the cursor. Any commits that happen AFTER the cursor is opened will not be seen by that cursor.

In serializable - the cursor result set is pre-ordained as of the point in time YOUR TRANSACTION began - you will see no other sessions committed work that was committed after your transaction began.

... On what condition will it read the value from undo segment which should store the prior data of SCN 11990? ...

I don't know what you mean there.
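To illustrate the read committed point - that a cursor's result set is fixed when the cursor is opened - here is a minimal sketch (hypothetical table T_DEMO, two sessions):

-- session 1
create table t_demo ( x int );
insert into t_demo values ( 1 );
commit;

variable c refcursor
begin
    open :c for select x from t_demo;   -- result set fixed as of this point
end;
/

-- session 2
insert into t_demo values ( 2 );
commit;

-- session 1
print c    -- shows only x = 1; the row committed after the open is not seen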

read committed

A reader, February 01, 2011 - 9:01 pm UTC

Hi Tom,

Actually, if my understanding is correct, I gathered from your book that in read committed mode, a reader in one session waits until the lock taken on that table by a writer in another session is released after DML. But if read committed maintains read consistency, why would the reader wait for that block? Shouldn't it read from the undo block?
Tom Kyte
February 02, 2011 - 7:27 am UTC

you got that from my book????? ouch. I must not be as good at writing and explaining things as I thought :(

In Oracle - reads do not block writes, writes do not block reads. Using multi-versioning and read consistency we give you read committed results without blocking. In addition - you get read committed with READ CONSISTENCY - meaning you get an answer that existed in the database. Many other databases - the ones that employ shared read locks and where updates block reads - give you answers that NEVER existed in the database.
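A minimal sketch of that non-blocking behavior (hypothetical table T with one committed row, X = 1):

-- session 1
update t set x = 2 where x = 1;    -- transaction left open, the row is locked

-- session 2
select x from t;                   -- does not block; returns the committed value, 1

-- session 1
commit;

-- session 2
select x from t;                   -- now returns 2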

Perhaps you were reading the chapter on Concurrency where I *compare* Oracles read consistent approach with the OTHER databases approach. Please tell me where I wrote that:

"... that in read committed cases, the reader of any session waits till the lock on that table by writer of other session releases after DML ..."

Read committed vs Serializable

mircea, May 17, 2011 - 6:30 am UTC

Hi Tom,

One question I haven't found the answer to yet: are there any performance differences between a read committed transaction and a serializable one?
From my tests they seem the same from a performance point of view, but maybe I am missing something and I need confirmation.

Thank you!
Tom Kyte
May 18, 2011 - 9:25 am UTC

Since serializable is designed and anticipated to be used in short duration transactions only (seconds/minutes at most), no there isn't really a performance difference.


However, if you try to use serializable for a long duration transaction - there well could be.

That is due to the fact that in general - the further back in time you "flashback", the longer a flashback query will take to execute (and serializable is just like using flashback query - all statements in a transaction will be "as of" the same old point in time).

To query some set of data "as of" one minute ago will involve rolling back all changes to the accessed data for one minute. That will be a 'small' set of undo. To query that same set of data "as of" five minutes ago will involve rolling back all changes to the accessed data for five minutes. That will likely be a larger set of undo to apply. Going back ten minutes - even more so, and so on.



In general - no real difference when used for short duration transactions. Possibly a measurable impact for long duration transactions. It is a matter of how much undo has to be applied.
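One rough way to see that effect for yourself (a sketch only - it assumes a table T that other sessions are modifying continuously) is to time flashback queries against increasingly old points in time:

set timing on
select count(*) from t as of timestamp ( systimestamp - interval '1' minute );
select count(*) from t as of timestamp ( systimestamp - interval '5' minute );
select count(*) from t as of timestamp ( systimestamp - interval '10' minute );

The further back you go, the more undo must be applied and - all else being equal - the longer the query takes.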

Still not clear on how SERIALIZABLE works

Dave, September 07, 2011 - 12:35 pm UTC

In the book, you say, "Instead of results being consistent with respect to the start of a statement, they are pre-ordained at the time you begin the transaction".

But I'm not clear on this. Example:

The following steps are all performed sequentially:

*** In first session:

SQL> create table t (col1 int);

Table created.

SQL>

*** In second session:

SQL> alter session set isolation_level = serializable;

Session altered.

*** In first session:

SQL> insert into t values (1);

1 row created.

SQL> commit;

Commit complete.

*** In second session:

SQL> select * from t;

      COL1
----------
         1

In the second session why do I get the value back from table t that I inserted using the first session? This value was NOT present in the table when the transaction started. We are using 11g.

Tom Kyte
September 08, 2011 - 4:53 pm UTC

transactions start with the first DML statement (insert,update,delete,select,merge) - ALTER is not the start of the transaction, you just set a session level setting, your transaction started with the SELECT.

If you would have used:


set transaction isolation level serializable;


instead, that would have started a transaction - and you would have seen what you were apparently expecting to see...
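A sketch of the modified sequence (assuming table T starts out empty again):

-- second session
set transaction isolation level serializable;   -- the transaction (and its snapshot) begins here

-- first session
insert into t values ( 1 );
commit;

-- second session
select * from t;

no rows selected

The row committed after SET TRANSACTION is not visible, because the serializable transaction was already in progress when that insert was committed.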

A reader, September 17, 2011 - 6:03 am UTC

Hi Tom,
from your Book :
<
Oracle explicitly supports the READ COMMITTED and SERIALIZABLE isolation levels as they are defined in
the standard. However, this doesn’t tell the whole story. The SQL standard was attempting to set up
isolation levels that would permit various degrees of consistency for queries performed in each level.
REPEATABLE READ is the isolation level that the SQL standard claims will guarantee a read-consistent
result from a query. In their definition, READ COMMITTED does not give you consistent results, and READ
UNCOMMITTED is the level to use to get non-blocking reads.
However, in Oracle, READ COMMITTED has all of the attributes required to achieve read-consistent
queries. In many other databases, READ COMMITTED queries can and will return answers that never existed
in the database at any point in time.>


1. Is it not a MUST for the database vendor to follow the SQL standard AS IS?

For example: the SQL standard says that READ COMMITTED does not give you consistent results, but with Oracle it is possible?

Thanks

Tom Kyte
September 17, 2011 - 11:21 am UTC

keep reading the book - in the book I told you what must be in place at each level. The SQL standard describes the levels in terms of "phenomena" - you must not allow a given phenomenon to occur at each level. We do that - we satisfy the standard.


http://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html

sql standard does NOT say that read committed "does not" give you consistent results, it DOES NOT IN ANY WAY say that. It says what read committed must do (you get to read only committed data) - but it does not say "but give the wrong answer". If you believe it does - quote the standard verbatim

Serializable does not mean sequential

Neil Swartz, December 08, 2011 - 7:47 am UTC

So a serializable transaction does not mean sequential. Two transactions can run
at the same time and see the same DB data.
If both modify a record or insert a record that violates a unique constraint
(because of the other transaction) then one succeeds and the other rolls back? Yes?
How can we get sequential operation if that is what we want?

For example, I want both transactions to do
"insert into A select max(x)+1 from A" -- I want a counter.
With serializable, will one transaction set it to 1 (assuming the existing record had 0) and the other transaction fail? Is that correct? Or would we get 2 records with the value 1?
What if there were a unique constraint on the field x?
Tom Kyte
December 08, 2011 - 12:53 pm UTC

yes, if they were to violate a unique constraint together - one would block on the other. But that isn't an attribute of serializable, that would be true in all isolation levels.

"insert into A select max(x)+1 from A" -- I want a counter.


that would be just about the worst thing in the world you could possibly even consider thinking about doing - seriously. It would be what I would term "not smart"

to do it however, you would

lock table a in exclusive mode;
insert into a ....;



but it would be something you DO NOT WANT TO EVER DO IN REAL LIFE. Use a sequence. use a sys_guid(). use something else - but do not do this.
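For example (a sketch - the sequence name is made up):

create sequence a_seq;

insert into a ( x ) values ( a_seq.nextval );

Every session gets its own value with no locking and no serialization point - which is why a sequence scales where the max(x)+1 approach does not.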


distributed serializable transactions

luckybear, March 27, 2012 - 10:17 am UTC

Hello,

here http://www.dba-oracle.com/t_oracle_isolation_level.htm
I found this statement

The serializable transaction isolation level is not supported with distributed transactions.

but I cannot find any such information in the Oracle documentation, so is this statement correct or not?

Example:
Two schemas (schA,schB) on same database.
table_a in schA, table_b in schB.
readonly view_b on table_b.
grant read on view_b to schA.

Session sesA to schA isolation level serializable.
Session sesB to schB isolation level serializable.

Start of distributed transaction.
sesB: INSERT INTO table_b VALUES (1);
some time...
sesA: SELECT count(*) as cnt FROM view_b;
sesA: INSERT INTO table_a VALUES (cnt);
commit;

Will session sesA 'see' the row inserted by session sesB as part of the current distributed transaction?
(For me it does not seem to work, but maybe I have an error somewhere else.)

thanks

Tom Kyte
March 27, 2012 - 11:34 am UTC

wouldn't it be so very very cool if websites that provided "information" also had a method to question such information? That way, you wouldn't have to go elsewhere to get the material refined, updated, etc.

My suggestion to you in the future is:

if a site doesn't have dates on their articles and/or if they do not make any mention of a version - beware, be very suspicious of that material.

if a site doesn't allow for commenting on articles - some method of providing feedback - beware, be very very suspicious of that material.

if a site doesn't have any references to support their claims or does not include some evidence of what they say is true - beware, be very very very VERY suspicious of that material.


Think about how hard it would have been for the author of that article to provide evidence that what they say is probably true?

A link to the documentation? That would have been nice.

Here is an "anti" link I found
http://docs.oracle.com/cd/A57673_01/DOC/server/doc/SCN73/ch10.htm#index1477
(yes that is version 7 documentation, it goes back that far)


An example showing the error you get when doing distributed with serializable? That would have been cool.

However...


ops$tkyte%ORA11GR2> set transaction isolation level serializable;

Transaction set.

ops$tkyte%ORA11GR2> insert into t1 values ( 1 );

1 row created.

ops$tkyte%ORA11GR2> insert into t2@loopback@ora11gr2 values ( 2 );

1 row created.

<b>seems to work so far - no errors - but maybe the database is just lying to us and we are not serializable.  One way to see would be to have another transaction create data:</b>

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          insert into t1 values ( 3 );
  5          insert into t2@loopback@ora11gr2 values ( 4 );
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from t1;

         X
----------
         1

ops$tkyte%ORA11GR2> select * from t2@loopback@ora11gr2;

         X
----------
         2

<b>Nope, cannot see it - it was serializable, if we commit:</b>

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> select * from t1;

         X
----------
         1
         3

ops$tkyte%ORA11GR2> select * from t2@loopback@ora11gr2;

         X
----------
         2
         4

<b>voila - we see it... further evidence:</b>
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> set transaction isolation level serializable;

Transaction set.

ops$tkyte%ORA11GR2> select * from t1 where x = 1;

         X
----------
         1

ops$tkyte%ORA11GR2> select * from t2@loopback@ora11gr2 where x = 2;

         X
----------
         2

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          update t1 set x = -1 where x = 1;
  5          update t2@loopback@ora11gr2 set x = -2 where x = 2;
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> update t1 set x = -1 where x = 1;
update t1 set x = -1 where x = 1
       *
ERROR at line 1:
ORA-08177: can't serialize access for this transaction


ops$tkyte%ORA11GR2> update t2@loopback@ora11gr2 set x = -2 where x = 2;
update t2@loopback@ora11gr2 set x = -2 where x = 2
*
ERROR at line 1:
ORA-08177: can't serialize access for this transaction
ORA-02063: preceding line from LOOPBACK@ORA11GR2




you only get ora-8177's from serializable transactions




Challenge for you - that article is relatively current (April 2011, less than a year old). Why don't you try to get it updated and let us know what happens?

distributed serializable transactions - followup

luckybear, March 28, 2012 - 5:57 am UTC

Hello,

thanks for answers

I managed to get this answer from the author
( http://dbaforums.org/oracle/index.php?showtopic=21649
for some strange reason, I am not allowed to reply to this topic anymore... :-)

"As I understand it, it's possible in a distributed transaction to get a non-repeatable read because the read consistency does not work across databases."

followed by another user pointing to
http://docs.oracle.com/cd/B10501_01/server.920/a96521/ds_txnman.htm#9511
An important restriction exists in Oracle's implementation of distributed read consistency. The problem arises because each system has its own SCN, which you can view as the database's internal timestamp. The Oracle database server uses the SCN to decide which version of data is returned from a query.
. . .
One consequence of the SCN gap is that two consecutive SELECT statements can retrieve different data even though no DML has been executed between the two statements.

---

Meanwhile I discovered that in the 10g documentation
http://docs.oracle.com/cd/B19306_01/server.102/b14220/consist.htm

there is a (for me) useful chapter on Distributed Transactions:

In a distributed database environment, a given transaction updates data in multiple physical databases protected by two-phase commit to ensure all nodes or none commit. In such an environment, all servers, whether Oracle or non-Oracle, that participate in a serializable transaction are required to support serializable isolation mode.

BUT this chapter seems to be missing from the 11g documentation:

http://docs.oracle.com/cd/E14072_01/server.112/e10713/consist.htm

thanks again
Tom Kyte
March 28, 2012 - 9:22 am UTC

serializable transactions see the data as of the beginning of the transaction - not statement - so this wouldn't affect them.


To say this means "serializable doesn't work with distributed" is quite simply "wrong"

quote:
The serializable transaction isolation level is not supported with distributed transactions.
The article is wrong, plain and simple.


And did the article get updated to reflect this? If the author wanted to make a point, it wouldn't be "serializable doesn't work", it would be that documented caveat - which is a caveat, not a "does not work".

Implementation vs Intent

Beldaz, October 21, 2012 - 5:00 pm UTC

Hi Tom,

You give an excellent demonstration of how the effect of Oracle's SERIALIZABLE isolation level is not the same as running the transactions in serial. But I want to pick you up on your definition. I agree that serializable does not mean transactions operate one after the other, but it does mean that the effect is the same as if the transactions had occurred one after the other, though you state that this is not so. The problem is that in Oracle (or Postgres, or any other DBMS providing snapshot isolation), transactions at the SERIALIZABLE isolation level are not actually serializable.

You give your definition of serializable as:
"A serializable transaction operates in an environment that makes it appear as if there are no other users modifying data in the database.[...]"
That's isolation, not serializability. And to be fair, that is really what we care about (we want ACID, not ACSD). The SERIALIZABLE isolation level is the most isolated level available in SQL, but the ANSI standard was too prescriptive in how this highest level should be achieved. Oracle has been pragmatic in implementing this level with something that is just as good as, but not equivalent to, serializable transactions. But that doesn't change the definition of serializable.


Tom Kyte
October 23, 2012 - 12:10 pm UTC

we took the ANSI definition - the set of properties given to serializable transactions - and implemented it. It was not too prescriptive; if anything, it didn't have enough definition.

Difference between "Serializable", "read-only" and "read-commited"

Abdelmoula Mahdi, January 05, 2013 - 10:03 am UTC

Hi Tom,

I want to know the difference between "Serializable", "read-only" and "read-commited"

Thanks

Shareable cursors

Todor, December 03, 2013 - 10:41 am UTC

Hi Tom,
I need to know: if two transactions are serializable, will they continue to share cursors? My knowledge is not complete in this area. The problem I want to solve is that there are a lot of 'cursor: pin S wait on X' waits when I select from the same view at the same time. In our environment it is possible to use this isolation level, but I am not sure whether it would be efficient.
Thanks!
Todor

What the standard says about SERIALIZABLE

Paul, February 22, 2014 - 6:43 am UTC

Section 4.28 of the standard:

"The execution of concurrent SQL-transactions at isolation level SERIALIZABLE is guaranteed to be serializable. A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions. A serial execution is one in which each SQL-transaction executes to completion before the next SQL-transaction begins."

This seems quite clear, yet contrary to your answer:

"Serializable does not mean that all transactions executed by the users are the same as if they were executed one right after another in a serial fashion. It does not imply that there is some serial ordering of the transactions that would result in the same outcome."

How do we resolve this apparent contradiction?

A reader, August 27, 2017 - 6:01 pm UTC

Perfect example to clarify.

ORA-08176: consistent read failure; rollback data not available

John, September 23, 2021 - 8:50 pm UTC

Hi,
Got the below error. The query involved accesses both local tables and remote tables via a database link. It's a purely read-only transaction. We use the default isolation level (read committed).

ORA-08176: consistent read failure; rollback data not available
ORA-02063: preceding 2 lines from EVAL
ORA-02063: preceding 3 lines from EVAL2DWH

Here, EVAL is the local database (where the query was initiated) and is on Oracle 12.2.0.1.
DWH is the remote database, which is running on Oracle 12.1.0.2.

From the database side, I checked undo tablespace usage/free space and undo_retention, and they were sized properly. I also checked the alert log and no undo-related errors were reported.

Besides these basic checks, can you please point me in the right direction to investigate and fix the error?

1) Is this error coming from the local database or the remote database?

2) Also, as the error cause says, I checked for any DDL operations on the objects involved, but there was no such activity around that time.

SQL> !oerr ora 08176
08176, 00000, "consistent read failure; rollback data not available"
// *Cause: Encountered data changed by an operation that does not generate
// rollback data : create index, direct load or discrete transaction.
// *Action: In read/write transactions, retry the intended operation. Read
// only transactions must be restarted


3) Any suggestions on how to prevent this error?


Thank you.
Chris Saxon
September 24, 2021 - 4:02 pm UTC

1. The error stack starts at the top with the statement that raised the error and works down to the client call at the bottom. So:

"Something" happens which raises ORA-08176;
this was raised on EVAL;
which was then raised on EVAL2DWH.

2. The error raises two other possible reasons:

direct load or discrete transaction.

Did you have any direct path loads during this period? How are you managing the transactions?

3. There are some bugs listed against this error; check MOS to see if any of them apply to you.

To help further we need to see more details of what you're doing - ideally a complete test case (create table + insert into + queries) that reproduces the problem.

ORA-08176: consistent read failure; rollback data not available

John, September 27, 2021 - 3:44 am UTC

Hi Chris,

Thanks for your quick response and pointers.

1. I checked the 2 tables involved in the query on the EVAL database side for direct-path operations, but neither of them uses direct loads or discrete transactions.

>> 2 The error raises two other possible reasons:
direct load or discrete transaction.
Did you have any direct path loads during this period? How are you managing the transactions?


Yes, there are direct loads (INSERT /*+ APPEND */) happening on other tables (BTW, the query hit with ORA-08176 does not reference those tables) in the EVAL database during this period.

Can this cause the ORA-08176 error?

This direct load on other tables has been running for a while, though.

Also, the DBLINK query uses /*+ PARALLEL(8) */ in the select statement.


>> 3. There are some bugs listed against this error; check MOS to see if any of them apply to you.

I could not find a bug matching this scenario on MOS. If you came across any known bug, can you mention the bug number? I can double-check it.

Thank you
Chris Saxon
September 27, 2021 - 1:41 pm UTC

I'm not sure why you're getting this error - a complete test case of the process so we can reproduce it will help massively!

Search MOS for ORA-08176 - you'll find several articles/bugs. I don't know enough about your system to say whether these apply to you or not.