Question and Answer

Chris Saxon

Thanks for the question, Alex.

Asked: November 05, 2004 - 6:40 pm UTC

Last updated: June 08, 2016 - 12:49 am UTC

Version: 9

Viewed 10K+ times

You Asked

1)
I have read that Oracle guarantees a read-consistent view, but I don't feel it is exactly clear. Could you please explain, in your own words, what a read-consistent view is, and how it differs from a write-consistent view?

2)
I am unable to understand the difference between a "transaction" and a "PL/SQL block" in Oracle. Please explain.

and Tom said...

1) I covered this in "Expert One on One Oracle". I wrote *a lot* about it, in fact; this is a short extract (there is a lot more in the book):

<quote>
Multi-Versioning and Concurrency

This is a topic very closely related to concurrency control, as it forms the foundation for Oracle's concurrency control mechanism - Oracle operates a multi-version read consistent concurrency model. In the following chapter, "What you need to know", we'll cover the technical aspects of this in more detail but, essentially, it is the mechanism by which Oracle provides for:

o Read-consistent queries: Queries that produce consistent results with respect to a point in time.
o Non-blocking queries: Queries are never blocked by writers of data, as they would be in other databases.

Oracle uses the information stored in rollback segments to provide this read consistent view of information. Since the rollback segments are where Oracle stores "old values", or before images, of data during transaction processing, it can use these old values to provide a query with the value of data as it looked in the database when the query began. As a query processes the blocks of data in a table, it will see if the data on that block has changed since the query began. If it has, Oracle will read the information from the rollback segment to determine what the block looked like when the query began; that is the view of the data the query will see. This is how non-blocking reads are implemented as well - Oracle only looks to see if the data changed; it does not care if the data is currently locked (which implies that it has changed). It will simply retrieve the old value from the rollback segment and proceed on to the next block of data. This is where the term multi-versioning comes in - there are multiple versions of the same piece of information, all at different points in time, available in the database. These multiple versions are provided by the information stored in the rollback segments. Oracle is able to make use of these snapshots of data at different points in time to provide us with read consistent queries and non-blocking queries.

These are two very important concepts in the Oracle database. If you understand how multi-versioning works, you will always understand the answers you get from the database. Here is the simplest way I know to demonstrate multi-versioning in Oracle:

tkyte@TKYTE816> create table t
2 as
3 select * from all_users;
Table created.

tkyte@TKYTE816> variable x refcursor

tkyte@TKYTE816> begin
2 open :x for select * from t;
3 end;
4 /
PL/SQL procedure successfully completed.

tkyte@TKYTE816> delete from t;
18 rows deleted.

tkyte@TKYTE816> commit;
Commit complete.

tkyte@TKYTE816> print x

USERNAME USER_ID CREATED
------------------------------ ---------- ---------
SYS 0 04-NOV-00
SYSTEM 5 04-NOV-00
DBSNMP 16 04-NOV-00
AURORA$ORB$UNAUTHENTICATED 24 04-NOV-00
ORDSYS 25 04-NOV-00
ORDPLUGINS 26 04-NOV-00
MDSYS 27 04-NOV-00
CTXSYS 30 04-NOV-00
...
DEMO 57 07-FEB-01

18 rows selected.

In the above example, we created a test table, T, and loaded it with some data from the ALL_USERS table. We opened a cursor on that table. We fetched no data from that cursor: we just opened it. Bear in mind that Oracle does not "answer" the query, does not copy the data anywhere when you open a cursor - it answers the query as it goes along. Consider if T was a table with 1 billion records in it - Oracle would not "answer" that query and copy the data anywhere; it would just read it from the table as you fetched from it. Now, in the same session (or maybe another session would do this), we then proceeded to delete all data from that table. We even went as far as to commit work on that delete. The rows are gone - but are they? In fact, they are retrievable via the cursor. The fact is the result set returned to us by the OPEN command was pre-ordained at the point in time we opened it. We had touched not a single block of data in that table during the open - but the answer was already fixed in stone. We have no way of knowing what the answer will be until we fetch the data - however the result is immutable from our cursor's perspective. It is not that Oracle copied all of the data above to some other location when we opened the cursor (imagine how long it would take to open a cursor on that one billion row table if it did; that cursor opens instantly - it does not copy the data). It was actually the delete that preserved our data for us (yes, the delete preserved the data for us) by placing it into the rollback segment.

This is what read consistency is all about. Oracle achieves read consistency via a multi-versioning scheme. In the following chapter, "What you need to know", we'll cover the technical aspects of this in more detail. Suffice it to say, however, that until you understand what this Oracle feature implies, you will not be able to take full advantage of Oracle nor will you be able to write correct applications in Oracle (ones that will ensure data integrity).
</quote>


http://docs.oracle.com/cd/B10501_01/server.920/a96524/c21cnsis.htm#2570

The concepts guide is really good for this as well.


2) A PL/SQL block has no relationship to a transaction whatsoever; a PL/SQL block is just a block of code, nothing more - nothing less. It might decide to do a transaction (execute some SQL, change the database, and commit or rollback) or it might just add two numbers together - whatever. It is just "code".


If you are interested in hearing about "plsql, transactions, concurrency control, locking, multi-versioning, oracle architecture, etc" -- in my words -- that would be "Expert One on One Oracle"


Comments

From your book: multiversioning

Totu, April 05, 2006 - 3:02 am UTC

Dear Tom.
There is a table in your book (Chapter 1, page 36) that shows the steps (T1 through T7) of Oracle's processing.

If step T7 took place before T6, would Oracle return $1600, or, as in your example above, would it read the rollback segments even after the commit? I use Oracle 10g.

Thanks in advance.

Tom Kyte
April 05, 2006 - 5:47 pm UTC

I have more than one book?

Expert one-on-One

Totu, April 06, 2006 - 1:38 am UTC

isbn 1-59059-243-3
2001

Tom, I am interested in your thoughts regarding the "best isolation level in a banking system".

What do you think?
Thanks in advance

Tom Kyte
April 06, 2006 - 10:02 am UTC

it always depends on the needs of the application and how the transactions are constructed. For most applications read committed in Oracle is sufficient. But you need to understand the transaction semantics (what you are trying to accomplish) and HOW the isolation level is implemented in the underlying database (to avoid side effects)

Multi Versioning Question?

Dihan Cheng, July 26, 2006 - 5:30 pm UTC

Hi Tom,

In your Expert One on One Oracle book, pages 187-188, you give an example concerning multi-versioning. There you artificially create a small rollback segment and add an index and a WHERE constraint to make sure consecutive reads have a good probability of hitting different blocks. Then you write:

Block 1 might hold, say, rows A, M, N, Q, and Z. So we could hit the block four times with long time intervals between hits. Since we are committing frequently and reusing rollback space, we eventually revisit a block we can simply no longer reproduce and get the error.

This statement really confuses me, since previously I thought multi-versioning acted on a single row. But according to your statement, after updating row 'A' and committing, we actually put all of the rows in the same block (A, M, N, Q, and Z) into the rollback segment; thus later on, when we want to query, say, row 'Q' from that same block but the rollback segment has been reused, we'll get an ORA-01555 error. It seems that what you mean is that multi-versioning works on a block, NOT a row.

Could you please kindly resolve this problem for me? Thanks!

warm regards,

Dihan

Tom Kyte
July 26, 2006 - 5:33 pm UTC

everything is done at the block level.

but only the changes are logged to the undo segment.

If you insert a row into a block, we put a "delete this rowid" into undo.
If you delete a row from a block, we put a "insert this row into this block" into undo.
If you update a row in a block, we put an "anti update this rowid with these values to put it back" into undo.

BLOCKS are rolled back.
Changes to the block are logged and recorded in undo.
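
If you want to watch that undo being generated, here is a minimal sketch (assuming access to the V$ views; the table name UNDO_DEMO is made up for the demo) using the 'undo change vector size' session statistic:

create table undo_demo as select * from all_users;

select b.name, a.value
  from v$mystat a, v$statname b
 where a.statistic# = b.statistic#
   and b.name = 'undo change vector size';

delete from undo_demo;   -- undo = "re-insert these rows", so deletes generate lots of undo

Re-run the statistic query; the increase is the undo that delete generated. Repeat with an insert or an update to compare the relative undo sizes of the three kinds of change.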

multi versioning problem

Dihan Cheng, July 26, 2006 - 6:50 pm UTC

Hi Tom,

Do you mean that if the rollback segment has been reused and we query row 'Q' in the same block as row 'A', we will get an ORA-01555 error? But row 'Q' has nothing to do with row 'A'; although they are in the same block, they are two different rows. How can an update of row 'A' affect row 'Q'?

warm regards,

Dihan

Tom Kyte
July 26, 2006 - 7:05 pm UTC

we do it at the block level - pure and simple.

That is how updating row 'a' can affect your ability to query row 'q'.

multiversioning is a block thing, if a block has been changed since your statement began (since the point in time your query is "as of"), we have to roll back that block. rolling back is a block thing.

multi versioning problem

Dihan Cheng, July 26, 2006 - 7:18 pm UTC

Hi Tom,

I understand what you say, but I still think it doesn't make sense to do it this way. Say we have 10 rows in the same block; if we update the first row, then we not only multi-version the first row, we also have to multi-version the other 9 rows. And afterwards, we have to refer to the rollback segment to get those 9 rows - when actually those 9 rows are right there in the block, unaffected.
I hope it's my misunderstanding, since I don't see the benefit of the way Oracle approaches this.

warm regards,

Dihan

Tom Kyte
July 26, 2006 - 8:05 pm UTC

sorry, things are done at block level.

It is like this

o we cache blocks
o we read blocks
o blocks have transaction tables on them, this is what drives the multi-versioning

relational databases tend to do things "to blocks", it is basically the way they work.

We do not multi-version ROWS.
We multi-version BLOCKS.

You don't have to get the "rest of the 9 rows".

You have a block.
It has 10 rows.
You update row 1. Into the undo segment goes the undo to undo that 1 row change.

Someone else queries that block, they want a row on it. ANY ROW, pick one, don't care which one.

They need to rollback your changes to that block.

So, they go to the undo segment and get the undo for that one row, apply it.

Now they are happy, they have the block as of the point in time you need it as of.


The benefit? Non-blocking reads, consistent (correct) results. I listed the benefits in the book.




multi versioning problem

Dihan Cheng, July 27, 2006 - 2:05 am UTC

Thanks Tom, this almost makes it clear for me.

But what do you mean by

"They need to rollback your changes to that block."

Does the word "rollback" here have the same meaning as the transaction control statement ROLLBACK? If they have the same meaning, then the second session physically reverts the changes made by the first session. But this doesn't make sense, since if the first session subsequently commits, it would commit the unmodified data (since it was already modified back by session 2). Thanks for clarifying.

warm regards,

Dihan

Tom Kyte
July 27, 2006 - 11:46 am UTC

by rolling back - I mean "undo", "to make as if it didn't happen", "to revert".

We are multi-versioning. In the buffer cache, you will find multiple versions of the same block at different points in time.

We only "save" the "current, most recent" version of the block - but we'll have more than one copy of that block as of different points in time.

Hence the term "multi-versioning", multiple versions.

multi versioning problem

Dihan Cheng, July 27, 2006 - 4:08 pm UTC

Thanks a lot!

warm regards,

Dihan

Fuzzy reads

Arindam Mukherjee, August 03, 2006 - 12:19 am UTC

Respected Mr. Tom,

Today I have a question on your favorite topic in Oracle - "Data Concurrency and Consistency" - Chapter 20 of the Oracle9i Database Concepts guide.
First off, I admit English is not my native language; maybe I am wrong about the meaning of what I am going to ask you.

The following lines are written in that document.

Nonrepeatable (fuzzy) reads: A transaction REREADS data it has previously read and finds that another committed transaction has modified or deleted the data.

My question is: when it says "REREADS", why is it called a "nonrepeatable read", since it rereads - that is, the reading is repeated?

If you again look at Table 20-1, "Preventable Read Phenomena by Isolation Level", you will see that under the "Repeatable read" isolation level, the "Nonrepeatable Read" phenomenon is marked "Not possible".

I agree with the Table 20-1 fact but could not understand the term "Nonrepeatable" when its definition clearly says "rereads".

Please help me get it.


Tom Kyte
August 03, 2006 - 9:17 am UTC

<quote src=Expert Oracle Database Architecture>
Transaction Isolation Levels
The ANSI/ISO SQL standard defines four levels of transaction isolation, with different possible outcomes for the same transaction scenario. That is, the same work performed in the same fashion with the same inputs may result in different answers, depending on your isolation level. These isolation levels are defined in terms of three “phenomena” that are either permitted or not at a given isolation level:
* Dirty read: The meaning of this term is as bad as it sounds. You are permitted to read uncommitted, or dirty, data. You would achieve this effect by just opening an OS file that someone else is writing and reading whatever data happens to be there. Data integrity is compromised, foreign keys are violated, and unique constraints are ignored.
* Non-repeatable read: This simply means that if you read a row at time T1, and attempt to reread that row at time T2, the row may have changed. It may have disappeared, it may have been updated, and so on.
* Phantom read: This means that if you execute a query at time T1, and re-execute it at time T2, additional rows may have been added to the database, which will affect your results. This differs from the nonrepeatable read in that with a phantom read, data you already read has not been changed, but rather that more data satisfies your query criteria than before.
</quote>


Re-reading the data means you issue the same/similar query twice - you "re-read" the data.


Sorry!! Still confused

Arindam Mukherjee, August 03, 2006 - 9:54 am UTC

Respected Mr. Tom,

I am sorry I could not get my point across to you. As your last comment said, "Re-reading the data means you issue the same/similar query twice - you "re-read" the data." Yes, I accept that, but my confusion is about the word "Nonrepeatable". If I issue the query twice, that means I re-read; I could also call it a "repeatable" read. How does the "nonrepeatable" (fuzzy) read come into the picture?

Maybe there is a "re-read / repeat" in issuing the same SQL, but the results are "nonrepeatable", as the result set may have changed between the two reads.

Now am I correct about the meaning of "Nonrepeatable" in light of "re-read"? Please don't get annoyed, as I have already spent so many hours on it but still get in a muddle over these two words - "Nonrepeatable" and "Re-read".

I am truly waiting for your response.


Tom Kyte
August 03, 2006 - 10:14 am UTC

there is repeatable read, whereby every time you read a given row in a transaction, it will have the same value (a GIVEN row, not a set) - unless you yourself change the row, of course.

there is non-repeatable read. Here, you could read the row where empno = 1234 and see the ename is 'X', and later in the same transaction read the row where empno = 1234 and see the ename is 'Y'. You did not have a repeatable read.
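
A two-session sketch of the difference, reusing the empno/ename example above (it assumes an EMP-like table exists; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE is the standard syntax):

-- session 1: start a serializable transaction and read the row
set transaction isolation level serializable;
select ename from emp where empno = 1234;   -- say this shows 'X'

-- session 2: change the row and commit
update emp set ename = 'Y' where empno = 1234;
commit;

-- session 1: re-read inside the same transaction
select ename from emp where empno = 1234;   -- still 'X' - the read repeats
commit;

Under the default read committed isolation level, that second read in session 1 would have returned 'Y' - a non-repeatable read.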




Excellent

Arindam Mukherjee, August 03, 2006 - 10:34 am UTC

Respected Mr. Tom,

Now your explanation comes as a complete revelation to me. The definition of "Repeatable read" clears all my doubts. Thanks a lot! I am openly grateful to you.


mrora, May 04, 2007 - 2:15 pm UTC

Dear Tom. My friend showed me an example:

if we do an insert into the table after we open a cursor:

 declare
   cursor my_cur(v_param number) is
     select t.* from my_table t where t.param = v_param;
   my_rec my_cur%rowtype;
 begin
   open my_cur(my_var);
   loop
     fetch my_cur into my_rec;
     exit when my_cur%notfound;

     my_proc(v_yeni_id, my_rec.param1, ...); -- this proc inserts a new row into my_table

   end loop;
   close my_cur;
 end;

and the claim is that the fetch never ends...
The same code works on another database with the same version, 9.2.0.1.0. Really, I didn't see that live example. But what do you think? What symptom is this?
Thanks in advance.
Tom Kyte
May 04, 2007 - 4:00 pm UTC

your friend is wrong.

give me an example in Oracle - true, it could happen in sqlserver and maybe others, but not in Oracle.

Not since version 4.0 anyway.
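
If you want to convince yourself, here is a minimal sketch (the table name T is made up; any small table will do) showing the result set is fixed at OPEN:

declare
   cursor c is select * from t;
   l_rec c%rowtype;
   l_cnt number := 0;
begin
   open c;                          -- the answer is fixed as of this point
   insert into t select * from t;   -- double the rows after the open
   commit;
   loop
      fetch c into l_rec;
      exit when c%notfound;
      l_cnt := l_cnt + 1;
   end loop;
   close c;
   dbms_output.put_line( l_cnt || ' rows fetched' );
end;
/

The count printed is the pre-insert row count - the fetch ends, and the newly inserted rows are never seen by that cursor.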

Why?

A reader, May 05, 2007 - 6:34 am UTC

Why not true? He says that it works on another database with exactly the same version. That is why I ask what it could be. I don't mean it is a bug, but may we say that that version is unstable - so it may work or not? But I agree with you, he is doing something wrong...


Tom Kyte
May 08, 2007 - 10:05 am UTC

why not true?

because ORACLE DOES NOT WORK THAT WAY.

and has not since version 3.0.

that is why "not true". Many people say many things - that does not make them true.

Statement-Level Read Consistency

Sergey Trotsky, May 05, 2007 - 10:12 am UTC

Unfortunately Oracle never enforces statement-level read consistency. They are trying to fix this problem only in 10g. Just try to use a procedure or function on earlier versions. This is an example for Oracle9i Enterprise Edition Release 9.2.0.6.0 for 32-bit Windows:

create table Table1(Field1 INTEGER)
/

insert into Table1 values(1)
/
insert into Table1 values(1)
/
insert into Table1 values(1)
/
insert into Table1 values(1)
/

create or replace function fTotal return INTEGER as
  Total INTEGER;
begin
  select sum(Field1) into Total from Table1;
  return(TOTAL);
end;
/

begin
  for C in (select ROWID as RI, Field1, fTotal as NewValue from Table1) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI; 
  end loop;
end;
/

select * from Table1
/

Tom Kyte
May 08, 2007 - 10:19 am UTC

umm, you have statement level read consistency there dude, you just have multiple statements.

There is nothing changing or being fixed, nothing is broken, this is the way it is documented to work.

Each STATEMENT is read consistent.

In 10g, use 101 rows - see what happens there (you are seeing a side effect in 10g of an implicit cursor silently array fetching 100 rows - if you added 101 rows, you would see the "9i" behavior because nothing changed read consistency wise...)

or use an explicit cursor in 10g.

This is, and will be, the way it works - you have STATEMENT LEVEL read consistency by default. If you need transaction-level read consistency, you use serializable.
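
Here is a sketch of the 101-row test described above, reusing the Table1/fTotal setup from the example (run it and compare for yourself):

truncate table Table1;

insert into Table1
select 1 from all_objects where rownum <= 101;
commit;

begin
  for C in (select ROWID as RI, Field1, fTotal as NewValue from Table1) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI;
  end loop;
end;
/

select count(distinct Field1) from Table1;

The first 100 rows are array fetched - fTotal is evaluated for all of them before any update runs. Row 101 is fetched after those 100 updates, so its fTotal call sees different data, and the "9i" behavior reappears.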

You Have Just Demonstrated Read Consistency!!!

Phil Singer, May 07, 2007 - 9:47 am UTC

(Where I suspect you are a little unclear is on just what a statement is.)

Remember, a PL/SQL block is not "a Statement". It is a set of statements. In particular, your PL/SQL block:

begin
  for C in (select ROWID as RI, Field1, fTotal as NewValue from Table1) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI; 
  end loop;
end;
/


really consists of 9 statements: a select, 4 function calls, and 4 updates. Or, to put it another way, each call to fTotal in the cursor C is a unique statement - which will (and does) "see" the effect on Table1 of the update statement.

To convert everything into a single statement, it is necessary to go from PL/SQL to SQL.

SQL> update table1 set field1=ftotal;


is a reasonable try, but that results in

update table1 set field1=ftotal
                         *
ERROR at line 1:
ORA-04091: table TABLE1 is mutating, trigger/function may not see it
ORA-06512: at "FTOTAL", line 4


which should serve as a warning that you are trying to do something you shouldn't. A better way is

SQL> update table1 set field1 = (select sum(field1) from table1);

4 rows updated.

SQL> select * from table1;

    FIELD1
----------
         4
         4
         4
         4


which looks consistent to me.

Statement-Level Read Consistency

Sergey Trotsky, May 07, 2007 - 12:16 pm UTC

imho

select ROWID as RI, Field1, fTotal as NewValue from Table1

it is a simple select statement inside the PL/SQL block;
there are no "function calls", there is a single SQL cursor

all of the following PL/SQL blocks are equivalent (there is only a single select statement in each):

begin
  update Table1 set Field1=1;
  for C in (select ROWID as RI, Field1, fTotal as NewValue from Table1) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI; 
  end loop;
end;
/
select * from Table1
/

begin
  update Table1 set Field1=1;
  for C in (
    select /*+LEADING(Table1) USE_NL(Table1 Table2)*/ Table1.ROWID as RI, Field1, NewValue
      from Table1, (select /*+NO_MERGE*/ fTotal as NewValue from Dual) Table2
  ) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI; 
  end loop;
end;
/
select * from Table1
/

begin
  update Table1 set Field1=1;
  for C in (
    select /*+LEADING(Table2) USE_NL(Table2 Table1)*/ Table1.ROWID as RI, Field1, NewValue
      from Table1, (select /*+NO_MERGE*/ fTotal as NewValue from Dual) Table2
  ) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI; 
  end loop;
end;
/
select * from Table1
/

begin
  update Table1 set Field1=1;
  for C in (
    select /*+LEADING(Table1) USE_MERGE(Table1 Table2)*/ Table1.ROWID as RI, Field1, NewValue
      from Table1, (select /*+NO_MERGE*/ fTotal as NewValue from Dual) Table2
  ) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI; 
  end loop;
end;
/
select * from Table1
/
the last statements differ only in comments! The results are very different.
This is 100% a bug! It is fixed in Oracle 10g and was never supported in Oracle 9i.
Tom Kyte
May 08, 2007 - 11:11 am UTC

this is "NOT FIXED", it was NEVER BROKEN.

IT IS DONE EXACTLY THE SAME in 10g as it was in 9i.


ops$tkyte%ORA10GR2> declare
  2          cursor c is select ROWID as RI, Field1, fTotal as NewValue from Table1;
  3          l_rec c%rowtype;
  4  begin
  5          open c;
  6          loop
  7                  fetch c into l_rec;
  8                  exit when c%notfound;
  9          update Table1 set Field1=l_rec.NewValue where ROWID=l_rec.RI;
 10    end loop;
 11    close c;
 12  end;
 13  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from Table1
  2  /

    FIELD1
----------
         4
         7
        13
        25

Not all data sources may be read consistable

Gary, May 07, 2007 - 7:52 pm UTC

"it is a simple select statement inside the PL/SQL block, "
No, it's a SQL SELECT that calls a PL/SQL function (fTotal) that issues its own separate SQL SELECT. The SELECT in the function (and any other procedural logic in the function) is executed independently of the SELECT in the cursor loop.

As you have demonstrated, read-consistency does not apply to columns that are dependent on non-transactional logic. I can think of procedural logic (including PL/SQL, Java, and external procedures), external tables, v$-style views, and probably DB links to some non-Oracle data sources. Any others?

The change in 10g that gives different results is the implicit bulk fetch from the cursor in the loop (which means the update isn't actually executed until after the first hundred rows are picked up from the cursor). And possibly it may tend towards different execution plans.

Keep it in SQL

Gary, May 08, 2007 - 8:02 pm UTC

"If you need transaction level read consistency, you use serializable. "
Tom, serializable won't resolve that particular example, as the updates are being done by the same transaction, so the changes would still be visible. In 10g, you can go a step further and use flashback query to exclude your own transaction's updates.
create table Table1(Field1 INTEGER)
/

insert into Table1 values(1)
/
insert into Table1 values(1)
/
insert into Table1 values(1)
/
insert into Table1 values(1)
/

create or replace function fTotal (p_in_scn in number) return INTEGER as
  Total INTEGER;
begin
  select sum(Field1) into Total from Table1 as of scn p_in_scn;
  return(TOTAL);
end;
/
commit;
begin
  for C in (select ROWID as RI, Field1, 
             fTotal(DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER) as NewValue from Table1) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI; 
  end loop;
end;
/

select * from Table1
/

That said, I think this has demonstrated the benefits of keeping things in SQL and a potential pitfall when extending into PL/SQL.

Why the difference?

Albert Nelson A, May 17, 2007 - 8:18 am UTC

Hi Tom,

Why is there a difference between using an explicit cursor and an implicit cursor? Am I missing something here?

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bi
PL/SQL Release 10.2.0.2.0 - Production
CORE 10.2.0.2.0 Production

TNS for HPUX: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production

SQL> create table Table1(Field1 INTEGER)
  2  /

Table created

SQL> insert into Table1 values(1)
  2  /

1 row inserted

SQL> insert into Table1 values(1)
  2  /

1 row inserted

SQL> insert into Table1 values(1)
  2  /

1 row inserted

SQL> insert into Table1 values(1)
  2  /

1 row inserted

SQL> create or replace function fTotal return INTEGER as
  2    Total INTEGER;
  3  begin
  4    select sum(Field1) into Total from Table1;
  5    return(TOTAL);
  6  end;
  7  /

Function created

SQL> commit
  2  /

Commit complete

SQL> declare
  2          cursor c is select ROWID as RI, Field1, fTotal as NewValue from Table1;
  3          l_rec c%rowtype;
  4  begin
  5          open c;
  6          loop
  7                  fetch c into l_rec;
  8                  exit when c%notfound;
  9          update Table1 set Field1=l_rec.NewValue where ROWID=l_rec.RI;
 10    end loop;
 11    close c;
 12  end;
 13  /

PL/SQL procedure successfully completed

SQL> select * from table1
  2  /

                                 FIELD1
---------------------------------------
                                      4
                                      7
                                     13
                                     25

SQL> rollback
  2  /

Rollback complete

SQL> begin
  2    for C in (select ROWID as RI, Field1, fTotal as NewValue from Table1) loop
  3      update Table1 set Field1=C.NewValue where ROWID=C.RI;
  4    end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed

SQL> select * from Table1
  2  /

                                 FIELD1
---------------------------------------
                                      4
                                      4
                                      4
                                      4




Regards,

Albert Nelson A.
Tom Kyte
May 17, 2007 - 11:27 am UTC

in 10g, implicit cursors of the style:

for x in ( query | cursor )


implicitly ARRAY FETCH 100 rows


explicit cursors do not.

So, in 10g

for x in (select * from t)
loop
 ...
end loop;


is just like:

declare
   cursor c is select * from t;
   type t_array is table of t%rowtype;
   array t_array;
begin
   open c;
   loop
      fetch c bulk collect into array limit 100;
      for i in 1 .. array.count
      loop
         ... -- process array(i) here
      end loop;
      exit when c%notfound;
   end loop;
   close c;
end;

Something new

Albert Nelson A, May 18, 2007 - 7:19 am UTC

Tom,

Whew! I had always thought implicit and explicit cursors could be used interchangeably. I had not known about this difference. Thanks.

(Is it documented? I don't seem to find it.)

Regards,

Albert Nelson A.
Tom Kyte
May 18, 2007 - 4:06 pm UTC

you can disable it with the different plsql optimizer settings.

But still...

Albert Nelson, May 18, 2007 - 3:10 pm UTC

Tom,

Upon further pondering, I am still not clear. Shouldn't the cursor return values as of the time it was opened? Here, each time it is fetched, why is it returning values as of the time of the fetch and not as of the time of the open?

Regards,

Albert Nelson A.
Tom Kyte
May 18, 2007 - 4:29 pm UTC

there is MORE THAN ONE CURSOR here - you are calling functions, and they have their own, separate, distinct cursors.

OK.

Albert Nelson, May 19, 2007 - 5:02 am UTC


on your answer "there is MORE THAN ONE CURSOR here - you are calling functions, they have their own

Andriy, April 05, 2008 - 11:16 am UTC

>>there are MORE THAN ONE CURSOR here, you are calling
>> functions - they have their own, separate distinct cursors.

How can we make those nested cursors (which select data in a function, to return some calculation or rows to the query) give results consistent with the point in time at which the parent cursor started?

Or will those nested selects (cursors opened in the function) always be inconsistent with the parent select?
Tom Kyte
April 07, 2008 - 8:49 am UTC

just use SQL

select ..., (select ...) 
  from t



will be read consistent


else - use a READ ONLY or SERIALIZABLE transaction and all SQL is read consistent as of the same point in time

or - use flashback query - pass the SCN to be used to the functions and have them use "as of" - beware however that you will NOT see your own modifications of course - since "as of" queries only see data that was committed "as of" that time.
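
Applied to the Table1/fTotal example earlier in this thread, the pure-SQL form would look like this (a sketch) - the scalar subquery is part of the driving statement, so it shares that statement's read-consistent snapshot:

begin
  for C in (select ROWID as RI, Field1,
                   (select sum(Field1) from Table1) as NewValue
              from Table1) loop
    update Table1 set Field1=C.NewValue where ROWID=C.RI;
  end loop;
end;
/

Starting with four rows of 1, every row should end up as 4 - the sum as of the instant the driving SELECT began - regardless of the updates done inside the loop.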


in answer to "Followup April 7, 2008 - 8am US/Eastern:"

Andriy, April 07, 2008 - 12:48 pm UTC

>else - use a READ ONLY or SERIALIZABLE transaction and all
>SQL is read consistent as of the same point in time

>or - use flashback query - pass the SCN to be used to the
>functions and have them use "as of" - beware however that you
>will NOT see your own modifications of course - since "as of"
>queries only see data that was committed "as of" that time.

I ask because I see a problem in using a pipelined function that itself selects data.

if we have
select * from some_table
 where field in
       ( select field from table(pipeline_function()) );

we can get a nested loop in some query plans, so pipeline_function() can be called more than once; as a result, the filtering of some_table will be done with different results of the subquery, because each subsequent select from within pipeline_function will be done with a different SCN.

Am I correct?
So the safe use of those functions is only within a read-only or serializable transaction?







Tom Kyte
April 09, 2008 - 9:03 am UTC

or with flashback query....


Thanks for your answer

Andriy, April 09, 2008 - 10:32 am UTC

Thanks for your answer

It was really useful for me to clarify this issue with pipelined functions.

consistent read

lalu, September 16, 2008 - 12:40 am UTC

Hi Tom,

I have a report that runs at 10:00 and takes 30 minutes.
For updates and deletes in the tables, the report reads from UNDO for consistency.

How are inserts handled? I am quite sure the report takes a snapshot of all the involved tables as of 10:00. But how are records inserted after 10:00 treated in the report?
Oracle does not consider those new records coming after 10:00 - how is that handled?

Thanks.
lalu....
Tom Kyte
September 16, 2008 - 10:22 pm UTC

all modifications are just that - modifications.

insert, update, delete - all modifications.


anything that changes the block after your query was opened will cause your query to roll the block back to as of the time your query began.

MVRC

A reader, May 10, 2009 - 1:10 pm UTC

Tom:

Oracle does not use "read locks" at all, which distinguishes it from all other databases; it uses MVRC instead.

The whole purpose of this is to reduce contention.

But doesn't this cause integrity problems? Don't you always want the latest update to a row, or to lock a row while someone is changing the data until he finishes, and then, when you run the report, freeze the data until all updates are done, so users see the freshest copy?

Because I can run a report at 12:00 AM (version 0 of the row). My report updates the credit field of a customer (version 1). Then, while the report is running, a user updates the credit field based on the old data (version 2) and overwrites what the report did, because he did not see the report's version.

My point is that there can be all kinds of issues when you have 10 versions of a row and each user is working on his own version. There should be only one last version that everyone works on.

Am I missing something?
Tom Kyte
May 11, 2009 - 5:19 pm UTC

SMK - aka Sam,

I don't think you get what MVRC is. It stands for

Multi
Versioning
Read
Consistency


The entire goal is to return a CONSISTENT result, that is precisely and EXACTLY what it does. That is what the feature is - READ CONSISTENCY.


People only modify the "current version" of the row.

Queries and Transactions see consistent versions of data.


Let me give you a simple example. Pretend you have a table with 3 rows on 3 blocks (they are big rows)

On block 1 - your savings account information exists, you have $100 in savings
On block 2 - my savings account exists. I have $5 in savings.
On block 3 - your checking account exists. You have $100 in checking.

since we are one row per block, I can use block/row - either one means the other - so we don't have to deal with row level locking vs block level locking, we'll just deal with CONSISTENT CORRECT answers.

Now, you cannot argue that there is more or less than $205 in the bank right now.


Say I start running your report - you "select sum(balance) from accounts". That will read three rows on three blocks.


That query will full scan, and it'll read block 1. The total right now is $100. The query continues and gets to block 2. The total right now is $105.

Now, you hit an ATM (automated teller machine) and decide you need to transfer $50 from your savings into your checking to cover a check you are just about to write. So, you update block 1 and change $100 to $50. You update block 3 and change that $100 to $150. You commit and walk away.


Now, my query is still running - it hits block 3. We detect that block three was modified since our query began - we cannot use this version of the block (if we did - we would report a balance of $255 - a value that never existed EVER in the bank). So, we roll that block back to the way it was when our statement began - when our query began, that block had $100 on it. So that is what we report - our total is $205.


You can safely say on your report that "as of the time the query began, the amount in the bank was $205". If you did not do a consistent read (as we do), you would have reported $255 (maybe this is why we are in a financial crisis, too many sqlserver instances reporting made up numbers??).



So, what happens in a database without multi-versioning/read consistency???

If you want the CORRECT answer, you have to use shared read locks. You would go into an isolation mode that would leave behind a shared read lock on each block you read. In DB2 for example - running that scenario - your ATM transaction would have BLOCKED and waited for my query to complete. I would have locked block 1 from updates when I read it - your transaction would have tried to update that block and gotten blocked.

I would have reported the correct answer this time - the answer of $205 - but your ATM transaction would have taken forever (because remember in real life, this table is not 3 blocks, it is 30,000 blocks and the full scan takes a while).


Or, what if you reversed blocks 1 and 3 - put your checking on block 1 and savings on block 3. Now, in DB2 what would happen is I would read and lock block 1, then block 2. You would do the ATM thing and get an exclusive lock on block 3 (your savings account) and decrement it. You would try to get an exclusive lock on block 1 but you cannot since I have a shared read lock on it. So your ATM blocks once again. Now my query progresses and tries to read block 3 - but it cannot - you have an exclusive lock on it - so I block. In fact, we are deadlocked and either

a) your ATM will display a nice error message to you
b) my query will terminate and I won't get my answer




You are missing something - that something is "how it all works and what it all means"

If you want the long story - Expert Oracle Database Architecture (my book) has chapters on redo, on undo, on transactions, on read consistency and concurrency controls.

Your sense of humor rocks too....

Reader, May 12, 2009 - 2:13 am UTC

[Quote] maybe this is why we are in a financial crisis, too many sqlserver instances reporting made up numbers?? [/Quote]

LOL !!!!!!!

Number of CR ?

Auge, August 16, 2009 - 9:21 am UTC

Hi Tom,

Why does Oracle keep multiple CR blocks (the default is 6)? What are the cons of having 6 rather than 1 CR copy?

Thank you,

Auge
Tom Kyte
August 24, 2009 - 8:08 am UTC

many sessions are actively querying the database.

they each need blocks of different possible 'ages', we store multiple versions of a block in the cache to satisfy this need.

If we did not, we'd have to constantly apply undo to the most current version over and over and over - doing the same bit of work time after time - multiple versions allows us to skip doing work.
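
You can see those copies for yourself with a query like this (a sketch - it needs access to V$BH, and 'T' stands for whatever table you are querying):

select file#, block#, status, count(*) copies
  from v$bh
 where objd = (select data_object_id
                 from user_objects
                where object_name = 'T')
 group by file#, block#, status
 order by file#, block#;

A status of 'xcur' is the current version of the block; rows with status 'cr' are the consistent-read clones - for a hot block you will typically see several of them.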

A reader, October 17, 2009 - 6:27 pm UTC

Hi Sir;

My transaction started at t1.
At t2 some of the rows in block 2 are modified.
My transaction reads block 2, notices that the block SCN is higher,
and reads the before image of the block from undo.

What I want to ask is: since not all of the rows in this block were modified,
why does Oracle need to reconstruct the before image of the entire block rather than just reading
the before images of the modified rows?
Tom Kyte
October 22, 2009 - 3:53 pm UTC

because the smallest unit of "stuff" in the database is a block. We do not do "rows", we do blocks. The block is our unit of organization and the only thing we can work on.

SCN and read consistency

A reader, November 03, 2009 - 2:26 am UTC

hi tom,

i am just wondering..

BLOCK 1's current SCN is, say, SCN 100.
SESSION 1 does an update without committing.

Q1) Will this actually cause the block SCN to change?

SESSION 2 queries BLOCK 1 (at, say, query SCN 101).

In this case, how does Oracle know when to get an older version of BLOCK 1?

Q2) So can I say that, besides checking the block SCN, the block's ITL list is also checked for uncommitted transactions?

Am I right?
Hope to hear your advice soon.
Hope to hear your advice soon.

Regards,
Noob





Tom Kyte
November 09, 2009 - 2:18 pm UTC

blocks do not really have an "scn", they have information in the transaction header that can be used to derive that information.


the transaction header on block 1 will definitely be updated with the information about the transaction (session1) that has updated but not yet committed. Since it hasn't committed - we *know* for a fact that change must be rolled back.
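
If you want to see that transaction header for yourself, a block dump shows it. A sketch, assuming a table T and the privilege to dump blocks (the output goes to your session's trace file):

select dbms_rowid.rowid_relative_fno(rowid) file_no,
       dbms_rowid.rowid_block_number(rowid) block_no
  from t
 where rownum = 1;

-- plug the two values into:
alter system dump datafile <file_no> block <block_no>;

In the trace file, the lines under the "Itl  Xid  Uba  Flag  Lck  Scn/Fsc" heading are the interested transaction list (ITL) entries - one per recent transaction, committed or not, that touched the block.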

Query on Read Consistency example in Oracle one on one

Sattam, July 04, 2010 - 6:33 am UTC

Hi Tom,

I was reading your Expert One on One Oracle. There is an example on summing up bank account balances at the end of the day, starting at page 52; how Oracle handles it is explained on page 56.
With reference to that explanation, I was thinking: if T7 had occurred before T6 (which can very well be the case), then how would Oracle have gotten the answer to the query right?
I noticed that the immediately preceding paragraph says Oracle does not care if the data is locked; it only checks whether the data has changed. My question is: how does Oracle know that the data in the block has changed since the time the read query (the sum of balances in this case) started?
This question was asked before in this same thread but, due to lack of clarity, it was not answered.
Regards,
Sattam.
Tom Kyte
July 06, 2010 - 2:44 pm UTC

there is information in the block header that tells us the transaction history of the block. It is easy for us to tell when a block was last modified based on that.


Is this write inconsistency from Oracle?

Oracle Learner, December 10, 2012 - 3:15 pm UTC

Hi Tom,

On the read consistency subject, here is a seemingly simple example that I have been scratching my head over for a while.

Please can you look at the URL

http://jonathanlewis.wordpress.com/2011/03/20/upgrade-whoa/


specifically the example at comment 13 "Here is example of write inconsistency:".

I thought I had this straight in my head but it still puzzles me.

There appear to be 2 "reads of committed data" by the update in session 2.

The first read identifies that the row with ID of 1 is the only row to be updated (due to the implicit commit in the CREATE TABLE statement of session 1 and the default READ COMMITTED isolation level).

Session 2 then waits on a lock on this row (has it logged a slot in the ITL at this point?) until session 1 has issued the "commit" statement, in reality session 1's second (or third?) commit.

Once session 2 has its lock, it seems to say "I only saw the first row at the time my update was submitted, and I will just add 1 to its current value, not the value it had when I first read the block" - i.e. the update seems to depend on state defined by session 1 over 2 transactions (2 commits).

I'd settle for saying that Oracle is doing the right thing, but that concurrency has to be thought out very carefully from an application's point of view.

This seems reasonable, but I would like to know your thoughts, and how I would go about tracing this to convince myself - would I have to look at v$lock, statistics, dump blocks, etc.?

Thank-you




Tom Kyte
December 14, 2012 - 2:39 pm UTC

that posting shows a bug that was fixed. If you review the comments - the bug number and everything is there. It is fixed in 11.2.0.3.

Oracle Learner, December 15, 2012 - 9:25 am UTC

Tom,

Appreciate you checking this out.

I have upgraded my installation from 11.2.0.1 to 11.2.0.3 and there is no difference -

1,2
2,1


I would be obliged if you would try it to confirm.

I don't mean to be pedantic about it, but it seems to be such a basic, small example, and I would have thought it not so good to have this open to doubt on another well-respected site.

Either the behaviour is consistent or it's not.

You say in your book 'Expert Oracle Database Architecture 2nd Ed.' (p. 246 in my copy) that

"Update statements have two components: a read component as defined by the WHERE clause and a write component as defined by the SET clause".

It seems to me that the read component has identified the row with id=1 in a "READ COMMITTED" fashion and has decided that this is all it is going to update; it waits for a lock, then updates the row.

Perhaps the more fundamental question is: looking at the SQL statements with a vendor-neutral hat on, what should this do?

So to sum up, just wanted some clarification on
1) Oracle as it is, is doing the "right thing" - users should take note and code apps accordingly
or
2) This may need looked at

I beat the drum for Oracle in my institution, but a beginner in Oracle discovering this may be put off a little.

PS if this has been modified in a release later than 11.2.0.3, apologies for wasting your time.

Thanks

Tom Kyte
December 18, 2012 - 7:51 am UTC

post the entire example you yourself tried.

I don't know what

1,2
2,1

is supposed to mean. Here is my outcome on 11.2.0.3; it is what should be expected with read consistency.


a select for update is really an update - it locks (modifies) data just like an update does.

the 'read' component of this query in the 2nd session would find that 1 was the minimum, and it would read the row from Tab1 where id = 1 and try to lock it. It would block on that lock. When the other session deletes id=1 and commits - the first session is freed from the lock, and when it finds the row it read with a consistent read is gone, the select for update restarts - the entire select for update restarts.

there was a bug for a short time whereby the subquery did not restart. That was a bug; it does not exist in the current patchsets of the database - 11.2.0.3 included.


here is my entire run of the example in 11.2.0.3:

ops$tkyte%ORA11GR2> create table tab1(
  2      id  number,
  3      info    varchar2(10),
  4      constraint tab1_pk primary key (id)
  5          using index (create index idx_tab1_pk on tab1(id))
  6  );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into tab1 values(1,'a');

1 row created.

ops$tkyte%ORA11GR2> insert into tab1 values(2,'a');

1 row created.

ops$tkyte%ORA11GR2> insert into tab1 values(3,'a');

1 row created.

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> execute dbms_stats.gather_table_stats(user,'tab1',cascade=>true)

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> column id new_value m_id
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace on explain
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select  id
  2  from    tab1
  3  where   id = (
  4              select  min(id)
  5              from    tab1
  6          )
  7  for update
  8  ;

        ID
----------
         1


Execution Plan
----------------------------------------------------------
Plan hash value: 2158735136

---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     1 |     3 |     1   (0)| 00:00:01 |
|   1 |  FOR UPDATE                   |             |       |       |            |          |
|   2 |   BUFFER SORT                 |             |       |       |            |          |
|*  3 |    INDEX RANGE SCAN           | IDX_TAB1_PK |     1 |     3 |     0   (0)| 00:00:01 |
|   4 |     SORT AGGREGATE            |             |     1 |     3 |            |          |
|   5 |      INDEX FULL SCAN (MIN/MAX)| IDX_TAB1_PK |     1 |     3 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ID"= (SELECT MIN("ID") FROM "TAB1" "TAB1"))

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> set echo off
=============================================================
Now repeat the query in another session and watch it lock
And use a third session to check v$lock
Then delete here, commit and see what the second session does
=============================================================
Press return to delete and commit
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set verify on
ops$tkyte%ORA11GR2> delete from tab1 where id = &m_id;
old   1: delete from tab1 where id = &m_id
new   1: delete from tab1 where id =          1

1 row deleted.

ops$tkyte%ORA11GR2> commit;

Commit complete.




the other session received:

[tkyte@localhost ~]$ plus

SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 18 08:46:12 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

select  id
from    tab1
where   id = (
            select  min(id)
            from    tab1
        )
for update
  8  ;

[got blocked here]

        ID
----------
         2



which is what I expected.

Code now added

Oracle Learner, December 18, 2012 - 4:17 pm UTC

Hi Tom,

Thanks for replying again - what you have explained makes sense. Apologies, I should have posted the code for the example.


The example I was pointing to was from a subsequent post further down the same page (comment 13 "Here is an example of write inconsistency")

Session 1
drop table t purge;
create table t as select 1 id, 0 val from dual;
insert into t values(2, 0);
update t set val = 1;
 
-- Then in second session
update t set val = val + 1;
-- Session waits for the first
  
-- Then commit first
commit;

-- And query in second
select * from t;


The result will be:

 select * from t;

        ID        VAL
---------- ----------
         1          2
         2          1


The discussion was inconclusive, but I thought I had worked out that Oracle was doing the right thing - then I wasn't quite so sure.

The example itself (even as a stripped down example) is probably a little artificial.

Session 2 seems to lock in one consistent state (where the READ COMMITTED read picks up only the row with id=1, created by the CREATE TABLE statement in session 1), then updates in another consistent state (it picks up the committed effect of the 'set val = 1' from session 1).

Interested in your thoughts.

Thanks again.




Tom Kyte
December 19, 2012 - 8:32 am UTC

yes, I'm aware of that issue - we had discussed it at length on the newsgroups about 10 years ago...

the set of columns that "trigger" the restart are the ones used to locate rows (of which there are none, the where clause doesn't exist) plus any columns referenced in a trigger.

If you were to add:

create trigger t before update on t for each row
begin
    dbms_output.put_line( 'updating ' || :old.id || ',' || :old.val || ' to ' || :new.id || ',' || :new.val );
end;
/


after the create table - you'd actually see different results... Since the trigger refers to ID and VAL - they become part of the set of columns responsible for triggering the restart of the update.
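
(To spell out what "different results" means here: with the trigger in place, the restarted update in the second session re-reads after the first session's commit, so it should see both committed rows with VAL = 1 and leave 1,2 and 2,2 - rather than the 1,2 and 2,1 shown above. That is inferred from the restart rule just described; run the test with the trigger to verify.)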

I've decided to file a bug on this - I don't think it should work the way it currently is working myself.

lados, June 06, 2016 - 1:42 pm UTC

Hi,
I would like to ask about your last followup, at "December 19, 2012 - 8:32 am UTC". Did you file a bug? If so, what was the verdict on the bug? Because as I see it, the effect is the same in 12c. (Anyway, the insert of the first row is committed when the session 2 update starts, while the insert of the second row is not yet committed.)
B.r.: lados.
Chris Saxon
June 07, 2016 - 12:22 am UTC

Yes a bug was filed. It is still unresolved.

Bug id please ?

Rajeshwaran, Jeyabal, June 07, 2016 - 11:29 am UTC

Is it possible to share the bug id here?
Chris Saxon
June 08, 2016 - 12:49 am UTC

Internal only currently.
