
Tuesday, August 30, 2005

Something Different Part I of III


I was giving my seminar today and did the read/write consistency section. This is by far my favorite part of Oracle. It is the feature that made me choose Oracle above all other databases back in the early 1990’s. The following is from the forthcoming book – but I find it interesting enough to print here as well. You should also check out http://asktom.oracle.com/~tkyte/wc.html for links to some other “more raw” (not edited) reading on the topic. I think it is important for us all to understand this with regard to using Oracle. The following is quoted from the book.

Part I of III; Write Consistency
So far, we’ve looked at read consistency: Oracle’s ability to use undo information to provide non-blocking query and consistent (correct) reads. We understand that as Oracle reads blocks for queries out of the buffer cache, it will ensure that the version of the block is “old” enough to be seen by that query.

But that raises the following question: What about writes/modifications? What happens when you run the following UPDATE statement:

Update t set x = 2 where y = 5;


and while that statement is running, someone updates a row it has yet to read from Y=5 to Y=6 and commits? That is, when your UPDATE began, some row had the value Y=5. As your UPDATE reads the table using consistent reads, it sees that the row was Y=5 when the UPDATE began. But, the current value for Y is now 6—it’s not 5 anymore—and before updating the value of X, Oracle will check to see that Y is still 5. Now what happens? How are the updates affected by this?
Obviously, we cannot modify an old version of a block—when we go to modify a row, we must modify the current version of that block. Additionally, Oracle cannot simply skip this row, as that would yield an inconsistent, unpredictable update. What we’ll discover is that in such cases, Oracle will restart the write modification from scratch.

Consistent Reads and Current Reads


Oracle does do two types of block gets when processing a modification statement. It performs

* Consistent reads: When “finding” the rows to modify
* Current reads: When getting the block to actually update the row of interest

We can see this easily using TKPROF. Consider this small one row example, which reads and updates the single row in table T from earlier:

ops$tkyte@ORA10GR1> alter session set sql_trace=true;

Session altered.

ops$tkyte@ORA10GR1> select * from t;

X
----------
10001

ops$tkyte@ORA10G> update t t1 set x = x+1;
1 row updated.

ops$tkyte@ORA10G> update t t2 set x = x+1;
1 row updated.

When we run TKPROF and view the results, we’ll see something like this (note that I removed the ELAPSED, CPU, and DISK columns from this report):

select * from t


call     count  query  current  rows
-------  -----  -----  -------  -----
Parse        1      0        0      0
Execute      1      0        0      0
Fetch        2      3        0      1
-------  -----  -----  -------  -----
total        4      3        0      1

update t t1 set x = x+1

call     count  query  current  rows
-------  -----  -----  -------  -----
Parse        1      0        0      0
Execute      1      3        3      1
Fetch        0      0        0      0
-------  -----  -----  -------  -----
total        2      3        3      1

update t t2 set x = x+1

call     count  query  current  rows
-------  -----  -----  -------  -----
Parse        1      0        0      0
Execute      1      3        1      1
Fetch        0      0        0      0
-------  -----  -----  -------  -----
total        2      3        1      1

So, during just a normal query, we incur three query (consistent) mode gets. During the first UPDATE, we incur the same three I/Os (the search component of the update involves finding all of the rows that are in the table when the update began, in this case) and three current mode gets as well. The current mode gets are performed in order to retrieve the table block as it exists right now, the one with the row on it, to get an undo segment block to begin our transaction, and an undo block. The second update has exactly one current mode get—since we did not have to do the undo work again, we had only the one current get on the block with the row we want to update. The very presence of the current mode gets tells us that a modification of some sort took place. Before Oracle will modify a block with new information, it must get the most current copy of it.

So, how does read consistency affect a modification? Well, imagine you were executing the following UPDATE statement against some database table:

Update t set x = x+1 where y = 5;


We understand that the WHERE Y=5 component, the read-consistent phase of the query, will be processed using a consistent read (query mode gets in the TKPROF report). The set of WHERE Y=5 records that was committed in the table at the beginning of the statement’s execution are the records it will see (assuming READ COMMITTED isolation—if the isolation is SERIALIZABLE, it would be the set of WHERE Y=5 records that existed when the transaction began). This means if that UPDATE statement were to take five minutes to process from start to finish, and someone added and committed a new record to the table with a value of 5 in the Y column, then that UPDATE would not “see” it because the consistent read would not see it. This is expected, and normal. But, the question is, what happens if two sessions execute the following statements in order:

Update t set y = 10 where y = 5;

Update t Set x = x+1 Where y = 5;

Table 7-8 demonstrates the timeline:
Table 7-8. Sequence of Updates

T1  Session 1: Update t set y = 10 where y = 5; This updates the one row that matches the criteria.

T2  Session 2: Update t Set x = x+1 Where y = 5; Using consistent reads, this will find the record session 1 modified, but it won’t be able to update it since session 1 has it blocked. Session 2 will block and wait for this row.

T3  Session 1: Commit; This releases session 2; session 2 becomes unblocked. It can finally do the current read on the block containing this row, where Y was equal to 5 when session 2 began its update.

So the record that was Y=5 when you began the UPDATE is no longer Y=5. The consistent read component of the UPDATE says, “You want to update this record because Y was 5 when we began,” but the current version of the block makes you think, “Oh, no, I cannot update this row because Y isn’t 5 anymore—it would be wrong”.

If we just skipped this record at this point and ignored it, the update would be nondeterministic, throwing data consistency and integrity out the window. The outcome of the update (how many and which rows were modified) would depend on the order in which rows were hit in the table and on whatever other activity happened to be going on. You could take the exact same set of rows in two different databases, each running exactly the same mix of transactions, and observe different results, simply because the rows were in different places on disk.

In this case, Oracle chose to restart the update. When the row that was Y=5 when you started is found to contain the value Y=10, Oracle will silently roll back your update and restart it—assuming you are using READ COMMITTED isolation. If you are using SERIALIZABLE isolation, then at this point you would receive an ORA-08177 can't serialize access error for this transaction. In READ COMMITTED mode, after rolling back your update, the database will restart it (i.e., change the point in time at which the update is “as of”), and instead of updating the data again, it will go into SELECT FOR UPDATE mode and attempt to lock all of the rows WHERE Y=5 for your session. Once it does this, it will run the UPDATE against that locked set of data, thus ensuring this time that it can complete without restarting.
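The restart logic just described can be sketched as a toy, single-process model in Python. This is emphatically not Oracle internals — rows are dicts, the “consistent read” is a snapshot copy, the “current read” is a re-check of the live row, and the hypothetical `concurrent_commit` callback stands in for another session committing mid-statement:

```python
import copy

def write_consistent_update(table, predicate, modify, concurrent_commit=None):
    """Toy model of an UPDATE under READ COMMITTED write consistency.

    table:             list of dicts, the live (committed) rows
    predicate:         the WHERE clause, e.g. lambda r: r["y"] == 5
    modify:            the SET clause, applied to a live row
    concurrent_commit: optional callback that mutates `table` once,
                       mid-statement (another session's commit)
    """
    fired = False
    while True:                               # each pass = one (re)start
        snapshot = copy.deepcopy(table)       # consistent-read view
        pending, restart = [], False
        for i, snap_row in enumerate(snapshot):
            if not predicate(snap_row):       # not ours as of the snapshot
                continue
            if concurrent_commit and not fired:
                concurrent_commit(table)      # the other session commits
                fired = True
            if not predicate(table[i]):       # current read: re-check the
                restart = True                # predicate on the live row;
                break                         # it changed under us
            pending.append(i)
        if restart:
            continue                          # silent rollback, new snapshot
        for i in pending:
            modify(table[i])                  # current-mode modification
        return table
```

For simplicity the sketch defers the writes until a pass succeeds; the real engine applies changes and rolls them back on a restart, and the restarted pass actually runs in SELECT FOR UPDATE mode to lock the rows before updating them.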

But to continue on with the “but what happens . . .” train of thought, what happens if, after restarting the update and going into SELECT FOR UPDATE mode (which has the same read-consistent and read-current block gets going on as an update does), a row that was Y=5 when you started the SELECT FOR UPDATE is found to be Y=11 when you go to get the current version of it? That SELECT FOR UPDATE will restart and the cycle begins again.
There are two questions to be addressed here—two questions that interested me, anyway. The first was, Can we observe this? Can we see this actually happen? And the second was, So what? What does this actually mean to us as developers? We’ll address these questions in turn now.

To be continued….
POST A COMMENT

43 Comments:

Blogger Doug Burns said....

Tom,

"the read/write consistency section. This is by far my favorite part of Oracle. It is the feature that made me choose Oracle above all other databases back in the early 1990’s."

I agree. This post brought back memories of teaching DBA courses for Oracle UK just over 10 years ago and my evangelical tone! I have a couple of questions you might be able to answer as an Oracle insider.

1) I remember there being a white paper which was called something like 'The Business Case for Oracle'. I'm sure the title's wrong but near the mark. It talked about locking and read consistency (amongst other things) and why these were important from a business application perspective. I loved that document and used it to explain Oracle to non-IT people. Do you know if such a thing still exists, or even have a link to the original? It must have been around 1994.

2) Is Oracle's read consistency model patented? I remember that it was patent pending or something like that and was wondering if it is a thing that will always be exclusive to Oracle. I know there are many different approaches but Oracle's has always seemed the most robust to me (still waiting to be proved wrong ...)

Cheers,

Doug

Tue Aug 30, 01:28:00 PM EDT  

Blogger Thomas Kyte said....

1) i remember that paper, I helped refine later editions of it; it is probably still around somewhere.

It has become fashionable again to talk about non-blocking reads and correct answers every now and then. In 2005, I still use sqlplus to show it (no need for a ppt build when you can actually show it)

2) Oracle has perhaps hundreds or thousands of patents (I'm on a couple! That is neat, all regarding html-db - but I did not write html-db, just did little things here and there). I don't really keep track, many of the multi-versioning and read consistency ones are covered I am sure.

Tue Aug 30, 01:43:00 PM EDT  

Blogger Joel Garry said....

www.uspto.gov has a search function. searching on oracle AND "read-committed" shows a couple of interesting results. Couldn't find a read consistency patent, except for two-phase commit. But I didn't look too hard.

Tue Aug 30, 03:04:00 PM EDT  

Blogger Thomas Kyte said....

not sure that read consistency or anything like that would be in there.

You should see patents made by a motor-cycle vendor sometime, you would not even recognize them as being from a motorcycle due to the wording (shown to me by a vendor of software to mine patents by "context" rather than keyword)

Patents can be very "direct" yet "vague" at the same time.

Tue Aug 30, 03:16:00 PM EDT  

Anonymous Eric said....

What about doing a flashback query to the time you started the query? Would that eliminate Oracle's need to re-read data? Since not too much time has progressed then there wouldn't be too much undo to parse through to find the state of the table at that time.

Tue Aug 30, 04:17:00 PM EDT  

Anonymous Rob H said....

Will you be adding anything in regards to distributed transactions and how they affect read consistency? Does it matter?

Tue Aug 30, 04:55:00 PM EDT  

Blogger Doug Burns said....

Tom,

"i remember that paper, I helped refine later editions of it"

I'm completely impressed by that ;-)

"Oracle has perhaps hundreds or thousands of patents (I'm on a couple! "

Impressed again. I suppose it's thousands. Staying far away from the political and philosophical issues around software patents, I was just interested in whether we were likely to see other products use a similar approach - probably not.

Cheers,

Doug

Tue Aug 30, 05:03:00 PM EDT  

Blogger Niall said....

Hi Doug

Well SQL2005 definitely doesn't use a similar mechanism. They use something much more akin to (if I get the marketing name right) flashback logging, i.e. a scratch area (in tempdb) with records of versions and (IIRC) change vectors.

I imagine that other databases that do this - one of the open source ones does - would use a similar mechanism to SQL2005. I can't imagine Oracle not patenting this.

Tom described patents as 'direct' yet 'vague'. I think this is a brilliant description. The application has to clearly and unambiguously cover the invention in question - but the smart lawyer will endeavour to make the claims as generic as possible, so that small deviations from the disclosure cannot circumvent the patent.

I'll admit that the fact that I married a patent lawyer makes me biased in favour of patents (and my IT and Economics background makes me biased against the US-style business process patents), but I think it's quite neat reading patents that cover a specific invention yet generate revenue for the inventing company from incremental improvements to their original idea.

Tue Aug 30, 05:31:00 PM EDT  

Anonymous Anonymous said....

"What about doing a flashback query to the time you started the query? ... "

Flashback queries use the same mechanism to roll back blocks as a normal query. The only difference is that the user chooses the moment the query is consistent instead of Oracle choosing.

Tue Aug 30, 07:40:00 PM EDT  

Blogger Thomas Kyte said....

What about doing a flashback query to the time you started the query?

Oracle *is* doing a flashback query :) that is what read consistency and multiversioning are all about.

Will you be adding anything in regards to distributed transactions
I covered distributed transactions lightly (mostly the two-phase commit) but they use read consistency on their respective sites as well.

Wed Aug 31, 12:44:00 AM EDT  

Anonymous Alex Nuijten said....

If I understand things correctly, then this must be a typo:

T3 Session 1: Commit; -- This releases session 1; session 1 becomes unblocked. It can finally do the current read on the block containing this row, where Y was equal to 5 when session 1 began its update.

Shouldn't it read session 2 becomes unblocked.?

Wed Aug 31, 04:47:00 AM EDT  

Blogger Thomas Kyte said....

shouldn't it read.....


Yes, you are right, someone else emailed me that as well.

Thanks.

Wed Aug 31, 04:54:00 AM EDT  

Anonymous Anonymous said....

Hi Tom,

I find this subject to be interesting as well, especially for someone who is still relatively new to Oracle. I was wondering, has much changed with Oracle's read consistency and locking + concurrency model multiversioning since the writing of your two books? I thought that maybe something had and that's why you chose it as a blog topic.

Wed Aug 31, 10:51:00 AM EDT  

Blogger Thomas Kyte said....

I was wondering, has much changed with Oracle's read consistency and locking + concurrency model multiversioning since the writing of your two books?

No, it hasn't changed, but over the last 5 years, I've learned more, and have better ways to say what I know. This section is actually a new addition to the book that never existed before.

Wed Aug 31, 11:27:00 AM EDT  

Blogger R Menon said....

Very very nicely written!

Few basic questions:

"The current mode gets are performed in order to retrieve the table block as it exists right now, the one with the row on it, to get an undo segment block to begin our transaction, and an undo block. "

What does the get corresponding to
"undo segment block to begin a transaction" mean? I guess that in order to get an undo block, you need to read the segment header once? Is this get only once per transaction?

And why do we do an "undo block" get again when doing a current mode block get?

"So the rule is that the set of columns used in the WHERE clause to find the rows plus the columns referenced in the row triggers will be compared. "

I still don't understand why Oracle needs to look at the :new and :old values. How can simply referencing the old and new values of a column trigger the restart? Is it because, for some reason, Oracle is not able to detect changes in these values? I guess that might not be feasible, or be too complicated and hence not justified for this relatively "rare" phenomenon, come to think of it.

Wed Aug 31, 12:28:00 PM EDT  

Blogger Thomas Kyte said....

What does the get corresponding to
"undo segment block to begin a transaction" mean?


it means "we are beginning our transaction, we are 'registering ourselves' with the rollback segment".


And why do we do an "undo block" get again when doing a current mode block get?

In order to record our UNDO, the UNDO we just generated..

I still don't understand why Oracle needs to look at the :new and :old values.

blog hopping you are, that belongs to the other thread doesn't it :)

the before, for each row trigger gets passed the READ CONSISTENT :new record and the modifications to the READ CONSISTENT :new record to represent the :new and :old records.

Only AFTER that trigger fires do we get the block in current mode. Upon finding the READ CONSISTENT version of the record isn't "there anymore", that trigger we just fired - it was passed information *that doesn't exist*, it made decisions based on data *that doesn't exist*, it must be undone and done over.

Sure, in my example, I was just printing them out - but *in general*, you are looking at them, making decisions based on them and so on..

Therefore, we must have a "do-over", a restart. That trigger received data that quite simply didn't exist anymore.

Wed Aug 31, 12:43:00 PM EDT  

Blogger R Menon said....

"blog hopping you are, that belongs to the other thread doesn't it :)"

Sorry, I posted the comment in the correct blog now (with some other comments).

Essentially, what you are saying is that since you have "touched" the values potentially in the before trigger, Oracle assumes that you *can* change them (see other blog comments too)

Wed Aug 31, 01:15:00 PM EDT  

Anonymous Scot said....

Is the difference between what you talk about in this post and the lost update issue the fact that here you are updating the primary key (well, the field used in the where clause to identify the row(s) to update), whereas in a lost update you are updating other columns in the table?

I just ran two tests that tell me yes, but also wanted to ask. It is a subtle point that I missed the first time I read this post.

Wed Aug 31, 02:34:00 PM EDT  

Blogger Thomas Kyte said....

Is the difference between what you talk about in this post and the lost update issue

this is not truly related to the classic lost update - but way under the covers could be considered to be so.

I think for our purposes, it is best not to confuse the two concepts.

The way I think of this is:

there is a where clause in the update (where y = 5 for example)

When our update began, there was a row such that y = 5. By the time we got to it however, someone changed it, it is now y = 6.

So, consistent read says "y=5"
current read says "y=6"

we cannot update this row
we cannot skip it

therefore we must start over.

It is similar to lost update, if we updated it - it would be a lost update. but we don't update it, we don't SKIP it either, we start over.

Wed Aug 31, 02:58:00 PM EDT  

Anonymous Scot said....

Gotcha, thanks.

Wed Aug 31, 03:40:00 PM EDT  

Blogger Robert said....

Excellent read and info here

>>
we cannot update this row
we cannot skip it

therefore we must start over.

It is similar to lost update, if we updated it - it would be a lost update. but we don't update it, we don't SKIP it either, we start over.

<<


Okay..Oracle re-starts the update...so the end result is "this row" does NOT get updated, correct ?
If yes, how's this different from just "skipping it" ?
Why bother re-start ?

Wed Aug 31, 04:50:00 PM EDT  

Blogger Thomas Kyte said....

why bother restarting

Let's say the table T has a BEFORE FOR EACH ROW trigger and the rows are stored in 'order'; this is the order in which the update would hit them:


ID     Y      X
-----  -----  ---------
1      4      ....
2      5      ....
3      5      ....


and we do "update T set x = .. where y = 5"

When we begin, the set of rows (the consistent set of rows) in T is ID=2,3.

Now, we start our update and do ID=2.

At the same time, another session does:

update t set y = y+1 where id in (1,3);


Our session doing "update T set x = .. where y = 5" gets blocked on row ID=3. It sees (by read consistency) that Y *was* 5 when it began and is interested in this row.

The "update t set y = y+1 where id in (1,3);" commits.

Our session doing the "update T set x = .. where y = 5" now unblocks and sees that "Y was 5, but Y is now 6" - we cannot update it.


But, if we just skipped it - all we would do is update the ID=2 row. AT NO POINT IN TIME did the database have only one Y=5 record, there was no consistent set of rows such that Y=5 had a cardinality of 1. We just did an inconsistent update, we had an inconsistent set of records.

Unless - we restart.


And don't forget, the BEFORE FOR EACH ROW trigger *thinks* we updated the row with ID=3 as well - anything it did (any other updates IT did) would necessarily need rolling back as well.
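The scenario above can be replayed in a few lines of Python to see why skipping would be wrong. Again a toy model (dict rows, with the other session's commit injected mid-statement — none of this is Oracle internals): skipping the changed row updates only ID=2, a result no consistent version of the table could ever produce, while restarting updates IDs 1 and 2, exactly the set WHERE Y=5 as of the restart.

```python
def run_update(table, skip_changed):
    """Replay of: UPDATE t SET x = ... WHERE y = 5, while another session
    commits UPDATE t SET y = y+1 WHERE id IN (1,3) mid-statement.
    Returns the sorted ids of the rows the update would modify.
    skip_changed=True  -> skip rows whose current y no longer matches
    skip_changed=False -> restart the whole statement (Oracle's choice)
    """
    fired = False

    def other_session():                      # the concurrent committer
        for r in table:
            if r["id"] in (1, 3):
                r["y"] += 1

    while True:                               # each pass = one (re)start
        snapshot = [dict(r) for r in table]   # consistent read of t
        updated, restart = [], False
        for i, snap in enumerate(snapshot):
            if snap["y"] != 5:                # not in our consistent set
                continue
            if not fired:
                other_session()               # the other session commits now
                fired = True
            if table[i]["y"] != 5:            # current read disagrees
                if skip_changed:
                    continue                  # the "wrong" choice: skip it
                restart = True                # Oracle's choice: roll back
                break                         # and start over
            updated.append(table[i]["id"])
        if not restart:
            return sorted(updated)
```

With the rows above (IDs 1, 2, 3 and Y values 4, 5, 5), the skipping strategy reports only ID 2 updated — a one-row Y=5 set that never existed at any point in time — while restarting reports IDs 1 and 2.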

Thu Sep 01, 12:33:00 AM EDT  

Anonymous Enayet said....

Tom,
Though I am very enthusiastic about multi-versioning and read consistency, I am still trying to grasp the whole scenario. Your explanation is definitely helping :)

Will you be posting Part II/III in your blog?

Well, one thing I must appreciate: you have started a whole new enthusiasm about blogging. Perhaps you can conceivably convince Larry E. to start his own blog (with a connotation of mayhem (grin :D)). Much appreciated, always :)))

Thu Sep 01, 01:20:00 PM EDT  

Blogger Robert said....

Thx Tom for the explanation above.

So it seems the re-start was "demanded"
by Oracle to satisfy its (theoretical) design under the ACID model.

Because the user will still go:
"darn it ! how come only one row updated ?!" (and I will tell him "'cause someone beat you to it...")

BUT...as you indicated in the last 2 paragraphs of this post, the "what if"

Maybe the whole rollback/restart will end up as total loss for that user : no records updated ?

looking forward to your insight on this.

Thu Sep 01, 01:51:00 PM EDT  

Blogger Thomas Kyte said....


Will you be posting Part II/III in your blog?


already done.... (hit reload :)


Maybe the whole rollback/restart will end up as total loss for that user : no records updated ?

I didn't quite get that train of thought. The user (since they hadn't locked the records they wanted to update) cannot have ANY expectations about the records that will be updated -- they have *no way* of knowing what records those will be. It is not like they can look and see what records will be updated using a select and then update them -- by the time the select is finished, the set of records could already have changed.

Thu Sep 01, 03:14:00 PM EDT  


Anonymous Nicola Farina said....

I started using Oracle in 2000, coming from five years of Interbase, so multiversioning was a natural way of thinking for me. I still find the Interbase implementation better than Oracle's. In Interbase there are no errors like "rollback segment too old" or "table is mutating", which are, in my opinion, a clue of a weak multiversion architecture

Sun Sep 04, 04:08:00 AM EDT  

Blogger Thomas Kyte said....

Nicola,

firstly, the mutating table constraint is there for data integrity and has not a thing whatsoever to do with multi-versioning. Nothing whatsoever, not even a little bit.

Any database that would let you query the table being fired upon in a row trigger - hence seeing partial changes to that table - is in my opinion "not doing you any favors whatsoever".

That aside, in a properly set up Oracle database, snapshot too old does not happen - and in interbase, if the equivalent issue is never to raise its head, that would likely mean interbase databases always grow - for every update, for every delete, they get bigger and bigger. Else, the 1555 concept will ultimately creep in.

Actually, I've found material to that effect - not sure if the source is dead-on accurate, as their depiction of Oracle is rather poor, stating for example that multi-versioning was added in Oracle version 7 (umm - no, it would have been version 4 in 1984, two years before interbase). The author of one paper I found was quite sure that having the feature "first" made one more "natural", stating:

"This makes the behavior of Firebird close to Oracle, however with a notable difference: Firebird is naturally multi-versioned, while Oracle acquired this feature in Oracle 7.x."

That statement is false on two counts - version 4 in 1984 was when Oracle gained this ability, and a database is either multi-versioned or not; there is nothing "natural" about it.

In my experience, 1555 is directly related to the DBA trying to "save disk".

and if you want the interbase behaviour documented as:

....As a result, a long-lived transaction blocks the removal of back versions of all records, causing the database to grow and performance to deteriorate. .....


You simply set your undo retention period (as a DBA) and let your undo tablespace autoextend (not something many are doing - because of the obvious implication of the runaway query causing the versions to grow and grow and consume all disk and .... well -- you get the picture.)

The two implementations are very similar under the covers and if you are subject to many ora-1555's, you have to look to the DBA's at that point.

If interbase ran out of disk, would you blame a poor implementation of interbase, or the admins for not configuring sufficient resources to complete your job?

Sun Sep 04, 09:33:00 AM EDT  

Blogger Thomas Kyte said....

just to clear up any confusion -- interbase is now known as firebird.

Sun Sep 04, 09:50:00 AM EDT  

Anonymous Nicola Farina said....

Hi Tom,

Certainly each implementation has its own pros and cons (there was a kind of "garbage collection" in Interbase, at the time I used it), and I am not saying that Interbase is *better* in general than Oracle - that would be silly. I just described what my feeling was when I switched from IB to Oracle.

I know that 1555 errors are now minimized or eliminated, but that doesn't change that the idea of the rollback segment seemed to me a "primitive" thing.

Finally, I don't understand why read consistency should not be applied to triggers also. It seems to me that the rule should be the same in every aspect of the db, but if you have some examples/arguments on this I will be glad to change my mind!
Bye
Nicola

Mon Sep 05, 06:44:00 AM EDT  

Blogger Thomas Kyte said....

Nicola -

Not sure if you think using rollback for its dual purpose is "good" or not.

rollback has to exist (for transaction rollback), why not use it for the versioning information as well? Rather than a weakness, I believe it truly to be the strength. No garbage collection - just a big old (fast) circular queue.


As for read consistency being applied to triggers -- they are! Each query in the trigger is read consistent with respect to the point it starts (the query) or with respect to when the transaction began (serializable).

HOWEVER, since a transaction can see its own modifications -- and you are in the MIDDLE of making them.... Well, read consistency or no - you are in a mess.

You can use an autonomous transaction to "work around" this (to not see your changes) but then one must question the validity of your trigger code.

Give a real world case where you believe a row trigger should be able to read the table upon which it is firing to enforce some sort of integrity constraint - and I'll more than likely find multi-user flaws in the logic. The mutating table constraint is there predominantly for our protection. Every time I see an example that hits it - the logic (developed trigger logic) is horribly flawed from a concurrency perspective (meaning if it didn't get the mutating table constraint, that would have been even WORSE for the programmer)

Mon Sep 05, 05:38:00 PM EDT  

Anonymous Nicola Farina said....

Tom

I understand your point but (yeah, but...) the very first trigger I wrote in Oracle (in the year 2000) should have been very simple. I don't remember exactly, but I am sure it simply read some data - no updates, no enforcing of constraints, no business logic. At the time our customers had the 7.3 version. I was very frustrated/surprised to see that I could not write such a simple thing. Now I see that the same kind of trigger is possible (you also say, somewhere on your site, "they relaxed the mutating table in 9i" (or 10i, I don't remember exactly)).

It seemed to me fair to mention Interbase in this very topic - a very good product but with no marketing. Nobody knows about it, but I think it is much better than many others more "famous".

Thanks a lot for your time
Bye

PS
Sorry for the bad english!

Tue Sep 06, 05:02:00 AM EDT  

Blogger Thomas Kyte said....

Nicola -

It is so very hard to deal with "should be very simple. I don't remember exactly....."

That is like hearing from a friend who heard from a friend that overheard a conversation at a bar where the speaker said they heard from their brother ........

That is, you say "i had something trivial to do, and I know it was trivial and it should have been doable but wasn't"

And I cannot respond in kind with an explanation of why it was or was not possible.

So, like I said, if you come up with a real world case (or even a faked up one that looks reasonable), I'd be glad to comment on it.

Your english is quite good, no need to apologize at all.

Tue Sep 06, 07:22:00 AM EDT  

Anonymous Roman Rokytskyy said....

Tom,

I am the author of that article you mention and I apologise for the mistake about when Oracle got its MVCC - I did my best trying to find that information on Google, but the only sources I had were the Oracle release notes. After reading the release notes for Oracle 7.0, I got the impression that it did not have MVCC, but then the release notes for 8.0 talked about MVCC as an existing feature. I will correct the article according to your information.

As to the "natural" vs "non-natural", I mean that Oracle had to add this feature to an existing product, possibly making compromises between MVCC needs and the backward compatibility requirements (I say possibly, since I cannot say it for sure).

Roman Rokytskyy

Wed Oct 19, 05:28:00 PM EDT  

Blogger Thomas Kyte said....

Roman - thanks for the feedback.

I actually talked with Ken Jacobs - who's been with Oracle since time began.

Multi-versioning came in with version 3.0 of Oracle - after version 2.0 which was actually the first release (and a complete rewrite of the first release).

I feel very confident saying "multi-versioning" is "native" or natural to Oracle. It has been there since the beginning, not 'grafted on' (ala SS2005 if it comes out in 2005 ;)

Wed Oct 19, 06:40:00 PM EDT  

Anonymous Anonymous said....

TK: Consistent reads: When “finding” the rows to modify

Tom, how does Oracle find the rows it needs to modify?

Tue May 16, 05:25:00 AM EDT  

Blogger Thomas Kyte said....

how does Oracle find the rows it needs to modify?

By querying the table using the same read consistency it does during a query. So,

update t set x = 2 where y = 5

will use the where y = 5 in a read consistent fashion to find all of the y=5 rows as of the point in time the statement started (restarting the statement if necessary)

Tue May 16, 06:28:00 AM EDT  

Anonymous Pradeep said....

Thanks Tom.

By querying the table using the same read consistency it does during a query

Does that mean that an update gets blocked during its current read ???

Wed May 17, 01:55:00 AM EDT  

Blogger Thomas Kyte said....

Read consistency PLUS multi-versioning are what provide for NON-BLOCKING reads.

The update does not get blocked during the read consistent query portion, it will get blocked when it finds a row it wants to update but some other session is already updating it.

reads do not block writes.
writes do not block reads.
writes block other writes of the same row.
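Those three rules fit in a one-line compatibility check. A hypothetical sketch, not Oracle's lock manager: readers take no row locks at all (they read a consistent version from undo instead), so the only conflict left is two writes against the same row.

```python
def blocks(holder, requester, same_row=True):
    """Does an in-flight operation block a later one?
    Operations are "read" or "write". Reads take no row locks
    (they read a consistent version instead), so only two writes
    against the same row conflict."""
    return holder == "write" and requester == "write" and same_row
```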

Wed May 17, 06:39:00 AM EDT  

Anonymous Pradeep said....

I'll try to make my question clear.

table t

id  x  y
--  -  -
 1  a  e
 2  b  e

1. Update t set y=f where x=a
then
2. Update t set x=c where y=e

I know the 2nd update will get blocked. What I wanted to know was: during the 2nd update, what is the value of col y when it is doing

i) Consistent read - e? or f? or get blocked? (maybe 'e')
ii) Current read - e? or f? or get blocked? (not sure)
iii) Updating the row - e? or f? or get blocked? (I think it will get blocked)


thanks a lot

Wed May 17, 09:26:00 AM EDT  


Anonymous Anonymous said....

I like ralley your xample, it make me feel very good about using Oracle. Dank You berry much, you do grape job.

John M.

Thu Sep 04, 09:33:00 AM EDT  

Blogger Duke said....

"The Business Case for Oracle"? There are certainly white papers that make the case, for example "Database Performance with Oracle Database 10g Release 2" (May 2005). Its section on MVCC says, in part:
"Oracle Database’s implementation of multi-version read consistency always provides consistent and accurate results. When a transaction updates data, the original data values are recorded in the database's undo records. Oracle Database uses these values to construct a read-consistent view of a table's data, and to ensure that a version of the information, consistent at the beginning of the uncommitted transaction, can always be returned to any user."

And Figure 2: "Transactions and multi-version read consistency" is a nice illustration of how it works.

http://www.oracle.com/technology/deploy/performance/pdf/twp_perf_database%20performance%20with%20oracle10gr2.pdf

Thu Apr 29, 03:12:00 AM EDT  

POST A COMMENT

<< Home