
Question and Answer

Tom Kyte

Thanks for the question, Vikram.

Asked: October 24, 2001 - 7:23 pm UTC

Last updated: January 23, 2011 - 3:44 pm UTC

Version: 8.1.7

Viewed 1000+ times

You Asked

Can I force a session to do dirty reads, by setting a parameter in the session or by using a hint in Oracle 8.1.7, like they claim can be done in DB2?

Suppose that, in some cases, I don't care whether I get the most recently updated records or not.

That is, instead of scanning through an undo block, read the data block directly.

and Tom said...

No, dirty reads are not a feature, they are a weak attempt to get around a limitation.

search for

dirty read

on my site to read about this "feature". If you feel you have a real need for a dirty read -- explain to me what you need to do and I'll tell you how to do it. Dirty reads are not necessary (they are evil).
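
For reference, a quick sketch of the isolation modes Oracle does expose -- READ UNCOMMITTED, the dirty-read level, is simply not among them:

set transaction isolation level read committed;    -- the default
set transaction isolation level serializable;
set transaction read only;                         -- a pure read-consistent view
alter session set isolation_level = serializable;  -- per-session default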




Comments

Now I am curious ...

Andre Whittick Nasser, October 25, 2001 - 9:31 am UTC

Tom, you shouldn't have said that... kidding...

I've never heard you could do that in Oracle.

I'm aware it's a really nasty, "dirty" thing to do.

Suppose I want to know the changing value of my balance in a banking system during a transaction performed by another user, between his DML statements.

How would I implement that?

Thanks a lot !

Tom Kyte
October 25, 2001 - 10:36 am UTC

When I said I would show you how to do it -- I didn't mean how to do a "dirty read"; what I meant was that I would tell you how to accomplish the business objective you needed to satisfy. Most of the time, this is coming from people who want to count the rows their app has loaded so far -- for them, the answer is to stick some calls to dbms_application_info into their load program. Not only can they monitor the load progress this way -- it's INFINITELY faster, since they don't actually have to count the rows!
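
For illustration, a minimal sketch of that approach -- the target_data table here is made up, but dbms_application_info and v$session are the real, supplied pieces:

declare
    l_cnt  number := 0;
begin
    for x in ( select owner, object_name from dba_objects )  -- stand-in source
    loop
        insert into target_data ( owner, object_name )        -- hypothetical target
        values ( x.owner, x.object_name );
        l_cnt := l_cnt + 1;
        if mod( l_cnt, 1000 ) = 0
        then
            -- publish progress; nobody ever has to count rows to see it
            dbms_application_info.set_client_info( 'loaded ' || l_cnt || ' rows' );
        end if;
    end loop;
    dbms_application_info.set_client_info( 'load complete: ' || l_cnt || ' rows' );
    commit;
end;
/

Another session simply queries "select client_info from v$session" to watch the load progress.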

In your case, you would get the WRONG answer -- you would get an answer that NEVER existed at any point in TIME if you did a dirty read!!

Let's say that other transaction was moving $500 from savings to checking. So you issue: "select sum(bal) from accounts where account_owner = 'TOM'".

You read the checking balance, they update the savings account and decrement it by $500, you now read the savings balance (dirty read) -- you report back that you have $500 less than you EVER had -- ever. In this case, movement of money between accounts will result in your getting totally erroneous answers.
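
To make that concrete, suppose (numbers made up) both accounts start at $1,000 and the work interleaves like this:

Your session (dirty reads)           Transfer session
--------------------------           ----------------
read checking: 1,000
                                     update savings: 1,000 - 500 = 500
read savings: 500    <- dirty read
report sum(bal) = 1,500
                                     update checking: 1,000 + 500 = 1,500
                                     commit

The true total was $2,000 both before and after the transfer; the $1,500 you reported never existed at any point in time.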

Your balance has not changed until they commit or rollback. This is a bad example. Do you have a case where you really believe you need a dirty read?

Continuing with the followup...

Andre Whittick Nasser, October 25, 2001 - 12:51 pm UTC

Very interesting... thanks for the attention and sorry for my "sick" curiosity, I must admit.

But... carrying on with the banking system example...

Let us suppose that I, as a bank employee, want to track and hunt down each atomic DML operation in the transaction you just mentioned via dirty reads.

Just to emphasize: this would, as you'd put it, be an extreme, atypical, speculative, philosophical and metaphysical example. Say... for auditing purposes.

Transaction:
1) Subtract 500 from my savings account;
2) Add 500 to my checking account.

I imagine using something like LogMiner, or value-based auditing (through triggers), to implement this mechanism.

Maybe the problem with LogMiner is that I will only have an after-the-fact image of the modifications, because I cannot use it "on the fly". That is, I cannot track the changes ONLINE like that, but with triggers I could set up some "dynamic" change tracking.

Is my line of thought ok ?

Tom Kyte
October 25, 2001 - 1:24 pm UTC

As a bank employee -- things would be happening so fast, you could not keep up. To you it wouldn't matter whether you got dirty reads or not, since things are happening SO fast (bank transactions are tiny, fast, furious).

This would not give you auditing at all -- it would give you an instant, incorrect, view of the database. You could not use this as any part of an "audit" trail at all.

If you wanted to track down each atomic DML operation and see the before and after effects of it -- Log Miner does that (and more). It could be useful for auditing (especially in 9i) since it records everything. It's not a poke in the dark like a dirty read would be.
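
For reference, a post-facto mining session looks roughly like this (9i syntax; the logfile name and the ACCOUNTS table are made up, DBMS_LOGMNR is the supplied package):

begin
    dbms_logmnr.add_logfile( logfilename => '/u01/arch/arch_1234.log',  -- hypothetical
                             options     => dbms_logmnr.new );
    dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog );
end;
/

select scn, timestamp, username, operation, sql_redo, sql_undo
  from v$logmnr_contents
 where seg_name = 'ACCOUNTS';

begin
    dbms_logmnr.end_logmnr;
end;
/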

So, in short, I disagree with your thought since a bank employee cannot be fast enough (nor would any piece of software) to use a dirty read to "audit" (it would have to be looking everywhere all of the time at everything -- every bit and byte in the database).

A trigger can do this.
Auditing can do this.
Log miner post-facto can do this. (and audit analysis is pretty much always post-facto).

Those would be the solutions to this problem, NOT a dirty read.
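
To make the trigger option concrete, a minimal value-based auditing sketch -- the ACCOUNTS table and its BAL column are assumed from the example above:

create table accounts_audit
( changed_on  date,
  changed_by  varchar2(30),
  account_id  number,
  old_bal     number,
  new_bal     number );

create or replace trigger accounts_bal_aud
after update of bal on accounts
for each row
begin
    -- record the before and after image of every balance change
    insert into accounts_audit
    values ( sysdate, user, :old.account_id, :old.bal, :new.bal );
end;
/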

Now I see...

Andre Whittick Nasser, October 25, 2001 - 3:11 pm UTC

Sorry Tom,

Only now did I understand that the problem is not knowing that the other transaction performed a DML operation, but WHEN it might have occurred.

You can never tell.

If you use LogMiner, for example, you see each entry with timestamp, user, etc.

When you do a dirty read, you don't know whether the DML op was done or not.

I think this is it !

But how to "instrument" an INSERT INTO ... SELECT ... FROM ...?

A reader, November 05, 2004 - 9:57 am UTC

... since one can't make use of dbms_application_info there ...

Is there any chance to inspect the progress of such DML operations (besides introducing views in conjunction with INSTEAD OF triggers feeding "longops")?

Tom Kyte
November 05, 2004 - 4:33 pm UTC

what does this have to do with the "thread" here?

(but if the select statement has long running operations in it -- well, it is already instrumented)
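
That is, while the statement runs, something along these lines shows its long-running steps (hash joins, sorts, full scans) with no extra work on your part:

select sid, serial#, opname, sofar, totalwork,
       round( sofar / totalwork * 100, 1 ) pct_done
  from v$session_longops
 where totalwork > 0
   and sofar <> totalwork;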

A reader, November 06, 2004 - 9:40 am UTC

well, we just want to know the number of rows that got inserted but not yet committed by such statements:

create table myobj as
select object_id rowno, object_type, owner, object_name
from dba_objects
where 1 = 2 ;

alter table myobj
add constraint myobjpk primary key( rowno, object_type, owner, object_name ) ;

insert into myobj( rowno, object_type, owner, object_name )
select rownum, object_type, owner, object_name
from (
select distinct object_type, owner, object_name
from dba_objects
union all
select distinct object_type, owner, object_name
from dba_objects
union all
select distinct object_type, owner, object_name
from dba_objects
union all
select distinct object_type, owner, object_name
from dba_objects
) ;

examining the progress via v$session_longops would be great (just in order to maintain a progress indicator).

is there a non-PL/SQL solution available for that requirement?


Tom Kyte
November 06, 2004 - 11:01 am UTC

there is no such functionality to monitor the "number of rows" (other than looking at the size of the extents that have been allocated -- you can monitor the growth of the table)

doing something in plsql would just make this about as slow as you possibly could -- I'm assuming the creates are "large" (else you wouldn't care about timing them) and if they are large - calling plsql from sql a kabillion times is going to be like setting "fast=No_way" in your init.ora.

j., November 06, 2004 - 6:05 pm UTC

but monitoring extent allocation sounds quite nice, doesn't it? i'll now have to take a look into the documentation to find out how to do it. let me just ask you whether one can distinguish between concurrent inserts or not with that approach.

Tom Kyte
November 06, 2004 - 6:40 pm UTC

you'll just see growth, that is all -- the table getting bigger (dba_extents). you won't be able to attribute that growth to one session or another.
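
For example, another session could poll something like this while the insert runs (MYOBJ is the table from the example; adjust the owner to the owning schema):

select count(*) extents, sum(bytes)/1024/1024 mbytes
  from dba_extents
 where owner = 'SCOTT'          -- hypothetical owner
   and segment_name = 'MYOBJ';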

j., November 07, 2004 - 6:44 am UTC

i've found some session statistics ('physical write%', 'table fetch by rowid') to change as well -- although i can't get these changes "mapped" to the number of rows that got inserted :o(

Tom Kyte
November 07, 2004 - 2:41 pm UTC

it is impossible to get the "number of rows inserted" -- that just isn't tracked anywhere.

Dirty Read

Manoj, December 17, 2009 - 11:38 pm UTC

Can you please explain how dirty reads are avoided when concurrent users are working in the system and committing their transactions?

It would be great if you could include the SCN, redo logs, and undo, along with the buffer states (dirty, pinned), in the example. I need a clear understanding. I am sorry for my ignorance about it, but I still want to learn.

Tom Kyte
December 18, 2009 - 12:34 pm UTC

see the previous place you asked basically the SAME THING.

legit reason for a dirty read?

Craig, January 11, 2010 - 9:27 pm UTC

I'm working remotely with an extremely junior employee. We're working through an issue where an insert is throwing a unique constraint error. Fixing it isn't the issue; working through the training opportunity is my real goal. The thing is, I can't see what is causing the issue, since I can't see into her session. (this might not sound like a real problem, but harken back to the days when you could barely spell SQL, and you'll see I might have a point).
Tom Kyte
January 18, 2010 - 12:35 pm UTC

tell us more -- tell us how looking "into her session" would help you

a) diagnose this issue
b) educate

You wouldn't be able to see just her dirty data - you'd see everyone's. I have no idea how this would be useful.

I don't get it, they have an insert that fails.

If the insert is insert values - ta-dah, no problem, you know what the duplicate data is - you have the values *right there*.

If the insert is insert as select - you won't be able to know what rows are "suspect" in that table - at all.


Give us a blow-by-blow of how you think being able to do a dirty read would help; give us the "for example".

A need for dirty reads

Amalmaran George, January 18, 2011 - 3:19 pm UTC

Hi Tom,

I have a business reason for dirty reads. We have a requirement to recycle business identifiers that are not in use.

So we have a table of them. For a transaction, I need to read a row from that table and mark it as in_use. So if another transaction comes in at the same time, how will it get a different row unless dirty reads are possible?

My query would be: "select identifier from recycle_table where in_use = 'F' and rownum = 1"

Regards,
Amal
Tom Kyte
January 23, 2011 - 3:44 pm UTC

no, you just need to read about "select for update SKIP LOCKED", you do not have a business need for a dirty read.


see
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9456492352510#2830701200346901119
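
For illustration, a rough sketch of that approach against the table from the question (a sketch under those assumptions, not a tested implementation):

declare
    l_id  recycle_table.identifier%type;
    cursor c is
        select identifier
          from recycle_table
         where in_use = 'F'
           for update skip locked;
begin
    open c;
    fetch c into l_id;   -- quietly skips rows locked by other transactions
    if c%found
    then
        update recycle_table
           set in_use = 'T'
         where current of c;
    end if;
    close c;
end;
/

Each concurrent transaction gets a different unlocked row -- no dirty reads required.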