Query update where select was in progress

Question and Answer

Tom Kyte

Thanks for the question, Piyush.

Asked: May 01, 2007 - 11:58 am UTC

Last updated: September 17, 2009 - 8:58 am UTC

Version: Oracle 9i, 10G

Viewed 10K+ times

You Asked

Hi Tom,

One more interview question. To you it may sound amusing but I was very much confused.

Here goes the question...

There is a large database, say terabytes in size. A select query starts executing sharp at 10 AM - a simple query like "select * from <table_name>". While the query is executing, an update is fired at 10:01 AM on one row, changing the value of col1 from 'A' to 'B'. The update transaction is committed too. The query completes execution at, say, 10:02 AM. Please let me know when the query would return the value 'A' and when it would return 'B'.

Please discuss the scenario with justification.

Well, I said that since the commit had taken place, if the query had read the row before the update it would return 'A', else it would return 'B'. The interviewer was not very pleased.

Moreover, the interviewer asked what would be done if they still needed the value 'A'.

Can you please throw some light on this issue? I think I missed some fundamental concept.

Please help!!!

Regards,
Piyush

and Tom said...

http://asktom.oracle.com/pls/ask/search?p_string=preordained


There is an ORACLE database. We are running in the default isolation level of read committed.

Suppose we OPEN a query, a result set at 10am on the dot. A simple "select *"

While that query is running - an update happened at 10:01am and Col1='A' was changed to Col1='B' and committed.

At 10:02, the original select finished.

What value for col1 would be returned by that query?

there is only one answer - and all we really needed to know was:

....
There is an ORACLE database. We are running in the default isolation level of read committed.

Suppose we OPEN a query, a result set at 10am on the dot. A simple "select *"
......

we don't need to know anything else at all - for you see, in Oracle there is this concept of read consistency, the results of the query are "fixed" - preordained if you will - at the time we OPEN the query - before a single row is even touched.

The query would return col1='A' for the simple reason that when the query was opened - col1='A'. It matters not what happens to that row after we open the query; the query will always see the row as it existed when the query began.

This is a very fundamental concept in Oracle - it is the feature that allows us to get consistent (eg: CORRECT) results without having reads block writes and writes block reads.

I've written extensively about this in Expert Oracle Database Architecture.
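As for the interviewer's follow-up - how to still get the value 'A' after the update has committed - Oracle's flashback query can read a table as it existed at a past point in time, as long as the undo data covering that time is still available. A minimal sketch (the table and column names here are illustrative, not from the original question):

```sql
-- Read the row as it existed at 10:00 AM, before the 10:01 AM update.
-- This only works while the undo retention still covers that timestamp.
SELECT col1
  FROM my_table
    AS OF TIMESTAMP TO_TIMESTAMP('2007-05-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS');
```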

Rating

  (17 ratings)


Comments

A reader, May 03, 2007 - 1:50 pm UTC

What would have happened in sqlserver db in this case, by default ?


Tom Kyte
May 03, 2007 - 10:54 pm UTC

by default, they would have

a) blocked on the row if it were uncommitted.
b) read whatever was committed in the row when they hit it.

they read rows with the committed data as they hit them. Meaning, if your query needs 1,000 rows to give you the answer - each of the 1,000 rows could come from different database states.

Let us consider a simple 3 row table:

ACCOUNT             BALANCE
------------        ----------
toms savings        100
your acct           100
toms checking       50

Ok, the sum(balance) in the bank is obviously 250 - right?

SQLServer:

a) start the query select sum(balance) from table
b) it reads row one - 100 is the sum so far
c) I goto ATM and transfer 50 from savings to checking
d) I commit ATM
e) you read "your acct", sum(balance) is 200
f) you read "toms checking" - it returns 100, sum(balance) is now 300

answer = 300, a value that NEVER EXISTED EVER AT ANY POINT IN TIME.

In Oracle, step f) would have noticed "value changed since the query began - put it back" and 50 would have been used. 250 would have been the answer.
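The Oracle behavior can be sketched in a small PL/SQL block - assuming the three-row ACCOUNTS table above exists (names are illustrative). The cursor is opened first, the transfer commits from an autonomous transaction while the cursor is still open, and the fetched sum is still the value as of OPEN time:

```sql
DECLARE
    CURSOR c IS SELECT balance FROM accounts;
    l_bal  accounts.balance%TYPE;
    l_sum  NUMBER := 0;

    -- commits independently while the outer cursor stays open
    PROCEDURE transfer IS
        PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
        UPDATE accounts SET balance = balance - 50 WHERE account = 'toms savings';
        UPDATE accounts SET balance = balance + 50 WHERE account = 'toms checking';
        COMMIT;
    END;
BEGIN
    OPEN c;          -- the result set is fixed as of this point
    transfer;        -- the 10:01am "ATM transfer", committed mid-query
    LOOP
        FETCH c INTO l_bal;
        EXIT WHEN c%NOTFOUND;
        l_sum := l_sum + l_bal;
    END LOOP;
    CLOSE c;
    DBMS_OUTPUT.PUT_LINE(l_sum);   -- 250: the sum as of OPEN time
END;
/
```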

But what about reads not in transactions

Loz, May 04, 2007 - 1:30 am UTC

Hi Tom,
I understand that a single read will be as at the time the query was run.
I also understand that in the context of a transaction, all reads are consistent as at the time the transaction started.

So how do I get a consistent read across multiple queries when I don't otherwise need a transaction?
Tom Kyte
May 04, 2007 - 12:57 pm UTC

... I also understand that in the context of a transaction, all reads are consistent as at the time the transaction started. ...

no, that is not true unless you change the default isolation level.

ops$tkyte%ORA10GR2> set transaction isolation level serializable;

Transaction set.



A reader, May 04, 2007 - 3:01 am UTC

Tom,
It may be worth mentioning 'write consistency'. I know the question did not say it had a 'for update of' clause, but perhaps the person who raised the question needs to know about it?
That is, if the column that was updated was part of the WHERE clause, the statement will stop, move its starting point in time to after the update, and redo itself?

Thanks
Ravi
Tom Kyte
May 04, 2007 - 1:01 pm UTC

for update would change everything - yes. But, that wasn't stated, so I didn't touch it.
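For completeness, a sketch of how FOR UPDATE changes things (table and column names are illustrative): the statement still identifies rows using a consistent read, but it must also lock the current versions of those rows, so it waits for uncommitted changes instead of reading around them.

```sql
-- SELECT ... FOR UPDATE locks the current row versions; it blocks if another
-- session holds an uncommitted change to one of them, and sees the committed
-- current data once it gets the lock.
SELECT col1
  FROM my_table
 WHERE id = 42
   FOR UPDATE;
```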

Piyush was right?

David Weigel, May 04, 2007 - 9:42 am UTC

In other words, missing from Piyush's original question was whether it was an Oracle database. If it wasn't, and it was SQL Server, then Piyush's answer was correct -- it depends on what was read and when. I wonder if the interviewer was specific.
Tom Kyte
May 04, 2007 - 1:16 pm UTC

hopefully, it would have been an oracle database - else I'd be really confused as to why they would ask me :)

oracle isolation level

Piyush Agarwal, May 05, 2007 - 8:19 am UTC

Well, I had asked about the Oracle database only.

"There is an ORACLE database. We are running in the default isolation level of read committed. "

case 1
This means that when the isolation level is read committed, an Oracle select returns the data as it existed when the query first started.

case 2
When it is serializable, it would read the updated value, if I am not wrong.

Please do correct me wherever I am wrong, and then clarify my following question too. Since I am from a SQL Server background, I am a bit confused myself.

Now let's say, in case 1, the updated value has been committed by another transaction and the buffer holding the old value (the state before the start of the transaction) has been overwritten. Will the Oracle server then return an error, or will it still give the same value - and if so, from where? If there is an error, what error is returned and why?

Why such a difference between two isolation levels between SQL Server and Oracle server where in concept they define the same thing ?

Thank you all the reviewers and Tom. Once again I expect valuable comments from Tom and that from reviewers so that this is clarified to us all.

Regards,
Piyush
Tom Kyte
May 08, 2007 - 10:09 am UTC

case 1) the results of a query are pre-ordained as of the time you open the query. if you:

a) open a cursor at 10am
b) wait 4 hours
c) fetch the first record at 2pm (IO is done now), you will fetch whatever existed in the database at 10am
d) wait 4 more hours
e) fetch the 2nd record (IO is done now) at 6pm, you will fetch whatever existed in the database at 10am

case 2) you are wrong; with serializable, EVERY query you open in your transaction will use an "as of time" that corresponds to the beginning of your transaction.

check out the concepts guide, it covers multi-versioning, read consistency and how we use the rollback data to put data back the way it was....

some more doubts

Piyush Agarwal, May 06, 2007 - 3:08 pm UTC

Somebody has mentioned this above - "It may be worth mentioning 'write consistency'. I know the question did not say it had a 'for update of' clause, but perhaps the person who raised the question needs to know about it?
That is, if the column that was updated was part of the WHERE clause, the statement will stop, move its starting point in time to after the update, and redo itself?"

The above two lines have confused me. Please explain them in the context of Oracle, with some example.

Piyush

KM, May 06, 2007 - 5:41 pm UTC

>When it is serializable, then it would read the update
>value if I am not wrong."

You're wrong :) In serializable mode, the data is fixed for the whole transaction, not a single query. In "read committed" mode, you will get newly committed data if you repeat your query in the same transaction.

>and the buffer has been overwritten where it has stored
>the old value (state before start of transaction).
IIRC Oracle uses the undo information to "reconstruct" the data at the time the query (or transaction) started.

As Tom always says, I'd suggest you read the "Concepts Guide", where these things are explained in detail.

KM, May 06, 2007 - 5:47 pm UTC

>The above 2 lines has confused me.
Probably he was talking about a "SELECT...FOR UPDATE" statement - it will change the behaviour.

Loz, May 06, 2007 - 8:01 pm UTC

Hi Tom,
Thanks for your reply to my question above:

... I also understand that in the context of a transaction, all reads are consistent as at the time the transaction started. ...

no, that is not true unless you change the default isolation level.

Ok, 2 questions then
a) Why is the default isolation level not serializable? It appears that read consistency is treated as somehow 'less important' than updates. No doubt this is not the case, but why would that be? Surely reading consistent data can be as important as writing it?

b) My original question: So how do I ensure each of my queries returns consistent results without starting a transaction?
Tom Kyte
May 08, 2007 - 10:39 am UTC

a) because you have to have a default, and no matter what default value for anything that is chosen, someone will say "why isn't the default this instead of that".

I don't understand your last bit "as writing it" - read consistency works with modifications as well.

b) same answer: use serializable, as I stated. alter your session to be serializable.

ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> alter session set isolation_level = serializable;

Session altered.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         1

ops$tkyte%ORA10GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          update t set x = x+1;
  5          commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         1

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         2

ops$tkyte%ORA10GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          update t set x = x+1;
  5          commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         2

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         3


KM, May 07, 2007 - 6:05 pm UTC

Loz, see the Concepts Guide, chapter 13, "Data Concurrency and Consistency". Read committed usually gives more throughput and is OK for most applications. And how could you avoid starting a transaction in Oracle?

Loz, May 08, 2007 - 3:31 am UTC

KM,
If I am just doing selects I am not starting a transaction.

KM, May 14, 2007 - 6:25 pm UTC

performance overhead

Vadim, July 17, 2008 - 6:15 pm UTC

Tom,

Functionally, read consistency makes perfect sense; however, we suffer very serious performance degradation due to this mechanism.
Our reports' performance literally varies from 10 minutes to many hours because of this. When we monitor long-running reports, they are reading heavily from the UNDO tablespace to get consistent block images.
The specific issue in our system is that inserts and reports run at the same time, as the reports require near real-time data. We often have to kill a report and restart it to reset its start time.
Any suggestion to speed this up, or to minimize the read-consistency impact on performance, besides separating inserts and selects in time?

Thanks
Tom Kyte
July 18, 2008 - 4:12 pm UTC

well, by far - the best - is to reduce the query runtime in the first place.

Materialized views
Indexing strategies
Partitioning


and remember, without read consistency your reports would get "real time made up data" - eg: data that never ever existed, ever - or the reports would just not run, since reads would be blocked by those writes; or, if you wanted consistent data, the reads and modifications would deadlock each other right and left.
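As one example of the materialized view approach (the table and column names below are purely illustrative, not from this thread): a fast-refreshable materialized view can precompute a report aggregate so the report reads a small summary instead of the big table - and therefore makes far fewer consistent-read trips to undo.

```sql
-- Materialized view log capturing the columns the aggregate needs.
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID (region, amt) INCLUDING NEW VALUES;

-- COUNT(*) and COUNT(amt) are required for fast refresh of SUM(amt).
CREATE MATERIALIZED VIEW sales_summary
  REFRESH FAST ON COMMIT
  AS SELECT region,
            SUM(amt)   AS total_amt,
            COUNT(amt) AS cnt_amt,
            COUNT(*)   AS cnt
       FROM sales
      GROUP BY region;
```

The report then queries sales_summary, which stays current as the inserts commit.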

Vinny, July 22, 2008 - 9:17 am UTC

The problem with Vadim's situation is that they are mixing apples and oranges - or apples and toaster ovens as Tom likes to say :-)

Transactional systems and real-time reporting do not mix very well. They need to build some sort of reporting solution which can be real time - think Change Data Capture. There are many tools out there, or you can even use LogMiner to capture changes in real time and create a reporting structure for your real-time reporting. Then you would not have the issue of constantly reconstructing a consistent image.

select and update on the same table runs very long with "latch: cache buffers chains" wait

Pintu, September 16, 2009 - 5:56 pm UTC

Hi Tom,
We have the following table, and I want to update BCH_SEQ_NBR with some value for only those records having AMT = 0, grouped by STORE, RX and DOS. In the table below, only the records with REC_NO 3, 4 and 5 should be updated. In our system this table contains around 2 million records, and the update runs for a very long time; session info shows the "latch: cache buffers chains" wait is quite high. What we are doing is: (select records with AMT = 0, grouped by STORE, RX and DOS) minus (select records with AMT != 0, grouped by STORE, RX and DOS), and then update those records. Can this be achieved in a single hit to the table?

Record  Store  RX  Dos_Dt    Amt  Bch_Seq_Nbr
------  -----  --  --------  ---  -----------
1       10     A   1-Jan-08    0
2       10     A   1-Jan-08   10
3       10     A   2-Jan-08    0
4       10     A   3-Jan-08    0
5       10     A   3-Jan-08    0
6       10     A   4-Jan-08   10
7       10     A   5-Jan-08   10
8       10     A   5-Jan-08   11
9       10     A   6-Jan-08    0
10      10     A   6-Jan-08    1
11      10     A   6-Jan-08   -1

Tom Kyte
September 16, 2009 - 7:51 pm UTC

... What we are doing is: (select records with AMT=0
STORE, RX and DOS)minus(select records withAMT !=0 group by STORE, RX and DOS)
....

does not fully compute, need a SCHEMA and constraints (primary keys, etc) in order to understand the table.


I don't see how the above 'query' would return anything since AMT is part of 'records', the minus would do nothing.

so, need more detail

select and update on the same table runs very long with "latch: cache buffers chains" wait

pintu, September 16, 2009 - 9:43 pm UTC

Below is the exact definition of the table. The query and its execution plan are also copied below.
The table is composite-partitioned:
range partitioned on PARTITION_CD (11 partitions) and
hash sub-partitioned on REMIT_SEQ_NBR (16 subpartitions per partition).
The table doesn't have any index or constraint.
In our case only one sub-partition contains data, due to data distribution conditions.

desc feed_Remit_claim
Name Null? Type
----------------------------------------- -------- -------------------------
STORE_NBR NUMBER(6)
RX_NBR VARCHAR2(20)
DOS_DT DATE
PARTITION_CD VARCHAR2(5)
REMIT_SEQ_NBR NUMBER(15)
REMIT_BCH_SEQ_NBR NUMBER(15)
CLM_PYMT_AMT NUMBER(11,2)
STORE_PROV_NBR_ID VARCHAR2(20)
AUTO_CROSS_IND VARCHAR2(1)
SCRIPT_SVC_REFR_NBR_ID VARCHAR2(38)
QTY NUMBER(10,3)
INP_COVRD_DAYS_VISITS_CNT NUMBER(9)
PRIOR_AUTH_NBR_ID VARCHAR2(20)
ACV_AUTH_NBR_ID VARCHAR2(20)
NDC VARCHAR2(20)
RF_NBR NUMBER(2)
PLN_SEQ_NBR NUMBER(15)
CARDHLDR_ID VARCHAR2(20)
SPLIT_BILL_NBR NUMBER(2)
CARDHLDR_FNAME VARCHAR2(35)
CARDHLDR_LNAME VARCHAR2(60)
ALLOWED_AMT NUMBER(11,2)
HCPCS_CD VARCHAR2(48)
COBA_CD VARCHAR2(80)
DOS_END_DT DATE
DOS_START_DT DATE
INGRD_DG_CST_AMT NUMBER(11,2)
ADJUST_UC_AMT NUMBER(11,2)
PAT_DOB_DT DATE
PRO_FEE_AMT NUMBER(11,2)
SALES_TAX_AMT NUMBER(11,2)
ST_CD VARCHAR2(2)
DISPNG_AMT NUMBER(11,2)
DG_NAME VARCHAR2(30)
TOT_ADJUST_AMT NUMBER(11,2)
DG_CD VARCHAR2(4)
CLM_STAT_CD VARCHAR2(5)
TOT_CLM_CHRG_AMT NUMBER(11,2)
INP_CLM_DISPROP_SHARED_AMT NUMBER(11,2)
CLM_FILING_CD VARCHAR2(2)
PAT_RESPONSIBILITY_AMT NUMBER(11,2)
PAYER_CLM_CONT_NBR_ID VARCHAR2(50)
FACLTY_TYPE_CD VARCHAR2(2)
CLM_FREQ_TYPE_CD VARCHAR2(1)
CLM_QTY NUMBER(10,3)
CLM_DISCH_FRACTION_NBR NUMBER(14,4)
INP_PPS_OPERATING_OUTLIER_AMT NUMBER(11,2)
INP_LIFETIME_PSY_DAYS_CNT NUMBER(9)
INP_CLM_DG_AMT NUMBER(11,2)
INP_CLM_PYMT_REMARK_TXT VARCHAR2(50)
INP_CLM_MSP_PASS_THROUGH_AMT NUMBER(11,2)
INP_NAME_CLM_PPS_CAP_AMT NUMBER(11,2)
INP_PPS_CAP_DSH_DRG_AMT NUMBER(11,2)
INP_PPS_CAP_FSP_DRG_AMT NUMBER(11,2)
INP_PPS_CAP_HSP_DRG_AMT NUMBER(11,2)
INP_OLD_CAP_AMT NUMBER(11,2)
INP_PPS_CAP_IME_AMT NUMBER(11,2)
INP_PPS_OHS_AMT NUMBER(11,2)
INP_CST_RPT_DAY_CNT NUMBER(9)
INP_PPS_OFS_AMT NUMBER(11,2)
INP_CLM_PPS_CAP_OUTLIER_AMT NUMBER(11,2)
INP_CLM_INDIRECT_TEACHING_AMT NUMBER(11,2)
INP_NONPAYABLE_PROCOM_AMT NUMBER(11,2)
INP_CLM_PYMT_REMARK_CD1 VARCHAR2(50)
INP_CLM_PYMT_REMARK_CD2 VARCHAR2(50)
INP_CLM_PYMT_REMARK_CD3 VARCHAR2(50)
INP_CLM_PYMT_REMARK_CD4 VARCHAR2(50)
INP_PPS_CAP_EXCP_AMT NUMBER(11,2)
MOA_CLM_HCPCS_PAYABLE_AMT NUMBER(11,2)
MOA_CLM_PYMT_REMARK_CD1 VARCHAR2(50)
MOA_CLM_PYMT_REMARK_CD2 VARCHAR2(50)
MOA_CLM_PYMT_REMARK_CD3 VARCHAR2(50)
MOA_CLM_PYMT_REMARK_CD4 VARCHAR2(50)
MOA_CLM_PYMT_REMARK_CD5 VARCHAR2(50)
MOA_CLM_ESRD_PYMT_AMT NUMBER(11,2)
MOA_NONPAYABLE_PROCOM_AMT NUMBER(11,2)
MOA_RMBRS_RATE_AMT NUMBER(11,2)
STAT_CD VARCHAR2(5)
SUB_STAT_CD VARCHAR2(5)
HOLD_IND VARCHAR2(1)
PYMT_TAPE_SEQNO NUMBER(7)
PD_INGRD_AMT NUMBER(12,2)
DENIAL_PNDG_IND VARCHAR2(1)
BCH_NBR_ID VARCHAR2(15)
ORIG_TOT_CLM_CHRG_AMT NUMBER(11,2)
ORIG_PAYER_CLM_CONT_NBR_ID VARCHAR2(50)
REMIT_XFERED_DT DATE
SEG_CD VARCHAR2(20)
LOOP_CD VARCHAR2(20)
REMIT_CLM_SEQ_NBR NUMBER(15)
REMIT_CLM_HDR_SEQ_NBR NUMBER(15)
PLN_NBR_ID NUMBER(7)
PAT_FNAME VARCHAR2(35)
PAT_LNAME VARCHAR2(60)
ACTV_DT DATE
PRSCBR_ID VARCHAR2(80)
ADJUDICATION_DT DATE
BILL_AMT NUMBER(11,2)
CORR_PRIORITY_PAYER_ID VARCHAR2(80)
CORR_PRIORITY_PAYER_NAME VARCHAR2(60)
CRSOVR_CARRIER_NAME VARCHAR2(60)
MEDI_RCD_NBR_ID VARCHAR2(50)
COBA_NAME VARCHAR2(30)
OTHER_CLM_RELATED_ID VARCHAR2(50)
OTHER_CLM_RELATED_ID_DSC VARCHAR2(80)
PROC_CD_DSC VARCHAR2(80)
PAT_MINIT_NAME VARCHAR2(25)
TXN_TYPE VARCHAR2(2)
GRP_CD VARCHAR2(20)
CMNT_TXT VARCHAR2(250)
CRSOVR_CARRIER_ID_CODE VARCHAR2(80)
PROV_MULT_STORE_IND VARCHAR2(1)
REMIT_FILE_NBR NUMBER(15)
RX_MCH_NBR NUMBER(8)
UPDATE feed_remit_claim frc
   SET remit_bch_seq_nbr = :b3
 WHERE (store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)) IN (
          SELECT store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
            FROM feed_remit_claim frci
           WHERE remit_seq_nbr = :b2
             AND frci.partition_cd = :b1
             AND NVL (frci.clm_pymt_amt, 0) = 0
          MINUS
          SELECT store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
            FROM feed_remit_claim frci
           WHERE remit_seq_nbr = :b2
             AND frci.partition_cd = :b1
             AND NVL (frci.clm_pymt_amt, 0) <> 0)
   AND NVL (frc.clm_pymt_amt, 0) = 0
   AND NVL (frc.stat_cd, 'ERR') <> 'ERR'
   AND frc.remit_seq_nbr = :b2
   AND frc.partition_cd = :b1

Plan :

Execution Plan
----------------------------------------------------------
Plan hash value: 1713071007

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 1 | 115 | 9 (34)| 00:00:01 | | |
| 1 | UPDATE | FEED_REMIT_CLAIM | | | | | | |
|* 2 | HASH JOIN | | 1 | 115 | 9 (34)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 1 | 81 | 2 (0)| 00:00:01 | 1 | 1
| 4 | PARTITION HASH SINGLE | | 1 | 81 | 2 (0)| 00:00:01 | 8 | 8
|* 5 | TABLE ACCESS FULL | FEED_REMIT_CLAIM | 1 | 81 | 2 (0)| 00:00:01 | 8
| 6 | VIEW | VW_NSO_1 | 1 | 34 | 6 (34)| 00:00:01 | | |
| 7 | MINUS | | | | | | | |
| 8 | SORT UNIQUE | | 1 | 64 | | | | |
| 9 | PARTITION RANGE SINGLE| | 1 | 64 | 2 (0)| 00:00:01 | 1 | 1
| 10 | PARTITION HASH SINGLE| | 1 | 64 | 2 (0)| 00:00:01 | 8 | 8
|* 11 | TABLE ACCESS FULL | FEED_REMIT_CLAIM | 1 | 64 | 2 (0)| 00:00:01 | 8 |
| 12 | SORT UNIQUE | | 1 | 64 | | | | |
| 13 | PARTITION RANGE SINGLE| | 1 | 64 | 2 (0)| 00:00:01 | 1 | 1
| 14 | PARTITION HASH SINGLE| | 1 | 64 | 2 (0)| 00:00:01 | 8 | 8
|* 15 | TABLE ACCESS FULL | FEED_REMIT_CLAIM | 1 | 64 | 2 (0)| 00:00:01 | 8 |
----------------------------------------------------------------------------------------------------

Tom Kyte
September 17, 2009 - 8:58 am UTC

using pretty simple 'logic', you can of course reduce:
 SELECT store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
            FROM feed_remit_claim frci
           WHERE remit_seq_nbr = :b2
             AND frci.partition_cd = :b1
             AND NVL (frci.clm_pymt_amt, 0) = 0
          MINUS
          SELECT store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
            FROM feed_remit_claim frci
           WHERE remit_seq_nbr = :b2
             AND frci.partition_cd = :b1
             AND NVL (frci.clm_pymt_amt, 0) <> 0


which says: keep all of the (store_nbr, NVL(rx_nbr, -1), NVL(dos_dt, SYSDATE)) values that have NVL(clm_pymt_amt, 0) = 0, where NO OTHER row with the same (store_nbr, NVL(rx_nbr, -1), NVL(dos_dt, SYSDATE)) has NVL(clm_pymt_amt, 0) <> 0.

We can just group by your selected columns and make sure the max of clm_pymt_amt is 0 (I used ABS in case the payment could be negative):

 (
         SELECT store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
            FROM feed_remit_claim frci
           WHERE remit_seq_nbr = :b2
             AND frci.partition_cd = :b1
           group by store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
          having max(abs(NVL (frci.clm_pymt_amt, 0))) = 0
 )




Question for you, why would you code:

   AND NVL (frc.stat_cd, 'ERR') <> 'ERR'


I understand people's fear of NULL (well, I don't understand it, I understand that it exists) but - hey - read that....


if stat_cd is not null for a given row, the NVL is not necessary and the compare happens normally.
if stat_cd is null, you make it 'ERR' and 'ERR' <> 'ERR' is never true


if you just used stat_cd <>'ERR', then

if stat_cd is not null, the compare happens normally.
if stat_cd is null, it is never true
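Putting both of those suggestions together, the whole statement might be sketched as follows - an untested rewrite of the original, keeping the same bind variables:

```sql
UPDATE feed_remit_claim frc
   SET remit_bch_seq_nbr = :b3
 WHERE (store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)) IN (
          -- groups where every row has a zero (or null) payment amount
          SELECT store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
            FROM feed_remit_claim frci
           WHERE remit_seq_nbr = :b2
             AND frci.partition_cd = :b1
           GROUP BY store_nbr, NVL (rx_nbr, -1), NVL (dos_dt, SYSDATE)
          HAVING MAX(ABS(NVL (frci.clm_pymt_amt, 0))) = 0)
   AND NVL (frc.clm_pymt_amt, 0) = 0
   AND frc.stat_cd <> 'ERR'          -- the NVL added nothing here
   AND frc.remit_seq_nbr = :b2
   AND frc.partition_cd = :b1;
```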


Read Committed in SQL Server does not provide read consistency

Rupert, September 17, 2009 - 8:12 am UTC

There is a fundamental misunderstanding amongst the majority of SQL Server "development" teams: that the isolation level "read committed" can provide a consistent read of data. It cannot. Nope. Not possible.

Only the isolation levels REPEATABLE READ and SERIALIZABLE can provide read consistency in SQL Server.

Go go Oracle!