Question and Answer

Connor McDonald

Thanks for the question, GAURANG.

Asked: August 14, 2000 - 11:42 am UTC

Last updated: October 21, 2019 - 11:54 am UTC

Version: 8.1.5

Viewed 10K+ times

You Asked

What do you mean by 'fetch across commit'?




and Tom said...



for x in ( select rowid rid, t.* from T ) loop
    update T set x = x+1 where rowid = x.rid;
    commit;
end loop;


That implicit cursor is fetched from "across a commit". It is the practice of keeping a cursor open after committing. It is a bad practice and a common cause of ORA-01555 (the above looping construct in particular).
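For contrast, a minimal sketch of the corrected pattern -- the same work with the commit moved outside the loop, which is the fix recommended throughout the discussion below (assuming the same hypothetical table T with a numeric column x):

for x in ( select rowid rid, t.* from T ) loop
    update T set x = x+1 where rowid = x.rid;
end loop;
commit;   -- commit once, after the loop completes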


Comments

'Fetch across commit'

Cheema, March 07, 2002 - 7:28 pm UTC

I have found that this construct does not give the error in 8i and above. However, the locks acquired by the FOR UPDATE clause are released by the COMMIT statement.

Previous versions used to give the error mentioned.

Cheema

Tom Kyte
March 07, 2002 - 7:45 pm UTC

sure it does.  ALL versions of Oracle (even 9i if you use rollback segments or set the undo_retention too small) will give you this.  100%.

You just ran your test in a database with enough rollback to NOT hit it.

Here is one that shows it definitely can and will happen:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create index t_idx on t(object_id);

Index created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> create rollback segment rbs_small
  2  storage (initial 1k next 1k maxextents 2) tablespace system;

Rollback segment created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter rollback segment rbs_small online;

Rollback segment altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> begin
  2          for x in ( select * from t where object_id > 0 )
  3          loop
  4                  dbms_transaction.use_rollback_segment( 'rbs_small' );
  5                  update t set object_name = lower(object_name)
  6                    where object_id = x.object_id;
  7                  commit;
  8          end loop;
  9  end;
 10  /
begin
*
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 7 with name "RBS_SMALL" too small
ORA-06512: at line 2
 

Fetch out of sequence error

Mike, March 07, 2002 - 8:13 pm UTC

It appears that you can't fetch across commit on a "For update" cursor.  I tried it with the following code (using your temp table) and got the fetch out of sequence error:

SQL> begin
  2      for x in ( select * from t where object_id > 0 for update)
  3      loop
  4         update t set object_name = lower(object_name)
  5              where object_id = x.object_id;
  6              commit;
  7      end loop;
  8  end;
  9  /
begin
*
ERROR at line 1:
ORA-01002: fetch out of sequence
ORA-06512: at line 2 

Tom Kyte
March 08, 2002 - 8:32 am UTC

You've never been able to fetch across commit on a FOR UPDATE cursor. The commit releases the locks -- cursor gone.

RE: Fetch out of sequence error

Mark A. Williams, March 07, 2002 - 9:04 pm UTC

From the Oracle documentation (Oracle8i Error Messages):

ORA-01002 fetch out of sequence

Cause: In a host language program, a FETCH call was issued out of sequence. A successful parse-and-execute call must be issued before a fetch. This can occur if an attempt was made to FETCH from an active set after all records have been fetched.

***
This may be caused by fetching from a SELECT FOR UPDATE cursor after a commit.
***

A PL/SQL cursor loop implicitly does fetches and may also cause this error.

Action: Parse and execute a SQL statement before attempting to fetch the data.

HTH,

Mark

I don't understand

A reader, August 21, 2002 - 3:21 pm UTC

hi tom

I am a bit confused about fetch across commit and ORA-01555.

If I have an anonymous PL/SQL block such as

begin
    for i in (select rownum numrow from a where rownum < 10001)
    loop
        set transaction use rollback segment rbs1;
        update a set a = i.numrow;
        commit;
    end loop;
end;
/

I can reproduce ORA-01555 easily, but my question is: isn't the cursor stored in memory? If it is stored in memory, then before the update starts there is no need to read the rollback segment rbs1, right?

My understanding of ORA-01555 is that the query reads a consistent image from the RBS; if the query's starting SCN is older than the SCN in the rollback segment, then ORA-01555 occurs.

I mean, in the PL/SQL block I pasted, doesn't it work this way?

1. read table a into memory from disk
2. set rbs
3. update the table with new values
4. set rbs again
5. update the table again

and so on until the cursor finishes. If it works this way, the cursor never reads the RBS, so how can ORA-01555 arise?

or it works this way

1. read an array of rows of table a into memory from disk
2. set rbs
3. update the table with new values
4. set rbs again
5. update the table again
6. read next array of rows of table a into memory from disk and repeat 2 and 3 and so on until cursor finishes



Tom Kyte
August 21, 2002 - 3:26 pm UTC

No, a cursor is not stored in memory. Consider this query:


select * from one_billion_row_table_that_has_1000_varchar2(4000)_columns;


(forgetting for a moment that the syntax isn't quite right ;). The results are not precomputed in general, the query is not pre-answered.

So, your basic assumption (table is read into memory from disk) is flawed. It works more like the last way you describe.

array is random I guess

A reader, August 21, 2002 - 4:01 pm UTC

thanks for the reply

I guess the size of the array read into memory is random, or does it follow some strict data block buffer availability algorithm?

Tom Kyte
August 21, 2002 - 4:05 pm UTC

it is based on the request from the client -- how many rows did they ask for in that call?

5000

A reader, August 21, 2002 - 4:12 pm UTC

I set arraysize in SQL*Plus to 5000; I tried to set it higher but got

SP2-0267: arraysize option 1569325056 out of range (1 through 5000)

what is option 1569325056?

If I were using a GUI front end such as Forms/Discoverer/Reports, how do we specify the arraysize? Setting it as high as possible would yield the best performance, I guess?

Tom Kyte
August 21, 2002 - 4:20 pm UTC

ops$tkyte@ORA817DEV.US.ORACLE.COM> set arraysize 5000
ops$tkyte@ORA817DEV.US.ORACLE.COM> 

ops$tkyte@ORA806.WORLD> set arraysize 5000
ops$tkyte@ORA806.WORLD> 


ops$tkyte@ORA734.WORLD> set arraysize 5000
ops$tkyte@ORA734.WORLD> set arraysize 50000
arraysize option 50000 out of range (1 through 5000)
ops$tkyte@ORA734.WORLD> set arraysize 500000000
arraysize option 500000000 out of range (1 through 5000)
ops$tkyte@ORA734.WORLD> set arraysize 500000000000000000000
arraysize option -279969792 out of range (1 through 5000)
ops$tkyte@ORA734.WORLD> set arraysize 500000000000000
arraysize option 1382236160 out of range (1 through 5000)
ops$tkyte@ORA734.WORLD> 


what I believe you did was type not 5000 but some much bigger number, and we put it into a fixed-size binary integer -- see the effect with 500000000000000000000 and 500000000000000....


so you are just seeing a wrapped number....


As for the comment "setting it as high as possible would yield the best performance" -- that is like saying "if one aspirin makes me feel better, the whole bottle should fix me up real good" (yeah, right into the hospital good).

I find 100 to be a "sweet spot"; much more and you get diminishing returns. 5000 is way too large in my experience.

As for setting arraysizes in various products -- it'll be product specific.   

hmm, why is a big arraysize bad?

A reader, August 21, 2002 - 4:30 pm UTC

If I were doing a full table scan of 100,000 rows, wouldn't a bigger arraysize give me better performance than 100?

I don't see how making the array bigger can make performance worse... :0

Tom Kyte
August 21, 2002 - 8:05 pm UTC

Benchmark it for me would you. Present the results. See what happens.

Make sure the client DOES something with the rows, ok? That's important (cause clients do do things with the data). You want to get a steady flow of "request to server", "processing by client", "request to server", "processing by client" and so on.

There are ram considerations as well. Tell me, which would take less resources to cache on a client -- 100 rows or 100,000 rows? You want to deal with that java result set? I don't. (or try throwing that at VB sometime)

So I run:

set echo on
spool test
set termout off
set timing on
set arraysize 10
select * from all_objects;
set arraysize 100
select * from all_objects;
set arraysize 1000
select * from all_objects;
set arraysize 5000
select * from all_objects;
spool off
exit

$ sqlplus scott/tiger@ora8i @test

$ grep Elapsed test.lst
Elapsed: 00:00:31.70
Elapsed: 00:00:27.90
Elapsed: 00:00:25.66
Elapsed: 00:00:26.14

No significant difference between 100 and 5000 (except in the amount of ram it used on the client and server that is)




A reader, August 21, 2002 - 4:43 pm UTC

I forgot to ask.... arraysize sets the number of rows SQL*Plus fetches from the database... but that hasn't got anything to do with reading from disk to memory on the server side... no?

disk ========> data block buffer

database ====> client

:-?

Tom Kyte
August 21, 2002 - 8:09 pm UTC

Yes it does ;)

Watch the consistent gets:

scott@ORA8I.WORLD> drop table t;

Table dropped.

scott@ORA8I.WORLD> create table t as select * from all_objects;

Table created.

scott@ORA8I.WORLD>
scott@ORA8I.WORLD> set autotrace traceonly statistics;
scott@ORA8I.WORLD>
scott@ORA8I.WORLD> set arraysize 10
scott@ORA8I.WORLD> select * from t;

37001 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
4153 consistent gets
503 physical reads
0 redo size
4551787 bytes sent via SQL*Net to client
411125 bytes received via SQL*Net from client
3702 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
37001 rows processed

scott@ORA8I.WORLD> set arraysize 100
scott@ORA8I.WORLD> select * from t;

37001 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
875 consistent gets
448 physical reads
0 redo size
4175497 bytes sent via SQL*Net to client
41495 bytes received via SQL*Net from client
372 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
37001 rows processed

scott@ORA8I.WORLD> set arraysize 1000
scott@ORA8I.WORLD> select * from t;

37001 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
544 consistent gets
465 physical reads
0 redo size
4137868 bytes sent via SQL*Net to client
4532 bytes received via SQL*Net from client
39 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
37001 rows processed

scott@ORA8I.WORLD> set arraysize 5000
scott@ORA8I.WORLD> select * from t;

37001 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
515 consistent gets
481 physical reads
0 redo size
4134478 bytes sent via SQL*Net to client
1202 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
37001 rows processed

scott@ORA8I.WORLD>

Might not necessarily affect the physical IO in a lightly loaded system -- but it definitely affects the LIOs -- and they are just as important.

Where a large array size might not be performant

Jim Kennedy, August 21, 2002 - 4:51 pm UTC

You can take an empirical approach: test a variety of situations and find which is best for your environment. Thus we move from theory to actuality (theory could be wrong).

Reasons why a 5,000-element array might not be as performant as a 100-element array: the memory allocation algorithm might be more efficient at 100 elements vs. 5,000 elements. For example, to send that information over the wire, the server probably has to allocate a large enough buffer to ready that data for the network interface card; it might be very efficient at 100 elements and not very efficient at 5,000 elements (at larger sizes, memory allocation routines have a sweet spot). I remember seeing a performance test for a Cray computer that tested the handling of arrays. As the array got larger and larger, the test took on a monotonically increasing sawtooth response time (got longer and longer, then all of a sudden got better, then repeated the pattern). This was because the hardware of the machine had a certain memory stride, and as you got farther and farther away from the optimal memory stride, the calculation -- adding the elements of the array in order -- got less and less efficient.

Also, let's assume that each network packet holds 2 rows. For a 100-element array that would be about 50 packets; for a 5,000-element array, about 2,500 packets. It is possible that you might start getting packet collisions and thus actually generate more traffic and slow things down.

All of this is pure theory; the best way is to test it and find where it works best for your application (e.g. for a big wide table 100 might be too large, but for a narrow skinny table or result set 500 might be best).

Tom Kyte
August 21, 2002 - 8:10 pm UTC

*Thank you*

well said.

then what does this have to do with reading disk to memory?

A reader, August 21, 2002 - 5:03 pm UTC

Jim

then what does arraysize have to do with reading from disk into the SGA on the server, if it is used to set a limit on fetching from the server to the client?

Tom Kyte
August 21, 2002 - 8:12 pm UTC

It is all about buffer gets. Forget physical IO -- that's noise here. Logical IOs are expensive (go to www.hotsos.com and read "Are you still using cache hit ratios?", excellent paper, love it)



arraysize is set in the call from the client

Jim Kennedy, August 21, 2002 - 6:13 pm UTC

I know you can set it from JDBC (prefetch, I think), OCI, probably Pro*C, and even some ODBC drivers. So I assume it is telling the server how you are going to retrieve data -- X number of rows at a time -- and that the server is doing something different from single-row fetches. SQL*Plus is a client program, not part of the server -- sure, it can run on the same machine, but it is not a part of the Oracle database. It is a client tool.

Fetch out of sequence

Paul, September 19, 2002 - 5:15 am UTC

Tom,

I understand why this happens, but how would you suggest I get around my problem? I have to process a number of files loaded into the database using SQL*Loader and want to commit after processing each file successfully, so my code (simplified -- I have exceptions etc., of course) looks something like this:

BEGIN
    FOR rec1 IN (SELECT file_id, ROWID
                   FROM file_data
                  WHERE Processed = 'N'
                    FOR UPDATE OF Processed) LOOP

        FOR rec2 IN (SELECT *
                       FROM file_records
                      WHERE file_id = rec1.file_id) LOOP
            <Process file records here>
        END LOOP;

        UPDATE file_data
           SET Processed = 'Y'
         WHERE ROWID = rec1.ROWID;
        -- Ideally would like to commit here, but would get "ORA-01002: fetch out of sequence"
    END LOOP;
END;


Tom Kyte
September 19, 2002 - 8:01 am UTC


loop
    begin
        select file_id, rowid into l_file_id, l_rowid
          from file_data
         where processed = 'N'
           and rownum = 1
           for update of processed;
    exception
        when no_data_found then exit;
    end;

    for rec2 in ( select ..... )
    loop
        process....
    end loop;

    update file_data set processed = 'Y' where rowid = l_rowid;
    commit;
end loop;



Thanks a million

Paul, September 19, 2002 - 10:03 am UTC

So simple!
So beautiful!

Strange behavior in Oracle 8.0.4

Martin, September 23, 2002 - 11:39 am UTC

>Followup:
>You've never been able to fetch across commit on a FOR >UPDATE cursor. The
>commit releases the locks -- cursor gone.

Tom,

We recently upgraded from 8.0.4. We had several packages with a fetch across a commit, but these always seemed to have worked fine. I totally agree that fetching across a commit is bad practice, but I don't understand why the packages have worked until now. Do you have any clue? A bug maybe?



Tom Kyte
September 23, 2002 - 1:12 pm UTC

It was a bug (and they didn't "work" before; the "for update" part got lost as soon as you committed).

Difference between commit and no commit in terms of RBS usage

Sam To, October 16, 2002 - 6:00 pm UTC

What will the difference be in the following statement if we remove the commit (assume id is not unique within T) in terms of the query having to read from the RBS? When is the system change number updated? Won't the query still have to go to the RBS in order to achieve read consistency? In that case, will we still potentially run into ORA-01555 even if we remove the commit?


for x in ( select rowid rid, t.id from T ) loop
    update T set x = x+1 where id = x.id;
    commit;
end loop;

If id is unique in T, then will it make any difference in the above statement?



Tom Kyte
October 16, 2002 - 6:50 pm UTC

the differences will be (without the commit)

o it will run faster
o it will protect the logical integrity of the data (eg: it will NOT tend to
corrupt your data as the one with the commit would since it is NOT
restartable!)
o it will run faster
o it will run without a HIGH probability of hitting ORA-1555

Leave the commit in and it will

o run slooooowwwwwer
o logically corrupt your data when you are forced to re-run it when you get the
ora-1555
o run lots slower
o it will run with an EXTREMELY HUGELY HIGH probability of a self induced
ora-1555

unique, non-unique -- matters not.


the commit does NOT belong there.

Search this site for ora-01555

or, if you have my book "expert one on one" i have a ton of examples and detailed explanation in there.


Implied fetch across commit

Damon, April 17, 2003 - 11:34 am UTC

I have come across a case where there seems to be an implied commit (i.e. I haven't coded it, but it happens anyway)...

for r in (select * from all_snapshots) loop
    dbms_snapshot.refresh(r.name);
end loop;

I expect you'll have a better way of doing this anyway, but until I read about the fetch across commit issue here, I didn't know why I kept getting "snapshot too old" errors -- I'm guessing that the dbms_snapshot call does a commit? I spent a long time trying to prevent the error by increasing the RBS and not getting anywhere, but re-writing the loop to build the list of snapshots to refresh and then calling the function with the list seems to have worked.

Thanks

Damon

Tom Kyte
April 17, 2003 - 3:15 pm UTC

refresh commits, yes.

you can:

declare
    l_names dbms_utility.uncl_array;
begin
    select name bulk collect into l_names from all_snapshots;
    dbms_snapshot.refresh
    ( tab => l_names,
      ...
      atomic_refresh => TRUE  -- TRUE to do all and THEN commit; FALSE to commit each one as it does it
    );
end;
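A minimal runnable version of that sketch, leaving every other refresh parameter at its default (this assumes the current user is allowed to refresh all the snapshots returned by all_snapshots):

declare
    l_names dbms_utility.uncl_array;
begin
    select name bulk collect into l_names from all_snapshots;

    -- atomic_refresh => TRUE refreshes the whole list as one transaction
    -- and commits once at the end; FALSE commits after each snapshot.
    dbms_snapshot.refresh( tab => l_names, atomic_refresh => TRUE );
end;
/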




Reader

A reader, April 17, 2003 - 11:08 pm UTC

"
declare
l_names dbms_utility.uncl_array;
"

Where can I find information about dbms_utility.uncl_array?

Tom Kyte
April 18, 2003 - 11:08 am UTC

vi dbmsutil.sql in $ORACLE_HOME/rdbms/admin

it is the documented datatype you use to pass to this function. it is just a table of varchar2's
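For reference, the declaration in dbmsutil.sql looks like the following (the exact VARCHAR2 length may differ between versions):

type uncl_array is table of varchar2(227) index by binary_integer;
-- just a table of varchar2's, sized to hold "owner"."name"@dblink style strings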

Fetch Across Commit

ramks, May 12, 2003 - 5:24 am UTC

Hi Tom,

I have a PL/SQL block like the following:

begin
    for i in (select * from t)
    loop
        -- call to
        procedure A
        procedure B
        if return_status = 'E' then
            rollback;
        else
            commit;
        end if;
    end loop;
end;

The logic is: if either of the above procedures generates an error, the whole transaction should be rolled back; otherwise, commit. Here I am getting the ORA-01002 error. How do I avoid that? If the above method is bad practice, what is the best practice? I don't need to use the FOR UPDATE clause here.

Can you help me in this regard? Thanks in advance.



Tom Kyte
May 12, 2003 - 7:49 am UTC

it means you don't have a block like the above, you actually have a for update in there.

the best practice:

begin
    for x in ( select * from t )
    loop
        proca;
        procb;
    end loop;
end;


proca and procb would RAISE AN EXCEPTION upon failure (no status codes, that is so 70's). Since you do not catch the exception (you cannot possibly recover from it), it'll propagate to the client and the work performed by that statement will automagically roll back.

The client will then decide "should I rollback the entire transaction at this point or should I commit"
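A minimal sketch of what proca might look like under that advice -- a hypothetical procedure that raises instead of returning a status code (the table and the "failure" condition are made up for illustration):

create or replace procedure proca
as
begin
    update t set object_name = lower(object_name) where object_id = 1;

    -- no status codes: if this call cannot do its work, raise -- the
    -- statement-level rollback undoes this call's changes and the
    -- client decides whether to commit or roll back the transaction
    if sql%rowcount = 0 then
        raise_application_error( -20001, 'proca: nothing to process' );
    end if;
end;
/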






Fetch Across Commit - contd

ramks, May 12, 2003 - 6:08 am UTC

One more thing, Tom, about my question above:

if procedure B generates an error, the work done by procedure A should be rolled back, an error record must be created in a separate table, and that error record should be committed. Then the cursor should select the next record.




Tom Kyte
May 12, 2003 - 8:03 am UTC

your example is not complete, something is missing -- whole story is not there. for update maybe?

Fetch Across Commit

ramks, May 13, 2003 - 11:49 am UTC

Hi Tom,
Sorry for not explaining properly; here is the code:

main procedure:
---------------
procedure main is
    x_err_status varchar2(10);
    my_exception exception;
begin
    for i in (select * from t)
    loop
        begin
            -- call the procs
            A(id, x_err_status);
            if x_err_status = 'ERROR' then
                RAISE my_exception;
            end if;
            B(id, x_err_status);
            if x_err_status = 'ERROR' then
                RAISE my_exception;
            end if;
            commit; -- if the above two procs went on well
        exception
            when my_exception then
                rollback;
        end;
    end loop;
end;

procedure A (id in number, p_err_status out varchar2) is
begin
    <some sql stmts>
    p_err_status := 'SUCCESS';
exception
    when no_data_found then
        p_err_status := 'ERROR';
        <call procedure Error_proc to log error record>
end;

procedure B (id in number, p_err_status out varchar2) is
begin
    <some sql stmts>
    p_err_status := 'SUCCESS';
exception
    when no_data_found then
        p_err_status := 'ERROR';
        <call procedure Error_proc to log error record>
end;

-- procedure to log an error record

procedure Error_proc is
begin
    rollback; -- all the previous txns done by procedures A & B
    insert into <error table with values>
    commit;
end;

I hope this is enough for you to understand my problem. Please help me in this regard.



Tom Kyte
May 13, 2003 - 4:46 pm UTC

Use savepoints. It is the rollback that causes this -- and only if it was the first call that rolled back, and only if (if I'm guessing right) you were already in a transaction.

so, what is happening is this:

o before you call main, you did some other insert/update/delete
o you call main, it opens the cursor (which MAY -- or may NOT be dependent on 
  the data you inserted/updated/deleted in the above bullet point)
o if you rollback -- you rollback that first bullet point, that INVALIDATES
  the cursor in the second bullet point
o when you next fetch -- error

If you


o before you call main, you did some other insert/update/delete
o you call main, it opens the cursor (which MAY -- or may NOT be dependent on 
  the data you inserted/updated/deleted in the above bullet point)
o you commit -- that makes the changes from the first bullet point PERMANENT
o when you next fetch -- success
o any rollbacks won't FAIL this cursor

ops$tkyte@ORA920LAP> create table t ( x int );
Table created.

ops$tkyte@ORA920LAP> begin
  2      insert into t values ( 1 );
  3      for x in ( select * from emp where rownum <= 2 )
  4      loop
  5          dbms_output.put_line( x.ename );
  6          rollback;
  7      end loop;
  8  end;
  9  /
SMITH
begin
*
ERROR at line 1:
ORA-01002: fetch out of sequence
ORA-06512: at line 3

that simulates your error

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> begin
  2      for x in ( select * from emp where rownum <= 2 )
  3      loop
  4          dbms_output.put_line( x.ename );
  5          rollback;
  6      end loop;
  7  end;
  8  /
SMITH
ALLEN

PL/SQL procedure successfully completed.

that shows the insert is needed to cause it

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> begin
  2      insert into t values ( 1 );
  3      for x in ( select emp.*, rownum r from emp where rownum <= 3 )
  4      loop
  5          dbms_output.put_line( x.ename );
  6          if ( x.r = 1 )
  7          then
  8              commit;
  9          else
 10              rollback;
 11          end if;
 12      end loop;
 13  end;
 14  /
SMITH
ALLEN
WARD

PL/SQL procedure successfully completed.

that shows if you commit before you rollback, it won't happen


My advice - committing in a cursor for loop like that is a heinous operation. Consider the transactional "conundrum" you are obviously in, too -- that first transaction INCLUDES SOME OTHER OUTSTANDING WORK!!! You are rolling back -- what? Do you even know? There is some larger transaction going on here!


Perhaps what you need is a third procedure C that is an autonomous transaction. It will either commit or rollback -- it will be SEPARATE and DISTINCT from your already established transaction.


 

Fetch across commit

ramks, May 14, 2003 - 1:11 pm UTC

Hi Tom

Thank you very much for your suggestion. I have used savepoints in my procedures; after that I am not receiving the error. Anyway, thanks for your timely help.




But how do I work around ORA-01555?

Dan Loomis, March 09, 2004 - 3:22 pm UTC

We have a long running cursor loop that updates specific process data (flag, message, date) once the row is moved from stage to base. For *long* running processing this results in ORA-01555, so other than removing the cursor loop completely and/or chunking up the table into smaller pieces (resulting in smaller, quicker loops), what's the best approach? Here's an example:

declare

    --'U' are all unprocessed stage records
    cursor l_cursor is
        select * from big_stage_table where process_flag = 'U';

    l_record l_cursor%ROWTYPE;
    l_count  number;

begin

    --dbms_output.put_line( 'opening cursor' );
    open l_cursor;

    --dbms_output.put_line( 'fetching first row' );
    fetch l_cursor into l_record;

    l_count := 0;

    while l_cursor%found loop

        l_count := l_count + 1;

        insert into big_base_table (
            owner, object_name, subobject_name, object_id,
            data_object_id, object_type, created, last_ddl_time,
            timestamp, status, temporary, generated, secondary
        )
        values (
            l_record.owner, l_record.object_name, l_record.subobject_name,
            l_record.object_id, l_record.data_object_id, l_record.object_type,
            l_record.created, l_record.last_ddl_time, l_record.timestamp,
            l_record.status, l_record.temporary, l_record.generated,
            l_record.secondary
        );

        --Set the processing details, which results in ORA-01555 eventually
        --'P': Processed
        update big_stage_table
           set process_flag = 'P',
               process_msg  = 'Row processed.',
               process_date = sysdate
         where object_id = l_record.object_id;

        if (l_count = 10000) then
            commit;
            l_count := 0;
        end if;

        --dbms_output.put_line( 'fetching next row' );
        fetch l_cursor into l_record;

    end loop;

    commit;

    --dbms_output.put_line( 'closing cursor' );
    close l_cursor;

    commit;

exception
    when others then
        --dbms_output.put_line( 'error! -> ' || sqlerrm );
        if ( l_cursor%isopen ) then
            close l_cursor;
        end if;
end;
/


My inclination is to decouple the updating of the processing data from the main loop, but that would mean a separate table whose data I would merge into big_stage_table after the loop completes. Is that the right track, or is there another, better approach?

Thanks in advance.

Tom Kyte
March 09, 2004 - 3:46 pm UTC

why not

update big_stage_table set process_flag = 'in process';
insert /*+ append */ into big_base_table select * from big_stage_table where process_flag = 'in process';
update big_stage_table set process_flag = 'done'..... where process_flag = 'in process';
commit;


it'll most likely run so much faster than this row by row thing that the 1555 will just take care of itself

(else you need larger rbs's, that is what 1555 means at the end of the day)

Re: But how do I work around ORA-01555?

Dan Loomis, March 09, 2004 - 5:35 pm UTC

I completely agree that your approach is best. But, for argument's sake, let's say one bulk insert statement is not feasible. For each row in big_stage_table there are many, many complex validations/transformations that happen before the data is ultimately inserted/updated into base, which cannot be boiled down easily into a few statements.

So, assuming we *have to* process row by row (and can't increase undo retention any more), how can I best avoid ORA-01555 from a code perspective, using the example above?

Tom Kyte
March 09, 2004 - 10:44 pm UTC

I gave you two workarounds

a) go faster
b) make rbs bigger <<<----

and I'll add #3

c) don't commit. (but that'll probably imply b)


short of copying the data out of big stage into a table all for your own use....




Close and Re-open cursor

Gary, March 10, 2004 - 5:26 pm UTC

In the example given, another option is to still commit every 10,000 but close the cursor before the commit and re-open it after.

This should work in this case because
a) It isn't using a FOR cursor loop so it can be explicitly closed and re-opened
b) The update on the table (setting the process flag to 'P') and the predicate on the cursor (process_flag = 'U') mean you won't re-process the already processed rows

Closing and re-opening the cursor will have a performance penalty (small if process_flag is indexed).

Tom's update/insert/update will still give better performance though.
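A minimal sketch of Gary's close/re-open approach, against the same hypothetical big_stage_table -- the commit happens only while the cursor is closed, so there is no fetch across a commit:

declare
    cursor c is
        select rowid rid
          from big_stage_table
         where process_flag = 'U';
    l_rid   rowid;
    l_count pls_integer := 0;
begin
    open c;
    loop
        fetch c into l_rid;
        exit when c%notfound;

        -- <process the row, then flag it>
        update big_stage_table set process_flag = 'P' where rowid = l_rid;

        l_count := l_count + 1;
        if l_count >= 10000 then
            close c;   -- close BEFORE committing
            commit;
            open c;    -- re-open; the 'U' predicate skips the rows already done
            l_count := 0;
        end if;
    end loop;
    close c;
    commit;
end;
/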

Another solution

Sam, March 24, 2004 - 4:15 pm UTC

Another solution that we have used is to use Global Temporary Tables. We re-wrote some ETL processes for an ODS that did row by row processing and were very slow. The processing window was reduced by about 75%. By this I mean one hour processing time before now takes 15 minutes.

• Create a Global Temporary Table (preserve rows on commit) that includes all columns in your big_base_table. If necessary include additional columns to control the process or for interim calculations.
• Insert a set of the available data from your big_stage_table, 200,000 or more rows.
• Use SQL statements to select the additional data needed to complete the record.
• Commit to free up your rollback.
• Update rows in big_base_table where the PK exists
• Insert rows into big_base_table where the PK does not exist
• Or use Merge to do both at once.
• Update the processed status of these rows in big_stage_table
• Commit to end the transaction.
• Truncate the GTT
• Begin with next set of records

You can use a bulk collect query into a table object to dynamically calculate the ranges to use in your initial insert into the global temporary table. Use a loop to work through the object storing the begin and end values, and include them as variables in the select statement. This assumes that you have an easily identifiable primary key for the update back to big_stage_table, or you can include the rowid from big_stage_table.

Use SQL%ROWCOUNT to keep track of how many rows have been inserted/updated at each step, and you can add error checking to be sure that the number of rows processed equals the number updated as processed in big_stage_table.

You can also create indexes on the global temporary table. You need to understand your data and how the update queries might benefit -- it could help or hurt processing speed. It usually helps with the GTT fields that will become the PK in big_base_table.
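A minimal sketch of the flow Sam describes, reusing the hypothetical big_stage_table/big_base_table names from the example above (column lists abbreviated to object_id/object_name for illustration):

create global temporary table stage_gtt
on commit preserve rows
as select * from big_stage_table where 1=0;

begin
    -- 1) pull one batch of unprocessed rows into the GTT
    insert into stage_gtt
        select * from big_stage_table
         where process_flag = 'U' and rownum <= 200000;

    -- 2) <set-based enrichment / interim calculations against stage_gtt go here>

    -- 3) merge handles both update-if-exists and insert-if-not in one statement
    merge into big_base_table b
    using stage_gtt g
       on (b.object_id = g.object_id)
    when matched then update set b.object_name = g.object_name
    when not matched then insert (object_id, object_name)
                          values (g.object_id, g.object_name);

    -- 4) mark the batch processed and end the transaction
    update big_stage_table s
       set process_flag = 'P'
     where exists (select null from stage_gtt g where g.object_id = s.object_id);

    commit;

    -- 5) empty the GTT for the next batch (DDL, so it also commits)
    execute immediate 'truncate table stage_gtt';
end;
/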


A reader, May 16, 2004 - 11:15 pm UTC


plsql optimization question

Menon, July 12, 2004 - 11:32 am UTC

Hi tom
Does PL/SQL, as part of its optimization, ever move the commit outside a loop? I was reading somewhere that Conor has proven this. Conor, if you read this, please let us know what the story is.

Thanx!
(In my simple experiments, PL/SQL was not doing this optimization.)


Tom Kyte
July 12, 2004 - 11:58 am UTC

see
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1415454871121#20061201507022 <code>

it is not that the commit is moved outside of the loop, not at all. It is that the post to lgwr to flush is done asynchronously from inside of plsql -- only the "last one" before plsql returns to the client is done synchronously. It can dramatically reduce your log file sync waits.


very interesting!

Menon, July 12, 2004 - 1:17 pm UTC

Thanx tom - that link has a nice explanation of
the PL/SQL optimization...



Very useful

Sami, September 02, 2004 - 1:40 pm UTC


What about inserts?

Robert, January 20, 2005 - 12:56 pm UTC

Tom,

Please consider...

DECLARE
    v_count number := 0;
    v_limit number := 100;
BEGIN
    for x in (select c from t) loop
        insert into t2(c) values (x.c);
        v_count := v_count + 1;
        if mod(v_count, v_limit) = 0 then
            commit; <----------------------
        end if;
    end loop;
END;

1) A fetch/INSERT across commit 'cannot' get an ORA-1555, correct?
2) Is this fetching across a commit for an INSERT considered bad form (assuming we DON'T have the RBS problem)?

Thanks,

Robert

Tom Kyte
January 20, 2005 - 7:24 pm UTC

that can ABSOLUTELY get a 1555, easily.

the 1555 would be raised for the select.


that code is downright dangerous -- anyone care to comment why code like this is dangerous to your data? not just slow, not just bad practice, but dangerous.....

forced to use fetch across commit

A reader, January 20, 2005 - 6:24 pm UTC

Hi

I am reading the 8.1.7 PL/SQL developer documentation and it states this:

If you want to fetch across commits, do not use the FOR UPDATE and CURRENT OF clauses. Instead, use the ROWID pseudocolumn to mimic the CURRENT OF clause. Simply select the rowid of each row into a UROWID variable. Then, use the rowid to identify the current row during subsequent updates and deletes. An example follows:

DECLARE
    CURSOR c1 IS SELECT ename, job, rowid FROM emp;
    my_ename emp.ename%TYPE;
    my_job   emp.job%TYPE;
    my_rowid UROWID;
BEGIN
    OPEN c1;
    LOOP
        FETCH c1 INTO my_ename, my_job, my_rowid;
        EXIT WHEN c1%NOTFOUND;
        UPDATE emp SET sal = sal * 1.05 WHERE rowid = my_rowid;
        -- this mimics WHERE CURRENT OF c1
        COMMIT;
    END LOOP;
    CLOSE c1;
END;


Be careful. In the last example, the fetched rows are not locked because no FOR UPDATE clause is used. So, other users might unintentionally overwrite your changes. Also, the cursor must have a read-consistent view of the data, so rollback segments used in the update are not released until the cursor is closed. This can slow down processing when many rows are updated.


Does this mean that if we use rowid we can avoid ORA-01555?




Tom Kyte
January 20, 2005 - 7:44 pm UTC

is someone pointing a gun at your head? I cannot imagine someone physically forcing you to commit.....

it is not so much "do not use..." as much as "you are PROHIBITED from using" ....


no, using rowid won't affect 1555, in fact, this example shows how to cause 1555's to happen more frequently...

Can't see why my example is dangerous (or causes 1555)

A reader, January 21, 2005 - 10:29 am UTC

Tom,

I don't think my example above (fetching across an INSERT commit) will cause a 1555.
The fetch is on table T
The insert is on table T2

My insert transactions (i.e. rollback) are not going to conflict at all with any rollback associated with the fetch on table T.... Therefore they will not cause a 1555 error.

By the way.... these tables are 'isolated' tables, used by a single batch job... so it is even safer.

Am I correct?

Thank you, sir,

Robert.

Tom Kyte
January 21, 2005 - 6:35 pm UTC

see
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:895410916429 <code>


as for the "danger", it is in the logic of such a routine -- forget 1555 -- do you see why that routine as it is is fundementally flawed (ask yourself, so what happen when -- not if, but when -- it crashes in the middle some day)

Why will it cause an ORA-1555?

Robert, January 21, 2005 - 3:48 pm UTC

Tom,

Can anyone still tell me why my FETCH/INSERT/COMMIT loop will cause an ORA-1555 error?

...I am still thinking it will not.

Thanks,

Robert.

Tom Kyte
January 21, 2005 - 8:02 pm UTC

did you click on that link? for example.

My case is an exception :)

Robert, January 22, 2005 - 7:05 pm UTC

Tom,

(Sorry.. I didn't see your original reply; our responses must have crossed in the mail)

Having now read the content of the link, above...

1) Danger of 1555 (due to delayed block cleanout)

[RESPONSE]: The table we are selecting from is not very volatile so the chances are very low of getting a 1555 from delayed block cleanout.

2) What happens if the process fails in the middle?

[RESPONSE]: This job is loading a tmp/work type table (phase 1) which will be used to drive phase 2 of the job... If this phase 1 load phase fails in the middle, we just start the entire job over in phase 1 (where we truncate the table before beginning the load.)

I know you prefer single sql with appropriately sized rollback, but that is currently not an option for us.

Are my responses legitimate?

Thank you,

Robert.

Tom Kyte
January 23, 2005 - 9:57 am UTC

1) did you read my link? the table in question in my example was READ ONLY -- the nirvana of "not very volatile"

2) no, everything is an OPTION. You have made the conscious decision to do it slowly. If lower performance is your desire, you got it.

Thanks for your response!

Robert, January 24, 2005 - 10:15 am UTC


I Get The Error "Fetch out of Sequence "

Roham Borzabadi Farahani, January 25, 2005 - 12:07 am UTC

Dear TOM ,
I use the DB link facility to connect Oracle to FoxPro (DOS). All the instructions in the Oracle documents were followed and connectivity is successful. In SQL*Plus I can SELECT from the two tables, one in FoxPro and one in Oracle. But when I copy that SELECT command into Delphi 7, I get the above error, "fetch out of sequence". I use the ADO method to connect to the database in Delphi 7.

Thank you very Much

Tom Kyte
January 25, 2005 - 9:09 am UTC

i know nothing about "delphi 7" or ADO, sorry. If I were sitting there, I would immediately add "alter session set sql_trace=true" to my program before the failing select and run the application again to see if the trace file was of any use (e.g. if the cursor was magically written using "for update" and I saw a commit getting thrown in, or something else that causes fetch out of sequence).

Fetch Out Of Sequence

Roham Borzabadi Farahani, January 31, 2005 - 3:41 am UTC

Thank you very much for your reply.
When I set sql_trace, where can I see the Oracle logs? I am using only the Oracle database. Can I see the log from the client?

Roham Borzabadi Farahani, January 31, 2005 - 3:45 am UTC

Where is the trace file saved, so that I can see it?

Tom Kyte
January 31, 2005 - 8:18 am UTC

SQL> show parameter user_dump

shows where trace files generated by dedicated servers go.  ON the server. 

Checking status of a script

Alex, January 31, 2005 - 9:23 am UTC

Tom,

When I was working on a project a few months back involving rather large, intensive data migration scripts we would commit inside of loops to check to see where the progress was. In other words, we would log in under a different session (because the one doing the loading would hang), and do something like
select count(*) from t where col1 = 'value_we_are_inserting'
Knowing what I know now, I probably will never do that again. My question is: do you have a better way of checking the progress of a long-running script? The reason I did this is that I could tell by the rate of rows being inserted whether the script was running properly or not.

Tom Kyte
January 31, 2005 - 9:47 am UTC

one word for you:

dbms_application_info


search for it (entry point: set_session_longops)

or read about it in expert one on one Oracle...

or in the supplied packages guide. It rocks.
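A minimal sketch of that entry point -- a hypothetical 500,000-row load that publishes its own progress, so another session can watch it in v$session_longops instead of counting rows:

declare
    l_rindex binary_integer := dbms_application_info.set_session_longops_nohint;
    l_slno   binary_integer;
    l_total  number := 500000;  -- assumed total amount of work
begin
    for i in 1 .. l_total loop
        -- <load one row here>
        if mod(i, 1000) = 0 then
            dbms_application_info.set_session_longops(
                rindex    => l_rindex,
                slno      => l_slno,
                op_name   => 'data migration',
                sofar     => i,
                totalwork => l_total,
                units     => 'rows' );
        end if;
    end loop;
end;
/

-- from the other session:
-- select opname, sofar, totalwork, time_remaining
--   from v$session_longops
--  where opname = 'data migration';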

Showsql

Alex, January 31, 2005 - 10:32 am UTC

The showsql script in expert one on one looks like it produces what I was looking for. Do you know where I can get this? It seems Wrox does not own the rights to the book anymore; Wiley does. Therefore they cannot provide, at their site, the code for a book they don't own (which is given in your book as the place to get it). Wiley's site didn't have your book either. Thanks.

Tom Kyte
January 31, 2005 - 11:31 am UTC

apress does actually (the book is right on my home page with all of the info...), but if you search for showsql on this site, you'll find it as well.

Fetch out Of Sequence

rohamFarahani, February 05, 2005 - 3:32 am UTC

I saw your response about the "fetch out of sequence" error. I will describe my problem below:
I use Delphi 7.0 and an Oracle database. I want to query data from Oracle and FoxPro tables simultaneously with one SELECT statement. The command works in SQL*Plus, but in Delphi 7.0 I get the error "fetch out of sequence".
Do you think I should use a COMMIT or ROLLBACK statement after the SELECT statement, or not?

I am looking Forward to hear From you.
Thank You Very Much


Tom Kyte
February 05, 2005 - 5:35 am UTC

I've never used Delphi. If I were faced with this, I would sql_trace the application and see what I see.

A reader, February 17, 2005 - 9:15 am UTC

Hi Tom, what does "prefetch index" mean? Is this related to the question above, or what does it mean, really?

Thank you

Tom Kyte
February 17, 2005 - 10:17 am UTC

it means instead of reading just a single leaf block -- we know we will need "more" of them in all likelihood -- we ask the OS to "read this list of non-contiguous blocks, please; pre-fetch data we think we will need" rather than read block, process block, read block, process block..... we "read a bunch of blocks, process them, read more of them..."

A reader, February 17, 2005 - 10:18 am UTC

Thanks Tom, could you give one example of how to do a prefetch on Windows?
Thank you

Tom Kyte
February 17, 2005 - 1:25 pm UTC

it is an internal optimization that just happens. Windows is not special or magic in that regard. You cannot really "see" it normally; it just happens. No example is really needed beyond "select * from t where indexed_column > :x" -- that might kick in prefetching on the index, but it would be transparent to you.

prefetch index (graphical question)

Alberto Dell'Era, February 18, 2005 - 7:40 am UTC

> prefetch index leaf blocks

something like:

    branch block
    +-----------------+
    | AAAA blkid -----+---> leaf block 1
    | BBBB blkid -----+---> leaf block 37
    | CCCC blkid -----+---> leaf block 42
    | DDDD blkid ---....
    +-----------------+

i.e. we are range scanning, the CBO predicts we will scan 1000 index keys, we have on average 100 keys/block, so while in the branch block we will pre-fetch leaf blocks 1, 37, 42 etcetera, up to 10 blocks?

Ditto, perhaps, for the leaf blocks -> table jump ?

Tom Kyte
February 18, 2005 - 9:18 am UTC

that is right on the index.

for the table jump, if you see a plan like this:


table access T2 by index rowid
nested loops
table access T1 full
index range scan T2_IDX

instead of:


nested loops
table access T1 full
index range scan T2_IDX
table access T2 by index rowid


you would be seeing something "similar"

the first plan would get the T1 -> T2 join with the rowids for T2, sort the rowids for T2 by file/block and then access the table after getting them all in order (to avoid the hip hop back and forth).

The second plan would hit the table as it hit the rowids, over and over -- so it might go to file/block 5/22, 10/11, 5/100, 8/22, 5/22 (note that 5/22 is visited twice).

the first would have gotten the rowids, ordered them, and hit file/block 5/22 once.

A reader, February 18, 2005 - 9:35 am UTC

Thanks Tom

Alberto Dell'Era, February 18, 2005 - 12:00 pm UTC

>the first plan would get the T1 -> T2 join with the rowids for T2, sort the
>rowids for T2 by file/block and then access the table after getting them all in
>order (to avoid the hip hop back and forth).

Perhaps getting the adjacent table blocks (2/11, 2/12, 2/13...) in a multi-block read...

MANY thanks for your description above, that's VERY interesting!

ora-01555

Mohamed, January 15, 2006 - 5:46 am UTC

Hi Tom,
Thanks a lot for the valuable information you are providing on this web site. I bought your book Effective Oracle by Design and I am enjoying it. One of my friends told me that even if you commit during your process, if an ORA-01555 occurs later then your commits are rolled back??
Is it true?
Kind regards

Tom Kyte
January 15, 2006 - 3:52 pm UTC

that would be what is known as "a false statement"

commit does what commit says it does.



Use of PLSQL table

A reader, January 22, 2006 - 6:58 am UTC

Dear Tom,
One of my favorite ways to avoid fetching across a commit, and hence ORA-01555, is to use a PL/SQL table (table of records) instead of keeping the cursor open. Immediately after the BEGIN, I loop on the cursor, fetch its contents into the PL/SQL table, and close the cursor. Then I loop on the PL/SQL table and do my processing. My questions are:

1) I succeeded in processing a cursor of more than one million records and want to know: is there a limit on the number of records in a PL/SQL table?

2) What is your opinion about this method?
Thanks a lot

Tom Kyte
January 22, 2006 - 9:38 am UTC

ugh. And how did you make your processing that does the commit in the loop restartable? Did you do that?

The surface of the problem is ORA-1555. If you hit 1555 because of a fetch across commit, it means you must make your process restartable as well -- which almost no one does.

So, is your process safely restartable?

A reader, January 23, 2006 - 4:47 am UTC

Dear Tom,

Since I read your book and understand that I have to commit only when I really need to commit, then even when using the PL/SQL table I did the commit outside the loop on that PL/SQL table.

Of course I made my process restartable: for each row treated I update a flag to 'Y' in order to indicate that this record has been correctly processed. If not, flag_processed stays at 'N' and is enriched with the trapped error code. When I restart the process, I consider only records with flag_processed = 'N'.

I am updating a flag on the table on which the main cursor is based, but I am not getting ORA-01555. Is this due to the fact that I closed the cursor and worked on the PL/SQL table instead?

Tom Kyte
January 23, 2006 - 10:16 am UTC

no, you only get 1555 when the undo we would need for a query no longer exists. You employed array fetching which is good but can be overdone (100 rows at a time is typically more than sufficient and uses somewhat "less" memory on the server as well).
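A minimal sketch of array fetching in that spirit -- BULK COLLECT with LIMIT 100 instead of loading the whole million-row result into a PL/SQL table at once (hypothetical table t with a flag_processed column, one commit at the end):

declare
    cursor c is select rowid rid from t where flag_processed = 'N';
    type rid_tab is table of rowid index by pls_integer;
    l_rids rid_tab;
begin
    open c;
    loop
        fetch c bulk collect into l_rids limit 100;   -- 100 rows per fetch
        for i in 1 .. l_rids.count loop
            -- <process the row here>
            update t set flag_processed = 'Y' where rowid = l_rids(i);
        end loop;
        exit when c%notfound;
    end loop;
    close c;
    commit;   -- commit once, after the cursor is closed
end;
/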

commit across fetch

Mohamed, January 31, 2006 - 4:15 am UTC

Dear Tom,
As a guideline from your book, I do not commit inside a cursor loop; however, I am faced with the following situation:

I have to do an initial load of 800,000 records. For each record I will call a package in which there are several inserts and updates on several tables.

Here below is how I am proceeding:

FOR r1 in C1 LOOP

    call package Main_PCK, in which there are several inserts
    (there is no raise in this package)

END LOOP;

commit; -- only one commit

END;

Unfortunately, by committing once I am also committing the inserts and updates done in Main_PCK.

How would you proceed to commit only if all the inserts and updates done in Main_PCK went OK?

How would you proceed to restrict the number of commits in this case?

Thanks a lot for your help


Tom Kyte
January 31, 2006 - 3:13 pm UTC

tell us how you can tell if main_pck "went OK"

and then we can tell you how to rollback if it did not go OK and commit otherwise.




commit across fetch

A reader, February 01, 2006 - 5:02 am UTC

Dear Tom

The Main_PCK returns a code (there is no raise in this package). If this return code is not null, it means that Main_PCK went wrong; otherwise I have to consider that Main_PCK went OK.

Thanks in advance for your help

Tom Kyte
February 01, 2006 - 8:35 am UTC

FOR r1 in C1 LOOP

    call package Main_PCK, in which there are several inserts
    (there is no raise in this package)

    if return_code <> 0 then
        raise_application_error( -20001, 'Error, I am bailing out!!!' );
    end if;

END LOOP;

-- this never happens if we raised the application error:
commit; -- only one commit

END;

fetch across commit

Grinalds, February 21, 2006 - 8:28 am UTC

Hi!

I like your suggestion:

loop
    begin
        select file_id, rowid into l_file_id, l_rowid
          from file_data
         where processed = 'N'
           and rownum = 1
           for update of processed;
    exception
        when no_data_found then exit;
    end;

    for rec2 in ( select ..... )
    loop
        process....
    end loop;

    update file_data set processed = 'Y' where rowid = l_rowid;
    commit;
end loop;

but is it a universal solution if, say, table file_data is very large -- millions of rows?
I want to create a restartable process, and only one commit or rollback at the end is not best for us because we have very long-running intermediate computations.

Tom Kyte
February 22, 2006 - 7:52 am UTC

is it universal? Of course not; nothing is. But I don't understand the question you are asking.

So, suppose file_data is very large -- what bearing does the size have on this one?

It basically is showing the structure of a program that does NOT fetch across commits, that is all.

A reader, March 13, 2006 - 5:44 am UTC

Dear Tom

As I read in your book, commit is to be done only when needed and outside the loop. In my case I can't do it, because I am treating 700,000 records within a loop.
For each record I have:

BEGIN

    FOR r1 in C1 LOOP

        BEGIN
            -- first record
            insert into table T1;  ==> insert is ok
            insert into table T2;  ==> insert is ok
            insert into table T3;  ==> insert is ok

            -- second record
            insert into table T1;  ==> insert is ok
            insert into table T2;  ==> insert is ok
            insert into table T3;  ==> insert is NOT ok, so I have to roll back
                                       (in the exception part) all the inserts
                                       into T1 and T2 for the second record
        EXCEPTION
            WHEN OTHERS THEN
                ROLLBACK;  ====> Here is my problem: not only am I rolling back
                                 inside the loop, I am also rolling back the
                                 correctly processed first record
        END;

    END LOOP;

    COMMIT;  ====> This commit will do something only if all records are correct
                   (for 700,000 records the probability of all records being
                   correctly processed is not very high)
END;

I think the only solution for me is to do it like this:

FOR r1 in C1 LOOP

    BEGIN
        -- first record
        insert into table T1;
        insert into table T2;
        insert into table T3;

        -- second record
        insert into table T1;
        insert into table T2;
        insert into table T3;

        COMMIT;  ==> commit for each record (aie aie aie)
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK;  ==> roll back any wrong record
    END;

END LOOP;

END;

Could you please let me know if you have another solution. I hate the solution I propose, but it is the only way I see.

Thanks in advance for your precious help

Tom Kyte
March 13, 2006 - 10:22 am UTC

read about savepoints in "the book"

insert
insert
insert

savepoint foo;
begin
    insert
    insert
    insert
exception
    when <never_use_others_what_a_bug_that_would_be>
    then
        rollback to foo;
end;


that'll undo just the stuff in the savepoint.
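A concrete, minimal version of that sketch, using hypothetical tables t1 and t2 (the failure is simulated with a duplicate key so a named exception can be caught, per the "never use WHEN OTHERS" aside above):

create table t1 ( x int );
create table t2 ( x int primary key );

insert into t2 values ( 1 );
commit;

begin
    insert into t1 values ( 100 );     -- work we want to keep

    savepoint per_record;
    begin
        insert into t2 values ( 1 );   -- fails: duplicate key
    exception
        when dup_val_on_index then
            rollback to per_record;    -- undoes only the work after the savepoint
    end;

    commit;                            -- the t1 row survives; the failed insert is gone
end;
/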

Another way

Michel Cadot, March 13, 2006 - 12:07 pm UTC

Another way to do it may be something like:

for r1 in C1 loop
    begin
        insert all
            into t1 ...
            into t2 ...
            into t3 ...
        select ...;

        insert all
            into t1 ...
            into t2 ...
            into t3 ...
        select ...;
    exception
        when <something known> then null;
    end;
end loop;

Regards
Michel


Mohamed from France, March 20, 2006 - 9:51 am UTC

Dear Tom,

Thanks for your answer. I read chapter 4 of your book, and I have implemented a "rollback to savepoint" inside my loop.

Thanks again. I got what I was searching for.

Cheers

Problem when committing only once

A reader, August 01, 2006 - 3:47 am UTC

Dear Tom,

Following your book, I have taken the approach that I commit only when needed. So let me explain my problem. I have the following process:

BEGIN

    FOR R1 IN C1 LOOP

        BEGIN
            ../..
            Big Process (insert/update) -- takes more than 24 hours to complete
            ../..
        EXCEPTION
            WHEN OTHERS THEN
                rollback to savepoint sp;
                log error with autonomous transactions
                -- here I've got: failed to extend rollback segment number 11; ORA-01562
        END;

    END LOOP;

    COMMIT; -- only one commit

EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;

END;

The job runs for more than 16 hours and then starts producing ORA-01562. As the process is embedded in a BEGIN/END block, this error was raised 6,523 times until the process was killed.

In addition, all successfully processed records were rolled back in the outer WHEN OTHERS exception. The DBA asked me why I do not follow their advice and commit every 1,000 records.

Anyway, I have changed my PL/SQL code as follows:

DECLARE

    CURSOR c1 is select ide from table T;

    v_count      number := -- select count(1) from T;
    v_commit_val number := 1000;
    v_record_nbr number := 0;

BEGIN

    FOR j in 1 .. trunc(v_count/v_commit_val) + 1 LOOP

        FOR R1 IN C1 LOOP

            v_record_nbr := v_record_nbr + 1;

            BEGIN
                ../..
                Big Process (insert/update) -- takes more than 24 hours to complete
                ../..
            EXCEPTION
                WHEN OTHERS THEN
                    rollback to savepoint sp;
                    log error with autonomous transactions
            END;

            IF v_record_nbr > v_commit_val THEN
                v_record_nbr := 0;
                EXIT; -- in order not to commit across the fetch
            END IF;

        END LOOP; -- loop on c1

        COMMIT;

    END LOOP; -- loop on j

    COMMIT;

EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;

END;

Could you please advise me?
I would appreciate it.
Thanks in advance


Tom Kyte
August 01, 2006 - 7:23 am UTC

my first piece of advice would be "ask myself why I have a process that takes 24 hours; that scares the something out of me"

My comments on committing are:

commit when your transaction is done, not before, not after.

If you program 1,000's of micro transactions -- fine, so be it -- but make sure you are restartable (remember, in the book I discussed that at great length: MOST applications I see that commit every N records ARE NOT RESTARTABLE; when they fail, the data is "hosed")

commit across fetch

A reader, August 01, 2006 - 9:13 am UTC

Yes, my process is restartable.

I was telling the technical architect that the initial database load we are doing takes more than 24 hours because of the complicated database model he implemented, with several tables and indexes.

Now we intend to compute statistics with
DBMS_STATS.GATHER_SCHEMA_STATS (SCHEMA, 30);
when we are about 25% through the process, then at about 50%, at about 75%, and again when the entire process is finished.

So, I see no solution to improve the performance of this load.

Kind Regards



Tom Kyte
August 01, 2006 - 10:26 am UTC

except to review your algorithms of course.

Block or Record needs previous image

Guest, September 06, 2006 - 4:36 am UTC

Tom,

I loved this thread which gave plenty of info as usual.
Please clarify some of my doubts.

The developer is doing some mass updations in historical records and getting ORA-1555. Say for an example he is doing some updation on one column ( some flag like paid ) for 2000 year records. Current insertion/updation/deletion will happen only as 2006.

My doubt is: if other sessions are doing changes only on year 2006 records, why do we need that old image, and why ORA-1555?
Is it because some of the 2000 and 2006 year records are together in the same block?
If we truncate and insert the records ordered by date, shall we get rid of this issue?

Please help me to understand.



Tom Kyte
September 06, 2006 - 7:54 am UTC

(updation, upgradation - these are not words :) I know what you mean but hey - they are just "not words" - "he is doing some update on one column"...)


You have no real control over what records are stored on which blocks. In all likelihood, the developer is committing in a cursor for loop (I can almost see the code now....) and causing the 1555 themselves.

I don't see how truncate comes into play here.

No, I would not suggest a reorganization in order to correct what is easily corrected by

a) not committing in your cursor for loop
b) sizing undo for what your business requires
c) setting the undo retention period correctly for b)
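For b) and c), assuming automatic undo management (9i and later) and an illustrative requirement that queries may run for up to three hours, the setting would look like this (the value and the undo tablespace size are yours to determine):

alter system set undo_retention = 10800;  -- seconds; the undo tablespace must be sized to sustain it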

Guest, September 06, 2006 - 11:24 am UTC

Many thanks for the quick response.
Yes, the code is committing in a cursor for loop (fetch across commit).

I read many documents, and everywhere I see that we should not use fetch across commit, but no alternative coding solution.
From your reply, can we say that moving the commit out of the loop is the only solution from the coding side?

This is from one metalink note

====================================
Add an "order by 1" statement to the query.
This creates a temp segement to hold the results of the order by and will not
require consistent read
====================================

Can I suggest this?
I will also work on the UNDO stuff.

Tom Kyte
September 06, 2006 - 3:50 pm UTC

"but no alternative coding solution"

Of COURSE there is. You are just refusing to consider it. But OF COURSE there is an alternative (not to commit!).

Whether that "order by 1" trick will work depends on which statement is failing. Basically, it copies the data into temp (that is the goal of it).

I'd be very afraid of code that

a) reads a table
b) updates table
c) commits after some, but not all, of the updates take place

What happens when the code FAILS with an error? Is that code RESTARTABLE? How do you recover from that inconsistent database?
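For what it's worth, a minimal sketch of the Metalink workaround quoted above -- the ORDER BY forces the result set to be sorted into temp when the cursor opens, so subsequent fetches read from temp instead of needing undo. It only helps when the SELECT itself is what hits ORA-01555, the copy has its own cost, and it does nothing for the restartability problem Tom raises (table t and the update are hypothetical):

for x in ( select rowid rid, t.* from t order by 1 ) loop
    update t set object_name = lower(object_name) where rowid = x.rid;
    commit;   -- still the same bad practice; the order by only masks the fetch's 1555
end loop;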

commit across Fetch

Mohamed, September 06, 2006 - 1:49 pm UTC

Dear Tom,

In order not to commit across a fetch, we did the following:

DECLARE

   v_nbr_record number := 0;
   v_commit_val number := 100;
   v_count      number;
   v_ide_begin  number;   -- set per process
   v_ide_end    number;   -- set per process

   -- main cursor
   CURSOR C1 IS
      select ide from T
      where ide between v_ide_begin and v_ide_end;

   -- cursor to get the count of cursor c1
   CURSOR C2 IS
      select count(1) from T
      where ide between v_ide_begin and v_ide_end;

BEGIN

   OPEN C2;
   FETCH C2 into v_count;
   CLOSE C2;

   FOR j in 1 .. trunc(v_count/v_commit_val) + 1 LOOP

      FOR R1 in C1 LOOP
         v_nbr_record := v_nbr_record + 1;
         -- My treatment goes here
         -- .../...
         if v_nbr_record > v_commit_val then
            v_nbr_record := 0;
            exit;
         end if;
      END LOOP;

      COMMIT;

   END LOOP;

   COMMIT;
END;

This piece of code is executed by three processes in parallel, with an interval of one minute between each process. Each process treats a different range of ide values (v_ide_begin to v_ide_end).

The entire process took more than 36 hours to complete without any ORA-01555 or ORA-01556. The processes are restartable, and any error produced is stored via an autonomous transaction and is encapsulated within a begin/exception/end block so that we can process the next record.

1. What's your opinion about this approach ?

2. I realise that more than 9000 of the 540,000 records (main cursor) were not considered by any of the 3 processes. I have been looking everywhere and do not understand why those records were ignored or skipped.


Thanks for your answer



Tom Kyte
September 06, 2006 - 4:05 pm UTC

1) how is that restartable?

A reader, September 07, 2006 - 2:54 am UTC

It is restartable because the main cursor is in fact:

CURSOR C1 IS
   select ide from T
   where ide between v_ide_begin and v_ide_end
   and flag_processed = 'N'
   and error_code is not null;

For any ide processed: if it is successful, its flag_processed is updated to 'Y'; if not, its error_code is filled with the error that caused its rejection.

Waiting for your precious advice




Tom Kyte
September 07, 2006 - 7:15 am UTC

you seem to be happy with your code the way it is; what more advice can be given, then?

(changing the code on me like that. hmmm. That your cursor is "that" doesn't mean it is restartable yet; I presume you must also be updating something from "N" to "Y"...)

(but if I had something that took 36 hours using parallel - I'd be looking for better ways to implement my logic..)

A reader, September 07, 2006 - 7:39 am UTC

What would you do, Tom, if you came into a project where it is too late to change any logic? You have to find a solution for the existing logic so that the initial load avoids ORA-01555 and ORA-01556, even if it takes more than 36 hours.

This is what I have been asked to do.





Tom Kyte
September 07, 2006 - 8:20 am UTC

size undo for the work that needs to be done.

it is NEVER too late to fix something that doesn't work, is it? Think about it - you have a process that doesn't work, and two ways to fix it:

a) fix it
b) don't run it because it doesn't work

GG, September 08, 2006 - 5:57 am UTC

If people join a project from outside, then code refactoring is risky - it can introduce new bugs, it can require regression tests, etc., for a large system... You have to understand the business logic, otherwise you can ruin the business, isn't that so?

Tom Kyte
September 09, 2006 - 11:43 am UTC

? not sure what you were commenting on.

but if you are trying to say "don't fix something", then we will have to agree to disagree.

Are you still against fetch across commit??

martina, September 25, 2007 - 1:25 pm UTC

Hi Tom,

lately we had a problem with a loop; it was filling up the undo tablespace. We had an SR opened.
It was a cursor for loop with the commit after the loop. So Oracle told us to place a commit after a set of rows - not after the loop... That saved our undo tablespace, so they considered it a reasonable workaround!

regards, martina
Tom Kyte
September 26, 2007 - 9:28 pm UTC

it was unreasonable.

what happens when your code fails next time? How do you restart??????? Have you even thought about that?

eg: you had code that looks something like:


begin
   for x in ( select * from t ) 
   loop
       ... whatever....
       update t set ... where pk = x.pk;
       if ( we did 100 rows )
       then
          commit;
       end if;
   end loop;
end;


so, when this code fails, as it will (it is, after all, CODE; CODE fails, fact of life), how do you restart it?

what do you think transactions are about?

I do not approve of their rather simplistic, but wrong, answer.
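
If interim commits truly cannot be avoided, the restartable shape Tom keeps asking about looks roughly like this (a sketch; the processed_flag column, the pk column and the batch size of 100 are all assumptions). Note the cursor still fetches across the commits, so undo must still be sized for the full query:

declare
   l_done pls_integer := 0;
begin
   for x in ( select pk from t where processed_flag = 'N' )
   loop
      update t
         set processed_flag = 'Y'   -- mark the row so a rerun skips it
       where pk = x.pk;
      l_done := l_done + 1;
      if mod(l_done, 100) = 0 then
         commit;   -- rerunning the whole block now restarts cleanly from the flag
      end if;
   end loop;
   commit;
end;
/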

Fetch across commit

martina, September 27, 2007 - 4:26 am UTC

Hello Tom,

I wrote my previous post as I saw that loop for the first time 2 or 3 days ago and immediately got mad about it. First I saw the fetch across commit and started nagging about it; then I learned it was suggested by Oracle. Then I realized that the whole thing itself (the contents of the loop) was kind of nonsense, so the SR was closed. And then I had to tell somebody about it...

My question is: I always preach that fetch across commit is a no-no - first, it is non-ANSI, and second, Oracle cannot manage its rollback segments (which was true with 7 (or 8 also?)). Back then I got the terse answer from Oracle that they don't need to manage the RBS as it is non-ANSI.

But obviously the undo segments are now managed properly - or can that not be trusted? A (restartable) loop without fetch across commit is more complicated than a restartable loop with a commit. I fear running out of arguments against fetch across commit!

sorry if i am too emotional!

regards, martina
Tom Kyte
September 27, 2007 - 7:15 am UTC

it is NOT that Oracle cannot manage the rollback; it is that Oracle does manage the rollback, and when you commit, you tell us "it is ok to reuse this space" - and that leads to you causing YOURSELF to have a 1555.

UNDO is managed pretty much the same with automatic undo - the rules are the same - but you may set an undo_retention, which will cause an undo segment to extend to hold the data (in which case, if undo retention is set such that you do not get a 1555, the commit would be totally unnecessary - you would have just the same amount of undo stored as before!)
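
A sketch of the retention side of that (the values and the tablespace name undotbs1 are assumptions, not recommendations):

alter system set undo_retention = 10800;        -- seconds; must exceed the longest-running query
alter tablespace undotbs1 retention guarantee;  -- 10g and up: never expire undo before the retention period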

ora-155 and ora-156

Mohamed Houri, September 29, 2007 - 7:33 am UTC

I have done many initial loads on systems that weren't very performant, and I have been successful by applying the following rules for each initial load:

1. make my initial load restartable (I always add a flag_processed column to the main table, set to 'N' by default)
2. I do all my commits outside the loop

My logic looks like:

declare
   cursor c1 is
      select my_id, my_attributes
      from my_table
      where flag_processed = 'N';

begin
   for r1 in c1
   loop
      begin
         savepoint my_save;

         -- ... whatever....

         update my_table set flag_processed = 'Y'
         where my_id = r1.my_id;

      exception
         when others then
            rollback to my_save;
            insert into my_log_table values (r1.my_id, 'Error');
            -- there is no raise here in order to treat the next
            -- record. Sorry Tom I know you don't like this
      end;
   end loop;
   commit;

exception
   when others then
      rollback;
      raise;
end;

-- The above logic is slightly updated to avoid both ora-0155 and ora-0156

Tom Kyte
October 03, 2007 - 1:59 pm UTC

but - you didn't need to make this restartable as it is one big transaction.

don't know what a 155 is (suppose that is 1555, and not sure what 156 is referring to)

Snapshot too old error

krishna, August 19, 2008 - 10:22 am UTC

Dear Tom,

Looking forward to your help. Even after modifying some code, I am repeatedly facing the ORA-01555 error. Can you please help me out? We are on 9.2.0.8 with automatic undo management.

   PROCEDURE PH_DOWNLOAD_HIST_90 IS
      VN_TOTDECLQTY                 NUMBER;
      VN_TOTRETVENQTY               NUMBER;
      V_PH_HEADERS_SEQ              NUMBER;
      V_PH_DETAILS_SEQ              NUMBER;
      VN_QTYMULTFACT                NUMBER;
      VN_CALRECDQTY                 NUMBER;
      VN_CALRECDQTY_HOLD            NUMBER;
      VN_CALSHIPQTY                 NUMBER;
      VN_MATCD                      NUMBER;
      VN_RECDQTY                    XXCCST_RCV_SHP_LINES.QUANTITY_RECEIVED%TYPE;
      VN_RCTPRICE                   XXCCST_RCV_TRANS.PO_UNIT_PRICE%TYPE;
      VC_PRTNO                      MTL_SYSTEM_ITEMS.SEGMENT1%TYPE;
      VC_ITEMTYPE                   MTL_SYSTEM_ITEMS.ITEM_TYPE%TYPE;
      VC_PUOM                       MTL_SYSTEM_ITEMS.PRIMARY_UNIT_OF_MEASURE%TYPE;
      P_ERRBUF                      VARCHAR2 (2000);
      P_RETCODE                     NUMBER;
      V_STATUS                      VARCHAR2 (1);
      VC_JPART                      VARCHAR2 (50);
      VC_GRR_FOR_TAXES              VARCHAR2 (1);
-------------------------------------------------------------------------------------------------------
      V_SUBINVENTORY                XXCCST_RCV_TRANS.SUBINVENTORY%TYPE;
      V_LOC_SEGMENT1                XXCCST_RCV_TRANS.LOC_SEGMENT1%TYPE;
      V_LOC_SEGMENT2                XXCCST_RCV_TRANS.LOC_SEGMENT2%TYPE;
      V_LOC_SEGMENT3                XXCCST_RCV_TRANS.LOC_SEGMENT3%TYPE;
   V_RECEIPT_SOURCE    VARCHAR2(20);
-------------------------------------------------------------------------------------------------------
      CURSOR CUR_JA_RCV_MAIN IS
         SELECT 
                *
           FROM XXCCST_JA_RCV_LINES
          WHERE COST_ORG_ID = P_COST_ORG_ID
                AND (TRUNC (RECEIPT_CREATION_DATE) BETWEEN P_PH_START_DATE AND P_PH_END_DATE)
                AND RECEIPT_SOURCE_CODE<>'CUSTOMER';           
      V_CUR_JA_RCV_MAIN             CUR_JA_RCV_MAIN%ROWTYPE;

      CURSOR CUR_JA_RCV_SHP_LINES IS
         SELECT 
                A.SHIPMENT_HEADER_ID, A.SHIPMENT_LINE_ID, A.EXCISE_INVOICE_NO, A.SHIP_TO_ORG_ID,
                A.RECEIPT_NUM, A.VENDOR_ID, A.VENDOR_CODE, A.ORGANIZATION_CODE,
                A.RECEIPT_CREATION_DATE, A.RECEIPT_SOURCE_CODE,
-------------------------------------------------------------------------------------------------------
                B.ITEM_ID, B.QUANTITY_SHIPPED,
                B.QUANTITY_RECEIVED, B.SHIPMENT_UNIT_PRICE, B.UNIT_OF_MEASURE, B.PO_HEADER_ID,
                B.PO_LINE_ID, B.PO_NO, B.CURRENCY_CODE, B.RECEIPT_LINE_NUM, B.FX_EXCHANGE_RATE,
                B.UNIT_PRICE, B.UOM_CODE, C.LINE_TYPE_ID, SUPPLIER_ORG_ID,E.DROP_SHIP_FLAG
-------------------------------------------------------------------------------------------------------
         FROM   XXCCST_JA_RCV_LINES A, XXCCST_RCV_SHP_LINES B, PO_LINES_ALL C, PO_LINE_LOCATIONS_ALL E 
          WHERE (TRUNC (A.RECEIPT_CREATION_DATE) BETWEEN P_PH_START_DATE AND P_PH_END_DATE)
                AND A.SHIP_TO_ORG_ID IN (SELECT DISTINCT ORGANIZATION_ID
                                                    FROM XXCCST_COST_ORG_MAP
                                                   WHERE COST_ORG_ID = P_COST_ORG_ID)
                AND NOT EXISTS (
                      SELECT 
                             GRRNO
                        FROM XXCCST_PH_IMPORTS D
                       WHERE D.COST_ORG_ID = P_COST_ORG_ID
                             /*Change By hemant on 13-dec-2007*/
                             AND COST_PERIOD_NAME = P_COST_PERIOD_NAME
                             AND D.ORGANIZATION_ID = A.SHIP_TO_ORG_ID AND D.GRRNO = A.RECEIPT_NUM
                             AND D.CURRENCY_CODE <> 'INR')
                AND A.COST_ORG_ID = P_COST_ORG_ID
-------------------------------------------------------------------------------------------------------
                AND B.PO_LINE_ID = C.PO_LINE_ID(+) AND B.SHIPMENT_LINE_ID = A.SHIPMENT_LINE_ID
                AND B.TO_ORGANIZATION_ID = A.SHIP_TO_ORG_ID AND B.COST_ORG_ID = P_COST_ORG_ID
-------------------------------------------------------------------------------------------------------
                AND A.SHIPMENT_HEADER_ID = V_CUR_JA_RCV_MAIN.SHIPMENT_HEADER_ID
                AND A.SHIPMENT_LINE_ID = V_CUR_JA_RCV_MAIN.SHIPMENT_LINE_ID
                AND A.COST_ORG_ID = V_CUR_JA_RCV_MAIN.COST_ORG_ID 
    AND B.PO_HEADER_ID=E.PO_HEADER_ID 
                AND B.PO_LINE_ID=E.PO_LINE_ID;
    

      V_CUR_JA_RCV_SHP_LINES        CUR_JA_RCV_SHP_LINES%ROWTYPE;

    
      CURSOR CUR_RCV_TRANS_DEL IS
         SELECT  
                  A.SUBINVENTORY, A.PO_UNIT_PRICE, A.LOC_SEGMENT1, A.LOC_SEGMENT2, A.LOC_SEGMENT3,
                  A.TRANSACTION_TYPE, ASSET_INVENTORY, SUM (A.QUANTITY) QUANTITY
             FROM XXCCST_RCV_TRANS A
            WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
                  AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
                  AND ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
                  -- AND (TRUNC (TRANSACTION_DATE) BETWEEN P_PH_START_DATE AND P_PH_END_DATE)
                  AND ASSET_INVENTORY = 1 AND COST_ORG_ID = P_COST_ORG_ID
                  AND TRANSACTION_TYPE = 'DELIVER'
                  AND PARENT_TRANSACTION_ID IN (
                        SELECT 
                               TRANSACTION_ID
                          FROM XXCCST_RCV_TRANS
                         WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
                               AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
                               AND ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
                               --  AND (TRUNC (TRANSACTION_DATE) BETWEEN P_PH_START_DATE AND P_PH_END_DATE)
                               AND COST_ORG_ID = P_COST_ORG_ID
                               AND TRANSACTION_TYPE IN ('ACCEPT', 'RECEIVE'))
         GROUP BY A.SUBINVENTORY, A.PO_UNIT_PRICE, A.LOC_SEGMENT1, A.LOC_SEGMENT2, A.LOC_SEGMENT3,
                  A.TRANSACTION_TYPE, ASSET_INVENTORY;

      V_CUR_RCV_TRANS_DEL           CUR_RCV_TRANS_DEL%ROWTYPE;

-------------------------------------------------------------------------------------------------------

   
   CURSOR CUR_RCV_TRANS_DEL_OSP IS
         SELECT  
                  B.COMPLETION_SUBINVENTORY, A.PO_UNIT_PRICE, A.LOC_SEGMENT1, A.LOC_SEGMENT2,
                  A.LOC_SEGMENT3, A.TRANSACTION_TYPE, ASSET_INVENTORY, A.WIP_ENTITY_ID,
                  A.DESTINATION_TYPE_CODE, SUM (A.QUANTITY) QUANTITY
             FROM XXCCST_RCV_TRANS A, WIP_DISCRETE_JOBS B
            WHERE A.WIP_ENTITY_ID = B.WIP_ENTITY_ID
                  AND A.SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
                  AND A.SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
                  AND A.ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
                  AND COST_ORG_ID = P_COST_ORG_ID AND A.TRANSACTION_TYPE = 'DELIVER'
                  AND PARENT_TRANSACTION_ID IN (
                        SELECT 
                               TRANSACTION_ID
                          FROM XXCCST_RCV_TRANS
                         WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
                               AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
                               AND ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
                               AND COST_ORG_ID = P_COST_ORG_ID
                               AND TRANSACTION_TYPE IN ('ACCEPT', 'RECEIVE'))
         GROUP BY B.COMPLETION_SUBINVENTORY, A.PO_UNIT_PRICE, A.LOC_SEGMENT1, A.LOC_SEGMENT2,
                  A.LOC_SEGMENT3, A.TRANSACTION_TYPE, ASSET_INVENTORY, A.WIP_ENTITY_ID,
                  A.DESTINATION_TYPE_CODE;

      V_CUR_RCV_TRANS_DEL_OSP       CUR_RCV_TRANS_DEL_OSP%ROWTYPE;

-------------------------------------------------------------------------------------------------------
      CURSOR CUR_RCV_TRANS_RTR IS
         SELECT   
                  A.SUBINVENTORY, A.PO_UNIT_PRICE, A.LOC_SEGMENT1, A.LOC_SEGMENT2, A.LOC_SEGMENT3,
                  A.TRANSACTION_TYPE, SUM (A.QUANTITY) QUANTITY
             FROM XXCCST_RCV_TRANS A
            WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
                  AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
                  AND ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
                  -- AND (TRUNC (TRANSACTION_DATE) BETWEEN P_PH_START_DATE AND P_PH_END_DATE)
                  AND COST_ORG_ID = P_COST_ORG_ID AND TRANSACTION_TYPE = 'RETURN TO RECEIVING'
                  AND PARENT_TRANSACTION_ID IN (
                        SELECT 
                               TRANSACTION_ID
                          FROM XXCCST_RCV_TRANS
                         WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
                               AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
                               AND ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
                               -- AND (TRUNC (TRANSACTION_DATE) BETWEEN P_PH_START_DATE AND P_PH_END_DATE)
                               AND COST_ORG_ID = P_COST_ORG_ID AND TRANSACTION_TYPE = 'DELIVER')
         GROUP BY A.SUBINVENTORY, A.PO_UNIT_PRICE, A.LOC_SEGMENT1, A.LOC_SEGMENT2, A.LOC_SEGMENT3,
                  A.TRANSACTION_TYPE;

      V_CUR_RCV_TRANS_RTR           CUR_RCV_TRANS_RTR%ROWTYPE;


-------------------------------------------------------------------------------------------------------
/*
     CURSOR CUR_CIL_DN_PRICE IS
         SELECT /*+ Choose */
/*                C.PRODUCT_ATTR_VALUE, A.OPERAND CILDN_PRICE
           FROM QP_LIST_LINES A, QP_PRICING_ATTRIBUTES C
          WHERE A.LIST_HEADER_ID = 6092                                      /*CIL DN Price List */
/*                AND TRUNC (V_CUR_JA_RCV_SHP_LINES.RECEIPT_CREATION_DATE) BETWEEN START_DATE_ACTIVE
                                                                             AND NVL
                                                                                   (END_DATE_ACTIVE,
                                                                                    '31-dec-4712')
                AND A.LIST_HEADER_ID = C.LIST_HEADER_ID AND A.LIST_LINE_ID = C.LIST_LINE_ID
                AND C.PRODUCT_ATTR_VALUE = V_CUR_JA_RCV_SHP_LINES.ITEM_ID;

      V_CUR_CIL_DN_PRICE            CUR_CIL_DN_PRICE%ROWTYPE;
*/
-------------------------------------------------------------------------------------------------------
/* CODE TO GET PO RATE IN CASE OF TRANSFER */


CURSOR CUR_GETTRANSID_TRANSFER IS 
SELECT TRANSACTION_ID, TRANSACTION_TYPE FROM RCV_TRANSACTIONS 
WHERE SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID 
AND TRANSACTION_TYPE = 'DELIVER';

V_CUR_GETTRANSID_TRANSFER          CUR_GETTRANSID_TRANSFER%ROWTYPE;

CURSOR CUR_GETTRANSFERID_TRANSFER IS   
SELECT TRANSFER_TRANSACTION_ID, ORGANIZATION_ID FROM MTL_MATERIAL_TRANSACTIONS 
WHERE ORGANIZATION_ID=V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID 
AND RCV_TRANSACTION_ID = V_CUR_GETTRANSID_TRANSFER.TRANSACTION_ID;

V_CUR_GETTRANSFERID_TRANSFER           CUR_GETTRANSFERID_TRANSFER%ROWTYPE;

CURSOR CUR_GETPORATE_TRANSFER IS 
SELECT TRANSACTION_ID, INVENTORY_ITEM_ID, ORGANIZATION_ID, RATE_OR_AMOUNT 
FROM CST_INV_DISTRIBUTION_V 
WHERE TRANSACTION_ID = V_CUR_GETTRANSFERID_TRANSFER.TRANSFER_TRANSACTION_ID 
AND ORGANIZATION_ID = V_CUR_GETTRANSFERID_TRANSFER.ORGANIZATION_ID AND UPPER(LINE_TYPE_NAME) = 'INTRANSIT INVENTORY'; 

V_CUR_GETPORATE_TRANSFER          CUR_GETPORATE_TRANSFER%ROWTYPE;


-------------------------------------------------------------------------------------------------------


      CURSOR CUR_INV_MATCODE IS
         SELECT MATCODE
           FROM XXCCST_INV_MATCODE
          WHERE ITEM_TYPE = VC_ITEMTYPE;

      V_CUR_INV_MATCODE             CUR_INV_MATCODE%ROWTYPE;

      CURSOR CUR_PART_STATUS IS
         SELECT *
           FROM XXCCST_SUBINV_SETUPS
          WHERE COST_ORG_ID = P_COST_ORG_ID
                AND ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
                /*V_CUR_JA_RCV_LINES.SHIP_TO_ORG_ID*/
                AND SUBINVENTORY = V_SUBINVENTORY;

      -- V_CUR_RCV_TRANS.SUBINVENTORY;
      V_CUR_PART_STATUS             XXCCST_SUBINV_SETUPS%ROWTYPE;

      CURSOR CUR_BASEPART IS
         SELECT D.SEGMENT1 BASEPART
           FROM BOM_OPERATION_RESOURCES_V A, BOM_OPERATIONAL_ROUTINGS_V B,
                BOM_OPERATION_SEQUENCES_V C, APPS.MTL_SYSTEM_ITEMS D
          WHERE B.ASSEMBLY_ITEM_ID = D.INVENTORY_ITEM_ID
                AND A.OPERATION_SEQUENCE_ID = C.OPERATION_SEQUENCE_ID
                AND B.ROUTING_SEQUENCE_ID = C.ROUTING_SEQUENCE_ID
                AND B.ORGANIZATION_ID = D.ORGANIZATION_ID AND A.RESOURCE_CODE = VC_PRTNO
                AND B.ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID;

      /*V_CUR_JA_RCV_LINES.SHIP_TO_ORG_ID;*/
      V_CUR_BASEPART                CUR_BASEPART%ROWTYPE;
   BEGIN
   EXECUTE IMMEDIATE 'alter session set query_rewrite_integrity=trusted';
      P_PROC_NAME := 'PH_DOWNLOAD_HIST_90';

      UPDATE XXCST.XXCCST_JA_RCV_LINES
         SET ATTRIBUTE1 = NULL,
             ATTRIBUTE2 = NULL,
             ATTRIBUTE3 = NULL,
             ATTRIBUTE4 = NULL,
             ATTRIBUTE5 = NULL,
             ATTRIBUTE6 = NULL,
             ATTRIBUTE7 = NULL,
             ATTRIBUTE8 = NULL,
             ATTRIBUTE9 = NULL,
             ATTRIBUTE10 = NULL,
             ATTRIBUTE11 = NULL,
             ATTRIBUTE12 = NULL,
             ATTRIBUTE13 = NULL,
             ATTRIBUTE14 = NULL,
             ATTRIBUTE15 = NULL
       WHERE SHIP_TO_ORG_ID IN (SELECT ORGANIZATION_ID
                                  FROM XXCST.XXCCST_COST_ORG_MAP
                                 WHERE COST_ORG_ID = P_COST_ORG_ID)
             AND (TRUNC (RECEIPT_CREATION_DATE) BETWEEN P_PH_START_DATE AND P_PH_END_DATE)
             AND COST_ORG_ID = P_COST_ORG_ID;
COMMIT; --- By Balu
-------------------------------------------------------------------------------------------------------
      OPEN CUR_JA_RCV_MAIN;

      LOOP
         FETCH CUR_JA_RCV_MAIN
          INTO V_CUR_JA_RCV_MAIN;

         EXIT WHEN CUR_JA_RCV_MAIN%NOTFOUND;

-------------------------------------------------------------------------------------------------------
         OPEN CUR_JA_RCV_SHP_LINES;

/*
         IF NVL (P_DEBUG_FLAG_SET, 'N') = 'Y' THEN
            FND_FILE.PUT_LINE (FND_FILE.LOG,
                               ' Indigenous Purchase History Inseration Process Start');
         END IF;
*/
         LOOP
            FETCH CUR_JA_RCV_SHP_LINES
             INTO V_CUR_JA_RCV_SHP_LINES;

            IF CUR_JA_RCV_SHP_LINES%NOTFOUND THEN
               UPDATE XXCST.XXCCST_JA_RCV_LINES
                  SET ATTRIBUTE7 = 'PO DETAILS NOT AVAILABLE'
                WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_MAIN.SHIPMENT_HEADER_ID
                      AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_MAIN.SHIPMENT_LINE_ID
                      AND COST_ORG_ID = V_CUR_JA_RCV_MAIN.COST_ORG_ID;
            END IF;
COMMIT; --- By Balu
            EXIT WHEN CUR_JA_RCV_SHP_LINES%NOTFOUND;


--------------------------------------------------------------------------------------------------------------------



Regards
krishna
Tom Kyte
August 20, 2008 - 10:26 am UTC

either

a) increase undo retention - so that it is larger than the time it takes to execute your longest running query

b) make your queries run faster so they run in less time than your undo retention is set to


and query v$undostat to ensure that you are not prematurely expiring undo before the undo retention period - if you are, you need to increase the size of your undo tablespace until you do not do that anymore.
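
A sketch of that v$undostat check (each row covers a ten-minute interval; the columns shown are standard):

select begin_time,
       end_time,
       maxquerylen,    -- longest-running query (seconds) in the interval
       unxpstealcnt,   -- unexpired undo stolen: a sign the undo tablespace is too small
       ssolderrcnt     -- ORA-01555 errors raised during the interval
  from v$undostat
 order by begin_time;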

Continuation of the previous query

krishna, August 19, 2008 - 10:23 am UTC


IF (V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'INTERNAL ORDER' AND P_COST_ORG_ID = 90) THEN

-- ORGANIZATION ID - CSS SUPORGOD - CIL RS - CIL
IF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (135,136,137,138) THEN
V_RECEIPT_SOURCE :='CIL';
-- ORGANIZATION ID - CSS SUPORGOD - CSS RS - TRANSFER
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) THEN
V_RECEIPT_SOURCE :='TRANSFER';
-- ORGANIZATION ID - CSS SUPORGOD - PGBU-D RS - PGBU-DAMAN
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (140,141) THEN
V_RECEIPT_SOURCE :='PGBU-DAMAN';
-- ORGANIZATION ID - CSS SUPORGOD - PGBU-DTA RS - PGBU-DTA
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (145,146) THEN
V_RECEIPT_SOURCE :='PGBU-DTA';
-- ORGANIZATION ID - CIL SUPORGOD - CSS RS - CSS
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (135,136,137,138) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) THEN
V_RECEIPT_SOURCE :='CSS';
-- ORGANIZATION ID - PGBU-DAMAN SUPORGOD - CSS RS - CSS
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (140,141) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) THEN
V_RECEIPT_SOURCE :='CSS';
-- ORGANIZATION ID - PGBU-DTA SUPORGOD - CSS RS - CSS
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (145,146) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (151,152,153,154,155,156,157,158,159,160,161,162,163,164,165) THEN
V_RECEIPT_SOURCE :='CSS';
-- ORGANIZATION ID - PGBU-DAMAN SUPORGOD - PGBU-DAMAN RS - TRANSFER
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (140,141) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (140,141) THEN
V_RECEIPT_SOURCE :='TRANSFER';
-- ORGANIZATION ID - PGBU-DTA SUPORGOD - PGBU-DTA RS - TRANSFER
ELSIF V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN (145,146) AND
V_CUR_JA_RCV_SHP_LINES.SUPPLIER_ORG_ID IN (145,146) THEN
V_RECEIPT_SOURCE :='TRANSFER';
END IF;
ELSIF (V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'VENDOR' AND P_COST_ORG_ID = 90) THEN
V_RECEIPT_SOURCE :='VENDOR';
END IF;
--------------------------------------------------------------------------------------------------------------------


--For HHP ( cost_org_id = 1 ) do not consider GRRs against ISO or ICISO.
IF (V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'INTERNAL ORDER' AND P_COST_ORG_ID = 90 AND
V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID IN(135,136,137,138)) THEN
--IF 1 for ignoring HHP receipts against ISO and ICISO.
NULL;
ELSE
BEGIN
SELECT SEGMENT1, ITEM_TYPE, PRIMARY_UOM_CODE
INTO VC_PRTNO, VC_ITEMTYPE, VC_PUOM
FROM MTL_SYSTEM_ITEMS
WHERE ORGANIZATION_ID = V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID
AND INVENTORY_ITEM_ID = V_CUR_JA_RCV_SHP_LINES.ITEM_ID;
EXCEPTION
WHEN OTHERS THEN
VC_PRTNO := NULL;
VC_ITEMTYPE := NULL;
VC_PUOM := NULL;
END;

VN_TOTDECLQTY := 0;
VN_TOTRETVENQTY := 0;

-------------------------------------------------------------------------------------------------------
IF V_CUR_JA_RCV_SHP_LINES.LINE_TYPE_ID = 3 THEN
OPEN CUR_RCV_TRANS_DEL_OSP;

LOOP
FETCH CUR_RCV_TRANS_DEL_OSP
INTO V_CUR_RCV_TRANS_DEL_OSP;

EXIT WHEN CUR_RCV_TRANS_DEL_OSP%NOTFOUND;
-------------------------------------------------------------------------------------------------------
V_SUBINVENTORY := V_CUR_RCV_TRANS_DEL_OSP.COMPLETION_SUBINVENTORY;
V_LOC_SEGMENT1 := V_CUR_RCV_TRANS_DEL_OSP.LOC_SEGMENT1;
V_LOC_SEGMENT2 := V_CUR_RCV_TRANS_DEL_OSP.LOC_SEGMENT2;
V_LOC_SEGMENT3 := V_CUR_RCV_TRANS_DEL_OSP.LOC_SEGMENT3;
-------------------------------------------------------------------------------------------------------
VN_TOTDECLQTY := VN_TOTDECLQTY + NVL (V_CUR_RCV_TRANS_DEL_OSP.QUANTITY, 0);
VN_RCTPRICE := V_CUR_RCV_TRANS_DEL_OSP.PO_UNIT_PRICE;
END LOOP;

UPDATE XXCST.XXCCST_JA_RCV_LINES
SET ATTRIBUTE1 = V_CUR_RCV_TRANS_DEL_OSP.TRANSACTION_TYPE,
ATTRIBUTE2 = V_CUR_RCV_TRANS_DEL_OSP.COMPLETION_SUBINVENTORY,
ATTRIBUTE3 = V_CUR_RCV_TRANS_DEL_OSP.QUANTITY,
ATTRIBUTE4 =
DECODE (V_CUR_RCV_TRANS_DEL_OSP.ASSET_INVENTORY,
1, 'ASSET',
'NON-ASSET'
)
WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
AND COST_ORG_ID = P_COST_ORG_ID;

CLOSE CUR_RCV_TRANS_DEL_OSP;
ELSE

OPEN CUR_RCV_TRANS_DEL;

VN_TOTDECLQTY := 0;
VN_RCTPRICE := 0;
LOOP
FETCH CUR_RCV_TRANS_DEL
INTO V_CUR_RCV_TRANS_DEL;

EXIT WHEN CUR_RCV_TRANS_DEL%NOTFOUND;
-------------------------------------------------------------------------------------------------------
V_SUBINVENTORY := V_CUR_RCV_TRANS_DEL.SUBINVENTORY;
V_LOC_SEGMENT1 := V_CUR_RCV_TRANS_DEL.LOC_SEGMENT1;
V_LOC_SEGMENT2 := V_CUR_RCV_TRANS_DEL.LOC_SEGMENT2;
V_LOC_SEGMENT3 := V_CUR_RCV_TRANS_DEL.LOC_SEGMENT3;
-------------------------------------------------------------------------------------------------------
VN_TOTDECLQTY := VN_TOTDECLQTY + NVL (V_CUR_RCV_TRANS_DEL.QUANTITY, 0);
VN_RCTPRICE := V_CUR_RCV_TRANS_DEL.PO_UNIT_PRICE;
END LOOP;

-------------------------------------------------------------------------------------------------------



UPDATE XXCST.XXCCST_JA_RCV_LINES
SET ATTRIBUTE1 = V_CUR_RCV_TRANS_DEL.TRANSACTION_TYPE,
ATTRIBUTE2 = V_CUR_RCV_TRANS_DEL.SUBINVENTORY,
ATTRIBUTE3 = V_CUR_RCV_TRANS_DEL.QUANTITY,
ATTRIBUTE4 =
DECODE (V_CUR_RCV_TRANS_DEL.ASSET_INVENTORY,
1, 'ASSET',
'NON-ASSET'
)
WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
AND COST_ORG_ID = P_COST_ORG_ID;

CLOSE CUR_RCV_TRANS_DEL;
END IF; -- MK this End if is not there in CSS and LHP

-------------------------------------------------------------------------------------------------------
OPEN CUR_RCV_TRANS_RTR;

VN_TOTRETVENQTY := 0;
LOOP
FETCH CUR_RCV_TRANS_RTR
INTO V_CUR_RCV_TRANS_RTR;

EXIT WHEN CUR_RCV_TRANS_RTR%NOTFOUND;
VN_TOTRETVENQTY := VN_TOTRETVENQTY + NVL (V_CUR_RCV_TRANS_RTR.QUANTITY, 0);
END LOOP;

UPDATE XXCST.XXCCST_JA_RCV_LINES
SET ATTRIBUTE5 = V_CUR_RCV_TRANS_RTR.TRANSACTION_TYPE,
ATTRIBUTE6 = V_CUR_RCV_TRANS_RTR.QUANTITY
WHERE SHIPMENT_HEADER_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID
AND SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
AND COST_ORG_ID = P_COST_ORG_ID;

CLOSE CUR_RCV_TRANS_RTR;

-------------------------------------------------------------------------------------------------------
/*
IF VN_TOTDECLQTY >= VN_TOTRETVENQTY THEN
VN_RECDQTY := VN_TOTDECLQTY + VN_TOTRETVENQTY;
ELSE
VN_RECDQTY := VN_TOTDECLQTY;
END IF;
*/
-------------------------------------------------------------------------------------------------------
IF VN_TOTDECLQTY >= VN_TOTRETVENQTY THEN
VN_RECDQTY := VN_TOTDECLQTY - VN_TOTRETVENQTY;
ELSE
VN_RECDQTY := VN_TOTDECLQTY;
END IF;

-------------------------------------------------------------------------------------------------------
IF VN_RECDQTY > 0 THEN
OPEN CUR_INV_MATCODE;

FETCH CUR_INV_MATCODE
INTO V_CUR_INV_MATCODE;

IF CUR_INV_MATCODE%NOTFOUND THEN
V_CUR_INV_MATCODE.MATCODE := NULL;
END IF;

CLOSE CUR_INV_MATCODE;

/*
Vn_Matcd := Funcorgmatcode(Vc_Prtno, V_Cur_Ja_Rcv_Lines.Organization_Code,
Cur_Rcv_Trans.Subinventory,V_Cur_Inv_Matcode.Matcode);
*/
IF LTRIM (RTRIM (V_CUR_JA_RCV_SHP_LINES.UNIT_OF_MEASURE)) <> 'Each' THEN
BEGIN
Xxccst_Unit_Convert_Pkg.INV_UM_CONVERSION (V_CUR_JA_RCV_SHP_LINES.UOM_CODE,
VC_PUOM,
V_CUR_JA_RCV_SHP_LINES.ITEM_ID,
VN_QTYMULTFACT, P_ERRBUF,
P_RETCODE);
EXCEPTION
WHEN OTHERS THEN
P_RETCODE := SQLCODE;
P_ERRBUF := SQLERRM;

IF NVL (P_DEBUG_FLAG_SET, 'N') = 'Y' THEN
FND_FILE.PUT_LINE (FND_FILE.LOG,
'Error in '
|| P_PROC_NAME
|| ' Error Number:- '
|| P_RETCODE
|| ' Error Message:- '
|| P_ERRBUF);
END IF;
END;

VN_CALRECDQTY := ROUND (VN_RECDQTY * NVL (VN_QTYMULTFACT, 1), 2);
VN_CALSHIPQTY :=
ROUND ((V_CUR_JA_RCV_SHP_LINES.QUANTITY_SHIPPED) * NVL (VN_QTYMULTFACT, 1),
2);
VN_RCTPRICE := ROUND (VN_RCTPRICE / NVL (VN_QTYMULTFACT, 1), 2);

V_CUR_JA_RCV_SHP_LINES.UOM_CODE := VC_PUOM;
ELSE
VN_CALRECDQTY := VN_RECDQTY;
VN_CALSHIPQTY := V_CUR_JA_RCV_SHP_LINES.QUANTITY_SHIPPED;
END IF;

SELECT XXCCST_PH_HEADERS_SEQ.NEXTVAL
INTO V_PH_HEADERS_SEQ
FROM DUAL;

----------------------------------------------------------------------------------------------------
OPEN CUR_PART_STATUS;

FETCH CUR_PART_STATUS
INTO V_CUR_PART_STATUS;

IF CUR_PART_STATUS%FOUND THEN
IF V_CUR_PART_STATUS.PART_STATUS IN ('P', 'I') THEN
V_STATUS := V_CUR_PART_STATUS.PART_STATUS;
ELSE
IF SUBSTR (V_SUBINVENTORY, 5, 2) = 'DM' THEN
V_STATUS := 'P';
ELSIF SUBSTR (V_SUBINVENTORY, 5, 2) IN ('DP', 'DF') THEN
V_STATUS := 'I';
ELSE
V_STATUS := 'P';
END IF;
END IF;
END IF;

CLOSE CUR_PART_STATUS;

----------------------------------------------------------------------------------------------------------------

IF NVL(V_CUR_JA_RCV_SHP_LINES.DROP_SHIP_FLAG,'N') = 'Y' THEN
V_CUR_PART_STATUS.RATE_TYPE := 'DROPSHIP';
V_STATUS := 'P';
END IF;
----------------------------------------------------------------------------------------------------------------





-- IF V_CUR_PART_STATUS.SUBINV_TYPE = 'OSP' THEN
IF V_CUR_JA_RCV_SHP_LINES.LINE_TYPE_ID = 3 THEN
OPEN CUR_BASEPART;

FETCH CUR_BASEPART
INTO V_CUR_BASEPART;

IF CUR_BASEPART%FOUND THEN
VC_JPART := VC_PRTNO;
VC_PRTNO := V_CUR_BASEPART.BASEPART;
ELSE
VC_JPART := 'NOT AVAILABLE';
END IF;

CLOSE CUR_BASEPART;
END IF;

--Following change is for CSS and LHP. When material is transferred between CSS and its Depots or
--between LW1 and LP1 in LHP then ISO or ICISO is made and material is received using GRR ( identified
--as INTERNAL ORDER on rcv_shipment_headers ).
VC_GRR_FOR_TAXES := 'N';





------------------------------------------------------------------------------------------------------------
--IF VC_GRR_FOR_TAXES = 'C' OR VC_GRR_FOR_TAXES = 'D' THEN


-- FOR QTY
IF V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'INTERNAL ORDER' AND V_RECEIPT_SOURCE ='TRANSFER' THEN
--VN_CALRECDQTY_HOLD := VN_CALRECDQTY;
VN_CALRECDQTY := 0;

END IF;



-- PO_RATE

IF V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'VENDOR' THEN

VN_RCTPRICE := VN_RCTPRICE;

ELSIF V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'INTERNAL ORDER' AND V_RECEIPT_SOURCE ='TRANSFER' THEN

VN_RCTPRICE := 0;


ELSIF V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'INTERNAL ORDER' AND V_RECEIPT_SOURCE <> 'TRANSFER' THEN

OPEN CUR_GETTRANSID_TRANSFER;
FETCH CUR_GETTRANSID_TRANSFER INTO V_CUR_GETTRANSID_TRANSFER;
CLOSE CUR_GETTRANSID_TRANSFER;

OPEN CUR_GETTRANSFERID_TRANSFER;
FETCH CUR_GETTRANSFERID_TRANSFER INTO V_CUR_GETTRANSFERID_TRANSFER;
CLOSE CUR_GETTRANSFERID_TRANSFER;

OPEN CUR_GETPORATE_TRANSFER;
FETCH CUR_GETPORATE_TRANSFER INTO V_CUR_GETPORATE_TRANSFER;

VN_RCTPRICE := V_CUR_GETPORATE_TRANSFER.RATE_OR_AMOUNT;

CLOSE CUR_GETPORATE_TRANSFER;

END IF;
/*
OPEN CUR_GETTRANSID_TRANSFER;
LOOP
FETCH CUR_GETTRANSID_TRANSFER INTO V_CUR_GETTRANSID_TRANSFER;
EXIT WHEN CUR_GETTRANSID_TRANSFER%NOTFOUND;


OPEN CUR_GETTRANSFERID_TRANSFER;
LOOP
FETCH CUR_GETTRANSFERID_TRANSFER INTO V_CUR_GETTRANSFERID_TRANSFER;
EXIT WHEN CUR_GETTRANSFERID_TRANSFER%NOTFOUND;


OPEN CUR_GETPORATE_TRANSFER;
LOOP
FETCH CUR_GETPORATE_TRANSFER INTO V_CUR_GETPORATE_TRANSFER;
EXIT WHEN CUR_GETPORATE_TRANSFER%NOTFOUND;

VN_RCTPRICE := V_CUR_GETPORATE_TRANSFER.RATE_OR_AMOUNT;
END LOOP;
CLOSE CUR_GETPORATE_TRANSFER;
END LOOP;
CLOSE CUR_GETTRANSFERID_TRANSFER;
END LOOP;
CLOSE CUR_GETTRANSID_TRANSFER;

*/


--END IF;
---------------------------
--------------------------------------------------------------------------------------


INSERT INTO XXCCST_PH_HEADERS
(PH_HEADERS_ID, PH_HEADERS_DATE, COST_PERIOD_NAME, COST_ORG_ID,
SHIPMENT_HEADER_ID, SHIPMENT_LINE_ID, PO_NO, PARTNO,
INVENTORY_ITEM_ID, RECEIPT_NUM, RECEIPT_LINE_NUM, RECEIPT_DATE,
QUANTITY, VENDOR_NO, PO_RATE, CURRENCY_CODE, UOM_CODE,
EXCISE_INVOICE_NO, MATCODE, ORGANIZATION_CODE, ORGANIZATION_ID,
SUBINVENTORY, CARD_TYPE, STORE_CD, BIN_LOC, STATUS, EXCH_RATE,
NET_RATE, NET_RATE_BASIC, NET_RATE_MVAT, NET_RATE_STOFF,
NET_RATE_FACTOR, NET_RATE_PIA_ACT, NET_RATE_PIA_PROV,
NET_RATE_OCTROI, GRR_QTY, GRR_RECEIPT_QTY, GRR_DELIVERED_QTY,
GRR_RTR_QTY,ATTRIBUTE1, ATTRIBUTE2,
CREATED_BY, CREATION_DATE,
LAST_UPDATED_BY, LAST_UPDATE_DATE, LAST_UPDATED_LOGIN, REQUEST_ID,
PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE,
RATE_TYPE,RECEIPT_SOURCE)
VALUES (V_PH_HEADERS_SEQ, SYSDATE, P_COST_PERIOD_NAME, P_COST_ORG_ID,
V_CUR_JA_RCV_SHP_LINES.SHIPMENT_HEADER_ID,
V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID,
V_CUR_JA_RCV_SHP_LINES.PO_NO, VC_PRTNO,
V_CUR_JA_RCV_SHP_LINES.ITEM_ID, V_CUR_JA_RCV_SHP_LINES.RECEIPT_NUM,
V_CUR_JA_RCV_SHP_LINES.RECEIPT_LINE_NUM,
TRUNC (V_CUR_JA_RCV_SHP_LINES.RECEIPT_CREATION_DATE), VN_CALRECDQTY,
V_CUR_JA_RCV_SHP_LINES.VENDOR_ID,
/* V_CUR_RCV_TRANS_DEL.PO_UNIT_PRICE*/
VN_RCTPRICE,
V_CUR_JA_RCV_SHP_LINES.CURRENCY_CODE,
V_CUR_JA_RCV_SHP_LINES.UOM_CODE,
V_CUR_JA_RCV_SHP_LINES.EXCISE_INVOICE_NO, NULL,
V_CUR_JA_RCV_SHP_LINES.ORGANIZATION_CODE,
V_CUR_JA_RCV_SHP_LINES.SHIP_TO_ORG_ID, V_SUBINVENTORY,
V_LOC_SEGMENT1, V_LOC_SEGMENT2, V_LOC_SEGMENT3, V_STATUS,
V_CUR_JA_RCV_SHP_LINES.FX_EXCHANGE_RATE, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, VN_CALRECDQTY, VN_RECDQTY, VN_TOTDECLQTY,
VN_TOTRETVENQTY,VC_JPART,V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE,
P_CREATED_BY, P_CREATION_DATE, P_LAST_UPDATED_BY,
P_LAST_UPDATED_DATE, P_LAST_UPDATED_LOGIN, P_REQUEST_ID,
P_PROGRAM_APPLICATION_ID, P_PROGRAM_ID, P_PROGRAM_UPDATE_DATE,
V_CUR_PART_STATUS.RATE_TYPE,V_RECEIPT_SOURCE);

/*
IF VC_GRR_FOR_TAXES = 'C' OR VC_GRR_FOR_TAXES = 'D' THEN
IF V_CUR_JA_RCV_SHP_LINES.RECEIPT_SOURCE_CODE = 'INTERNAL ORDER' THEN
VN_CALRECDQTY := VN_CALRECDQTY_HOLD;
END IF;
END IF;
*/


COMMIT;

BEGIN
INSERT INTO XXCCST_PH_DETAILS
(PH_DETAILS_ID, PH_HEADERS_ID, COST_PERIOD_NAME, TAX_ID, TAX_NAME,
TAX_AMT, MODVAT_FLAG, TAX_TYPE, TAX_RATE, CURRENCY, CREATED_BY,
CREATION_DATE, LAST_UPDATED_BY, LAST_UPDATE_DATE,
LAST_UPDATED_LOGIN, REQUEST_ID, PROGRAM_APPLICATION_ID,
PROGRAM_ID, PROGRAM_UPDATE_DATE)
SELECT
XXCCST_PH_DETAILS_SEQ.NEXTVAL, V_PH_HEADERS_SEQ, P_COST_PERIOD_NAME,
TAX_ID, TAX_NAME,
ROUND ((TAX_AMOUNT * VN_CALRECDQTY / VN_CALSHIPQTY), 2),
MODVAT_FLAG,
DECODE
(SUBSTR (UPPER (TAX_NAME), 1, 4),
'DISC', 'DISC',
DECODE
(SUBSTR (UPPER (TAX_NAME), 1, 3),
'TDS', 'TDS',
DECODE
(SUBSTR (UPPER (TAX_NAME), 1, 7),
'FREIGHT', 'FREIGHT',
DECODE (SUBSTR (UPPER (TAX_NAME), 1, 3),
'WCA', 'TDS',
DECODE (SUBSTR (UPPER (TAX_NAME), 1, 6),
'OCTORI', 'OCTORI',
DECODE (SUBSTR (UPPER (TAX_NAME), 1, 4),
'MVAT', 'Sales Tax',
DECODE (SUBSTR (UPPER (TAX_NAME),
1, 2),
'PF', 'FREIGHT',
TAX_TYPE
)
)
)
)
)
)
),
TAX_RATE, CURRENCY, P_CREATED_BY, P_CREATION_DATE, P_LAST_UPDATED_BY,
P_LAST_UPDATED_DATE, P_LAST_UPDATED_LOGIN, P_REQUEST_ID,
P_PROGRAM_APPLICATION_ID, P_PROGRAM_ID, P_PROGRAM_UPDATE_DATE
FROM XXCCST_JA_RCT_TAX
WHERE SHIPMENT_LINE_ID = V_CUR_JA_RCV_SHP_LINES.SHIPMENT_LINE_ID
AND COST_ORG_ID =P_COST_ORG_ID;

COMMIT;
EXCEPTION
WHEN OTHERS THEN
P_RETCODE := SQLCODE;
P_ERRBUF := SQLERRM;

IF NVL (P_DEBUG_FLAG_SET, 'N') = 'Y' THEN
FND_FILE.PUT_LINE (FND_FILE.LOG,
'Error in '
|| P_PROC_NAME
|| ' Error Number:- '
|| P_RETCODE
|| ' Error Message:- '
|| P_ERRBUF);
END IF;

RAISE;
END;
END IF;

VC_JPART := NULL;
/*
END LOOP;

CLOSE CUR_RCV_SHP_LINES;
*/
END IF; --1 for ignoring HHP receipts against ISO and ICISO.

V_CUR_RCV_TRANS_DEL.TRANSACTION_TYPE := NULL;
V_CUR_RCV_TRANS_DEL.SUBINVENTORY := NULL;
V_CUR_RCV_TRANS_DEL.QUANTITY := NULL;
V_CUR_RCV_TRANS_DEL.ASSET_INVENTORY := NULL;
V_CUR_RCV_TRANS_RTR.TRANSACTION_TYPE := NULL;
V_CUR_RCV_TRANS_RTR.QUANTITY := NULL;
V_CUR_JA_RCV_MAIN.SHIPMENT_HEADER_ID := NULL;
V_CUR_JA_RCV_MAIN.SHIPMENT_LINE_ID := NULL;
V_CUR_JA_RCV_MAIN.COST_ORG_ID := NULL;
-------------------------------------------------------------------------------------------------------
/* MK Following not in CSS and LHP */

V_SUBINVENTORY := NULL;
V_LOC_SEGMENT1 := NULL;
V_LOC_SEGMENT2 := NULL;
V_LOC_SEGMENT3 := NULL;
-------------------------------------------------------------------------------------------------------
END LOOP;

/*
IF NVL (P_DEBUG_FLAG_SET, 'N') = 'Y' THEN
FND_FILE.PUT_LINE (FND_FILE.LOG, ' Indigenous Purchase History Inseration Process End');
END IF;
*/
CLOSE CUR_JA_RCV_SHP_LINES;
-------------------------------------------------------------------------------------------------------
END LOOP;

CLOSE CUR_JA_RCV_MAIN;
-------------------------------------------------------------------------------------------------------
EXCEPTION
WHEN OTHERS THEN
P_RETCODE := SQLCODE;
P_ERRBUF := V_CUR_JA_RCV_MAIN.SHIPMENT_LINE_ID || ' ' || SQLERRM;

IF NVL (P_DEBUG_FLAG_SET, 'N') = 'Y' THEN
FND_FILE.PUT_LINE (FND_FILE.LOG,
'Error in '
|| P_PROC_NAME
|| ' Error Number:- '
|| P_RETCODE
|| ' Error Message:- '
|| P_ERRBUF);
END IF;

RAISE;
END PH_DOWNLOAD_HIST_90;

Tom Kyte
August 20, 2008 - 10:27 am UTC

good gosh, that is a lot of code

what did you expect anyone to do with this?

see above for my original response.

Continuation of the previous query

krishna, August 20, 2008 - 11:40 am UTC


Dear Tom,

Thanks for your reply. Our undo retention is set to 10,000 seconds and our undo tablespace is 28 GB. I have failed to fix this, since it is a huge amount of code and I am not able to track down where the problem is; moreover, I have tuned a few queries in this package, but still have the same problem.

Any tips appreciated ??

Regards

krishna
Tom Kyte
August 20, 2008 - 12:44 pm UTC

I already told you where you need to look.

I would say either

a) you take more than 10,000 seconds
b) 28gb is not sufficient

use v$undostat to determine which.
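
A sketch of using v$undostat to decide between (a) and (b): compare the longest query to the 10,000-second retention, and the peak undo generation rate to the 28 GB tablespace (begin_time/end_time are DATEs, so the difference times 86400 gives seconds):

select max(maxquerylen) longest_query_secs,
       max(undoblks / ((end_time - begin_time) * 86400)) peak_undo_blocks_per_sec
  from v$undostat;

-- undo space needed (bytes) is roughly:
--   peak_undo_blocks_per_sec * undo_retention * db_block_size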

Fetching across commits: a case for failure

Duke Ganote, August 22, 2008 - 9:48 am UTC

"Fetching across commits" first became "obviously bad" for me when using the DBMS_HS_PASSTHROUGH package (HS = Heterogeneous Services).

The Oracle docs provide an example of using the package
http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10764/majfeat.htm#1006051
I tweaked it for my environment (a connection to DB2 v7).
* First, I'll show the DBMS_HS_PASSTHROUGH implementation
* Second, the implicit HS implementation
* Third, the DBMS_HS_PASSTHROUGH implementation when 'fetching across commits'
* Fourth, the implicit HS implementation when fetching across commits

Bottom line: Fetching across commit with DBMS_HS_PASSTHROUGH won't work. COMMIT closes the cursor with the heterogeneous database! You only get one row fetched.

DBMS_HS_PASSTHROUGH implementation
     DECLARE
        val  VARCHAR2(100);
        c    INTEGER;
        nr   INTEGER;
     BEGIN
       c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@test_db2_v7_via_HS;
       DBMS_HS_PASSTHROUGH.PARSE@test_db2_v7_via_HS(c,
         'select ACCTCODE from CID1DBA.ACCAOR');
       LOOP
          nr := DBMS_HS_PASSTHROUGH.FETCH_ROW@test_db2_v7_via_HS(c);
          EXIT WHEN nr = 0;
          DBMS_HS_PASSTHROUGH.GET_VALUE@test_db2_v7_via_HS(c, 1, val);
          DBMS_OUTPUT.PUT_LINE(val);
       END LOOP;
       DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@test_db2_v7_via_HS(c);
     END;
     /
     10000
     10000
     10000
     10000
     10000
     10000
     22957
     22957
     22958
     
     PL/SQL procedure successfully completed.


Implicit HS implementation
  
     DECLARE
        val  VARCHAR2(100);
        c    INTEGER;
        nr   INTEGER;
     BEGIN
       FOR REC IN ( select ACCTCODE
                      from CID1DBA.ACCAOR@test_db2_v7_via_HS )
       LOOP
         DBMS_OUTPUT.PUT_LINE(rec.ACCTCODE);
       END LOOP;
     END;
     /
     10000
     10000
     10000
     10000
     10000
     10000
     22957
     22957
     22958
     
     PL/SQL procedure successfully completed.


DBMS_HS_PASSTHROUGH implementation when 'fetching across commits'
  1  DECLARE
  2     val  VARCHAR2(100);
  3     c    INTEGER;
  4     nr   INTEGER;
  5  BEGIN
  6    c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@test_db2_v7_via_HS;
  7    DBMS_HS_PASSTHROUGH.PARSE@test_db2_v7_via_HS(c,
  8      'select ACCTCODE
  9       from   CID1DBA.ACCAOR');
 10    LOOP
 11      nr := DBMS_HS_PASSTHROUGH.FETCH_ROW@test_db2_v7_via_HS(c);
 12      EXIT WHEN nr = 0;
 13      DBMS_HS_PASSTHROUGH.GET_VALUE@test_db2_v7_via_HS(c, 1, val);
 14      DBMS_OUTPUT.PUT_LINE(val);
 15      COMMIT; -- THIS IS THE ONLY CHANGE!!!!!!!
 16    END LOOP;
 17    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@test_db2_v7_via_HS(c);
 18* END;
whse2000:prot\qtview> /
10000
DECLARE
*
ERROR at line 1:
ORA-28550: pass-through SQL: cursor not found
ORA-06512: at line 11


Implicit HS implementation when fetching across commits
  1  DECLARE
  2     val  VARCHAR2(100);
  3     c    INTEGER;
  4     nr   INTEGER;
  5  BEGIN
  6    FOR REC IN ( select ACCTCODE
  7                   from   CID1DBA.ACCAOR@test_db2_v7_via_HS)
  8    LOOP
  9      DBMS_OUTPUT.PUT_LINE(rec.ACCTCODE);
 10      COMMIT;  -- THIS IS THE ONLY CHANGE!!!!!!!!!
 11    END LOOP;
 12* END;
whse2000:prot\qtview> /
10000
10000
10000
10000
10000
10000
22957
22957
22958

PL/SQL procedure successfully completed.


Note: I'm using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

clarification

SH, October 22, 2008 - 4:32 am UTC

Hi Tom,

Would like to clarify what the 'before' image is when a commit is triggered. What value of the 'before' image is stored in the undo tablespace - a snapshot of the data block, or the change information of the SQL statement (e.g. delete from x where rowid=xxx, update x set value=before_change)? How does Oracle reconstruct the data from the rollback?

If the 'before' image refers to a snapshot of the data block, does that mean all the rows inside that data block (there could be a few rows in one data block) are considered 'dirty' from Oracle's point of view? Hope you understand my question. Thanks in advance.

Regards
Junior DBA - SH


Tom Kyte
October 22, 2008 - 8:51 am UTC

the 'before' image is created way before the commit

when you do "update t set x = 5 where y = 6", it'll save in undo the previous value of x - before it does the update. Then it'll do the update and change x to 5 and put into REDO the after image - the fact that the row should have x=5.


It is just enough information to put the row back. Think of it like this

if you INSERT a row - undo will have "delete + rowid" - to un-insert the row.

if you DELETE a row - undo will have "insert this rowid - with these values" to re-insert the row

if you UPDATE a row - undo will have "update this rowid - set these columns to these values" to un-update the row


only the information needed to undo the change is stored.
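
A rough way to watch this happen (a sketch; t, x and y are the hypothetical names from the answer above) is to check v$transaction for your own session after the update - used_urec counts undo records and used_ublk counts undo blocks:

update t set x = 5 where y = 6;

select used_ublk, used_urec
  from v$transaction
 where addr = ( select taddr
                  from v$session
                 where sid = sys_context('userenv','sid') );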

before image stores in bytes or block size

sh, October 24, 2008 - 6:06 am UTC

Hi Tom,

Thanks for the explanation. I am still confused about the actual size of the 'before' image being stored in the undo segment. Based on the example below, will 2 bytes (the previous value of x) be stored in the undo tablespace if x is declared as CHAR(2) - and not one whole data block (db_block_size=8192)? Thanks

example:
when you do "update t set x = 5 where y = 6", it'll save in undo the previous value of x - before it does the update. Then it'll do the update and change x to 5 and put into REDO the after image - the fact that the row should have x=5.

Regards
Junior DBA - SH
Tom Kyte
October 24, 2008 - 1:31 pm UTC

it will only store the minimum needed to put the values back. It does NOT record blocks, it records things like:


if you INSERT a row - undo will have "delete + rowid" - to un-insert the row.

if you DELETE a row - undo will have "insert this rowid - with these values" to re-insert the row

if you UPDATE a row - undo will have "update this rowid - set these columns to these values" to un-update the row



so, if you update that row, you will store in the undo segment the rowid of the row, and the value the column should be put back to.

Does undo for MERGE/UPDATE contain unchanged values?

Stew Ashton, October 26, 2008 - 6:10 am UTC


When doing a MERGE...WHEN MATCHED THEN UPDATE, the statement has to SET all the columns whose values might change, even if the source and target values happen to be the same in a particular row. For example,
create table dest_t
as select level key1
, 'Value 1 original ' || level val1
, 'Value 2 original ' || level val2
from dual connect by level <= 3;
/
--create table succeeded.

create table source_t as select * from dest_t;
/
--create table succeeded.

update source_t set val1 = 'Value 1 changed !!!' where key1 = 1;
/
--1 rows updated

update source_t set val2 = 'Value 2 changed !!!' where key1 < 3;
/
--2 rows updated

merge into dest_t d
using source_t s
on(d.key1 = s.key1)
when matched then update set d.val1 = s.val1, d.val2 = s.val2;
/
--3 rows merged

select * from dest_t;
/
--KEY1         VAL1                   VAL2               
-------------- ---------------------- -------------------
--1            Value 1 changed !!!    Value 2 changed !!! 
--2            Value 1 original 2     Value 2 changed !!! 
--3            Value 1 original 3     Value 2 original 3  
--
--3 rows selected

So in row 2 only one value really got changed and in row 3 no values really got changed.

My question is: on row 2, will the UNDO contain the column whose value didn't really change? If so, is there any way to "streamline" that behavior so that only truly changed values are recorded?

P.S. I do know how to eliminate the completely unchanged row by using your "compare two tables" code from https://asktom.oracle.com/Misc/oramag/on-injecting-and-comparing.html
Tom Kyte
October 26, 2008 - 10:25 am UTC

undo has to be generated - for read consistency and locking to happen correctly.
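
That said, one way to avoid touching the fully unchanged rows at all - not part of Tom's answer, but a sketch against Stew's own tables using the null-safe DECODE comparison - is to filter the WHEN MATCHED branch:

merge into dest_t d
using source_t s
on (d.key1 = s.key1)
when matched then update
   set d.val1 = s.val1, d.val2 = s.val2
 where decode(d.val1, s.val1, 0, 1) = 1   -- decode treats null = null as a match
    or decode(d.val2, s.val2, 0, 1) = 1;

With this filter, row 3 from the example generates no undo at all; row 2 still records both columns, since undo is written per touched row, not per truly-changed value.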

thanks for the clarification

SH, October 28, 2008 - 5:05 am UTC

Hi Tom,

Thanks a lots for the clarification. Appreciate it.

Regards
Junior DBA - SH

ORA-01555 on a table that hasn't been updated for over 2 days

Lisa Li, March 10, 2009 - 2:36 pm UTC

Hi, Tom,

I have a detail table in my data warehouse database. We used a direct path load to load about 50 million records into it over two days ago; there have been no other updates/inserts/deletes since then. Users have done many scans against it during their stress test, so I would think this has forced block cleanout for the table. But today, when our developer ran his 8-hour summary process, he got an ORA-01555 error on this detail table. Here is the main loop for this process:

cursor: select ... from detail_table where insert_date = current_date group by ...
loop over cursor
   process data, insert into or update summary_tables
   commit
end loop

Our developer says that if he commits after each cursor record, it works fine, but if he commits after many records, he always gets the ORA-01555 error on detail_table.
I read one of your other old threads, http://asktom.oracle.com/pls/asktom/f?p=100:11:5698552772125791::::P11_QUESTION_ID:895410916429
where you mentioned that even if the whole data block is not revisited at all, after some duration of time, the block will be known to be "old enough".
My question is: do you think my problem is still because of delayed block cleanout? How long is this duration before I can be sure all blocks are cleaned out?
Thank you very much for your help!


Tom Kyte
March 11, 2009 - 2:53 am UTC

sure it is, because full scans tend to not actually clean out the blocks in current releases.

it'll happen if they commit after each row as well.

... you mentioned that even if the whole data block is not revisited at all, after
some duration of time, the block will be known to be "old enough". .... and that'll be after the 1555 happened.

block cleanout may not be done by a full table scan ?

Hemant K Chitale, March 12, 2009 - 4:43 am UTC

You said "because full scans tend to not actually clean out the blocks in current releases. "

How and when (which release) block cleanout "rules" been changed ?

ORA-01002 in 10.2.0.3.0 database when fetching more than 30000 rows in a table

Surendra Deora, April 28, 2009 - 12:54 pm UTC

Hi Tom,

We have a strange problem here. We have a program that works fine in other instances, but in one of our instances we get a problem when the main cursor tries to fetch and process more than 30,000 records. We do not get any issue when we add a rownum < 30000 condition to the main cursor, or with a lower count.

Can you help us diagnose whether this is a database problem or not?

The error is as below:
ORA-01002: fetch out of sequence

This error comes when the cursor is being parsed, not during any processing, and the cursor is not a FOR UPDATE cursor.

Thanks,
Surendra
Tom Kyte
April 28, 2009 - 12:56 pm UTC

something is causing the cursor to be closed - do you have a code snippet to look at ?

ORA-01002 in 10.2.0.3.0 database when fetching more than 30000 rows in a table

Surendra Deora, April 28, 2009 - 1:44 pm UTC

Hi Tom,

I am sure the cursor was not opened at all, as I have messages printed in the logs when the processing in the loop starts.

What can cause the cursor to be closed even before it is started?

As I told you earlier, the code works if we put a limit on the number of records fetched in the cursor with a rownum condition. Also, the code works fine in other instances.

Below is a sample code of the cursor:


CURSOR member_cur IS
   SELECT rowid,..,.., FROM test WHERE NVL(validation_status, 'VALID') = 'VALID';
.
BEGIN
.
.
.
   OPEN member_cur;
   LOOP
      FETCH member_cur BULK COLLECT
        INTO member_rec LIMIT 1000;
      EXIT WHEN member_rec.COUNT = 0;
      FOR cg IN 1 .. member_rec.COUNT LOOP
         ... processing....
         ... processing....
         ... processing....
      END LOOP;
      COMMIT;
      member_rec.delete;

   END LOOP;
   close member_cur;





Thanks,
Surendra
Tom Kyte
April 28, 2009 - 4:14 pm UTC

.. Also the
code works fine in other instances.
...

that is so "not relevant"

This code:

select * from t where x = 42;

runs perfectly well on 999,999 databases. It fails on one instance (out of a million!!!!)

ops$tkyte%ORA10GR2> select * from t where x = 42;
select * from t where x = 42
                      *
ERROR at line 1:
ORA-01722: invalid number


Strange, isn't it? Not really: x is a varchar, and on that one database there is an error in the data - x cannot be converted to a number without failing.


fill us in on the "processing" bit - if you just code:
    CURSOR member_cur IS
      SELECT rowid,..,.., FROM test WHERE NVL(validation_status, 'VALID') = 'VALID';
.
BEGIN
    OPEN member_cur;
    LOOP
      FETCH member_cur BULK COLLECT
        INTO member_rec LIMIT 1000;
      EXIT WHEN member_rec.COUNT = 0; 
      FOR cg IN 1 .. member_rec.COUNT LOOP
            null;
      END LOOP; 
      COMMIT;
      member_rec.delete;
    END LOOP; 
    close member_cur;



does it reproduce? If not, add in processing - bit by bit - to see what triggers it - the cause will be there.

Fetch across commit: Oracle approved?

Gary, May 18, 2009 - 11:51 pm UTC

The following document (PL/SQL User's Guide and Reference 10GR2):

http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14261/sqloperations.htm#i3160

contains a workaround allowing you to fetch across commits in a loop on a FOR UPDATE query. It says:

"If you want to fetch across commits, use the ROWID pseudocolumn to mimic the CURRENT OF clause. Select the rowid of each row into a UROWID variable, then use the rowid to identify the current row during subsequent updates and deletes."

It follows up with an example. Should I be horrified? Filled with dread? Or is there some reason that this is OK in this case?

Tom Kyte
May 23, 2009 - 11:27 am UTC

that code is setting itself up for an ora-1555 easily.

excessively bad practice, should not be done.


the comment in there is entirely wrong too

... Because the fetched rows are not locked by a FOR UPDATE clause, other users might unintentionally overwrite your changes. The extra space needed for read consistency is not released until the cursor is closed, which can slow down processing for large updates. ...


The first bit is correct, but it should read "since you don't lock your data in this case, you have thrown transactional integrity out the window".

that last bit about "extra space" is entirely wrong. As soon as you commit, your undo space is free for reuse - and therein lies the problem: you yourself will likely need that undo again, and if it has been reused, you will fail with a 1555.


What if I commit for every 100 rows or so?

Guruparan, August 06, 2009 - 10:11 am UTC

Thanks, Tom, for your suggestions. Really useful.

I have code which causes the ORA-01555 error, but not every time it runs (it was fine for a long time; now it errors out frequently, causing major concern). After going through your suggestions, I came to know that I am doing a fetch across commit. I am looking at ways to correct it.

* Can I commit for every 100 rows?

Like
for i in (select rownum rn, a.*
            from emp a
           where status = lv_some_status)
loop
  delete
    from emp
   where emp_id = i.emp_id;

  if (mod(i.rn, 100) = 0)
  then
     commit;
  end if;
end loop;

Tom Kyte
August 06, 2009 - 10:59 am UTC

can you?

sure

should you?

almost certainly not.


that code should be erased, you would find

delete from emp where status = lv_some_status;

to be the most efficient (generates the least total amount of undo, redo, uses the least number of IO's, the least amount of cpu, etc) approach ever (I am assuming that emp_id is the primary key)

In a code review, we would first enjoy making fun of such an expensive approach (the slow by slow, let us query the table twice and do lots of extra work) before erasing it and replacing it with a single sql statement.

What if I commit for every 100 rows or so?

Guruparan, August 07, 2009 - 1:32 am UTC

Thanks Tom for your reply.

But I have a lengthy story. Just for brevity, I have pasted a code like that.

I have a cursor which fetches data from EMP table, open the cursor in a FOR LOOP, deletes some eight or ten child tables. And finally deletes the data from EMP table too. Then COMMIT with in the loop. We hit ORA-01555. The entire delete process takes so much time to run (four to five hours).

After going through your replies on "snapshot too old error". We increased, UNDO retention space (from 200 MB to 650 MB) and undo retention time (from 900 seconds to 9000 seconds). Still we could not get rid of ORA-01555. Then we realized this is because of "fetch across commit".
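
A sketch of the set-based version of that process (the child table and key names are invented; the real code would have one DELETE per child table): delete the children, then the parent, and commit once.

declare
   lv_some_status emp.status%type := 'INACTIVE';   -- hypothetical status value
begin
   delete from emp_child1
    where emp_id in (select emp_id from emp where status = lv_some_status);
   delete from emp_child2
    where emp_id in (select emp_id from emp where status = lv_some_status);
   -- ... one such delete per child table ...
   delete from emp
    where status = lv_some_status;
   commit;   -- one transaction; a failure simply rolls back to a consistent state
end;
/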

Fetch Across Sessions

Mohan Pyare, February 23, 2011 - 7:48 pm UTC

Here is a question

Session 1:
------------
create table T1 ( id number);
insert into T1 values ( 1) ;
COMMIT;


Now again Session 2 is launched and also Session 3 at almost the same time

Session 2:
------------
DECLARE v number;
BEGIN
select id into v from t1;
dbms_OUTPUT.PUT_LINE ( 'Before_Sleep ' || V);
DBMS_LOCK.sleep(60);
select id into v from t1;
dbms_OUTPUT.PUT_LINE ( 'After_Sleep ' || V);
END;


Session 3
---------
update t1 set id = 20;
commit;


The question is: the first select in Session 2 brings back "1", then it sleeps...
Meanwhile, Session 3 updates the value to "20" and commits.
So will the second select in Session 2 bring back the value 1 or 20? I see it bringing back the value 20.

Tom, why does this happen? Shouldn't Session 2 bring back the value "1", for read consistency's sake, within the same transaction?
Tom Kyte
February 24, 2011 - 12:34 pm UTC

read consistency by default is at the STATEMENT level; by default you are in read committed isolation, and each statement (each SQL statement) issued by you is read consistent.

If you want it at the transaction level, you would use the isolation mode known as serializable.
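
A minimal sketch of that: if Session 2 starts its transaction in serializable mode, both SELECTs see the data as of the transaction's start and the second one prints 1, not 20:

set transaction isolation level serializable;

declare
   v number;
begin
   select id into v from t1;
   dbms_output.put_line('Before_Sleep ' || v);
   dbms_lock.sleep(60);
   select id into v from t1;
   dbms_output.put_line('After_Sleep ' || v);   -- still 1, even after Session 3 commits 20
end;
/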

Certainly Appreciate Your Answers

Mohan Pyare, February 24, 2011 - 2:01 pm UTC

Thanks Again Tom..

Tung NS, October 11, 2019 - 4:34 am UTC

Hi TOM,

I have a process like:

--main_procedure
for dept in (select ID from department)
loop
  child_procedure(dept.ID);
end loop;

--child_procedure(p_dept_id):
delete from big_table where dept_id = p_dept_id and year = 2019;
insert into big_table select ... from other_big_table;
commit;


The department table has around 800 records, and child_procedure completes in between 30s and 1 min, making the total runtime around 10 hrs (I can break the departments into multiple parts and run several main_procedures in parallel to speed up the process). I placed the commit in child_procedure to release the locks on big_table for each department after it finishes.

Is this a "good" approach, or will it throw an ORA-01555 error?
Connor McDonald
October 21, 2019 - 11:54 am UTC

Just get the 800 records in advance so you don't need to loop around the cursor, ie,

declare
  my_depts sys.odcinumberlist;
begin
  select id
  bulk collect into my_depts
  from department;

  for i in 1 .. my_depts.count loop
     child_procedure(my_depts(i));
  end loop;
end;


That way you won't get into ora-1555 hassles