Execute immediate and rollback segments

Question and Answer
Tom Kyte

Thanks for the question, Akil.

Asked: December 28, 2002 - 1:29 pm UTC

Last updated: June 07, 2007 - 2:43 pm UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Hi Tom,

I wanted to drop all the indexes of my schema. So I tried the following code:

begin
for i in (select index_name from user_indexes) loop
execute immediate 'drop index '||i.index_name||' ';
end loop;
end;
/

It started dropping indexes, but after some time I got an ORA-01555 error. I had about 500 indexes to drop, and it gave me the ORA-01555 midway through. When I re-ran the code, it dropped the rest of the indexes.

I was surprised at this error. I could not understand what rollback segments could possibly have to do with a drop index command. Can you please explain the mechanism? Thanks for your help.

Regards,

Akil.


and Tom said...

The ORA-01555 came from the query against the data dictionary (user_indexes and its underlying tables).

You are committing in a cursor for loop ;) DDL commits -- the changes it commits were data dictionary updates against the very tables you were querying. You caused the 1555 yourself.



Consider if this was the process:

for x in ( select * from t )
loop
delete from t where pk = x.pk;
commit;
end loop;

You are basically doing that. A "safe" way to do this process would be:


begin
loop
select index_name into l_index_name from user_indexes where rownum=1;
execute immediate 'drop index ' || l_index_name;
end loop;
exception
when no_data_found then null;
end;


Here, you do not keep the cursor open across commits.
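
An alternative sketch (mine, not from the original answer) that also avoids fetching across commits is to snapshot all the names into a collection before the first DDL runs -- BULK COLLECT is available in 8i:

```sql
declare
    type name_tab is table of user_indexes.index_name%type;
    l_names name_tab;
begin
    -- the cursor opens, fetches everything, and closes right here --
    -- before any DDL (and hence any commit) happens
    select index_name bulk collect into l_names from user_indexes;

    for i in 1 .. l_names.count loop
        execute immediate 'drop index "' || l_names(i) || '"';
    end loop;
end;
/
```

As with the loop above, this will still fail when it hits an index used to enforce a constraint.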


Comments

Wonderful explanation.

A reader, December 28, 2002 - 5:12 pm UTC

Tom,

You mean to say the code should be written this way:

1 declare
2 l_index_name user_indexes.index_name%TYPE;
3 begin
4 for i in (select index_name from user_indexes)
5 loop
6 select index_name into l_index_name from user_indexes where rownum=1;
7 --dbms_output.put_line('Index name : '|| i.index_name);
8 execute immediate 'drop index ' || i.index_name;
9 end loop;
10 exception
11 when no_data_found then null;
12* end;


BTW, What is the purpose of this

select index_name into l_index_name from user_indexes where rownum=1;

query.

Thanks


Tom Kyte
December 29, 2002 - 9:34 am UTC

yes -- pretty much.

The select .. where rownum=1

just opens the implicit cursor, gets the first index name, drops it and then does it all over again. Eventually, you'll have no indexes left and the select into will raise no data found -- ending the process.

Misread ?

Frank, December 29, 2002 - 10:27 am UTC

Did I misread, or did you misread, the previous posting?
I don't think it is right.
You have to drop the "for i in (select ...)" bit!

hth,
Frank

Tom Kyte
December 29, 2002 - 10:51 am UTC

yes, I just glanced at it -- you are right -- that would get the ora-1555 again ;)


begin
loop
select index_name into l_index_name from user_indexes where rownum=1;
execute immediate 'drop index ' || l_index_name;
end loop;
exception
when no_data_found then null;
end;


is what I had -- to flesh it out:

1 declare
2 l_index_name varchar2(30);
3 begin
4 loop
5 select index_name into l_index_name from user_indexes where rownum=1;
6 execute immediate 'drop index ' || l_index_name;
7 end loop;
8 exception
9 when no_data_found then null;
10* end;



compiles and runs (but it'll fail of course when it hits an index used to enforce a constraint!)



clarification

A reader, December 29, 2002 - 3:37 pm UTC

Thanks for the great explanation Tom.

I just want to re-confirm one thing. I was thinking that a "for i in (select x from t) loop" will execute as follows:

1. The select statement will execute and fetch the rows into an implicit cursor.
2. The program control will loop through the cursor and execute the statements inside the loop.
3. Exit the loop at the last record.

Now, from your reply, I think that the select statement will execute once for every iteration of the loop and the cursor will be repopulated with fresh data.

Is that so? Is my understanding right? Can you please give details? Thank you so much for your lucid examples.


Tom Kyte
December 29, 2002 - 5:03 pm UTC

I see -- you think that a cursor gets the rows and then gives them to you. That is the problem here -- the cursor is getting ROWS as you fetch them, and eventually -- due to all of your commits -- it cannot get the rows anymore, since the rollback no longer exists. We need to take this construct:

begin
for i in (select index_name from user_indexes) loop
execute immediate 'drop index '||i.index_name||' ';
   -- a COMMIT against the changes made to the tables behind
   -- the user_indexes view happens here...

end loop;
end;


and remove the looping cursor -- for all of those commits of changes against the data you are querying are causing the 1555. I did that by recoding as an "infinite loop":

declare
l_index_name varchar2(30);
begin
loop
select index_name into l_index_name from user_indexes where rownum=1;
execute immediate 'drop index ' || l_index_name;
end loop;
exception
when no_data_found then null;
end;


that works by looping -- getting the first index it finds -- dropping it (and hence committing -- but that is OK, we don't have a cursor to fetch from anymore) and going back around until eventually all indexes are dropped and we get the no data found exception.



A reader, December 30, 2002 - 2:30 pm UTC

Thanks for your time, Tom. Sorry if I am not getting what you already said. I follow everything you say about how to fix the problem. But I am still confused about the cursor fetch mechanism. What do you mean by "the cursor is getting the rows as you fetch them"?

I can visualize only two ways of how this works:

1. The query executes ONCE and stores the result in a memory area called a cursor. Then we loop through the result set (cursor) and execute the remaining statements. The memory area is not affected by changes made to the base tables on disk. There is no scope for an ORA-1555 here (simply because the query does not get executed again).

2. The query executes ONCE FOR EACH ITERATION OF THE LOOP. So the cursor will be repopulated with fresh data every time. Naturally, there is scope for ORA-1555 because we are re-reading the modified data.

Earlier I was thinking it was like case 1. But now I am confused. I don't know if case 2 is right, because it does not make sense to re-read the data like that. But I cannot visualize any other way we can get ORA-1555. I shall be grateful if you can explain how it works. I hope I am clear about my question. Thank you.

Tom Kyte
December 30, 2002 - 3:09 pm UTC

1) is totally wrong. A cursor is just a "state". A pointer to the query plan in the shared pool, the bind variable values, a pointer perhaps to where in the query we last left off....


2) nope.

Have you ever read through a file a record at a time? Same concept applies in general for database tables. When you open a file, you don't "copy it". When you want to read record 2 -- you don't fseek back to the beginning -- you just pick up where you left off.


So, take a simple query like:


select * from t;


and t is a 500 block table with 10 rows / block.


You do this in sqlplus -- so you are fetching 15 rows/call by default (that's the default arraysize). Ok,

o sqlplus does the first fetch
o the server reads block 1 of the table, gets 10 rows
o the server reads block 2 of the table, gets 5 rows
o 15 rows are returned
o sqlplus does 14 more fetches that don't bother the server (the data is
  already in the client's process)
o sqlplus goes to fetch row 16 - over to the server
o the server gets block 2 again, gets the remaining 5 rows -- it REMEMBERED
where it left off
o the server gets block 3, gets 10 rows
o 15 rows are returned

o now at this point, someone issues "DELETE FROM T"; "COMMIT;". There are NO
rows in T anymore -- but your cursor will still see them all. Why? because
the UNDO for this delete was put in the rollback segments....

o sqlplus displays the 15 rows it just fetched
o sqlplus goes to fetch row 31 -- over to the server again
o the server gets block 3 -- notices "hey, the SCN on this block is too high
for me, it has changes I'm not allowed to see. Let me use the rollback data
to rollback this block to the point in time that it is OK for me to look at
it"
o the server reads from the rollback segment the necessary information, rolls
the block back....
o the server reads the 10 rows off of the block (the 10 deleted rows! they
  were undeleted for us).
o the same thing happens when we go to get the next 5 rows from block 4
o sqlplus gets the data and displays it

(above process repeated over and over)....

o sqlplus goes to fetch row 400 -- over to the server
o server gets the block -- finds the SCN isn't right for it -- goes to rollback
but discovers that the rollback that was generated by that delete is gone --
somebody OVERWROTE it (our rollback segments are too small). The query now
gets an ora-1555.
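
The walkthrough above can be provoked deliberately; here is a sketch (it assumes a deliberately small rollback/undo configuration, a table t, and a second session doing the delete -- none of which are from the original post):

```sql
-- session 1: open a cursor, then fetch slowly (e.g. a small arraysize)
variable rc refcursor
begin
    open :rc for select * from t;   -- the result set is "as of" right now
end;
/

-- session 2: delete everything and then churn the rollback segments
delete from t;
commit;
-- ...run enough other committed transactions that the undo generated
-- by the delete gets overwritten...

-- session 1: keep fetching from :rc; once a fetch needs undo that has
-- been overwritten, it fails with
-- ORA-01555: snapshot too old
```

Whether it reproduces depends entirely on how quickly the undo is reused, which is why the original poster only hit it midway through 500 drops.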




Excellent Explanation!!

Ashish, December 30, 2002 - 5:16 pm UTC


EXCELLENT

A reader, December 30, 2002 - 9:23 pm UTC


great explanation

A reader, December 30, 2002 - 9:34 pm UTC

Clears it as crystal. Thank you.

Splendid!

A reader, December 31, 2002 - 4:02 am UTC


A reader, December 31, 2002 - 11:15 am UTC

Good job Tom

Superb Stuff

Ik, December 31, 2002 - 11:23 am UTC


A reader, December 31, 2002 - 11:34 am UTC

Tom,

If this had been a regular table (SELECT in CURSOR and then UPDATE) would one get an ORA 1555 error? I mean, on a regular table it would go for a FTS since there is no where clause in the query and since the blocks would be read in a sequential manner there would be no reason to re-read the same blocks again.
I know that one should never COMMIT in a for loop but iam just curious.

Some thing like -
begin
for i in (select i_name from reg_table) loop
......
update reg_table set a = :bindvar1 where i_name = :bindvar;
COMMIT;
end loop;
end;

Thanks in advance....

regards

Tom Kyte
December 31, 2002 - 12:10 pm UTC

sure it could -- it doesn't even have to be against the same table -- I can show you an ORA-01555 against a table in a read-only tablespace -- caused purely by overcommitting...
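
Condensed, the questioner's pattern looks like this (column values are placeholders); with enough rows and small rollback segments, the open query can eventually fail to rebuild a read-consistent block image and raise ORA-01555, even though every individual update succeeded:

```sql
begin
   for i in ( select rowid rid, i_name from reg_table ) loop
      update reg_table set a = 'new value' where rowid = i.rid;
      commit;   -- each commit makes the undo for that update eligible
                -- for reuse -- undo the still-open cursor may need later
   end loop;
end;
/
```

A full table scan does not save you: the cursor must still see every block as of the moment the query began, so any block changed and committed after that point must be rolled back from undo before it can be read.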




Please Explain

Abhijit Mallick, February 18, 2005 - 7:12 am UTC

I am sending a stored procedure code. I am opening a CURSOR, deleting as well as doing a commit. But still it displays the intended data!!!! Plz have a look at that and kindly explain to me the behavior of CURSORS. I have been confused for a long time regarding this issue.


Thanks
Abhijit Mallick
/************* start of SP **************************/

Procedure TEST_CURSOR( P_EMPNO IN NUMBER)
IS
-- Purpose: To see where in the memory structure
-- Oracle stores data when a Cursor is declared/opened .

-- Person : Debashis Payin Date : Monday, 14 February 2005

CURSOR C_EMP IS
SELECT ENAME
FROM EMP
WHERE EMPNO = P_EMPNO ;

R_EMP C_EMP%ROWTYPE ;

BEGIN
OPEN C_EMP ;

DELETE FROM EMP; /* delete the table data */
COMMIT; /* do a COMMIT also !!!! */

LOOP
FETCH C_EMP INTO R_EMP;
EXIT WHEN C_EMP%NOTFOUND ;
DBMS_OUTPUT.PUT_LINE('The employee is : ' || R_EMP.ENAME );
END LOOP ;

EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error ' || SQLERRM);
END; -- Procedure TEST_CURSOR



/************* end of SP **************************/

My confusion is: after opening the cursor I deleted the table's data; as the cursor goes to fetch from the table, it is not supposed to get any rows, since all the records are already deleted and committed. Then how come the records are displayed by the cursor??

Tom Kyte
February 18, 2005 - 9:12 am UTC

plz? that seems to be something related to German postal codes, but I don't see the connection to Oracle.


The funny thing about this particular question is -- it is virtually the same example I use in Expert One on One Oracle to explain the very basic concept of read consistency and multi-versioning. You have just stumbled onto Oracle's most amazing feature!

You definitely need to read the concepts guide from cover to cover, but start here:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2414


You know now that "porting" from sqlserver to Oracle or vice versa -- is not something to be taken lightly (or to/from informix, db2, whatever). The fundamental principles of locking, concurrency control, how things *work* are totally different in all of these systems. Sure, they all "speak" SQL, but they do things very very very differently.





An analogy?...

A reader, February 18, 2005 - 12:02 pm UTC

This might help some people understand the way Oracle works with cursors.

Imagine yourself opening up a large document (this is your table) with your favorite word processor.

You queue the document to be printed (this is you opening your cursor).

You exit your word processor before the printer finishes printing your document, and you delete the file that contains the document. (This is you clobbering your table.)

You walk over to your printer and pick up your whole document. You can read it from front to back and all of it is there. It doesn't disappear because you deleted the file. It is a whole copy of all of your file, and you can do whatever you want with it.

What do you think, Tom? Is this a good analogy?

Tom Kyte
February 18, 2005 - 2:52 pm UTC

no -- i don't like it, because it sounds like we make a copy of your result set when you open the cursor -- there are not "two" copies.

It is fair to say that when you open a result set, the data that will be retrieved is "pre-ordained". At the moment you open it, we already know what the answer will be -- and we have not touched a single row. It is as if "time stops" for that result set. Regardless of what else happens in the database -- that result set will either

a) be returned as of the point in time it was opened (assuming normal isolation levels in play here)

b) return an error like ora-1555 or ora-8103

So, you can open:

select * from one_billion_row_table

at 9am this morning (and it'll open instantly -- fast, because we don't do any work until you actually make us do it with a fetch)... and then don't fetch anything -- but you go away.


You come back at 12 noon. And you finally fetch rows. While you were gone for 3 hours though, people have been inserting, updating and deleting from this table.

The rows you fetch -- all one billion of them -- will be "as of 9am"; you will see NO changes that occurred after the point in time your query began -- NONE.


It is as if time stops for that query. It will see no changes - none. All data returned is "as of" the same point in time.

Which is a good thing, consider this query:


select sum(bal) from bank;


Now, assume the table has:

my_checking_account $100
<millions of other rows>
my_savings_account $100


Now, suppose you fire off that query and it reads row 1, 2, 3.... It is in the middle of the rows. At that point in time - I walk up to an Automatic Teller Machine and transfer $50 from checking to savings, now we have:



my_checking_account $50
<millions of other rows>
my_savings_account $150


my select sum() is still running. It finally gets to the end of the table (as if they have a top and a bottom :)

In Oracle -- when we get to that last row, we'll notice "hey, this was not the value when we began, please put it back" . In Oracle we'll read $100 -- not $150.

The select sum(bal) will be returned to you -- and you get a value that actually existed in the database!


without this multi-versioning, the answer you get from the other databases? Well, it is off by at least $50 -- they will have read the $50 twice -- once from row 1 and again on the last row. They will give you a value that never existed at any point in time as the balance.


Can they give you the right answer? Sure, they would lock rows as they read them. So, when the select sum(bal) started -- it would lock rows as it hit them. We would not have been able to update row 1 -- we would block and wait and wait and wait and wait. Even worse, we might update the last row (that would work -- because the query hasn't gotten there yet) and then block trying to update the first row. Now the query hits the last row and finds it locked (and cannot read it -- they block on locked data -- Oracle does not). But now the query and the update are deadlocked with each other. The transfer transaction has been blocked and waiting for a long time, the read transaction has been working for a long time -- one of them will totally LOSE right now. One will get an error -- so sorry, you've totally deadlocked with something else, please try again. And the scenario begins all over again ;)
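
The "time stops" behaviour is easy to see for yourself in a single SQL*Plus session; a minimal sketch (the table and its contents are made up for illustration):

```sql
SQL> create table t as select rownum n from all_objects where rownum <= 5;

SQL> variable rc refcursor
SQL> begin
  2     open :rc for select n from t;   -- the answer is fixed at this point
  3  end;
  4  /

SQL> delete from t;
SQL> commit;

SQL> print rc
```

The print will still show the five original rows -- reconstructed from undo -- even though the table is now empty (assuming, of course, the undo generated by the delete has not yet been overwritten).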




Excellent Response

Sherin, April 28, 2005 - 3:57 am UTC

Just out of curiosity... If Oracle gives you data from a "single point in time", where does it get the data from if the table's data has already been deleted and committed?

Sherin

Tom Kyte
April 28, 2005 - 7:53 am UTC

undo segments (also known as rollback segments).

see
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#17882

Curious !!!

Sherin, April 28, 2005 - 7:37 am UTC

Tom,

You can ignore my "curiosity" -- I went through the link mentioned in the trail and figured that, in case of modification:

1. Oracle picks up the data from the rollback segment.
2. The rollback segment also contains recently committed data.
3. Once Oracle doesn't find the data in the rollback segment (i.e. when the data there has been overwritten by other DMLs), it gets upset and throws ORA-01555.

If I am not correct, then my 'curiosity' stays....

Sherin

Rollback within a loop

Gokul, May 29, 2007 - 3:22 am UTC

When we have a rollback within a nested cursor loop, we get the error ORA-01002: fetch out of sequence.
The requirement is: if a condition fails, we need to capture it as an error and roll back the previous transaction.

We are trying to replicate our requirement with the sample procedure below.
The requirement we have is to capture the status at different levels, therefore more than one loop is required,
and in the innermost loop, based on a specific set of criteria, we need to roll back all the previous transactions and capture the error details.
SQL> desc temp1
Name Type         Nullable Default Comments 
---- ------------ -------- ------- -------- 
COL1 NUMBER       Y                         
COL2 VARCHAR2(10) Y          

select * from temp1;

COL1 COL2
---- ----
   2 xyz
   3 abc
   1 efg
   4 lkm

SQL> desc temp2
Name Type         Nullable Default Comments 
---- ------------ -------- ------- -------- 
COL1 NUMBER       Y                         
COL2 VARCHAR2(10) Y       

select * from temp2;

COL1 COL2
---- ----
   2 xyz
   3 abc
   1 efg
   4 lkm

SQL> desc temp_target
Name     Type          Nullable Default Comments 
-------- ------------- -------- ------- -------- 
TAR_COL1 NUMBER        Y                         
TAR_COL2 VARCHAR2(100) Y                         

SQL> desc temp_error
Name       Type          Nullable Default Comments 
---------- ------------- -------- ------- -------- 
ERROR_NO   NUMBER        Y                         
ERROR_DESC VARCHAR2(100) Y       


Here is procedure.

CREATE OR REPLACE PROCEDURE test_num AS
   TYPE cur_ref IS REF CURSOR;
   cur_temp1 cur_ref;
   cur_temp2 cur_ref;
   v_temp1 temp1%ROWTYPE;
   v_temp2 temp2%ROWTYPE;
BEGIN
   OPEN cur_temp1 FOR
    SELECT *
    FROM   temp1; --First loop
   LOOP
    FETCH cur_temp1
     INTO v_temp1;
    EXIT WHEN cur_temp1%NOTFOUND;
    OPEN cur_temp2 FOR
     SELECT *
     FROM   temp2
     WHERE  col1 = v_temp1.col1; -- second  loop
    LOOP
     FETCH cur_temp2
        INTO v_temp2;
     EXIT WHEN cur_temp2%NOTFOUND;
     dbms_output.put_line('After second loop       ' ||
            'col1 -->' || v_temp2.col1 ||
            'col2 -->' || v_temp2.col2);
     INSERT INTO temp_target
     VALUES
        (v_temp2.col1, v_temp2.col2);
     IF (v_temp2.col1 = '3') THEN
        dbms_output.put_line('Before rollback');
        ROLLBACK;
        dbms_output.put_line('after rollback');
        INSERT INTO temp_error
        VALUES
         (v_temp2.col1, v_temp2.col2);
        COMMIT;
        dbms_output.put_line('after commit into error table');
     END IF;
    END LOOP;
    CLOSE cur_temp2;
   END LOOP;
   COMMIT;
   CLOSE cur_temp1;
END test_num;



On execution, it fails as below

SQL> exec test_num;
After second loop col1 -->2col2 -->xyz
After second loop col1 -->3col2 -->abc
Before rollback
after rollback
after commit into error table

begin test_num; end;

ORA-01002: fetch out of sequence
ORA-06512: at "REVCOGS.TEST_NUM", line 20
ORA-06512: at line 1

And we observe that when we remove the rollback, the above error does not occur.


Tom Kyte
May 30, 2007 - 10:27 am UTC

I do not understand your "transactional logic" at all - because the logic seems "really wrong"


do you understand what commit and rollback actually do?
and that databases ARE BORN TO JOIN
and that the logic above is really horrendous? I cannot imagine this is what you really intend.

here is your logic as it stands:

loop over rows randomly from t1
   loop over rows randomly from t2 where t2.col1 = t1.col1
        insert into t3;
        if t2.col1 = '3' 
        then
            ROLLBACK SET RANDOM SET OF ROWS WE JUST INSERTED - might 
            be every single row, might be some arbitrary set of rows
            because sometimes we commit.  But some UNKNOWN RANDOM SET OF ROWS
            will be rolled back.

            insert into error A SINGLE ROW (we just lost a bunch but we
            log a single row)
 
            COMMIT;
        end if;
   end loop
end loop
COMMIT;



basically, that is botched beyond repair.


If your last row you RANDOMLY HIT is a '3' record, you insert NOTHING into t3.

If on the other hand, the last row you RANDOMLY HIT is not a '3' record, you insert and commit into t3 all of the rows since the last time you RANDOMLY HIT a '3' record.


there is nothing good here. You are doing a do it yourself join (loops in loops) that is horrible. Your transactional logic makes no sense at all and leaves the database in sort of a random state after it is done.

How about you phrase your *requirement* in text...

oh, and tie it all back to "execute immediate and rollback segments" :)



Rollback within loop

Gokul, June 05, 2007 - 2:39 am UTC

Hi Tom,

Thanks for your response.

The sample code we provided above was to explain our problem. Yes, we understand commit and rollback,
and agree the example we provided is very trivial and could have achieved the results using joins. But unfortunately we cannot share the original code.

Basically we have to "rollback" the transactions based on certain business rules, i.e. based on business rule validation it is considered a failure/success.
On failure, we need to roll back and only capture the reason for the failure (and the error needs to be captured in an error table).

The above has to be looped through more than once, because we have logic something like this:

1. Quarter level validation ------ outer loop
2. Region level validation
3. Channel level validation
4. ... etc


Additionally based on the data of the outer loop we are dynamically creating the ref cursor for the inner loops.

We cannot avoid looping, because the user wants to validate and capture errors at each level.
Our issue is: when we roll back within an inner loop and then process the next record of the outer loop, this error occurs:

"ORA-01002: fetch out of sequence"

Now our query is simple: when we roll back within an inner loop, why does the outer loop's fetch give this error?

Thanks
Gokul
Tom Kyte
June 06, 2007 - 12:52 pm UTC

... But unfortunately we cannot share the original code. ...

but fortunately, since you are a developer, you can abstract the problem and post something that remotely resembles reality - else no one can comment on anything because we'd be commenting on SOMETHING YOU ARE NOT DOING IN REAL LIFE - which frankly does not make sense and would just be a waste of both of our time.

so, give us a better example that actually DEMONSTRATES YOUR LOGIC flow.

I would guess, you are using select for update, rollback closes all select for update cursors. But, purely a guess - since we have no idea what you are really doing.
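
If that guess is right, a tiny sketch (reusing temp1 from the posting above) would reproduce it -- ROLLBACK ends the transaction, and ending the transaction closes any open SELECT ... FOR UPDATE cursor, so the loop's next fetch fails:

```sql
begin
   for r in ( select col1 from temp1 for update ) loop
      rollback;   -- ends the transaction; the FOR UPDATE cursor is now
                  -- invalid, so the next fetch raises
                  -- ORA-01002: fetch out of sequence
   end loop;
end;
/
```

A ROLLBACK TO a savepoint set after the cursor was opened, by contrast, does not end the transaction, which is why the savepoint approach suggested below can work.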

A reader, June 07, 2007 - 3:06 am UTC

Dears,

"Basically we have to "rollback" the transactions based on certain Business rules ie based on business rule validation it is considered to be a failure/success.
On failure, we need to rollback and only capture the reason for failure(And error need to be captured in error table)"
-- -------------------------------------------


If you are in a loop and you want to roll back in case of error and also store the reason for the error, I would advise this:

declare
   -- your cursor here
begin
   loop
      -- set a savepoint here
      SAVEPOINT your_savep;
      -- do your plsql code here
      -- ..../...
      if <you are in error> then
         ROLLBACK TO your_savep;
         -- use an autonomous transaction to log your error
      end if;
   end loop;
   COMMIT;
end;
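
The "use an autonomous transaction to log your error" step could look something like this (the error table is taken from the thread; the procedure name is made up):

```sql
create or replace procedure log_error( p_no in number, p_desc in varchar2 )
as
   pragma autonomous_transaction;
begin
   insert into temp_error( error_no, error_desc )
   values ( p_no, p_desc );
   commit;   -- commits only this autonomous transaction; the caller's
             -- transaction, savepoints and open cursors are untouched
end;
/
```

Because the logging commit happens in its own transaction, the error row survives even if the caller later rolls back everything else.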


Tom Kyte
June 07, 2007 - 2:43 pm UTC

I want to hear from them what their actual logic is before commenting on "how"