Closing a FOR UPDATE cursor does not release the lock (Pro*C)

Question and Answer

Tom Kyte

Thanks for the question, Matias.

Asked: October 18, 2007 - 3:12 pm UTC

Last updated: November 02, 2007 - 11:47 am UTC

Version: 9.2.0

Viewed 10K+ times

You Asked

I want to lock some records from a table (say, SELECT bla FROM table WHERE some_records_conditions) so they cannot be modified by another transaction for a certain amount of time, process them, and then release them without doing a ROLLBACK or COMMIT... in fact, I don't modify those records, but I need them to remain consistent while they are being processed.
I'm using the SELECT...FOR UPDATE technique, but when I close the cursor, the lock remains active... the only way I found to solve this is doing a COMMIT or ROLLBACK... and, as I said, I don't want to ROLLBACK or COMMIT...
notes:
-No, I don't want to lock the entire table
-No, I don't want to duplicate table records
-...

The questions are:
-How do I effectively close the cursor and release the lock?
-Is CLOSE "lying" to me?

and Tom said...

Databases do not work that way.

Locks grow in a transaction, you accumulate them, you give them all up when you commit or rollback.

close is not lying to you - close does not release locks (just like when you close a statement handle that was an update - the updated rows are not unlocked).

Select for update - that actually under the covers - updates the rows.
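
You can see this with two sessions (a sketch against a hypothetical table T with columns ID and BLA, not from the original answer):

-- session 1: lock one row; the client may close its cursor afterwards,
-- the row stays locked
select bla from t where id = 1 for update;

-- session 2: this statement blocks until session 1 ends its transaction
update t set bla = bla + 1 where id = 1;

-- session 1: only a commit (or rollback) lets session 2 proceed
commit;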


If the data just needs to be "consistently viewed", you could

a) use SERIALIZABLE as your isolation level; the database will appear to be devoid of other users - you will see only the changes you make, no one else's.

b) use flashback query on the queries that need to be "as of" a certain point in time.
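
A minimal sketch of both options (assuming a hypothetical table T; the predicate stands in for the question's some_records_conditions):

-- (a) serializable isolation, set before the transaction starts
alter session set isolation_level = serializable;
-- or, for a single transaction:
set transaction isolation level serializable;

-- (b) flashback query: read the data "as of" a fixed point in time
select bla
  from t as of timestamp (systimestamp - interval '5' minute)
 where status = 'NEW';   -- stand-in for some_records_conditions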


Comments

Consistent Read

djb, October 20, 2007 - 8:36 am UTC

I'm guessing here, but it sounds like it's just one SELECT from one table. If that's the case then he would get a consistent read - locking would have no benefit in this case. It almost sounds like he's coming from the MS world where you *do* need to lock rows in a table to keep them consistent.

Matias Rodriguez, October 22, 2007 - 11:19 am UTC

Thanks Tom!

OK, if I use SERIALIZABLE as the isolation level: what if, while I'm (say, ORACLE is) OPENing the CURSOR (not using FOR UPDATE, of course) another user modifies the data? Could I get a "snapshot too old" error? (I don't want this either!)

On the other hand: is it completely crazy to think that a CLOSE should undo the "changes" made by an OPEN (particularly, locks)? If I explicitly UPDATE the rows, it's obvious that the lock must remain until COMMIT or ROLLBACK, but closing a pre-opened CURSOR... well, I (I, yes I) think it's a design decision, more than a "database must be".

Thanks again Tom!

Tom Kyte
October 23, 2007 - 11:17 am UTC

Yes, you could get a snapshot too old error; you would set your undo retention appropriately.

It is not 'sensible' to think that closing a cursor would release locks. Consider this sequence of events:

dbms_sql.open cursor
dbms_sql.parse (a statement )
dbms_sql.bind
dbms_sql.execute( that statement )
dbms_sql.close


now, that statement can be an insert, update, delete, merge OR SELECT.

select for update is just like UPDATE - it is just like update.
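
To make that concrete, here is a sketch (against a hypothetical table T with a numeric ID column, not from the original answer) of a DBMS_SQL cursor over a SELECT ... FOR UPDATE; closing the cursor does not give back the row lock:

declare
   c   integer := dbms_sql.open_cursor;
   n   integer;
begin
   dbms_sql.parse( c, 'select bla from t where id = :x for update',
                   dbms_sql.native );
   dbms_sql.bind_variable( c, ':x', 42 );
   n := dbms_sql.execute( c );    -- the matching rows are locked here
   dbms_sql.close_cursor( c );    -- the cursor is gone, the locks are not
   commit;                        -- only this (or a rollback) releases them
end;
/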

There are 2 possible outcomes

Antonio Vargas, October 23, 2007 - 5:46 am UTC

1. someone changes and commits and after that you reach the place where you ask for serializable ==> you see the changes

2. someone changes but does not reach their commit, then you ask for serializable ==> you don't see the changes even if they commit while you are working

Detail on why unlocking at cursor close is a bad idea

Gary, October 24, 2007 - 1:16 am UTC

"select for update is just like UPDATE - it is just like update. "

It is in Oracle because Oracle maintains its locks on the record itself rather than maintaining a separate list of 'locked records'.

"it is not 'sensible' to think that closing a cursor would release locks...."

To expand on this, you generally use SELECT...FOR UPDATE with the idea you may actually update the record. You want to hold that lock until you update or decide not to update.
The database can never know when you (or your program) may make that decision to not update, so it has to keep the lock until you tell it to release locks.
Closing the cursor isn't sufficient to tell it to release locks.
Firstly, a record could actually have been locked for update by several cursors within your session, so you'd have to record each locked record by cursor, not just by session. A very large, and unmanageable, lock list.
Secondly, expanding Tom's list, we could have
dbms_sql.open cursor 
dbms_sql.parse  (a statement ) 
dbms_sql.bind   (optional)
dbms_sql.execute( that statement ) 
dbms_sql.fetch  (optional repeated)
dbms_sql.bind   (optional)
dbms_sql.execute( that statement ) 
dbms_sql.fetch  (optional repeated)
dbms_sql.close

where we execute the same cursor twice, probably for different rowsets. You then have to decide whether the locked records are released at the subsequent execute or at the final close. You could even argue for releasing them after a subsequent fetch, as that is the last point at which they can be updated with the WHERE CURRENT OF syntax.
Thirdly, you may update some, but not all, of the records selected for update. Those actually updated now need an additional entry in your record of locks.
Finally, you could have used an implicit cursor which is immediately closed (or implicitly left open for reuse at the whim of the database).

Also worth pointing out that, under Oracle, if session A finds a record locked by session B, it then waits for session B's TRANSACTION to commit/rollback, rather than waiting for the record to be unlocked, so even a ROLLBACK to SAVEPOINT that releases those locks wouldn't free session A. So not only are you looking at a different locking mechanism, but also a different waiting mechanism.
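
A sketch of the pattern Gary is describing (hypothetical table T with STATUS and BLA columns): the decision to update or skip each locked row is made in application code, so the database cannot know, before the transaction ends, that a lock is no longer needed:

declare
   cursor c is
      select bla from t where status = 'NEW' for update;
begin
   for r in c loop
      if r.bla > 0 then            -- a decision the database cannot predict
         update t set status = 'DONE' where current of c;
      end if;
   end loop;
   commit;                         -- only here are all the row locks released
end;
/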


About the SERIALIZABLE recommendation...

Matias Rodriguez, October 31, 2007 - 3:32 pm UTC

Thanks Tom!

As you recommended, I'm using the SERIALIZABLE isolation level. The syntax I used in my process is just like this:

EXEC SQL ALTER SESSION SET ISOLATION_LEVEL=SERIALIZABLE;

Then the process INSERTs some records into a table and, when it reaches some "magic" number of INSERTs (in my case, 164), I get the error:
"ORA-08177: can't serialize access for this transaction"

First, I thought that some foreign transaction was writing on that table at the "same" time (taking into account the cause explained in https://asktom.oracle.com/Misc/oramag/on-transaction-isolation-levels.html );
So, I put a lock over it, just like this:

EXEC SQL LOCK TABLE bla IN EXCLUSIVE MODE;

I re-executed the process and got the same result ("ORA-08177: can't serialize access for this transaction") at the same point (at the "magic" number of INSERTs).

Notes (some details):
-Table Initial Trans: 1
-Table Max Trans: 255 (the maximum allowed)

Returning to our "philosophical discussion" about SELECT...FOR UPDATE:
if it were like an "UPDATE" (say, "Select for update - that actually under the covers - updates the rows"), and if I don't have the UPDATE grant on the TABLE, the SELECT...FOR UPDATE should fail... shouldn't it? OK, OK, I will not bother you again with this ;)

Thanks again Tom!

Tom Kyte
November 02, 2007 - 11:47 am UTC

Can you use flashback query to get your consistent read? The probability of the 8177 grows as the duration of the transaction grows; we can and do hit 8177s for other reasons; serializable is mostly suitable for short transactions.

With flashback query, you'll get a consistent read (and maybe a 1555 if undo retention is not set)
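
A sketch of that approach (hypothetical table T, SQL*Plus syntax): capture the SCN once at the start of the run, then make every read in the batch "as of" that SCN so the whole process works from one snapshot:

variable scn number
exec :scn := dbms_flashback.get_system_change_number;

select bla
  from t as of scn :scn
 where status = 'NEW';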