Hi Team
We have a scenario where we need to select a particular set of rows from a table for further processing, and we must ensure that multiple users do not work on the same set of rows.
We use SELECT ... FOR UPDATE SKIP LOCKED to achieve this.
E.g. a simplified version looks like this. The calling program is in Java, and the locking portion is as follows:
select *
from   a
where  a.pkey in (select a1.pkey
                  from   (select rownum as rnk,
                                 a.pkey
                          from   a
                          where  col1 = ?                      /* input from java */
                          and    col2 = ?                      /* input from java */
                          order by <deterministic_column>      /* input from java */
                         ) a1
                  where  a1.rnk <= i_val                       /* input from java */
                 )
for update skip locked
Assume the query returns 10 rows without the condition "where rnk <= i_val".
In the multiuser scenario, assume:
user1 requests i_val = 3 rows
user2 requests i_val = 5 rows
What happens is that user1 gets 3 rows locked (say rows 1,2,3), but user2 gets only 2 rows (4,5) locked, even though he asked for 5: the "rnk <= 5" filter is applied before the locking, so user2's candidate set is rows 1-5, and SKIP LOCKED simply skips the three rows user1 holds rather than moving on to rows 6,7,8.
Is there any option to get user2 to lock rows 4,5,6,7,8?
We tried
select *
from   (select *
        from   a
        order by deterministic_column
       )
where  rownum <= i_val
for update skip locked;
but that query fails to parse.
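For what it's worth, the newer row-limiting syntax runs into the same restriction: if you are on 12c or later you might be tempted to rewrite this with FETCH FIRST, but the SQL reference lists the row-limiting clause as incompatible with FOR UPDATE, so (to the best of my understanding) a query along these lines fails as well:

-- Hypothetical 12c+ rewrite using the row-limiting clause.
-- The documentation states the row_limiting_clause cannot be
-- combined with FOR UPDATE, so this is expected to error out too.
select *
from   a
where  col1 = ?
and    col2 = ?
order  by deterministic_column
fetch first :i_val rows only
for update skip locked;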
The option we settled on is as follows:
declare
  i      pls_integer := 0;
  l_pkey a.pkey%type;
begin
  for x in (select a1.pkey
            from   (select rownum as rnk,
                           a.pkey
                    from   a
                    where  col1 = ?
                    and    col2 = ?
                    order by <deterministic_column>
                   ) a1
           )
  loop
    begin
      -- lock this row if it is free; a row locked by another
      -- session is skipped and raises no_data_found
      select a.pkey
      into   l_pkey
      from   a
      where  a.pkey = x.pkey
      for update skip locked;
      i := i + 1;
    exception
      when no_data_found then
        null;  -- row already locked, move on to the next one
    end;
    exit when i = i_val;
  end loop;
end;
Even though this works, we wanted to know whether there is a better approach to skip locked rows with a limit clause, or any alternative in SQL itself.
You probably want explicit control of your fetching, because
open cursor for select ... for update;
will lock all rows the moment you open the cursor. However,
open cursor for select ... for update skip locked;
does not lock *any* rows. They are locked as you *fetch* them.
So you could do something like this (pseudocode):
cursor C is select ... for update skip locked;

open C;
loop
  loop
    fetch C bulk collect into :my_array limit 100;
    append :my_array to :to_be_processed_array;
    exit when C%notfound or :to_be_processed_array.count > 1000;
  end loop;

  -- process the rows in :to_be_processed_array, then empty it
  -- so the next batch starts fresh
  exit when C%notfound;
end loop;
close C;
So we're going to repeatedly try to get/lock 100 rows at a time. Of course, we might get anywhere between 0 and 100 on each fetch, because some rows might already be locked. Once we finally accumulate 1000 rows (or run out entirely), we go ahead and process them.
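Filling in that pseudocode, a concrete PL/SQL sketch might look like the following (the table a, its pkey column, the WHERE clause, and the 100/1000 thresholds are just placeholders carried over from this discussion):

declare
  cursor c is
    select a.pkey
    from   a
    where  col1 = :b1
    for update skip locked;

  type pkey_list is table of a.pkey%type;
  l_batch      pkey_list;
  l_to_process pkey_list := pkey_list();
begin
  open c;
  loop
    loop
      -- each fetch locks (up to) the next 100 unlocked rows
      fetch c bulk collect into l_batch limit 100;

      -- append this batch to the accumulated set
      for i in 1 .. l_batch.count loop
        l_to_process.extend;
        l_to_process(l_to_process.last) := l_batch(i);
      end loop;

      exit when c%notfound or l_to_process.count > 1000;
    end loop;

    -- process the accumulated rows here, then empty the
    -- collection so the next batch starts fresh
    l_to_process.delete;

    exit when c%notfound;
  end loop;
  close c;
end;
/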
Hope this helps.