For the long term I agree with you 100%, but for the short term, what is a simple way,
A reader, October 06, 2001 - 8:55 pm UTC
if there is one.
If t.c1 is numeric data, the rows could be split on the last digit of t.c1: digits 0-5 handled by p1, the rest by p2. But this is a very narrow solution;
or, if t.c1 is a char, the last character a-l goes to p1 and the remainder to p2.
You have brought AQ to my attention, which I had not considered in this context and did not know much about at all. This gives me more reason to learn it. Thanks for that.
I am sure all readers appreciate your help. I know I do.
October 07, 2001 - 3:59 pm UTC
It sounds like you are still in development (else you would have encountered this)
This is the short term answer: use AQ. I would strongly encourage you NOT to use the undocumented feature Connor alludes to, as its behaviour is subject to sudden change, it doesn't work with all SQL queries, you don't know how it works, and support won't touch it with a 10 foot pole if there is any issue regarding it.
SKIP LOCKED
Connor, October 07, 2001 - 9:11 am UTC
There is a "select for update skip locked" mechanism which allows you to lock a record while skipping over the ones already locked. This can be helpful for multiple consumers.
October 07, 2001 - 11:02 am UTC
sure there is -- AQ uses it.
It is undocumented.
It is not supported for external use.
It works only in special conditions (it is not a general purpose command).
I would still recommend using AQ since it is supported, has tons of other features they would need to build themselves, and provides all of the functionality they need "out of the box".
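For reference only, and with all of the caveats above, the syntax Connor alludes to looks roughly like this; the table and column names here are hypothetical:

```sql
-- Hypothetical queue table. SKIP LOCKED is UNDOCUMENTED in this release --
-- each consumer session locks only rows no other session has locked,
-- instead of blocking on them.
select *
  from queue_t
 where status = 'NOT_STARTED'
   for update skip locked;
```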
Tom, we are in production, but recently we are seeing some blocking locks
A reader, October 07, 2001 - 10:24 pm UTC
between two processes I mentioned earlier.
select * from t for update -- this is ANSI I assume
select * from t for update skip locked -- this is not ANSI?
I suspect the design of the program was based on only one instance of it running at a time, but in reality, to keep up with volume, someone has run multiple daemons and thus the problem. I guess the architecture team will have to consider AQ to solve the problem. I was trying to see if any other way exists around such a problem.
Thank you
October 08, 2001 - 8:01 am UTC
It is not only "not" ANSI, it is
o not documented
o not supported
o not a good idea to use it.
If you don't believe me, please contact support and ask them if it is OK to use this -- will you still be supported in your production application.
Ultimately, the decision is yours. You have my opinion. Technically, there is no reason it would not work -- however, given the above points, I cannot suggest you look at it.
Tom, I totally agree with you!
A reader, October 08, 2001 - 8:52 pm UTC
I do not wish to use undocumented processes at all.
Thank you
What about managing it as a "todo list"?
Maurizio De Giorgi, October 09, 2001 - 5:01 am UTC
I agree with Tom: AQ is definitely the solution.
Nevertheless, while reengineering the application, a fast-to-implement workaround could be useful.
Manage it as a todo list: instead of "skip locked", use FOR UPDATE with NOWAIT and trap the "resource busy" (ORA-00054) exception.
What do you think Tom?
October 09, 2001 - 7:54 am UTC
Yes, you could:
create or replace function get_a_row return t%rowtype
as
    resource_busy exception;
    pragma exception_init( resource_busy, -54 );  -- ORA-00054: resource busy, NOWAIT specified
    l_rec  t%rowtype;
begin
    for x in ( select rowid rid from t )
    loop
        begin
            select * into l_rec
              from t
             where rowid = x.rid
               FOR UPDATE NOWAIT;
            return l_rec;
        exception
            when resource_busy then null;  -- someone else has this row, try the next one
        end;
    end loop;
    return null;  -- every row was locked by someone else
end;
/
that'll either return a record that is yours (locked) or NULL.
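A consumer could call it in a loop, something like the sketch below. This assumes t has a NOT NULL column named id (a hypothetical name) so we can detect the "nothing available" case, and that the work is committed to release the lock:

```sql
declare
    l_rec  t%rowtype;
begin
    loop
        l_rec := get_a_row;           -- returns a row locked by us, or null
        exit when l_rec.id is null;   -- no unlocked row was found, we are done
        -- ... process l_rec here ...
        commit;                        -- commit the work; this releases our lock
    end loop;
end;
/
```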
SKIP LOCKED - Batch Fetch
Anindya, September 03, 2009 - 12:00 pm UTC
Tom,
I know your opinion on queue management, still code is already in Prod and I am called for quick fixes...
The process runs multiple threads, each SELECTing a batch of records (say 1,000, via a ROWNUM limit) from a queue table (where status = 'Not Started') FOR UPDATE. It then UPDATEs those records (by ROWID) to set status = 'In Process' and COMMITs. Finally it DELETEs them once processing is done.
I have introduced a time delay between each thread's initiation to reduce mutual locking; it works fine until the threads drift back together over the long run. Once that happens, the processes experience row contention.
The batch approach is helpful as I can make heavy use of FORALL (of course, DEQUEUE_ARRAY is there in AQ).
In 11G, SKIP LOCKED is supported and I am planning to give it a try.
The issue is:
I have merged the two-step process (SELECT FOR UPDATE and the subsequent UPDATE) into one:
an UPDATE statement limited to a ROWNUM batch, RETURNING the rows BULK COLLECT into an array.
So unless I go back to the two-step mode, I cannot use SKIP LOCKED; I did not find any relevant syntax for combining the two.
Need your thought on this.
A secondary issue is the fragmentation due to inserts followed by deletes. We have a maintenance window to defragment and so are able to manage.
Interested to know how AQ deals with this situation.
September 04, 2009 - 3:30 pm UTC
... A secondary issue is the fragmentation due to inserts followed by deletes. ...
I'm not worried about that, we actually (believe it - or not) effectively and efficiently reuse the space. You'd have to define what you mean by fragmentation and explain how it would happen.
Because if you ask me - there is no fragmentation that will happen here, not in indexes, not in the table, nowhere. Explain what you mean?
as for a way to get N records out of a table, you can use this technique:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1786336182160
Select them "normally", then SELECT FOR UPDATE NOWAIT (which avoids the contention), and then update by rowid.
It might not be any faster - something to look at.
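That two-step approach can be sketched as follows. The queue table name queue_t is hypothetical; the status_cd column and its 'Not Started' / 'In Process' values come from the reviewer's description above:

```sql
declare
    resource_busy exception;
    pragma exception_init( resource_busy, -54 );  -- ORA-00054: resource busy, NOWAIT specified
    l_rid  rowid;
begin
    -- step 1: read candidate rowids "normally", with no locks taken
    for x in ( select rowid rid
                 from queue_t
                where status_cd = 'Not Started'
                  and rownum <= 1000 )
    loop
        begin
            -- step 2: lock this one row, but only if no one else holds it;
            -- recheck status_cd because another thread may have claimed it
            select rowid into l_rid
              from queue_t
             where rowid = x.rid
               and status_cd = 'Not Started'
               for update nowait;

            update queue_t
               set status_cd = 'In Process'
             where rowid = x.rid;
        exception
            when resource_busy  then null;  -- locked by another thread, skip it
            when no_data_found  then null;  -- already claimed and committed elsewhere
        end;
    end loop;
    commit;
end;
/
```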
SKIP LOCKED - Batch Fetch
Anindya, September 09, 2009 - 10:14 am UTC
Thank you Tom. Your technique avoids the "lock all immediately" behavior of FOR UPDATE and locks rows as it progresses.
Generally the original code (a single UPDATE statement with rownum <= N) is fast... but the collision rate becomes high once the table and index grow very large (fragmentation?), thus slowing down the fetch rate.
I first noticed fragmentation when the table size was found to be 7.5 GB while an index (a 2-column index on status_cd and event_type_cd) was 27 GB! Rebuilding the index reduced its size to below 1 GB. We update the status_cd column value to 'In Process' and then delete the record once it is processed. The table may hold 25M records at peak; however, nothing explains the index inflating to almost 4 times the table size.
Somewhere the space reuse is being compromised; I am not sure if concurrency is another factor apart from the "deletes followed by inserts".
We have multiple threads inserting records and multiple other threads reading/updating/deleting them. These two process sets run from different nodes in RAC.
Our DBA also questioned the fetch based on this index, as the cardinality is very low: we have only 2 status_cd values and around 10 event_type_cd values. When I tested, the index scan generally resulted in better throughput and concurrency... until the index size became comparable to the table size.
DB version - 10.2.0.4
thanks,