
Question and Answer

Tom Kyte

Thanks for the question, Amit.

Asked: June 22, 2000 - 5:18 am UTC

Last updated: October 03, 2007 - 1:56 pm UTC

Version: 8.0.5


You Asked

Hi Tom,


Consider the following stored procedure (implemented in
an Oracle 8.0.5 database):


create or replace
procedure MyProc( iKey in varchar2 )
is
    vHashVal  number;
    vLockStat number;
    -- (no need to declare vIdx; the FOR loop declares its own index)
begin
    /* Compute a hash value for iKey.
       const.N2GIGS is a constant defined elsewhere. */
    vHashVal := 0;
    for vIdx in 1 .. length(iKey) loop
        vHashVal := mod( vHashVal * 64
                         + ascii( substr( iKey, vIdx, 1 ) ),
                         const.N2GIGS );
    end loop;

    /* Request an exclusive user lock for hash(iKey) */
    vLockStat := sys.user_lock.request(
                     vHashVal, sys.user_lock.x_mode,
                     sys.user_lock.maxwait,
                     sys.user_lock.global );

    -----------------------------------
    -- CRITICAL SECTION:
    -- SOME CODE THAT MUST NOT RUN
    -- CONCURRENTLY FOR THE SAME iKey
    -----------------------------------

    /* Release the user lock for hash(iKey) */
    vLockStat := sys.user_lock.release( vHashVal );

exception
    when others then
        /* Release the lock, then re-raise so the error
           is not silently swallowed */
        vLockStat := sys.user_lock.release( vHashVal );
        raise;
end;
/


There is a piece of code which I want to ensure will
not run concurrently for the same value of the iKey
input parameter (string).
It is a critical section (per iKey).

I am using "sys.user_lock" to accomplish this.
The package expects a numeric ID to identify the
user lock, so I have to hash the string
to a number.

My questions:

1. What alternatives do I have to implementing
my own hash computation?
Can I call Oracle's internal hash functions
through some package?

2. Can there be cases where user_locks are not
released (for example if a session is killed
while in the critical section)?

If there are such cases, can these locks
be removed somehow ?

3. Do you have any other tips related
to user locks that would be helpful?


Thanks,
Amit.




and Tom said...

I am not familiar with user_lock -- that was the name of the dbms_lock package in release 7.0 of the database; it has been called DBMS_LOCK since at least 7.0.12....

Here is a response I wrote for a very similar issue -- preventing two people from inserting the same primary key simultaneously. I don't want two people to do the same thing with the same value -- very much like your case:

(as for #2 above -- if the session goes away, the locks held by dbms_lock go away as well)

Try this:

create table demo ( x int primary key );

create or replace trigger demo_bifer
before insert on demo
for each row
declare
    l_lock_id     number;
    resource_busy exception;
    pragma exception_init( resource_busy, -54 );
begin
    l_lock_id :=
        dbms_utility.get_hash_value( to_char( :new.x ), 0, 1024 );

    if ( dbms_lock.request( id                => l_lock_id,
                            lockmode          => dbms_lock.x_mode,
                            timeout           => 0,
                            release_on_commit => TRUE ) = 1 )
    then
        raise resource_busy;
    end if;
end;
/

-------------------------------------------------------

If, in 2 separate sessions you execute:

insert into demo values (1);

it'll succeed in the first one but immediately issue:

SQL> insert into demo values ( 1 );
insert into demo values ( 1 )
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
ORA-06512: at "TKYTE.DEMO_BIFER", line 12
ORA-04088: error during execution of trigger 'TKYTE.DEMO_BIFER'


in the second session (unless the first session has committed, in which case the error message will be a unique constraint violation).

The concept here is to take the PRIMARY KEY of the table in the trigger and put it in a character string. We can then use dbms_utility.get_hash_value to come up with a "mostly unique" hash value for the string. As long as we use a hash table smaller than 1,073,741,823, we can 'lock' that value exclusively using dbms_lock.
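For instance, to see which lock id a given string maps to (base value 0, hash table size 1024):

select dbms_utility.get_hash_value( 'SOME KEY', 0, 1024 ) from dual;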

We take that hash value and use dbms_lock to request that hash to be X locked with a timeout of ZERO (it returns immediately if someone else has locked that value). If we time out, we raise ORA-54 resource busy; otherwise we do nothing.
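Putting the two pieces together for the original MyProc -- a sketch, using blocking waits rather than a timeout of zero; the 1024-bucket hash size is illustrative:

create or replace procedure MyProc( iKey in varchar2 )
is
    l_lock_id number;
    l_status  number;
begin
    -- hash the string key into a numeric lock id
    l_lock_id := dbms_utility.get_hash_value( iKey, 0, 1024 );

    -- wait for an exclusive lock on that id
    l_status := dbms_lock.request( id       => l_lock_id,
                                   lockmode => dbms_lock.x_mode,
                                   timeout  => dbms_lock.maxwait );
    if l_status <> 0 then
        raise_application_error( -20001,
            'dbms_lock.request returned ' || l_status );
    end if;

    -- critical section (per iKey) goes here

    l_status := dbms_lock.release( l_lock_id );
exception
    when others then
        l_status := dbms_lock.release( l_lock_id );
        raise;
end;
/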

Of course, if the primary key of your table is an INTEGER and you don't expect the key to go over 1 billion, you can skip the hash and just use the number.

You'll need to play with the size of the hash table (1024 in my example) to avoid artificial 'resource busy' messages caused by different strings hashing to the same number. Also, the owner of the trigger will need EXECUTE on DBMS_LOCK granted directly to them (not via a role). Lastly, you might find you run out of enqueue resources if you insert lots of rows this way without committing; if so, you need to raise the init.ora parameter enqueue_resources (you'll get an error message about enqueue_resources if you hit this).

You might add a flag to the trigger to allow people to turn the check on and off (if I am going to insert hundreds or thousands of records, I might not want this check enabled, for example).
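One way to implement that on/off switch is a tiny package holding a session-level flag (the name demo_lock_ctl is made up for illustration; it is not an Oracle-supplied package):

create or replace package demo_lock_ctl
as
    -- package state is per-session, so each session can
    -- enable or disable the check for itself
    enabled boolean := true;
end;
/

and then, as the first statement of the trigger body:

    if not demo_lock_ctl.enabled then
        return;   -- e.g. a bulk load has switched the check off
    end if;

A bulk-load session would run "exec demo_lock_ctl.enabled := false" before inserting.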




Comments

request lock for update

Anil, February 26, 2005 - 12:45 pm UTC

Hi Tom

Can we have the same approach for an update of a field other than the primary key? I changed the trigger to fire before update of the field, but the session keeps waiting and I am not getting ORA-00054.


create table flight_cap
( FLT_NO   VARCHAR2(5),
  FLT_DATE DATE,
  CAPACITY NUMBER,
  cap_id   number );

insert some rows...


create or replace trigger test_trg
before update of capacity on flight_cap
for each row
declare
    resource_busy exception;
    pragma exception_init( resource_busy, -54 );
begin
    if ( dbms_lock.request( id                => :new.cap_id,
                            lockmode          => dbms_lock.x_mode,
                            timeout           => 1,
                            release_on_commit => TRUE ) = 1 )
    then
        raise resource_busy;
    end if;
end;
/


update flight_cap
set capacity = capacity - 10;

What I am looking for is that an update should not wait if any of the rows is locked by another session -- similar to:

select ... for update nowait
update flight_cap ...


Rgds
Anil



Tom Kyte
February 26, 2005 - 2:30 pm UTC

the trigger never gets "into" itself until the row is locked and ready to be updated (it's different from adding a new row and wanting to avoid a CONSTRAINT check -- the insert issue wasn't a "lock at the row level" issue, it was a constraint check issue....)


what sort of updates are you doing that you would hit this, though? It does sound dangerous to a degree -- I mean, you are updating blind here? Updating rows others are updating at the same time -- overwriting each other's updates?

But SELECT ... FOR UPDATE NOWAIT (to see what the data is first) would be the approach to consider here.
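A minimal sketch of that against Anil's flight_cap table (the flight number value 'AB123' is made up for illustration):

declare
    l_capacity    flight_cap.capacity%type;
    resource_busy exception;
    pragma exception_init( resource_busy, -54 );
begin
    -- try to lock the row; fail immediately if another session holds it
    select capacity
      into l_capacity
      from flight_cap
     where flt_no = 'AB123'
       for update nowait;

    update flight_cap
       set capacity = capacity - 10
     where flt_no = 'AB123';
exception
    when resource_busy then
        dbms_output.put_line( 'row is locked by another session' );
end;
/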

unable to populate tables

Loann, October 20, 2005 - 4:50 pm UTC

Hi Tom,

I have to populate two tables which have artificial keys as PKs. I was able to load the first one but the second one gave me this message

ERROR at line 1:
ORA-01400: cannot insert NULL into ("FDMS"."ADDRESSES"."DEPO_SEQ")
ORA-06512: at line 51

Please help.



Tom Kyte
October 21, 2005 - 7:52 am UTC

stop trying to insert null into fdms.addresses.depo_seq???

unable to populate table cont.

Loann, October 21, 2005 - 11:32 am UTC

Hi,

I didn't give you enough information in my question yesterday, so let me explain more. The two tables are Depositors (parent) and Addresses (child). Both have a SEQ# as their PK. I loaded Depositors and it already has the SEQ#. Now, when I loaded Addresses, I got the 'cannot insert NULL' error message. I think my question is: how do I match the SEQ# in Addresses to the SEQ# in Depositors? E.g. SEQ# 2 in Addresses should match SEQ# 2 in Depositors.

Thanks

Tom Kyte
October 21, 2005 - 11:41 am UTC

I cannot answer this, I don't even have a simple example to look at.



If I may.........

A reader, October 22, 2005 - 5:37 am UTC

Hi,
I think you can do the following:
Find all the rows in the parent table which do not have a referencing row in the child table.
Then, using those sequence numbers, you can do a normal insert into the child table.
Hope I am correct.
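In SQL, a sketch of that first step might look like this (only DEPO_SEQ is known from the error message; the table names come from the question):

select d.depo_seq
  from depositors d
 where not exists ( select null
                      from addresses a
                     where a.depo_seq = d.depo_seq );

Those DEPO_SEQ values can then be used in the insert into Addresses.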



need more clarifications on insert conflicts

Lijo, September 29, 2007 - 6:58 am UTC

Dear Tom,
I have exactly the same case as the one this post started with. I have two transactions hitting the table at the same time and there is a primary key violation (though very rarely, as the primary key is defined on 7 fields). The reason the primary key is defined this way is that for a unique combination of values we want a single record in the table, so that for reporting purposes it is easier to pull out the data than to sum it up at run time.

So, the logic defined in the procedure is: try inserting first; if that fails (DUP_VAL_ON_INDEX), then update. But since both transactions hit the database at the same time with inserts, the second transaction does not succeed, as it has no visibility of the uncommitted transaction of the first session. For the second transaction neither the insert nor the update succeeds. So, finally one transaction commits and the other comes back with a deadlock error that also reports a unique constraint violation.

By some means, I want the second transaction to wait until the first transaction succeeds and then go for an update. As you were suggesting, we could take the primary key constraint out and handle the duplication logic in a trigger; is there any way to raise a resource-busy exception for the second transaction, wait until the first transaction succeeds, and then have the second transaction attempt the update?

I guess a more appropriate approach would be, rather than insert first and update later, to do the reverse; but that can still run into the same deadlock situation.

Can you please help me figure out a solution for this? I'm eagerly looking forward to your response.

My table structure

glentry(CNY#, RECORD#, STATUS, ENTRY_DATE, BATCH#, DOCUMENT, DESCRIPTION, ACCOUNT#, AMOUNT, UNITS, CURRENCY, USER#, LOCATION#, DEPT#)

basegltotals(CNY#, ACCOUNT#, LOCATION#, DEPT#, TIMEPERIOD, CURRENCY, AMOUNT, DEBIT, CREDIT, BOOKID)
PRIMARY KEY (CNY#, BOOKID, ACCOUNT#, LOCATION#, DEPT#, TIMEPERIOD, CURRENCY)

--Trigger on glentry
TRIGGER GLENTRY_DO_GLTOTALS AFTER DELETE OR INSERT OR UPDATE
OF RECORD# ,ENTRY_DATE ,BATCH# ... plus some other fields
ON GLENTRY

if inserting or updating then
acct_utils.updategltotalsinsert(:new.cny#, :new.entry_date, my_amount, :new.account#, :new.location#,
:new.dept#, :new.adj, :new.timeperiod, :new.batch#, :new.currency, my_trx_amount);

--The procedure updategltotalsinsert does the insertion into basegltotals
PROCEDURE updategltotalsinsert (
a_cny# IN glentry.cny#%TYPE,
a_entry_date IN glentry.entry_date%TYPE,
....
)
BEGIN
INSERT INTO basegltotals
(cny#, bookid, account#,
location#, dept#,
timeperiod, amount, debit, credit,
adjdebit, adjcredit, currency
)
VALUES (a_cny#, rec.bookid, a_account#,
NVL (a_location#, 0), NVL (a_dept#, 0),
my_time_period, a_amount, my_debit, my_credit,
my_adjdebit, my_adjcredit, basecurr
);
EXCEPTION
WHEN DUP_VAL_ON_INDEX
THEN
UPDATE basegltotals gltotals
SET gltotals.amount = gltotals.amount + a_amount,
gltotals.debit = gltotals.debit + my_debit,
gltotals.credit = gltotals.credit + my_credit,
gltotals.adjdebit = gltotals.adjdebit + my_adjdebit,
gltotals.adjcredit =
gltotals.adjcredit + my_adjcredit
WHERE gltotals.cny# = a_cny#
AND gltotals.bookid = rec.bookid
AND gltotals.account# = a_account#
AND gltotals.location# = NVL (a_location#, 0)
AND gltotals.dept# = NVL (a_dept#, 0)
AND gltotals.timeperiod = my_time_period
AND gltotals.currency = basecurr;
WHEN OTHERS
THEN
RAISE;
END;
Tom Kyte
October 03, 2007 - 1:53 pm UTC

....
So, finally one transaction commits and the other comes back
with a deadlock error that also reports a unique constraint violation.

....

sorry, but no, it quite simply does not work that way.

If you have a table with an N-column unique constraint

and two sessions "at about the same time" insert duplicate values, then ONE of them will BLOCK (not deadlock, block). It will block until the other session (a) commits or (b) rolls back. If (a) happens, the blocked session will become unblocked and get dup_val_on_index -- then the update will succeed. If (b) happens, the session that was blocked will succeed in its insert and continue.

If you are getting a deadlock, you have not yet described the scenario as it really happens.
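The blocking behaviour is easy to see for yourself; a sketch to run in two sessions (the table name t is illustrative):

create table t ( x int primary key );

-- session 1:
insert into t values ( 1 );

-- session 2 -- this statement BLOCKS; it does not deadlock:
insert into t values ( 1 );

-- session 1:
commit;

-- session 2 is now unblocked and immediately receives
-- ORA-00001: unique constraint (...) violated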

more info to above post

Lijo, September 29, 2007 - 7:19 am UTC

Adding more information to the above post

From the production queries that have failed in such cases, here is a scenario showing how it happened. The first session posts an entry into basegltotals for account 1000, followed by account 2000. The second session posts to account 2000, followed by account 1000 (the reverse of the first).

If the first session's transaction does not commit before the second session's transaction begins, you get a deadlock: transaction one waits for transaction two to release the lock on account 2000, and vice versa.

The solution may be to order the account# the same way in both transactions. But that is difficult: in a given transaction there can be many sets of entries, typically coupled in pairs as debits and credits, and if we try ordering by account# before posting, the line_no storage values in the glentry table would be messed up.
Tom Kyte
October 03, 2007 - 1:56 pm UTC

if you gain locks on resources in a non-predictable order (e.g. I lock account 1000 and then go for 2000 -- you, on the other hand, lock account 2000 and then go for 1000), deadlocks are unavoidable -- that is the name of the game.

so the application needs to understand "deadlocks will happen" and treat that as a condition it is expecting -- just like it currently does with dup_val_on_index. You'll have to code it to deal with that possibility (of the deadlock); a sketch of such a handler follows.
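A minimal sketch of catching and retrying on ORA-00060 (the procedure name post_entry and the retry count are illustrative):

declare
    deadlock_detected exception;
    pragma exception_init( deadlock_detected, -60 );
begin
    for attempt in 1 .. 3 loop    -- bounded number of retries
        begin
            post_entry;           -- hypothetical: your posting logic
            exit;                 -- success, stop retrying
        exception
            when deadlock_detected then
                -- ORA-00060 rolls back only the statement that
                -- deadlocked; roll the transaction back and retry
                rollback;
        end;
    end loop;
end;
/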

...
in a given transaction there can be many sets of entries, typically
coupled in pairs as debits and credits, and if we try ordering by
account# before posting, the line_no storage values in the glentry
table would be messed up.
......

if that is true, then how can two different processes working on the SAME ACCOUNTS independently of each other be working???

will ordering resolve the problem?

Lijo, October 03, 2007 - 2:57 pm UTC

What I claimed was not quite true; line_no just maintains the sequence of entries for a given transaction. It's just that the total debits should match the total credits.

Are you saying that ordering the account# in both sessions would solve this problem?

If so, then not only do I have to get the account# in order, but all the other fields on which there is a unique constraint need to be ordered as well, which means a lot more processing and could potentially slow down the system.

How do you suggest to handle this, is there a better way?

update first then insert will help?

Lijo, October 04, 2007 - 5:55 am UTC

Do you think this approach, of updating first and inserting only if the update touched no rows, will help?

BEGIN

UPDATE basegltotals gltotals
SET gltotals.amount = gltotals.amount + a_amount
.....
WHERE gltotals.cny# = a_cny#
.......

IF SQL%ROWCOUNT = 0 THEN
INSERT INTO basegltotals
(cny#, bookid, account#..
)
VALUES (a_cny#, rec.bookid, a_account#...
);
...
END IF;

END;

It makes sense to update first and then insert, because there is less chance of the update failing compared to the insert: once the row is present, you will always be updating rather than inserting.

Thinking again about the same kind of inserts: the first session inserts into basegltotals for account 1000, followed by account 2000; the second session inserts for account 2000, followed by account 1000 (the reverse of the first). In this case, since we are trying to update first, will one transaction wait for the insert from the other session to complete?
