PARALLEL DML with exclusive table locks

Question and Answer

Chris Saxon

Thanks for the question, Thomas.

Asked: August 02, 2017 - 7:54 am UTC

Last updated: August 14, 2017 - 3:14 pm UTC

Version: 12.1

Viewed 1000+ times

You Asked

Hi!

Just running into a funny situation.

Developing a package that has to update a set of very large tables with Parallel DML.

Because of parallel DML restrictions I need to disable the triggers on the tables first.

To make sure that nobody else updates these tables while the triggers are disabled, I try to manually LOCK the tables in exclusive mode via LOCK TABLE...

I need to lock all 3 tables at the same time to avoid data inconsistency.

I run into issues because I need to commit every update on the tables separately (a parallel DML restriction).
But I also need to commit after the LOCK TABLE statements, because otherwise the next parallel DML statement errors out with an "Open Transactions" message.
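
To illustrate, here is a stripped-down version of the bind I'm in (the table, trigger, and column names are just placeholders):

alter trigger t1_trg disable;

lock table t1 in exclusive mode;    -- opens a transaction...

alter session enable parallel dml;  -- ...so this errors: you can't change
                                    -- the parallel DML state mid-transaction

update /*+ parallel (t1, 4) */ t1
set    col1 = upper(col1);

commit;                             -- required before t1 can be touched again,
                                    -- but it also releases my exclusive lock!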

Any recommendation on how to implement an exclusive lock inside a parallel DML session?

Best regards,
Thomas

and Chris said...

So you're just trying to get around the fact you can't have an enabled trigger for parallel DML? Or is there some other reason for locking the tables?

If it's just the trigger issue, you could go the DIY parallelism route with DBMS_PARALLEL_EXECUTE:

create table demo 
partition by list (object_type) 
subpartition by list (owner) 
subpartition template 
( 
 subpartition sp1   values ('SYS') 
,subpartition spdef values (DEFAULT) 
)  
(  
 partition p1   values ('TABLE') 
,partition p2   values ('INDEX') 
,partition p3   values ('SYNONYM') 
,partition pdef values (DEFAULT) 
) 
as select owner, object_type, object_name from all_objects;

create or replace trigger trigger1
before update on demo
for each row
begin
  :new.object_name := lower(:new.object_name);
end;
/

select count(*) from demo;

COUNT(*)  
73823  

select count(*) from demo
where  object_name = lower(object_name);

COUNT(*)  
1187  

declare
  l_sql_stmt varchar2(1000);
begin
 
  -- create the task
  dbms_parallel_execute.create_task ('mytask');
 
  -- chunk the table by rowid
  dbms_parallel_execute.create_chunks_by_rowid('mytask', user, 'DEMO', true, 1000);
 
  -- execute the dml in parallel
  l_sql_stmt := 'update demo set object_name = object_name || ''#'' 
      where rowid between :start_id and :end_id';
  dbms_parallel_execute.run_task('mytask', l_sql_stmt, dbms_sql.native,
                                 parallel_level => 4);
  
  dbms_output.put_line(dbms_parallel_execute.task_status('mytask'));
  -- done with processing; drop the task
  dbms_parallel_execute.drop_task('mytask');
   
end;
/

select count(*) from demo
where  object_name = lower(object_name);

COUNT(*)  
73823  

All 73,823 rows are now lowercase. So the trigger fired for every row: with DBMS_PARALLEL_EXECUTE you get the parallelism without having to disable it.


https://docs.oracle.com/database/121/ARPLS/d_parallel_ex.htm#ARPLS233
https://oracle-base.com/articles/11g/dbms_parallel_execute_11gR2

If it's for other data consistency reasons, why not take the writing parts of the application offline? After all, once you've locked the tables no one else can change them anyway...
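
If the application connects through its own database service, one option (a sketch; the service name below is made up) is to stop that service for the duration of the batch, so no new writer sessions can connect:

-- blocks new connections through this service; it does not kill
-- sessions that are already connected
exec dbms_service.stop_service('app_rw_svc')

-- ... run the batch ...

exec dbms_service.start_service('app_rw_svc')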


Comments

Thanks a lot

A reader, August 11, 2017 - 3:52 pm UTC


But still have questions...

Thomas Hilke, August 11, 2017 - 4:13 pm UTC

Thanks for your answer.

BUT... we are trying to technically guarantee that nobody else changes the data while this large batch process runs.

And I assume that even with dbms_parallel_execute we cannot LOCK the table before the parallel execution, because the parallel executions run in their own session contexts?

Is my understanding correct?

Best regards,
Thomas

Chris Saxon
August 14, 2017 - 3:14 pm UTC

The call to run the task (as well as the one creating it) creates a scheduler job for each parallel process, and commits along the way. So any table locking you do in this session is lost anyway!
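
You can see the lock disappear for yourself; a quick sketch (assuming you can query dba_dml_locks, and reusing the demo table from above):

lock table demo in exclusive mode;

select session_id, name, mode_held
from   dba_dml_locks
where  name = 'DEMO';
-- one row, MODE_HELD = Exclusive

exec dbms_parallel_execute.create_task('locktest')

select session_id, name, mode_held
from   dba_dml_locks
where  name = 'DEMO';
-- no rows: create_task committed in your session, releasing the lock

exec dbms_parallel_execute.drop_task('locktest')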
