Hi Tom-Team,
I want to process statements in parallel by setting "parallel_degree_policy=auto". They are a mixture of DML and DDL statements. In this context the order in which statements execute is undefined. I know that I can partially control this with DBMS_RESOURCE_MANAGER.BEGIN_SQL_BLOCK/END_SQL_BLOCK, but that is not sufficient for me.
What I would need is a "wait" command: halt script execution until all parallel sessions have finished, then continue. Doing this in one or two places in my script would solve my problem.
Does such a wait command exist? Or alternatively, can SQL_BLOCKs be nested?
-- Update starts here.
I have added my script at
https://livesql.oracle.com/apex/livesql/s/dkwk2cauf1velr5jei3xk55wh
It's probably better if I explain what I want to do in symbolic terms. (The general setting: a new database is loaded from older databases via dump files. The imported dump files do not use basic compression. To achieve basic compression, the data are then direct-path copied to their final location with an intelligent ORDER BY clause. I start only one session; it is left to the database to start parallel slaves as required.) I set
alter session set PARALLEL_DEGREE_POLICY=AUTO;
alter session enable parallel dml;
alter session enable parallel ddl;
to enable parallel processing. Then (the iteration over partitions is necessary because otherwise transactions would become too large; during these operations, the target indexes are set unusable):
-- Loop 1: copy source to target.
for my source partitions loop
  DBMS_RESOURCE_MANAGER.BEGIN_SQL_BLOCK;
  copy source partition to target with intelligent order by for maximum compression;
  erase source partition;
  DBMS_RESOURCE_MANAGER.END_SQL_BLOCK;
end loop;
-- Loop 2: rebuild target indexes.
for my target index partitions loop
  rebuild index partition;
end loop;
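In concrete PL/SQL, Loop 1 might look something like the following sketch. The table names (src, tgt), the sort column, and the use of TRUNCATE PARTITION for "erase" are assumptions for illustration, not the actual script:

```sql
-- Sketch only: src, tgt, and sort_col are placeholder names, and the
-- source and target are assumed to share partition names.
BEGIN
  FOR p IN (SELECT partition_name
              FROM user_tab_partitions
             WHERE table_name = 'SRC')
  LOOP
    DBMS_RESOURCE_MANAGER.BEGIN_SQL_BLOCK;
    EXECUTE IMMEDIATE
      'INSERT /*+ APPEND PARALLEL */ INTO tgt PARTITION (' || p.partition_name || ')
       SELECT * FROM src PARTITION (' || p.partition_name || ') ORDER BY sort_col';
    COMMIT;  -- a direct-path insert must be committed before further access
    EXECUTE IMMEDIATE
      'ALTER TABLE src TRUNCATE PARTITION ' || p.partition_name;
    DBMS_RESOURCE_MANAGER.END_SQL_BLOCK;
  END LOOP;
END;
/
```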
Because of PARALLEL_DEGREE_POLICY=AUTO, the statements are expected to be queued and executed in parallel slave sessions as those become available (so the order of processing may differ from the order of submission). The BEGIN/END_SQL_BLOCK calls should ensure that source data are not erased before they have been copied to the target.
After that, the index partitions are rebuilt in Loop 2. I have noticed that the database uses statement queuing and execution in parallel slaves rather heavily, particularly for index (partition) rebuilds, and that is quite welcome. The only thing that SHOULD NOT HAPPEN is that a rebuild statement from Loop 2 executes before the corresponding partition has been loaded in Loop 1. So what I would need is to wait, between Loops 1 and 2, until all parallel slaves from Loop 1 are finished.
If parallel statements are queued, then Oracle Database processes them in a strict first-in, first-out scheme. So if your statements are queued, whichever started first will be processed first.
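If you want to see that queue for yourself while the script runs, statements waiting in the parallel statement queue appear in V$SQL_MONITOR with STATUS = 'QUEUED'. A monitoring query from a second session, roughly along these lines:

```sql
-- Statements currently waiting in the parallel statement queue.
SELECT sid, sql_id, first_refresh_time, status
  FROM v$sql_monitor
 WHERE status = 'QUEUED'
 ORDER BY first_refresh_time;
```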
You can read more about this at:
https://blogs.oracle.com/datawarehousing/entry/auto_dop_and_parallel_statemen
But your whole process seems bizarre to me! You don't get any compression by "inserting with an intelligent order by". You get compression by enabling compression. Your script doesn't do that.
https://oracle-base.com/articles/11g/table-compression-enhancements-11gr1
In any case, any advantage you get from the "intelligent ordering" will be lost over time. If you're loading into heap tables, Oracle will add new rows wherever there's space.
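For completeness, enabling basic compression is a one-line DDL (the table name tgt below is a placeholder). Remember that basic compression only compresses blocks written by direct-path operations; conventional inserts into the table remain uncompressed:

```sql
-- tgt is a hypothetical table name. On 11gR2 and later:
ALTER TABLE tgt COMPRESS BASIC;        -- future direct-path loads are compressed
-- or compress the existing data in place by rebuilding the segment:
ALTER TABLE tgt MOVE COMPRESS BASIC;
```

On 12c the documented spelling is ROW STORE COMPRESS BASIC, and for a partitioned table you would MOVE each partition individually.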