DBMS_PARALLEL_EXECUTE and assigned chunks

Question and Answer

Connor McDonald

Thanks for the question.

Asked: April 01, 2016 - 1:45 pm UTC

Last updated: April 08, 2016 - 3:21 am UTC

Version: Oracle Database 11g 11.2.0.4.0

Viewed 10K+ times

You Asked

Hi Tom,

I'm using the DBMS_PARALLEL_EXECUTE package to run my PL/SQL procedure's work (a set of DELETE statements on some tables) in parallel.
The chunks are generated by my own SQL.

...
query_chunk_generator := 'SELECT ... '; --the statement to generate chunks
my_stmt := ...; --the statement to be executed in parallel
DBMS_PARALLEL_EXECUTE.CREATE_TASK (parallel_task_name);
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (parallel_task_name,
                                            query_chunk_generator,
                                            TRUE);
DBMS_PARALLEL_EXECUTE.RUN_TASK (parallel_task_name,
                                my_stmt,
                                DBMS_SQL.NATIVE,
                                parallel_level   => 10);


I used 10 as the parallel level. Now, when I run my procedure in my test environment, everything works fine. While it is running I can see that the number of ASSIGNED chunks is always 10. The query I use to check is the following:

SELECT status, COUNT (status) FROM user_parallel_execute_chunks GROUP BY status


that returns something like this:

ASSIGNED       10
UNASSIGNED  11769
PROCESSED     120


I suppose that the number of ASSIGNED chunks coincides with the level of parallelism I specified, and that these chunks are currently running simultaneously. Am I wrong?

Anyway, if I run the same procedure in the production environment (same DB version, same tables and number of records), I see that the number of ASSIGNED chunks is always 1. If my assumption (ASSIGNED chunks = parallel_level value) is correct, I think the execution in my production environment is not running in parallel.
Is there any parameter (e.g. concerning job/scheduler management) in the database configuration that can affect the number of chunks actually assigned?
Could it also depend on the table definition?

Do you have any further suggestions?

Thanks in advance.

and Connor said...

Under the covers, as I'm sure you're aware, we're just using the scheduler to run each of the components in parallel.

So check 'job_queue_processes' on your production box.
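
For instance, a minimal sanity check might look like this (a sketch assuming access to v$parameter and the scheduler views; the TASK$% pattern is the usual naming for the jobs the package creates, so adjust it if yours differ):

select value from v$parameter where name = 'job_queue_processes';

-- while the task is running, each assigned chunk records which scheduler job picked it up
select distinct job_name
from   user_parallel_execute_chunks
where  status = 'ASSIGNED';

-- and those jobs should be visible in the scheduler
select job_name, state
from   user_scheduler_jobs
where  job_name like 'TASK$%';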

If there are profile limits on the relevant user accounts (eg sessions per user), this could also have an impact.
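
Something along these lines could confirm that (a sketch assuming access to DBA_USERS and DBA_PROFILES; 'YOUR_TASK_OWNER' is a placeholder for the account that runs the task):

select u.username, u.profile, p.resource_name, p.limit
from   dba_users u
       join dba_profiles p on p.profile = u.profile
where  u.username = 'YOUR_TASK_OWNER'
and    p.resource_name = 'SESSIONS_PER_USER';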

Hope this helps.

PS - 12000 chunks seems a lot of chunks....


Comments

A reader, April 04, 2016 - 8:41 am UTC

Thank you for your answer.
The parameter "job_queue_processes" is set to 1000.
The user executing the procedure has the DEFAULT profile, with SESSIONS_PER_USER = UNLIMITED.
I know that the number of chunks is too high, but the purpose of this procedure is to delete many rows from a table with millions of rows. The table has foreign keys to itself (which I suppose greatly slows down the operation).
I have already tried different solutions (FORALL ... BULK COLLECT ... etc.) but the execution time is too long. The CTAS solution is not applicable, since the procedure runs while the production environment is in use. So now I'm trying to speed up the process by exploiting the server's many processors and parallelizing the work.
The deletion is done by identifying the records via their rowid (which I assume speeds up the operation). So I created a chunk for each record to be deleted (start_row_id = end_row_id), since I cannot use a range of values or the procedure would delete rows that must not be deleted. My DELETE statement is a "simple" DELETE FROM TABLE WHERE ROWID IN (:start_id,:end_id).
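
As a side note on the self-referencing foreign key: deletes from a parent table get much slower when the foreign key columns of the referencing tables are not indexed, because every parent-row delete then has to scan the child tables. A rough check (with 'YOUR_TABLE' as a placeholder for the real table name) could be:

select c.table_name   child_table,
       cc.column_name fk_column,
       cc.position
from   user_constraints c
       join user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    c.r_constraint_name in (select constraint_name
                               from   user_constraints
                               where  table_name = 'YOUR_TABLE')
order  by c.table_name, cc.position;

Any (child_table, fk_column) combination returned here that has no matching entry in user_ind_columns is a candidate for an index.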

Do you have other tips?

Thanks in advance.
Connor McDonald
April 04, 2016 - 8:56 am UTC

Could be time to check with Support - in particular, whether you are being impacted by bug 18966843.

Hope this helps.

Partition Exchange option.

Rajeshwaran Jeyabal, April 04, 2016 - 11:37 am UTC

Do you have other tips?

Other options would be partition exchange option like this.

rajesh@ORA11G> create table big_table_new
  2  partition by range(created)
  3  ( partition pmax values less than (maxvalue) )
  4  nologging
  5  as
  6  select *
  7  from big_table
  8  where id < 1000;

Table created.

rajesh@ORA11G>
rajesh@ORA11G> select count(*) from big_table;

  COUNT(*)
----------
    100000

1 row selected.

rajesh@ORA11G> select count(*) from big_table_new;

  COUNT(*)
----------
       999

1 row selected.

rajesh@ORA11G> alter table big_table_new
  2  exchange partition pmax
  3  with table big_table;

Table altered.

rajesh@ORA11G> select count(*) from big_table;

  COUNT(*)
----------
       999

1 row selected.

rajesh@ORA11G> select count(*) from big_table_new;

  COUNT(*)
----------
    100000

1 row selected.

rajesh@ORA11G>


Just retain the rows that are not to be deleted in a new table, and exchange it back with the original table. (The partition exchange itself is very fast, since it involves only data dictionary updates - no data movement - and requires minimal downtime.)

A reader, April 04, 2016 - 12:41 pm UTC

I don't think this approach is applicable in my case. The operation must happen while the database is in use, so rows inserted immediately after the creation of the new table could be lost, since they were not included in the SELECT that populated the new table when it was created.

EBR

Rajeshwaran Jeyabal, April 04, 2016 - 1:21 pm UTC

Then EBR would be the way to go.

A reader, April 04, 2016 - 4:01 pm UTC

update:
I asked my system administrator to check for the bug you pointed me to. In the meantime I ran the job again, and I noticed that some tables get more chunks assigned (but still fewer than I expect, i.e. 10).
For other tables, instead, only one chunk is ever assigned (including the table with millions of records).
I would like to know if the definition of a table, with all its parameters and/or constraints, could affect the assignment of chunks.
Chris Saxon
April 05, 2016 - 1:16 am UTC

Can you give us a full test case for one of the examples where you are *not* seeing the 10 assigned chunks, and we'll try to reproduce it here.

A reader, April 05, 2016 - 9:33 am UTC

Ok, I'll try to explain the whole scenario (although my problem is in the last step). The code is quite long, but it is very simple and quite repetitive.
My scenario is the following: I have to back up some tables of my production environment to another database on another server. On the backup-database side there is a job scheduled to run every night at 2. This job executes a procedure that performs the following operations:

- copy, through dblink, all data I need from production environment
- delete, through dblink, the copied data from production environment

The delete operation is carried out through three calls to three functions defined in a package in the production environment.

BACKUP_DATABASE SIDE:
CREATE OR REPLACE PROCEDURE BACKUP_SCHEMA.BACKUP_DATABASE (BACKUP_DATE IN OUT DATE)
AS
   /*DECLARE */
   .....
BEGIN
   .....
   --copy data from the DOCUMENTS tables in the production environment through the DBLINK
   copy_documents (STEP, documentid);  
   .....
   --copy data from the ACCOUNT tables in the production environment through the DBLINK
   copy_accounts (STEP, REFERENCE, ACC_VERSION, STARTDATE);  
   .....
   --copy data from the NOTIFICATION tables in the production environment through the DBLINK
   copy_notifications (STEP, NOTIFICATIONS);  

   .....
   
   --delete data from DOCUMENTS tables through DBLINK from production environment
   warning_message := TO_CLOB (backup_pkg.delete_documents@DBLINK_TO_PROD); 
   log_result(warning_message);

   --delete data from NOTIFICATIONS tables through DBLINK from production environment
   warning_message := TO_CLOB (backup_pkg.delete_notifications@DBLINK_TO_PROD); 
   log_result(warning_message);

   --delete data from ACCOUNT tables through DBLINK from production environment
   warning_message := TO_CLOB (backup_pkg.delete_accounts@DBLINK_TO_PROD); 
   log_result(warning_message);
   ...
END;
/


My problem is with the last delete call (backup_pkg.delete_accounts@DBLINK_TO_PROD), since the previous delete calls run against tables with little data (the execution is too fast to check whether multiple chunks are assigned for those tables, but for medium-sized tables it seems to work (almost) properly).

On the production environment side the package is defined as follows:
CREATE OR REPLACE PACKAGE PRODUCTION_SCHEMA.backup_pkg
AS

   FUNCTION delete_from_table (STEP                 IN VARCHAR2,
                               field_name_1_param   IN VARCHAR2,
                               field_name_2_param   IN VARCHAR2,
                               TABLE_NAME_PARAM     IN VARCHAR2)
      RETURN VARCHAR2;

   FUNCTION delete_accounts
      RETURN VARCHAR2;

   FUNCTION delete_documents
      RETURN VARCHAR2;

   FUNCTION delete_notifications
      RETURN VARCHAR2;

END;
/

CREATE OR REPLACE PACKAGE BODY PRODUCTION_SCHEMA.backup_pkg
IS
   
   FUNCTION delete_from_table (STEP                 IN VARCHAR2,
                                        field_name_1_param   IN VARCHAR2,
                                        field_name_2_param   IN VARCHAR2,
                                        TABLE_NAME_PARAM     IN VARCHAR2
                                        )
      RETURN VARCHAR2
   AS
      PRAGMA AUTONOMOUS_TRANSACTION;

      check_task_exists       CHAR (1);
      query_chunk_generator   VARCHAR2 (1000);
      message_to_return       CLOB;
      message_string          VARCHAR2 (4000) := '';
      parallel_task_name      VARCHAR2 (100) := 'delete_parallel_task';
      attempt_number          NUMBER;
      task_status             NUMBER;
      delete_stmt VARCHAR2(500);
      retry BOOLEAN;
   BEGIN
      message_to_return := EMPTY_CLOB ();

      .....

      IF field_name_2_param IS NULL OR field_name_2_param = ''
      THEN
         query_chunk_generator := 'SELECT ROWID, ROWID FROM ' || table_name_param || ' WHERE ' || field_name_1_param || ' IN ( select ' || field_name_1_param || ' from ' || table_name_param || '@dblink_to_backup) order by ' || field_name_1_param || ' desc';
      ELSE
         query_chunk_generator := 'SELECT ROWID, ROWID FROM ' || table_name_param || ' WHERE (' || field_name_1_param || ',' || field_name_2_param || ')IN ( select ' || field_name_1_param || ',' || field_name_2_param || ' from ' || table_name_param || '@dblink_to_backup) order by ' || field_name_1_param || ' desc, ' || field_name_2_param || ' DESC';
      END IF;

      DBMS_PARALLEL_EXECUTE.CREATE_TASK (parallel_task_name);

      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (parallel_task_name, query_chunk_generator, TRUE);

      delete_stmt := 'DELETE FROM ' || table_name_param || ' WHERE ROWID IN (:start_id,:end_id) ';

      DBMS_PARALLEL_EXECUTE.RUN_TASK (parallel_task_name,
                                      delete_stmt,
                                      DBMS_SQL.NATIVE,
                                      parallel_level   => 10);

      -- If there is error, RESUME it for at most 10 times.
      attempt_number := 0;
      task_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS (parallel_task_name);
      retry := task_status = DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR OR task_status = DBMS_PARALLEL_EXECUTE.CRASHED;
      WHILE (retry)
      LOOP
         attempt_number := attempt_number + 1;
         DBMS_PARALLEL_EXECUTE.RESUME_TASK (parallel_task_name);
         task_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS (parallel_task_name);
         retry := (attempt_number < 10 AND (task_status = DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR OR task_status = DBMS_PARALLEL_EXECUTE.CRASHED));
      END LOOP;


      IF task_status = DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR
      THEN
         message_to_return := 'ERRORS ON STEP: ' || step || (CHR (13) || CHR (10));
         message_to_return := message_to_return || 'TABLE: ' || table_name_param || (CHR (13) || CHR (10));

         FOR messages IN (SELECT error_message
                            FROM user_parallel_execute_chunks
                           WHERE task_name = parallel_task_name AND status = 'PROCESSED_WITH_ERROR')
         LOOP
            message_to_return := message_to_return || 'ERROR: '||messages.error_message|| (CHR (13) || CHR (10));
         END LOOP;
      ELSE
         message_to_return := EMPTY_CLOB ();
      END IF;

      DBMS_PARALLEL_EXECUTE.drop_task (parallel_task_name);

      

      COMMIT;

      message_string := DBMS_LOB.SUBSTR (message_to_return, 4000, 1);

      RETURN message_string;
   EXCEPTION
      WHEN OTHERS
      THEN
         message_to_return := 'ERRORS ON STEP: ' || step || (CHR (13) || CHR (10));
         message_to_return := message_to_return || 'DELETING RECORDS FROM TABLE: ' || table_name_param || (CHR (13) || CHR (10));
         message_to_return := message_to_return || 'ERROR: '||SQLERRM || (CHR (13) || CHR (10));

         message_string := DBMS_LOB.SUBSTR (message_to_return, 4000, 1);

         RETURN message_string;
   END;


   FUNCTION delete_bookings
      RETURN VARCHAR2
   IS
      /* DECLARE */
      table_name          VARCHAR2 (30);
      field_name_1        VARCHAR2 (30);
      field_name_2        VARCHAR2 (30);
      message_to_return   VARCHAR2 (4000);
      step                VARCHAR2 (50) := 'DELETE BOOKINGS';
   BEGIN
      message_to_return := EMPTY_CLOB ();

      table_name := 'ACC_ACCOUNT';
      field_name_1 := 'accountreference';
      field_name_2 := 'accountversion';
      message_to_return :=
         delete_from_table (step,
                            field_name_1,
                            field_name_2,
                            TABLE_NAME
                            );

      IF LENGTH (message_to_return) > 0
      THEN
         RETURN message_to_return;
      END IF;
   
   table_name := 'ACC_ACCOUNTCHANNEL';
   message_to_return :=
         delete_from_table (step,
                            field_name_1,
                            field_name_2,
                            TABLE_NAME
                            );

      IF LENGTH (message_to_return) > 0
      THEN
         RETURN message_to_return;
      END IF;

   .....

      table_name := 'ACC_ACCOUNTDETAIL';
      message_to_return :=
         delete_from_table (step,
                            field_name_1,
                            field_name_2,
                            TABLE_NAME
                            );

      RETURN message_to_return;
   END;


   FUNCTION delete_documents
      RETURN VARCHAR2
   IS
      /* DECLARE */
      table_name          VARCHAR2 (30);
      field_name          VARCHAR2 (30) := 'documentid';
      message_to_return   VARCHAR2 (4000);
      step                VARCHAR2 (16) := 'DELETE DOCUMENTS';
   BEGIN
      message_to_return := EMPTY_CLOB ();

      table_name := 'OA_STOREDDOCUMENT';
      message_to_return :=
         delete_from_table (step,
                            field_name,
                            NULL,
                            TABLE_NAME
                            );

      IF LENGTH (message_to_return) > 0
      THEN
         RETURN message_to_return;
      END IF;

      table_name := 'OA_DOCACCOUNTINGDETAILRES';
      message_to_return :=
         delete_from_table (step,
                            field_name,
                            NULL,
                            TABLE_NAME
                            );

      IF LENGTH (message_to_return) > 0
      THEN
         RETURN message_to_return;
      END IF;

      .....
      
   table_name := 'OA_DOCUMENT';
      message_to_return :=
         delete_from_table (step,
                            field_name,
                            NULL,
                            TABLE_NAME
                            );

      RETURN message_to_return;
   END;


   FUNCTION delete_notifications
      RETURN VARCHAR2
   IS
      /* DECLARE */
      table_name          VARCHAR2 (30);
      field_name          VARCHAR2 (30) := 'id';
      message_to_return   VARCHAR2 (4000);
      step                VARCHAR2 (20) := 'DELETE NOTIFICATIONS';
   BEGIN
      message_to_return := EMPTY_CLOB ();

      table_name := 'ESB_APCHEQUENOTIFICATION';
      field_name := 'notificationid';
      message_to_return :=
         delete_from_table (step,
                            field_name,
                            NULL,
                            TABLE_NAME
                            );

      IF LENGTH (message_to_return) > 0
      THEN
         RETURN message_to_return;
      END IF;

      table_name := 'ESB_ATTACHMENTNOTIFICATION';
      field_name := 'id';
      message_to_return :=
         delete_from_table (step,
                            field_name,
                            NULL,
                            TABLE_NAME
                            );

      IF LENGTH (message_to_return) > 0
      THEN
         RETURN message_to_return;
      END IF;
      .....
      
      table_name := 'ESB_BOOKINGHOLDERNOTIFICATION';
      message_to_return :=
         delete_from_table (step,
                            field_name,
                            NULL,
                            TABLE_NAME
                            );

      RETURN message_to_return;
   END;
END;
/


The key of my problem is the delete_from_table function.
This function retrieves the rowids of all records in the table named by "table_name_param" that exist in both environments (through a dblink to the backup environment), identifying those records through the table key (parameters "field_name_1_param" and "field_name_2_param"). So my query_chunk_generator statement is something like this:

SELECT ROWID, ROWID FROM OA_STOREDDOCUMENT WHERE documentid IN (select documentid from OA_STOREDDOCUMENT@dblink_to_backup) order by documentid desc

(in other words, all records present in the backup database must be deleted from the production environment)

So, the action to be performed by the task is a delete by rowid, like the following:

DELETE FROM OA_STOREDDOCUMENT WHERE ROWID IN (:start_id,:end_id)


Here is my problem: for tables with millions of records and lots of records to be deleted, only 1 chunk is ever assigned.
At the moment, the biggest table has 60,897,786 records, and 239,215 of them must be deleted in the next execution.
Connor McDonald
April 07, 2016 - 2:50 am UTC

Am I reading this right?

query_chunk_generator := 'SELECT ROWID, ROWID FROM ' || table_name_param || ' WHERE ' || ...

That suggests a chunk per row ?

Whilst that should not be a barrier to multiple tasks being assigned, it strikes me as a very, very slow and inefficient way of deleting rows (and certainly not what dbms_parallel_execute was intended to be used for).

I'm curious - is there a reason you need to do it this way?

A reader, April 05, 2016 - 9:41 am UTC

Sorry, in my previous post I made a copy/paste error in the definition of the package body.


function delete_bookings

should read
function delete_accounts


;-)

A reader, April 07, 2016 - 8:24 am UTC

Yes, you're right... one chunk per row. I know it is not a good strategy, but I don't think I can use a range of rowids.
Suppose that TABLE_X is the table to back up and its primary key is (COL_1, COL_2). First I have to identify the rowids of the records of this table in my production environment whose key exists in the same table in the backup environment.
Suppose that the table has the following records in each environment:
TABLE_X (production environment)

ROWID     COL_1(pk)     COL_2(pk)    COL_3 ....
-----     ---------     ---------    ----- 
AAAAA1       V11           V12         V13
AAAAA2       V21           V22         V23
AAAAA3       V31           V22         V23


TABLE_X (backup environment)

ROWID     COL_1(pk)     COL_2(pk)    COL_3 ....
-----     ---------     ---------    ----- 
BBBBB1       V11           V12         V13
BBBBB2       V31           V22         V23


As you can see, in the backup environment the row identified by the key (V21, V22) is not present, therefore it should not be deleted from the production environment. So if I use the approach based on CREATE_CHUNKS_BY_ROWID (instead of CREATE_CHUNKS_BY_SQL, which I am currently using), the system may generate a rowid range with start_id = AAAAA1 and end_id = AAAAA3. Then the statement
DELETE FROM TABLE_X WHERE ROWID BETWEEN :start_id AND :end_id

deletes a row that should not be deleted. That's why I create ranges of one element.
How can I create chunks with a wider range, while making sure not to include rowids that should not be included?
I could do something like the following, but that would mean every delete statement runs a SELECT query to identify the same records over and over, which would hurt performance:
DELETE FROM TABLE_X
      WHERE     ROWID BETWEEN :start_id AND :end_id
            AND ROWID IN (SELECT ROWID
                            FROM TABLE_X
                           WHERE (col_1, col_2) IN (SELECT col_1,col_2 FROM TABLE_X@DBLINK_TO_BACKUP))

Am I wrong? Any suggestions on other possible strategies to use?

Thanx.
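
For what it's worth, one possible way to get wider chunks without risking unwanted deletes (just a sketch, not something proposed in the thread; the table DEL_STAGE and columns SEQ# and RID are hypothetical names) is to stage the rowids to be deleted in a helper table keyed by a number, and then chunk on that number with CREATE_CHUNKS_BY_NUMBER_COL:

create table del_stage as
  select rownum seq#, rid
  from  (select rowid rid
         from   table_x
         where  (col_1, col_2) in
                (select col_1, col_2 from table_x@dblink_to_backup));

begin
  dbms_parallel_execute.create_task('stage_delete_task');
  dbms_parallel_execute.create_chunks_by_number_col(
      task_name    => 'stage_delete_task',
      table_owner  => user,
      table_name   => 'DEL_STAGE',
      table_column => 'SEQ#',
      chunk_size   => 1000);
  dbms_parallel_execute.run_task(
      task_name      => 'stage_delete_task',
      sql_stmt       => 'delete from table_x
                          where rowid in (select rid from del_stage
                                          where seq# between :start_id and :end_id)',
      language_flag  => dbms_sql.native,
      parallel_level => 10);
  dbms_parallel_execute.drop_task('stage_delete_task');
end;
/

Each chunk then deletes only rowids captured in DEL_STAGE, so rows whose keys are not in the backup are never touched.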
Connor McDonald
April 07, 2016 - 11:32 am UTC

Sorry, I should have explained my line of thought more clearly.

My point is, we use dbms_parallel_execute to speed up a task that is slow in serial.

I would be willing to bet that using dbms_parallel_execute in this way (even with 10 parallel tasks running) will be *way* slower than a more batched delete operation. (Search this site for FORALL etc.)


A reader, April 07, 2016 - 12:34 pm UTC

Yes, I have already tried FORALL. My first implementation was based on FORALL and BULK COLLECT, without DBMS_PARALLEL_EXECUTE. The execution still took too much time (the time window available to avoid overloading the server while the application is in use is relatively short, a few hours).
So I decided to try parallel processing, to take advantage of the number of CPUs available to the server.
So, if I understand correctly, you're telling me that even though I have 10 chunks in ASSIGNED status simultaneously, they run serially even if they point to different rows? Chunks with status ASSIGNED are not executed simultaneously?
Connor McDonald
April 08, 2016 - 3:21 am UTC

No, I'm saying that 10,000 jobs, with 10 of them running concurrently, each deleting one row, won't be as quick as (and will be a much larger server load than) a single batched delete.

Here's an example; we're going to delete ~200k rows from our ~2.8 million row table

SQL> set timing off
SQL> drop table t purge;

Table dropped.

SQL>
SQL> create table T
  2  as select a.* from all_Objects a,
  3     ( select 1 from dual connect by level <= 30 );

Table created.

SQL>
SQL> select count(*),
  2         count(case when owner like 'APEX%' then 1 end)
  3  from t;

  COUNT(*) COUNT(CASEWHENOWNERLIKE'APEX%'THEN1END)
---------- ---------------------------------------
   2819880                                  187050

1 row selected.

SQL> variable msg clob
SQL> set timing on
SQL> declare
  2        check_task_exists       CHAR (1);
  3        query_chunk_generator   VARCHAR2 (1000);
  4        message_to_return       CLOB;
  5        parallel_task_name      VARCHAR2 (100) := 'delete_parallel_task';
  6        task_status             NUMBER;
  7        delete_stmt             VARCHAR2(500);
  8  BEGIN
  9        dbms_lob.createtemporary(message_to_return,true);
 10
 11        query_chunk_generator := 'SELECT ROWID, ROWID FROM t where owner like ''APEX%''';
 12
 13        DBMS_PARALLEL_EXECUTE.CREATE_TASK (parallel_task_name);
 14        DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (parallel_task_name, query_chunk_generator, TRUE);
 15
 16        delete_stmt := 'DELETE FROM t WHERE ROWID IN (:start_id,:end_id) ';
 17
 18        DBMS_PARALLEL_EXECUTE.RUN_TASK (parallel_task_name,
 19                                        delete_stmt,
 20                                        DBMS_SQL.NATIVE,
 21                                        parallel_level   => 10);
 22
 23        task_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS (parallel_task_name);
 24
 25        IF task_status = DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR
 26        THEN
 27           FOR messages IN (SELECT error_message
 28                              FROM user_parallel_execute_chunks
 29                             WHERE task_name = parallel_task_name AND status = 'PROCESSED_WITH_ERROR')
 30           LOOP
 31              message_to_return := message_to_return || 'ERROR: '||messages.error_message|| (CHR (13) || CHR (10));
 32           END LOOP;
 33        END IF;
 34
 35        DBMS_PARALLEL_EXECUTE.drop_task (parallel_task_name);
 36        COMMIT;
 37        :msg := message_to_return;
 38  END;
 39  /

PL/SQL procedure successfully completed.

Elapsed: 00:02:54.87
SQL>
SQL> print msg

MSG
--------------------------------------------------------------------------------



So we took just under 3 minutes, and we definitely had 10 deletes running in parallel - from another session, whilst the above was running:

SQL> SELECT status, COUNT (status) FROM user_parallel_execute_chunks GROUP BY status;

STATUS               COUNT(STATUS)
-------------------- -------------
ASSIGNED                         7
UNASSIGNED                  112384
PROCESSED                    74719

SQL> /

STATUS               COUNT(STATUS)
-------------------- -------------
ASSIGNED                        10
UNASSIGNED                   36399
PROCESSED                   150701



So now let's look at options... I'll start with a basic delete

SQL> set timing off
SQL> drop table t purge;

Table dropped.

SQL>
SQL> create table T
  2  as select a.* from all_Objects a,
  3     ( select 1 from dual connect by level <= 30 );

Table created.

SQL>
SQL> select count(*),
  2         count(case when owner like 'APEX%' then 1 end)
  3  from t;

  COUNT(*) COUNT(CASEWHENOWNERLIKE'APEX%'THEN1END)
---------- ---------------------------------------
   2819880                                  187050

1 row selected.

SQL>
SQL> set timing on
SQL> set serverout on
SQL> delete from T where owner like 'APEX%';

187050 rows deleted.

Elapsed: 00:00:09.83
SQL> commit;

Commit complete.


9 seconds! That's a whole lot better than 174 seconds.

Now let's say for some reason, doing that big delete in one swoop causes an issue for you or your server admins etc...Let's check out doing it in batches (but still in serial).

SQL> set timing off
SQL> drop table t purge;

Table dropped.

SQL>
SQL> create table T
  2  as select a.* from all_Objects a,
  3     ( select 1 from dual connect by level <= 30 );

Table created.

SQL>
SQL>
SQL> set timing on
SQL> set serverout on
SQL> declare
  2    type rid_list is table of rowid index by pls_integer;
  3    l_rid rid_list;
  4  begin
  5    select rowid bulk collect into l_rid
  6    from t where owner like 'APEX%';
  7
  8    dbms_output.put_line(l_rid.count);
  9    for x in 0 .. trunc(l_rid.count / 1000) loop
 10      forall i in x*1000+1 .. least( (x+1)*1000, l_rid.count )
 11        delete from t where rowid = l_rid(i);
 12      dbms_output.put_line('Deleted '||sql%rowcount||' rows');
 13    end loop;
 14    commit;
 15  end;
 16  /
187050
Deleted 1000 rows
Deleted 1000 rows
Deleted 1000 rows
[snip]
Deleted 1000 rows
Deleted 1000 rows
Deleted 1000 rows
Deleted 1000 rows
Deleted 50 rows

PL/SQL procedure successfully completed.

Elapsed: 00:00:12.63
SQL>



I could put some "sleep" time after each batch to control server utilization... and I'm still going to come in way, way ahead of the row-at-a-time approach.
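
For instance, a throttled variant of the loop above might look like this (a sketch; DBMS_LOCK.SLEEP needs EXECUTE on DBMS_LOCK, and the 0.5 second pause and 1000-row batch size are just arbitrary examples):

declare
  type rid_list is table of rowid index by pls_integer;
  l_rid rid_list;
begin
  select rowid bulk collect into l_rid
  from   t
  where  owner like 'APEX%';

  for x in 0 .. trunc(l_rid.count / 1000) loop
    forall i in x*1000+1 .. least( (x+1)*1000, l_rid.count )
      delete from t where rowid = l_rid(i);
    commit;                -- commit each batch to keep the transaction small
    dbms_lock.sleep(0.5);  -- pause between batches to limit server load
  end loop;
end;
/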

Hope this helps.

A reader, April 08, 2016 - 8:34 am UTC

Ok, thank you very much for your time. I had already tried both approaches ("single" delete and "bulk" delete), and now I was trying this additional solution using parallelism.
I will continue to investigate which approach is most efficient.
The problem, however, is related to a single table that has 9 child tables, plus a foreign key to itself. This, of course, hurts the delete performance. Most likely this table needs a refactor.

Thank you
