PARALLEL Hint not getting used in SELECT


Question and Answer

Connor McDonald

Thanks for the question.

Asked: October 26, 2017 - 7:56 pm UTC

Last updated: October 08, 2021 - 1:13 am UTC

Version: 11g


You Asked

Hi,

I have a SQL written in stored procedure:
SELECT /*+ PARALLEL(TB_TEST, 8) */ *
FROM TB_TEST
WHERE DT <= TO_DATE('01/01/2015', 'MM/DD/YYYY');

When I execute the procedure and monitor v$session, I can see only one session with the SQL_ID belonging to the above SQL.

Could you please help me understand -
When we use parallel(8), ideally it should show 8 entries in v$session, right?
My DB CPU_COUNT is 32.
Can you help me work out why it's not executing in parallel in my case?


and Connor said...

There are a *lot* of potential reasons why you are not running in parallel, eg some of the parameters listed here

https://asktom.oracle.com/pls/apex/asktom.search?tag=parallel-processing-200504

Resource Manager,
other sessions running in parallel,
initialization parameters,
optimizer costing

all play a part. For example, here's a "parallel" query against a small table

SQL> select /*+ parallel */ count(*) from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1206915961

-------------------------------------------------------------------------
| Id  | Operation        | Name         | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |              |     1 |     0   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE  |              |     1 |            |          |
|   2 |   INDEX FULL SCAN| SYS_C0041841 |     1 |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1


Why? It's a one-row table, and the best way to scan it was via the index... so no parallel operations were performed.

There is no parameter "db_cpu_count", so I assume you mean cpu_count? In any case, run your query under autotrace like I've done above to get more information.
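
For example, something along these lines in SQL*Plus (just a sketch, using the table from your question) shows the plan, including the Note section about the computed degree of parallelism, without running the whole query:

SQL> set autotrace traceonly explain
SQL> select /*+ parallel(tb_test, 8) */ *
  2  from   tb_test
  3  where  dt <= to_date('01/01/2015','MM/DD/YYYY');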

After that, you can look at setting a parallel query trace, eg

alter session set "_px_trace"=high,all;

This will write a trace file with (a lot of) diagnostic information about the decisions on choosing parallel processes.
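
The trace file goes to your session's trace destination; one way to locate it (a sketch) is:

SQL> select value
  2  from   v$diag_info
  3  where  name = 'Default Trace File';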


Comments

Number of CPUs used

A reader, October 27, 2017 - 3:05 pm UTC

Hi Connor
I am deleting records from a table in 5 chunks, using parallel_level => 5 in the DBMS_PARALLEL_EXECUTE run_task call.

In the delete statement I am putting parallel(5). NO_OF_CPU on my side is 32.
Is there any way I can find out / guarantee whether my 5 chunks and parallel(5) are actually being used?
Any session view?
Connor McDonald
October 28, 2017 - 2:40 am UTC

And how is this related to the original question?

Can I have some of the code for your delete-in-parallel task?

GO, December 19, 2018 - 9:25 am UTC

I need some examples of code using the /*+ PARALLEL */ hint and/or the task method with DBMS_PARALLEL_EXECUTE.

Thanks,
Connor McDonald
December 20, 2018 - 7:52 am UTC

SQL> create table t as select * from dba_objects
  2  where object_id is not null;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_parallel_execute.create_task(task_name => 'task1');

PL/SQL procedure successfully completed.

SQL>
SQL> begin
  2    dbms_parallel_execute.create_chunks_by_number_col
  3      (task_name=>'task1',
  4       table_owner=>user,
  5       table_name=>'T',
  6       table_column=>'OBJECT_ID',
  7       chunk_size=>1000);
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*)
  2  from   user_parallel_execute_chunks
  3  where  task_name = 'task1';

  COUNT(*)
----------
       148

1 row selected.

SQL>
SQL> declare
  2    l_delete varchar2(200);
  3  begin
  4    l_delete := 'delete t where object_id between :start_id and :end_id';
  5
  6    dbms_parallel_execute.run_task
  7      (task_name=> 'task1',
  8       sql_stmt=> l_delete,
  9       language_flag=> dbms_sql.native,
 10       parallel_level=>8);
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> select count(*) from t;

  COUNT(*)
----------
         0

1 row selected.

SQL>
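
As a follow-up to the demo above, you would normally check the chunk and task status afterwards and then clean up - a sketch, using the same task name:

SQL> select status, count(*)
  2  from   user_parallel_execute_chunks
  3  where  task_name = 'task1'
  4  group  by status;

SQL> begin
  2    if dbms_parallel_execute.task_status('task1') = dbms_parallel_execute.finished then
  3      dbms_parallel_execute.drop_task('task1');
  4    end if;
  5  end;
  6  /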


Parallel on Stats gathering process

Rajeshwaran, Jeyabal, October 08, 2019 - 8:17 am UTC

Team,

Very recently we found this in our customer's production database:
exec dbms_stats.gather_table_stats(user,'T',degree=>4);

The above stats-gathering command used to complete in 15 to 20 minutes using parallel slaves, from 9/5 to 9/23.
We verified that from "dba_hist_active_sess_history" using the SQL below.
select trunc(sample_time), qc_session_id,qc_session_serial#,
    count(*), 
    min(sample_time) as start_time,
    max(sample_time) as end_time,
    round( (cast( max(sample_time) as date) -
    cast( min(sample_time) as date) )*24*60,2) as diff
from dba_hist_active_sess_history
where module =??
and action =??
group by trunc(sample_time), qc_session_id,qc_session_serial#
having count(*) > 1
order by trunc(sample_time),start_time;

However, from 9/24 until today it has been running for more than 3 hours and does not do parallel execution for the stats gathering.
qc_session_id and qc_session_serial# were reported as NULL values.

Also, SQL_ID "6z1wvv7v2ubpk" - the underlying SQL generated by the dbms_stats command - doesn't have any entry in "dba_hist_reports" after 9/24; we verified that with the SQL below.
select *
from dba_hist_reports
where component_name='sqlmonitor'
and report_name ='main'
and key1 ='6z1wvv7v2ubpk'
order by period_start_time DESC;

Looking into dba_hist_sqlstat, from 9/5 to 9/23 the plan_hash_value is 2936370353, which is a parallel plan with PX_SERVERS_EXECS_DELTA of 4.
However, from 9/24 until today the plan_hash_value is 2763545799, which is a serial plan with PX_SERVERS_EXECS_DELTA of 0.
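
(For reference, the comparison above can be pulled with something along these lines - just a sketch against the AWR views:)

select s.begin_interval_time,
       q.plan_hash_value,
       q.px_servers_execs_delta,
       q.executions_delta
from   dba_hist_sqlstat  q,
       dba_hist_snapshot s
where  q.sql_id = '6z1wvv7v2ubpk'
and    q.snap_id = s.snap_id
and    q.dbid = s.dbid
and    q.instance_number = s.instance_number
order  by s.begin_interval_time;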

Any inputs/directions on where we could get information about why this SQL was downgraded from parallel to serial execution since 9/24? Kindly advise.

BTW: this is on Oracle Exadata running on 12.2.0.1
Chris Saxon
October 08, 2019 - 3:50 pm UTC

Parallel on Stats gathering process

Rajeshwaran, Jeyabal, October 09, 2019 - 2:46 am UTC

Thanks for the above link.

As mentioned in that link:

we don't use estimate_percent; it is left at its default value.

The table on which stats are being gathered doesn't have any virtual columns or function-based indexes, and it is 117 GB in size.

Stats were gathered in parallel until 9/23; starting 9/24 they were not. Is there any clue / specific data dictionary view available to see why it was downgraded from parallel to serial execution?
Connor McDonald
October 21, 2019 - 12:10 pm UTC

Try using a PX trace

alter session set "_px_trace"="xxx","yyy",...

where those values are one, some, or all of:

compilation
messaging
execution

And see what comes out of the trace. It will show which slaves we *attempted* to obtain, versus which ones *were* obtained.
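
Following that template, the session-level setting would look something like this (a sketch):

alter session set "_px_trace"="compilation","messaging","execution";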

Unused hint in SQL plan without reason given

Naresh, October 03, 2021 - 11:40 pm UTC

Hello Masters,

I have a SQL plan for a SQL statement with a parallel hint - the plan shows the hint as UNUSED but no reason is specified. Anything you can advise on this? This is 19c on Exadata.


INSERT INTO TABLE_X_PURGE_DATA (OBJID, X_PURGE_OBJID, X_PURGE_TABLE)
SELECT /*+ full(xa) parallel(xa 8) */ PURGE_OBJID_SEQ.NEXTVAL,
XA.OBJID, 'TABLE_X' FROM TABLE_X XA WHERE XA.X_DATE <
PURGE.FUNC_DATE_MINUS (:B2 ) AND XA.X_DATE > TO_DATE ('1753.01.01',
'yyyy.mm.dd') AND ROWNUM <= :B1

Plan hash value: 3916535678

-----------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name               | Rows  | Bytes | Cost  | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT               |                    |       |       |  4959K|       |       |
|   1 |  LOAD TABLE CONVENTIONAL       | TABLE_X_PURGE_DATA |       |       |       |       |       |
|   2 |   SEQUENCE                     | PURGE_OBJID_SEQ    |       |       |       |       |       |
|*  3 |    COUNT STOPKEY               |                    |       |       |       |       |       |
|*  4 |     FILTER                     |                    |       |       |       |       |       |
|   5 |      PARTITION RANGE ALL       |                    |   150M|  2300M|  4959K|     1 |1048575|
|*  6 |       TABLE ACCESS STORAGE FULL| TABLE_X            |   150M|  2300M|  4959K|     1 |1048575|
-----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(ROWNUM<=:B1)
   4 - filter("PURGE"."FUNC_DATE_MINUS"(:B2)>TO_DATE(' 1753-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
   6 - storage("XA"."X_DATE">TO_DATE(' 1753-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter(("XA"."X_DATE">TO_DATE(' 1753-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "XA"."X_DATE"<"PURGE"."FUNC_DATE_MINUS"(:B2)))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   6 -  SEL$1 / XA@SEL$1
         U -  parallel(xa 8)

The session statistics don't show any downgraded PX operations - although "DML statements parallelized" is also 0. There are a lot of DFO trees parallelized and this is the only SQL run by the session - so maybe that means the insert is serial but the query is parallel (I mean, as per the session stats, not in reality).

   INST_ID        SID NAME                                                                  VALUE
---------- ---------- ---------------------------------------------------------------- ----------
         4      30201 DDL statements parallelized                                               0
         4      30201 DFO trees parallelized                                                  151
         4      30201 DML statements parallelized                                               0
         4      30201 Parallel operations downgraded 1 to 25 pct                                0
         4      30201 Parallel operations downgraded 25 to 50 pct                               0
         4      30201 Parallel operations downgraded 50 to 75 pct                               0
         4      30201 Parallel operations downgraded 75 to 99 pct                               0
         4      30201 Parallel operations downgraded to serial                                  0
         4      30201 Parallel operations not downgraded                                      151
         4      30201 queries parallelized                                                      1
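
(For reference, a listing like the one above can be produced with something along these lines - a sketch against gv$sesstat/gv$statname, using the SID from the output:)

select s.inst_id, s.sid, n.name, s.value
from   gv$sesstat  s,
       gv$statname n
where  s.statistic# = n.statistic#
and    s.inst_id    = n.inst_id
and    s.sid        = 30201
and    lower(n.name) like '%parallel%'
order  by n.name;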






Connor McDonald
October 04, 2021 - 4:51 am UTC

Can we get the DDL for the table please (via DBMS_METADATA)?

DDL for the table 1

Naresh, October 04, 2021 - 12:14 pm UTC

DDL for the table being inserted into:

select dbms_metadata.get_ddl( object_type, object_name, owner )
from dba_objects where lower(object_name) like lower('table_x_purge_data')

DBMS_METADATA.GET_DDL(OBJECT_TYPE,OBJECT_NAME,OWNER)
--------------------------------------------------------------------------------

  CREATE OR REPLACE PUBLIC SYNONYM "TABLE_X_PURGE_DATA" FOR "SA"."TABLE_X_PURGE_DATA"


  CREATE TABLE "SA"."TABLE_X_PURGE_DATA"
   (    "OBJID" NUMBER,
        "DEV" NUMBER,
        "X_PURGE_OBJID" NUMBER,
        "X_PURGE_TABLE" VARCHAR2(80),
        "X_PURGE_TRANSACTION" NUMBER,
        "X_THREAD_ID" NUMBER,
        "X_MAIN_ENTITY" VARCHAR2(80),
        "X_PURGE_BAN" VARCHAR2(40),
         PRIMARY KEY ("OBJID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "POOL_IX"  ENABLE,
         SUPPLEMENTAL LOG GROUP "GGS_360225" ("OBJID") ALWAYS,
         SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS,
         SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS,
         SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS,
         SUPPLEMENTAL LOG DATA (ALL) COLUMNS
   ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "POOL_DATA"


Connor McDonald
October 05, 2021 - 4:16 am UTC

This is our improved handling in 19c.

11g was actually wrong, because we should not have allowed a parallel operation on a PL/SQL function that was not guaranteed to be parallel safe.

Here's a full walkthrough:


SQL>
SQL>  CREATE TABLE "TABLE_X_PURGE_DATA"
  2     (    "OBJID" NUMBER,
  3          "DEV" NUMBER,
  4          "X_PURGE_OBJID" NUMBER,
  5          "X_PURGE_TABLE" VARCHAR2(80),
  6          "X_PURGE_TRANSACTION" NUMBER,
  7          "X_THREAD_ID" NUMBER,
  8          "X_MAIN_ENTITY" VARCHAR2(80),
  9          "X_PURGE_BAN" VARCHAR2(40),
 10           PRIMARY KEY ("OBJID")
 11    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
 12     ) SEGMENT CREATION IMMEDIATE;

Table created.

SQL>
SQL>
SQL>
SQL> CREATE TABLE "TABLE_X"
  2     (    "OBJID" NUMBER,
  3          "DEV" NUMBER,
  4          "X_ACCOUNTID" VARCHAR2(40),
  5          "S_X_ACCOUNTID" VARCHAR2(40),
  6          "X_DATE" DATE,
  7          "X_USERID" VARCHAR2(20),
  8          "X_SITED" VARCHAR2(80),
  9          "X_ACTIVITY" VARCHAR2(40),
 10          "X_SCREEN_NAME" VARCHAR2(60),
 11          "X_DATA_NAME_1" VARCHAR2(60),
 12          "X_VALUE_NAME_1" VARCHAR2(255),
 13          "X_DATA_NAME_2" VARCHAR2(60),
 14          "X_VALUE_NAME_2" VARCHAR2(60),
 15          "X_DATA_NAME_3" VARCHAR2(60),
 16          "X_VALUE_NAME_3" VARCHAR2(60),
 17          "S_X_USERID" VARCHAR2(20),
 18          "X_CLIENT_HOST" VARCHAR2(60),
 19          "X_SERVER_HOST" VARCHAR2(60)
 20     ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 21    PARTITION BY RANGE ("X_DATE") INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
 22   (
 23   PARTITION "WEEK_01_2016"  VALUES LESS THAN (date '2016-01-04') SEGMENT CREATION IMMEDIATE,
 24   PARTITION "WEEK_02_2016"  VALUES LESS THAN (date '2016-01-11') SEGMENT CREATION IMMEDIATE,
 25   PARTITION "WEEK_03_2016"  VALUES LESS THAN (date '2016-01-18') SEGMENT CREATION IMMEDIATE,
 26   PARTITION "WEEK_04_2016"  VALUES LESS THAN (date '2016-01-25') SEGMENT CREATION IMMEDIATE,
 27   PARTITION "WEEK_05_2016"  VALUES LESS THAN (date '2016-02-01') SEGMENT CREATION IMMEDIATE
 28  );

Table created.

SQL>
SQL> create sequence PURGE_OBJID_SEQ;

Sequence created.

SQL>
SQL> create or replace package purge is
  2  function func_date_minus(d date) return date;
  3  end;
  4  /

Package created.

SQL>
SQL> create or replace package body purge is
  2  function func_date_minus(d date) return date is begin return sysdate; end;
  3  end;
  4  /

Package body created.

SQL>
SQL> explain plan for
  2  INSERT INTO TABLE_X_PURGE_DATA (OBJID, X_PURGE_OBJID, X_PURGE_TABLE)
  3  SELECT /*+ full(xa) parallel(xa 8) */ PURGE_OBJID_SEQ.NEXTVAL, XA.OBJID, 'TABLE_X'
  4  FROM TABLE_X XA
  5  WHERE XA.X_DATE < date '2021-01-01' -- PURGE.FUNC_DATE_MINUS (:B2 )
  6  AND XA.X_DATE > TO_DATE ('1753.01.01', 'yyyy.mm.dd')
  7  AND ROWNUM <= :B1;

Explained.

SQL> select * from dbms_xplan.display();

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------
Plan hash value: 1394344913

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |                    |     1 |    22 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  LOAD TABLE CONVENTIONAL | TABLE_X_PURGE_DATA |       |       |            |          |       |       |        |      |            |
|   2 |   SEQUENCE               | PURGE_OBJID_SEQ    |       |       |            |          |       |       |        |      |            |
|*  3 |    COUNT STOPKEY         |                    |       |       |            |          |       |       |        |      |            |
|   4 |     PX COORDINATOR       |                    |       |       |            |          |       |       |        |      |            |
|   5 |      PX SEND QC (RANDOM) | :TQ10000           |     1 |    22 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|*  6 |       COUNT STOPKEY      |                    |       |       |            |          |       |       |  Q1,00 | PCWC |            |
|   7 |        PX BLOCK ITERATOR |                    |     1 |    22 |     2   (0)| 00:00:01 |     1 |    64 |  Q1,00 | PCWC |            |
|*  8 |         TABLE ACCESS FULL| TABLE_X            |     1 |    22 |     2   (0)| 00:00:01 |     1 |    64 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(ROWNUM<=TO_NUMBER(:B1))
   6 - filter(ROWNUM<=TO_NUMBER(:B1))
   8 - filter("XA"."X_DATE"<TO_DATE(' 2021-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "XA"."X_DATE">TO_DATE(' 1753-01-01
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - Degree of Parallelism is 8 because of table property
   - PDML is disabled in current session

29 rows selected.

SQL>
SQL> explain plan for
  2  INSERT INTO TABLE_X_PURGE_DATA (OBJID, X_PURGE_OBJID, X_PURGE_TABLE)
  3  SELECT /*+ full(xa) parallel(xa 8) */ PURGE_OBJID_SEQ.NEXTVAL, XA.OBJID, 'TABLE_X'
  4  FROM TABLE_X XA
  5  WHERE XA.X_DATE < PURGE.FUNC_DATE_MINUS (:B2 )
  6  AND XA.X_DATE > TO_DATE ('1753.01.01', 'yyyy.mm.dd')
  7  AND ROWNUM <= :B1;

Explained.

SQL> select * from dbms_xplan.display();

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------
Plan hash value: 2575781037

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |                    |     1 |    22 |     2   (0)| 00:00:01 |       |       |
|   1 |  LOAD TABLE CONVENTIONAL | TABLE_X_PURGE_DATA |       |       |            |          |       |       |
|   2 |   SEQUENCE               | PURGE_OBJID_SEQ    |       |       |            |          |       |       |
|*  3 |    COUNT STOPKEY         |                    |       |       |            |          |       |       |
|*  4 |     FILTER               |                    |       |       |            |          |       |       |
|   5 |      PARTITION RANGE ALL |                    |     1 |    22 |     2   (0)| 00:00:01 |     1 |1048575|
|*  6 |       TABLE ACCESS FULL  | TABLE_X            |     1 |    22 |     2   (0)| 00:00:01 |     1 |1048575|
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(ROWNUM<=TO_NUMBER(:B1))
   4 - filter("PURGE"."FUNC_DATE_MINUS"(:B2)>TO_DATE(' 1753-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   6 - filter("XA"."X_DATE">TO_DATE(' 1753-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "XA"."X_DATE"<"PURGE"."FUNC_DATE_MINUS"(:B2))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   6 -  SEL$1 / XA@SEL$1
         U -  parallel(xa 8)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

32 rows selected.

SQL>
SQL> create or replace package purge is
  2  function func_date_minus(d date) return date parallel_enable;
  3  end;
  4  /

Package created.

SQL>
SQL> create or replace package body purge is
  2  function func_date_minus(d date) return date parallel_enable is begin return sysdate; end;
  3  end;
  4  /

Package body created.

SQL> explain plan for
  2  INSERT INTO TABLE_X_PURGE_DATA (OBJID, X_PURGE_OBJID, X_PURGE_TABLE)
  3  SELECT /*+ full(xa) parallel(xa 8) */ PURGE_OBJID_SEQ.NEXTVAL, XA.OBJID, 'TABLE_X'
  4  FROM TABLE_X XA
  5  WHERE XA.X_DATE < date '2021-01-01' -- PURGE.FUNC_DATE_MINUS (:B2 )
  6  AND XA.X_DATE > TO_DATE ('1753.01.01', 'yyyy.mm.dd')
  7  AND ROWNUM <= :B1;

Explained.

SQL> select * from dbms_xplan.display();

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------
Plan hash value: 1394344913

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |                    |     1 |    22 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  LOAD TABLE CONVENTIONAL | TABLE_X_PURGE_DATA |       |       |            |          |       |       |        |      |            |
|   2 |   SEQUENCE               | PURGE_OBJID_SEQ    |       |       |            |          |       |       |        |      |            |
|*  3 |    COUNT STOPKEY         |                    |       |       |            |          |       |       |        |      |            |
|   4 |     PX COORDINATOR       |                    |       |       |            |          |       |       |        |      |            |
|   5 |      PX SEND QC (RANDOM) | :TQ10000           |     1 |    22 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|*  6 |       COUNT STOPKEY      |                    |       |       |            |          |       |       |  Q1,00 | PCWC |            |
|   7 |        PX BLOCK ITERATOR |                    |     1 |    22 |     2   (0)| 00:00:01 |     1 |    64 |  Q1,00 | PCWC |            |
|*  8 |         TABLE ACCESS FULL| TABLE_X            |     1 |    22 |     2   (0)| 00:00:01 |     1 |    64 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(ROWNUM<=TO_NUMBER(:B1))
   6 - filter(ROWNUM<=TO_NUMBER(:B1))
   8 - filter("XA"."X_DATE"<TO_DATE(' 2021-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "XA"."X_DATE">TO_DATE(' 1753-01-01
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - Degree of Parallelism is 8 because of table property
   - PDML is disabled in current session

29 rows selected.

SQL>
SQL>
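
One side note on the "PDML is disabled in current session" line in the plan notes above: the insert step itself (LOAD TABLE CONVENTIONAL) stays serial unless parallel DML is enabled in the session. As a sketch, and subject to the usual costing and resource rules, that would be:

SQL> alter session enable parallel dml;

after which the optimizer is at least allowed to consider a parallel load.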


DDL for table 2

Naresh, October 04, 2021 - 12:40 pm UTC

Getting the DDL for table 2 turned out to be more complicated :)

The table has 146 weekly partitions, and I think somewhere along the way the DB was upgraded and the manually created weekly partitions gave way to automatic interval partitioning - and I think dbms_metadata.get_ddl is not showing the later partitions - I had to get that part from Toad.

So here it is - I kept only a few of the 146 partitions to show the template - THANK YOU for your attention and expertise:

CREATE TABLE "SA"."TABLE_X"
   (    "OBJID" NUMBER,
        "DEV" NUMBER,
        "X_ACCOUNTID" VARCHAR2(40),
        "S_X_ACCOUNTID" VARCHAR2(40),
        "X_DATE" DATE,
        "X_USERID" VARCHAR2(20),
        "X_SITED" VARCHAR2(80),
        "X_ACTIVITY" VARCHAR2(40),
        "X_SCREEN_NAME" VARCHAR2(60),
        "X_DATA_NAME_1" VARCHAR2(60),
        "X_VALUE_NAME_1" VARCHAR2(255),
        "X_DATA_NAME_2" VARCHAR2(60),
        "X_VALUE_NAME_2" VARCHAR2(60),
        "X_DATA_NAME_3" VARCHAR2(60),
        "X_VALUE_NAME_3" VARCHAR2(60),
        "S_X_USERID" VARCHAR2(20),
        "X_CLIENT_HOST" VARCHAR2(60),
        "X_SERVER_HOST" VARCHAR2(60),
         SUPPLEMENTAL LOG GROUP "GGS_359028" ("OBJID") ALWAYS,
         SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS,
         SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS,
         SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS,
         SUPPLEMENTAL LOG DATA (ALL) COLUMNS
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "POOL_DATA"
  PARTITION BY RANGE ("X_DATE") INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
 (PARTITION "WEEK_01_2016"  VALUES LESS THAN (TO_DATE(' 2016-01-04 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "POOL_DATA",

-- removed all partition definitions up to the last one, which seems to be the manually created one
PARTITION WEEK_07_2018 VALUES LESS THAN (TO_DATE(' 2018-02-12 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE POOL_DATA PCTFREE    10 INITRANS   1 MAXTRANS   255
    STORAGE    ( INITIAL          8M NEXT             1M MINEXTENTS       1 MAXEXTENTS       UNLIMITED BUFFER_POOL      DEFAULT),

-- this is where the automatic interval partitions start, I guess

PARTITION VALUES LESS THAN (TO_DATE(' 2018-03-12 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE POOL_DATA PCTFREE    10 INITRANS   1 MAXTRANS   255
    STORAGE    ( INITIAL          8M NEXT             1M MINEXTENTS       1 MAXEXTENTS       UNLIMITED BUFFER_POOL      DEFAULT),
  PARTITION VALUES LESS THAN (TO_DATE(' 2021-10-12 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE POOL_DATA PCTFREE    10 INITRANS   1 MAXTRANS   255
    STORAGE    ( INITIAL          8M NEXT             1M MINEXTENTS       1 MAXEXTENTS       UNLIMITED BUFFER_POOL      DEFAULT)
)
NOCACHE;

Naresh, October 04, 2021 - 12:42 pm UTC

To clarify the last 2 partitions shown in the previous DDL:

PARTITION VALUES LESS THAN (TO_DATE(' 2018-03-12 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE POOL_DATA PCTFREE    10 INITRANS   1 MAXTRANS   255
    STORAGE    ( INITIAL          8M NEXT             1M MINEXTENTS       1 MAXEXTENTS       UNLIMITED BUFFER_POOL      DEFAULT),

<several partitions between these two - removed for brevity>

  PARTITION VALUES LESS THAN (TO_DATE(' 2021-10-12 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING NOCOMPRESS TABLESPACE POOL_DATA PCTFREE    10 INITRANS   1 MAXTRANS   255
    STORAGE    ( INITIAL          8M NEXT             1M MINEXTENTS       1 MAXEXTENTS       UNLIMITED BUFFER_POOL      DEFAULT)
)

11.2 vs 19c

Naresh, October 04, 2021 - 1:05 pm UTC

And one last thing: this SQL was doing PX execution in 11.2 - we upgraded recently to 19c. Below is the 11.2 plan:

Plan hash value: 3377935494

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Rows  | Bytes | Cost  | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                |                 |       |       |   773K|       |       |        |      |            |
|   1 |  LOAD TABLE CONVENTIONAL        |                 |       |       |       |       |       |        |      |            |
|   2 |   SEQUENCE                      | PURGE_OBJID_SEQ |       |       |       |       |       |        |      |            |
|   3 |    COUNT STOPKEY                |                 |       |       |       |       |       |        |      |            |
|   4 |     PX COORDINATOR FORCED SERIAL|                 |       |       |       |       |       |        |      |            |
|   5 |      PX SEND QC (RANDOM)        | :TQ10000        |   193M|  2953M|   773K|       |       |  Q1,00 | P->S | QC (RAND)  |
|   6 |       COUNT STOPKEY             |                 |       |       |       |       |       |  Q1,00 | PCWC |            |
|   7 |        FILTER                   |                 |       |       |       |       |       |  Q1,00 | PCWC |            |
|   8 |         PX BLOCK ITERATOR       |                 |   193M|  2953M|   773K|     1 |   KEY |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL      | TABLE_X_        |   193M|  2953M|   773K|     1 |   KEY |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)

Thank You!!!

Naresh, October 05, 2021 - 6:13 pm UTC

Thank you very much Connor for the last response about the parallel safe function.
Connor McDonald
October 08, 2021 - 1:13 am UTC

Glad we could help
