Statistics Gathering using older version feature

Question and Answer
Chris Saxon

Thanks for the question, Narendra.

Asked: October 17, 2023 - 10:45 pm UTC

Last updated: February 16, 2024 - 6:33 pm UTC

Version: 19.19

Viewed 10K+ times
You Asked

Hello Connor/Chris,

I am in the middle of troubleshooting a slow-running application maintenance job on a database that has been migrated/upgraded from on-premises 12.1.0.2 on Linux x64 to an Exadata Cloud@Customer setup. Many of the "Top SQLs" that have regressed are internal SQLs generated by the GATHER_TABLE_STATS call that is part of the application job. The internal SQLs are similar to the one below but for some reason run 5-6 times slower on the 19c database. I am running out of options to figure out what could be causing this, so I was wondering whether there is a way to replicate the 12.1.0.2 statistics-gathering behaviour in a 19c database.

Example internal SQL that has regressed:

select /*+  full(t)    no_parallel(t) no_parallel_index(t) dbms_stats 
  cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring 
  xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') 
  no_substrb_pad  */to_char(count("OWNER")),substrb(dump(min("OWNER"),16,0,64)
  ,1,240),substrb(dump(max("OWNER"),16,0,64),1,240),
  to_char(count("OBJECT_NAME")),substrb(dump(min("OBJECT_NAME"),16,0,64),1,
  240),substrb(dump(max("OBJECT_NAME"),16,0,64),1,240),
  to_char(count("SUBOBJECT_NAME")),substrb(dump(min("SUBOBJECT_NAME"),16,0,64)
  ,1,240),substrb(dump(max("SUBOBJECT_NAME"),16,0,64),1,240),
  to_char(count("OBJECT_ID")),substrb(dump(min("OBJECT_ID"),16,0,64),1,240),
  substrb(dump(max("OBJECT_ID"),16,0,64),1,240),
  to_char(count("DATA_OBJECT_ID")),substrb(dump(min("DATA_OBJECT_ID"),16,0,64)
  ,1,240),substrb(dump(max("DATA_OBJECT_ID"),16,0,64),1,240),
  to_char(count("OBJECT_TYPE")),substrb(dump(min("OBJECT_TYPE"),16,0,64),1,
  240),substrb(dump(max("OBJECT_TYPE"),16,0,64),1,240),
  to_char(count("CREATED")),substrb(dump(min("CREATED"),16,0,64),1,240),
  substrb(dump(max("CREATED"),16,0,64),1,240),to_char(count("LAST_DDL_TIME")),
  substrb(dump(min("LAST_DDL_TIME"),16,0,64),1,240),
  substrb(dump(max("LAST_DDL_TIME"),16,0,64),1,240),to_char(count("TIMESTAMP")
  ),substrb(dump(min("TIMESTAMP"),16,0,64),1,240),
  substrb(dump(max("TIMESTAMP"),16,0,64),1,240),to_char(count("STATUS")),
  substrb(dump(min("STATUS"),16,0,64),1,240),substrb(dump(max("STATUS"),16,0,
  64),1,240),to_char(count("TEMPORARY")),substrb(dump(min("TEMPORARY"),16,0,
  64),1,240),substrb(dump(max("TEMPORARY"),16,0,64),1,240),
  to_char(count("GENERATED")),substrb(dump(min("GENERATED"),16,0,64),1,240),
  substrb(dump(max("GENERATED"),16,0,64),1,240),to_char(count("SECONDARY")),
  substrb(dump(min("SECONDARY"),16,0,64),1,240),substrb(dump(max("SECONDARY"),
  16,0,64),1,240),to_char(count("NAMESPACE")),substrb(dump(min("NAMESPACE"),
  16,0,64),1,240),substrb(dump(max("NAMESPACE"),16,0,64),1,240),
  to_char(count("EDITION_NAME")),substrb(dump(min("EDITION_NAME"),16,0,64),1,
  240),substrb(dump(max("EDITION_NAME"),16,0,64),1,240),
  to_char(count("SHARING")),substrb(dump(min("SHARING"),16,0,64),1,240),
  substrb(dump(max("SHARING"),16,0,64),1,240),to_char(count("EDITIONABLE")),
  substrb(dump(min("EDITIONABLE"),16,0,64),1,240),
  substrb(dump(max("EDITIONABLE"),16,0,64),1,240),
  to_char(count("ORACLE_MAINTAINED")),substrb(dump(min("ORACLE_MAINTAINED"),
  16,0,64),1,240),substrb(dump(max("ORACLE_MAINTAINED"),16,0,64),1,240),
  to_char(count("APPLICATION")),substrb(dump(min("APPLICATION"),16,0,64),1,
  240),substrb(dump(max("APPLICATION"),16,0,64),1,240),
  to_char(count("DEFAULT_COLLATION")),substrb(dump(min("DEFAULT_COLLATION"),
  16,0,64),1,240),substrb(dump(max("DEFAULT_COLLATION"),16,0,64),1,240),
  to_char(count("DUPLICATED")),substrb(dump(min("DUPLICATED"),16,0,64),1,240),
  substrb(dump(max("DUPLICATED"),16,0,64),1,240),to_char(count("SHARDED")),
  substrb(dump(min("SHARDED"),16,0,64),1,240),substrb(dump(max("SHARDED"),16,
  0,64),1,240),to_char(count("CREATED_APPID")),
  substrb(dump(min("CREATED_APPID"),16,0,64),1,240),
  substrb(dump(max("CREATED_APPID"),16,0,64),1,240),
  to_char(count("CREATED_VSNID")),substrb(dump(min("CREATED_VSNID"),16,0,64),
  1,240),substrb(dump(max("CREATED_VSNID"),16,0,64),1,240),
  to_char(count("MODIFIED_APPID")),substrb(dump(min("MODIFIED_APPID"),16,0,64)
  ,1,240),substrb(dump(max("MODIFIED_APPID"),16,0,64),1,240),
  to_char(count("MODIFIED_VSNID")),substrb(dump(min("MODIFIED_VSNID"),16,0,64)
  ,1,240),substrb(dump(max("MODIFIED_VSNID"),16,0,64),1,240),
  count(rowidtochar(rowid)) 
from
 "NP"."T5" t  /* TOPN,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,ACL,NIL,NIL,NDV,NIL,NIL,
  NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,
  NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,
  NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,
  NDV,NIL,NIL,NDV,NIL,NIL,RWID,U254,U254,U254,U254,U254,U254,U254,U254,U254,
  U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,
  U254,U254U*/



Thanks in advance

and Chris said...

I reached out to Nigel Bayliss, the optimizer product manager, about this. His suggestions:

- Check whether fixed object and dictionary stats have been gathered; if not, gathering these may help
- Some internal queries will run slower in 19c
- Check if you're using AUTO sample size; switch to it if not
- If necessary, you can use fix controls to revert the behaviour in specific cases

If none of the above applies or helps and this is causing problems, please contact support for further help.
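For reference, the first and third checks can be run like this (a minimal SQL*Plus sketch; run as a suitably privileged user):

```sql
-- Gather fixed object (X$) and dictionary stats; both are
-- recommended after an upgrade or migration.
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

-- Check the default sample size; AUTO sample size shows as
-- DBMS_STATS.AUTO_SAMPLE_SIZE.
SELECT DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT') FROM dual;

-- Switch back to AUTO sample size if a fixed percentage is set.
EXEC DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');
```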



Comments

Re: Performance of stats gathering

Narendra, November 24, 2023 - 9:40 am UTC

Hello Chris,

Thank you for looking into this.

> Check whether fixed object and dictionary stats have been gathered; if not gathering these may help
We have already done this and the issue still persists

> There are some internal queries that will run slower
> Check if you're using AUTO sample size; switch to it if not
We are using AUTO sample size on both 12c as well as 19c databases

> If necessary you can use fix controls to revert the behaviour in specific cases
Any pointers about which fix controls we can use?
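For context, fix controls are toggled like this (a sketch; NNNNNNNN is a placeholder bug number, not a recommendation — the relevant number depends on the specific behaviour change):

```sql
-- Toggle an optimizer fix control at session level.
-- NNNNNNNN is a placeholder; consult support for the right number.
ALTER SESSION SET "_fix_control" = 'NNNNNNNN:OFF';

-- List available fix controls and their current values.
SELECT bugno, value, description
FROM   v$system_fix_control
WHERE  description LIKE '%stat%';
```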

>If none of the above applies or helps and this is causing problems, please contact support for further help.

We have already raised an SR a couple of months ago (3-34320127831 : Performance Degradation) but we don't seem to be getting the right focus on it, hence we are reaching out for help from any source.

Chris Saxon
November 24, 2023 - 5:56 pm UTC

I'll see what I can find.

Other than taking longer, what impact is this having on your system?

Re: Statistics gathering performance

Narendra, November 25, 2023 - 6:15 pm UTC

Thanks Chris.
The "bigger picture" is that the application batch jobs are running 3-4 times slower on 19c on ExaCC compared to 12c running on a virtual server. Many application SQLs are running slower on 19c, and all of the "Top SQLs" from AWR reports display the same symptoms as the internal stats-gathering SQLs. SQL Tuning Advisor reports no recommendations and, frankly, the execution plans look optimised.
The only (??) way we have found to achieve acceptable performance on 19c is to increase the DOP, yet CPU utilisation on the underlying server never exceeds 60% while the application batch job runs.
Chris Saxon
November 27, 2023 - 3:02 pm UTC

So you're saying lots of SQL statements are affected - not just stats gathering jobs? What exactly are those symptoms?

The MOS note you've listed below is one case we were thinking of.

Re: Statistics gathering performance

Narendra, November 27, 2023 - 5:46 am UTC

Hello Chris/Connor,

I thought I would mention that in our troubleshooting so far we came across the support note 19c Introduces Enhanced Statistics Gathering for Index Number of Distinct Keys (NDK) (Doc ID 2819849.1) and did manage to get some improvement by implementing its recommendation, but that affected a different internal SQL than the one shown above. The SQL above and the performance issue still persist.

Re: Statistics Gathering Performance

Narendra, November 27, 2023 - 4:14 pm UTC

> So you're saying lots of SQL statements are affected - not just stats gathering jobs? What exactly are those symptoms?

The biggest symptom is that at least one of the application jobs takes between 45 minutes and 1 hour on 12c but anywhere between 3.5 and 4 hours on the 19c ExaCC setup. Initial observations pointed to a lack of CPU, so we doubled the CPU for the 19c database. While the symptoms (high system load) went away, the app job still took 2.5-3 hours to complete on 19c.
We carried out the following work as part of our investigation so far:
1. Compared execution plans of SQLs between 12c and 19c for any possible plan regression. There is no plan regression for any application SQLs.
2. Ran SQL Tuning Advisor on the "Top 5" application SQLs, excluding the internal SQLs generated by the GATHER_TABLE_STATS API call. SQL Tuning Advisor came back with no recommendations at all.

Since then we have managed to bring the app job run times on 19c down to almost match the 12c timings, but we had to double the degree of parallelism on 19c for many of the tables involved and also for the GATHER_TABLE_STATS API (we can't make any app code changes due to challenges with vendor code).
So the latest status is that 12c can service the same application workload with half the DOP compared to 19c, and we are running out of options to describe/explain this behaviour, let alone fix it.
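Raising the stats-gathering DOP without touching vendor code can be done through table preferences (a sketch, reusing the owner/table names from the traces below):

```sql
-- Set a DEGREE preference so stats gathering uses it without
-- changing the application's GATHER_TABLE_STATS call.
EXEC DBMS_STATS.SET_TABLE_PREFS('FOWNER', 'FACTTBL1', 'DEGREE', '4');

-- The application's existing call then picks up the preference.
EXEC DBMS_STATS.GATHER_TABLE_STATS('FOWNER', 'FACTTBL1');
```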
Connor McDonald
November 28, 2023 - 3:52 am UTC

Do you have the capability to run SQL trace on these SQLs (with waits)?

Re: Statistics Gathering Performance

Narendra, November 29, 2023 - 8:37 am UTC

> Do you have the capability to run SQL trace on these SQLs (with waits)?

Hello Connor/Chris,

We tried once to run SQL trace on the entire app job but did not manage to capture all the work (thanks to the DBAs who did it).
However, we have managed to capture SQL trace for a GATHER_TABLE_STATS call for one table, on both 12c and 19c.
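For anyone reproducing this, a session-level 10046 trace with wait events around the stats call can be captured like this (a sketch; the trace file identifier is arbitrary):

```sql
ALTER SESSION SET tracefile_identifier = 'stats_trace';

-- Level 8 = SQL trace plus wait events.
ALTER SESSION SET events '10046 trace name context forever, level 8';

EXEC DBMS_STATS.GATHER_TABLE_STATS('FOWNER', 'FACTTBL1');

ALTER SESSION SET events '10046 trace name context off';
-- Then format the resulting trace file with tkprof.
```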

12c:

PQ Coordinator:
SQL ID: b0vg2w1auktgw Plan Hash: 4250307043

select /*+  full(t)    parallel(t,2) parallel_index(t,2) dbms_stats 
  cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring 
  xmlindex_sel_idx_tbl no_substrb_pad  
  */to_char(count("STANDARD_REFERENCE_ID")),
  substrb(dump(min("STANDARD_REFERENCE_ID"),16,0,64),1,240),
  substrb(dump(max("STANDARD_REFERENCE_ID"),16,0,64),1,240),
  to_char(count("BILLED_AND_UNPAID_PRINCIPAL")),
  substrb(dump(min("BILLED_AND_UNPAID_PRINCIPAL"),16,0,64),1,240),
  substrb(dump(max("BILLED_AND_UNPAID_PRINCIPAL"),16,0,64),1,240),
  to_char(count("ESCROW_SHORTAGE_BAL")),
  substrb(dump(min("ESCROW_SHORTAGE_BAL"),16,0,64),1,240),
  substrb(dump(max("ESCROW_SHORTAGE_BAL"),16,0,64),1,240),
  to_char(count("PROCESSING_GROUP_CODE")),
  substrb(dump(min("PROCESSING_GROUP_CODE"),16,0,64),1,240),
  substrb(dump(max("PROCESSING_GROUP_CODE"),16,0,64),1,240),
  to_char(count("LOAN_STATUS")),substrb(dump(min("LOAN_STATUS"),16,0,64),1,
  240),substrb(dump(max("LOAN_STATUS"),16,0,64),1,240),
  to_char(count("PRODUCT_KEY")),substrb(dump(min("PRODUCT_KEY"),16,0,64),1,
  240),substrb(dump(max("PRODUCT_KEY"),16,0,64),1,240),
  to_char(count("CUSTOMER_KEY")),substrb(dump(min("CUSTOMER_KEY"),16,0,64),1,
  240),substrb(dump(max("CUSTOMER_KEY"),16,0,64),1,240),
  to_char(count("INVESTOR_KEY")),substrb(dump(min("INVESTOR_KEY"),16,0,64),1,
  240),substrb(dump(max("INVESTOR_KEY"),16,0,64),1,240),
  to_char(count("OWNER_KEY")),substrb(dump(min("OWNER_KEY"),16,0,64),1,240),
  substrb(dump(max("OWNER_KEY"),16,0,64),1,240),
  to_char(count("OWNERSHIP_PERCENT")),substrb(dump(min("OWNERSHIP_PERCENT"),
  16,0,64),1,240),substrb(dump(max("OWNERSHIP_PERCENT"),16,0,64),1,240),
  to_char(count("CURRENT_PRINCIPAL_BALANCE")),
  substrb(dump(min("CURRENT_PRINCIPAL_BALANCE"),16,0,64),1,240),
  substrb(dump(max("CURRENT_PRINCIPAL_BALANCE"),16,0,64),1,240),
  to_char(count("INTEREST_BALANCE")),substrb(dump(min("INTEREST_BALANCE"),16,
  0,64),1,240),substrb(dump(max("INTEREST_BALANCE"),16,0,64),1,240),
  to_char(count("FEE_BALANCE")),substrb(dump(min("FEE_BALANCE"),16,0,64),1,
  240),substrb(dump(max("FEE_BALANCE"),16,0,64),1,240),
  to_char(count("OTHER_BALANCES")),substrb(dump(min("OTHER_BALANCES"),16,0,64)
  ,1,240),substrb(dump(max("OTHER_BALANCES"),16,0,64),1,240),
  to_char(count("DAILY_INTEREST")),substrb(dump(min("DAILY_INTEREST"),16,0,64)
  ,1,240),substrb(dump(max("DAILY_INTEREST"),16,0,64),1,240),
  to_char(count("SCHEDULED_PRINCIPAL_BALANCE")),
  substrb(dump(min("SCHEDULED_PRINCIPAL_BALANCE"),16,0,64),1,240),
  substrb(dump(max("SCHEDULED_PRINCIPAL_BALANCE"),16,0,64),1,240),
  to_char(count("PAST_DUE_INTEREST_ACTUAL")),
  substrb(dump(min("PAST_DUE_INTEREST_ACTUAL"),16,0,64),1,240),
  substrb(dump(max("PAST_DUE_INTEREST_ACTUAL"),16,0,64),1,240),
  to_char(count("TOTAL_PAST_DUE_ACTUAL")),
  substrb(dump(min("TOTAL_PAST_DUE_ACTUAL"),16,0,64),1,240),
  substrb(dump(max("TOTAL_PAST_DUE_ACTUAL"),16,0,64),1,240),
  to_char(count("CA_CURRENCY_FX_RATE")),
  substrb(dump(min("CA_CURRENCY_FX_RATE"),16,0,64),1,240),
  substrb(dump(max("CA_CURRENCY_FX_RATE"),16,0,64),1,240),
  to_char(count("CA_CURRENCY_FX_OPERATOR")),
  substrb(dump(min("CA_CURRENCY_FX_OPERATOR"),16,0,64),1,240),
  substrb(dump(max("CA_CURRENCY_FX_OPERATOR"),16,0,64),1,240),
  to_char(count("CURRENT_USER_DEFINED_BALANCE_1")),
  substrb(dump(min("CURRENT_USER_DEFINED_BALANCE_1"),16,0,64),1,240),
  substrb(dump(max("CURRENT_USER_DEFINED_BALANCE_1"),16,0,64),1,240),
  to_char(count("CURRENT_USER_DEFINED_BALANCE_2")),
  substrb(dump(min("CURRENT_USER_DEFINED_BALANCE_2"),16,0,64),1,240),
  substrb(dump(max("CURRENT_USER_DEFINED_BALANCE_2"),16,0,64),1,240),
  to_char(count("CA_CURRENCY_CODE")),substrb(dump(min("CA_CURRENCY_CODE"),16,
  0,64),1,240),substrb(dump(max("CA_CURRENCY_CODE"),16,0,64),1,240),
  to_char(count("CURRENCY_MONTH_ID")),substrb(dump(min("CURRENCY_MONTH_ID"),
  16,0,64),1,240),substrb(dump(max("CURRENCY_MONTH_ID"),16,0,64),1,240),
  to_char(count("DAY_ID")),substrb(dump(min("DAY_ID"),16,0,64),1,240),
  substrb(dump(max("DAY_ID"),16,0,64),1,240),to_char(count("SEQUENCE_NO")),
  substrb(dump(min("SEQUENCE_NO"),16,0,64),1,240),
  substrb(dump(max("SEQUENCE_NO"),16,0,64),1,240),
  to_char(count("PORTFOLIO_ID")),substrb(dump(min("PORTFOLIO_ID"),16,0,64),1,
  240),substrb(dump(max("PORTFOLIO_ID"),16,0,64),1,240),
  to_char(count("NET_INTEREST_BILLING_RATE")),
  substrb(dump(min("NET_INTEREST_BILLING_RATE"),16,0,64),1,240),
  substrb(dump(max("NET_INTEREST_BILLING_RATE"),16,0,64),1,240),
  to_char(count("CURRENT_GL_BALANCE")),substrb(dump(min("CURRENT_GL_BALANCE"),
  16,0,64),1,240),substrb(dump(max("CURRENT_GL_BALANCE"),16,0,64),1,240),
  to_char(count("ORIG_ESCROW_BAL")),substrb(dump(min("ORIG_ESCROW_BAL"),16,0,
  64),1,240),substrb(dump(max("ORIG_ESCROW_BAL"),16,0,64),1,240),
  to_char(count("CURR_ESCROW_BAL")),substrb(dump(min("CURR_ESCROW_BAL"),16,0,
  64),1,240),substrb(dump(max("CURR_ESCROW_BAL"),16,0,64),1,240),
  to_char(count("BILLED_AND_UNPAID_ESCROW")),
  substrb(dump(min("BILLED_AND_UNPAID_ESCROW"),16,0,64),1,240),
  substrb(dump(max("BILLED_AND_UNPAID_ESCROW"),16,0,64),1,240),
  to_char(count("PREPAID_ESCROW")),substrb(dump(min("PREPAID_ESCROW"),16,0,64)
  ,1,240),substrb(dump(max("PREPAID_ESCROW"),16,0,64),1,240),
  to_char(count("ESCROW_CHANGES_MEMO_POST_TODAY")),
  substrb(dump(min("ESCROW_CHANGES_MEMO_POST_TODAY"),16,0,64),1,240),
  substrb(dump(max("ESCROW_CHANGES_MEMO_POST_TODAY"),16,0,64),1,240),
  to_char(count("LOAN_CURRENCY_CODE")),substrb(dump(min("LOAN_CURRENCY_CODE"),
  16,0,64),1,240),substrb(dump(max("LOAN_CURRENCY_CODE"),16,0,64),1,240),
  count(rowidtochar(rowid)) 
from
 "FOWNER"."FACTTBL1" t  /* NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,
  NDV,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,
  NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,
  NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,
  NIL,NIL,NDV,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,NDV,NIL,NIL,TOPN,NIL,NIL,NDV,
  NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,
  NIL,TOPN,NIL,NIL,RWID,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,
  U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,
  U254,U254,U254,U254,U254,U254,U254,U254,U254,U254U*/


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.01          3         13          0           0
Fetch        1      0.41     899.28          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.42     899.31          3         13          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 223     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=13 pr=3 pw=0 time=899303148 us)
         2          2          2   PX COORDINATOR  (cr=13 pr=3 pw=0 time=899300825 us)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         0          0          0     SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
         0          0          0      OPTIMIZER STATISTICS GATHERING  (cr=0 pr=0 pw=0 time=0 us cost=1753384 size=85039113824 card=841971424)
         0          0          0       PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us cost=1753384 size=85039113824 card=841971424)
         0          0          0        TABLE ACCESS FULL FACTTBL1 (cr=0 pr=0 pw=0 time=0 us cost=1753384 size=85039113824 card=841971424)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        3        0.00          0.00
  KSV master wait                                 6        0.00          0.00
  ASM file metadata operation                     3        0.00          0.00
  db file sequential read                         3        0.00          0.00
  PX Deq: Join ACK                                4        0.00          0.00
  PX Deq: Parse Reply                             2        0.00          0.00
  PX Deq: Execute Reply                         700       58.10        898.88
  PX Deq: Signal ACK EXT                          2        0.00          0.00
  PX Deq: Slave Session Stats                     2        0.00          0.00
  enq: PS - contention                            1        0.00          0.00
********************************************************************************


PQ Slave 1:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1    898.50     899.28    5845366    5847704          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    898.50     899.28    5845366    5847704          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 223     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         1          1          1     SORT AGGREGATE (cr=5847704 pr=5845366 pw=0 time=899284133 us)
 425594545  425594545  425594545      OPTIMIZER STATISTICS GATHERING  (cr=5847704 pr=5845366 pw=0 time=602932896 us cost=1753384 size=85039113824 card=841971424)
 425594545  425594545  425594545       PX BLOCK ITERATOR (cr=5847704 pr=5845366 pw=0 time=119367894 us cost=1753384 size=85039113824 card=841971424)
 425594545  425594545  425594545        TABLE ACCESS FULL FACTTBL1 (cr=5847704 pr=5845366 pw=0 time=76216225 us cost=1753384 size=85039113824 card=841971424)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Execution Msg                         131        0.00          0.01
  Disk file operations I/O                       34        0.00          0.00
  KSV master wait                                34        0.04          0.07
  ASM file metadata operation                    17        0.00          0.00
  direct path read                              160        0.00          0.31
  PX Deq Credit: send blkd                      102        0.01          0.13
  PX Deq Credit: need buffer                      1        0.00          0.00
  PX Deq Credit: free buffer                      7        0.00          0.00
  PX qref latch                                   1        0.00          0.00
********************************************************************************


PQ Slave 2:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1    887.67     888.44    5709314    5711637          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    887.67     888.44    5709314    5711637          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 223     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         1          1          1     SORT AGGREGATE (cr=5711637 pr=5709314 pw=0 time=888440424 us)
 416376879  416376879  416376879      OPTIMIZER STATISTICS GATHERING  (cr=5711637 pr=5709314 pw=0 time=594194927 us cost=1753384 size=85039113824 card=841971424)
 416376879  416376879  416376879       PX BLOCK ITERATOR (cr=5711637 pr=5709314 pw=0 time=116119851 us cost=1753384 size=85039113824 card=841971424)
 416376879  416376879  416376879        TABLE ACCESS FULL FACTTBL1 (cr=5711637 pr=5709314 pw=0 time=73632031 us cost=1753384 size=85039113824 card=841971424)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Execution Msg                         132       10.84         10.86
  Disk file operations I/O                       35        0.00          0.00
  latch free                                      1        0.00          0.00
  oracle thread bootstrap                         1        0.02          0.02
  KSV master wait                                19        0.00          0.01
  ASM file metadata operation                    10        0.00          0.00
  direct path read                              149        0.00          0.30
  PX Deq Credit: send blkd                      119        0.01          0.20
  PX Deq Credit: free buffer                      5        0.00          0.00
  PX Deq Credit: need buffer                      1        0.00          0.00
********************************************************************************


Re: Statistics Gathering Performance

Narendra, November 29, 2023 - 8:55 am UTC

Hello Connor/Chris,

Continued from previous update

19c:

PQ Coordinator:

SQL ID: 2b96aa4tjcn8y Plan Hash: 4250307043

select /*+  full(t)    parallel(t,2) parallel_index(t,2) dbms_stats 
  cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring 
  xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') 
  no_substrb_pad  */to_char(count("CURRENCY_MONTH_ID")),
  substrb(dump(min("CURRENCY_MONTH_ID"),16,0,64),1,240),
  substrb(dump(max("CURRENCY_MONTH_ID"),16,0,64),1,240),
  to_char(count("DAY_ID")),substrb(dump(min("DAY_ID"),16,0,64),1,240),
  substrb(dump(max("DAY_ID"),16,0,64),1,240),to_char(count("SEQUENCE_NO")),
  substrb(dump(min("SEQUENCE_NO"),16,0,64),1,240),
  substrb(dump(max("SEQUENCE_NO"),16,0,64),1,240),
  to_char(count("PORTFOLIO_ID")),substrb(dump(min("PORTFOLIO_ID"),16,0,64),1,
  240),substrb(dump(max("PORTFOLIO_ID"),16,0,64),1,240),
  to_char(count("NET_INTEREST_BILLING_RATE")),
  substrb(dump(min("NET_INTEREST_BILLING_RATE"),16,0,64),1,240),
  substrb(dump(max("NET_INTEREST_BILLING_RATE"),16,0,64),1,240),
  to_char(count("CURRENT_GL_BALANCE")),substrb(dump(min("CURRENT_GL_BALANCE"),
  16,0,64),1,240),substrb(dump(max("CURRENT_GL_BALANCE"),16,0,64),1,240),
  to_char(count("ORIG_ESCROW_BAL")),substrb(dump(min("ORIG_ESCROW_BAL"),16,0,
  64),1,240),substrb(dump(max("ORIG_ESCROW_BAL"),16,0,64),1,240),
  to_char(count("CURR_ESCROW_BAL")),substrb(dump(min("CURR_ESCROW_BAL"),16,0,
  64),1,240),substrb(dump(max("CURR_ESCROW_BAL"),16,0,64),1,240),
  to_char(count("BILLED_AND_UNPAID_ESCROW")),
  substrb(dump(min("BILLED_AND_UNPAID_ESCROW"),16,0,64),1,240),
  substrb(dump(max("BILLED_AND_UNPAID_ESCROW"),16,0,64),1,240),
  to_char(count("PREPAID_ESCROW")),substrb(dump(min("PREPAID_ESCROW"),16,0,64)
  ,1,240),substrb(dump(max("PREPAID_ESCROW"),16,0,64),1,240),
  to_char(count("ESCROW_CHANGES_MEMO_POST_TODAY")),
  substrb(dump(min("ESCROW_CHANGES_MEMO_POST_TODAY"),16,0,64),1,240),
  substrb(dump(max("ESCROW_CHANGES_MEMO_POST_TODAY"),16,0,64),1,240),
  to_char(count("LOAN_CURRENCY_CODE")),substrb(dump(min("LOAN_CURRENCY_CODE"),
  16,0,64),1,240),substrb(dump(max("LOAN_CURRENCY_CODE"),16,0,64),1,240),
  to_char(count("ESCROW_SHORTAGE_BAL")),
  substrb(dump(min("ESCROW_SHORTAGE_BAL"),16,0,64),1,240),
  substrb(dump(max("ESCROW_SHORTAGE_BAL"),16,0,64),1,240),
  to_char(count("PROCESSING_GROUP_CODE")),
  substrb(dump(min("PROCESSING_GROUP_CODE"),16,0,64),1,240),
  substrb(dump(max("PROCESSING_GROUP_CODE"),16,0,64),1,240),
  to_char(count("LOAN_STATUS")),substrb(dump(min("LOAN_STATUS"),16,0,64),1,
  240),substrb(dump(max("LOAN_STATUS"),16,0,64),1,240),
  to_char(count("STANDARD_REFERENCE_ID")),
  substrb(dump(min("STANDARD_REFERENCE_ID"),16,0,64),1,240),
  substrb(dump(max("STANDARD_REFERENCE_ID"),16,0,64),1,240),
  to_char(count("BILLED_AND_UNPAID_PRINCIPAL")),
  substrb(dump(min("BILLED_AND_UNPAID_PRINCIPAL"),16,0,64),1,240),
  substrb(dump(max("BILLED_AND_UNPAID_PRINCIPAL"),16,0,64),1,240),
  to_char(count("PRODUCT_KEY")),substrb(dump(min("PRODUCT_KEY"),16,0,64),1,
  240),substrb(dump(max("PRODUCT_KEY"),16,0,64),1,240),
  to_char(count("CUSTOMER_KEY")),substrb(dump(min("CUSTOMER_KEY"),16,0,64),1,
  240),substrb(dump(max("CUSTOMER_KEY"),16,0,64),1,240),
  to_char(count("INVESTOR_KEY")),substrb(dump(min("INVESTOR_KEY"),16,0,64),1,
  240),substrb(dump(max("INVESTOR_KEY"),16,0,64),1,240),
  to_char(count("OWNER_KEY")),substrb(dump(min("OWNER_KEY"),16,0,64),1,240),
  substrb(dump(max("OWNER_KEY"),16,0,64),1,240),
  to_char(count("OWNERSHIP_PERCENT")),substrb(dump(min("OWNERSHIP_PERCENT"),
  16,0,64),1,240),substrb(dump(max("OWNERSHIP_PERCENT"),16,0,64),1,240),
  to_char(count("CURRENT_PRINCIPAL_BALANCE")),
  substrb(dump(min("CURRENT_PRINCIPAL_BALANCE"),16,0,64),1,240),
  substrb(dump(max("CURRENT_PRINCIPAL_BALANCE"),16,0,64),1,240),
  to_char(count("INTEREST_BALANCE")),substrb(dump(min("INTEREST_BALANCE"),16,
  0,64),1,240),substrb(dump(max("INTEREST_BALANCE"),16,0,64),1,240),
  to_char(count("FEE_BALANCE")),substrb(dump(min("FEE_BALANCE"),16,0,64),1,
  240),substrb(dump(max("FEE_BALANCE"),16,0,64),1,240),
  to_char(count("OTHER_BALANCES")),substrb(dump(min("OTHER_BALANCES"),16,0,64)
  ,1,240),substrb(dump(max("OTHER_BALANCES"),16,0,64),1,240),
  to_char(count("DAILY_INTEREST")),substrb(dump(min("DAILY_INTEREST"),16,0,64)
  ,1,240),substrb(dump(max("DAILY_INTEREST"),16,0,64),1,240),
  to_char(count("SCHEDULED_PRINCIPAL_BALANCE")),
  substrb(dump(min("SCHEDULED_PRINCIPAL_BALANCE"),16,0,64),1,240),
  substrb(dump(max("SCHEDULED_PRINCIPAL_BALANCE"),16,0,64),1,240),
  to_char(count("PAST_DUE_INTEREST_ACTUAL")),
  substrb(dump(min("PAST_DUE_INTEREST_ACTUAL"),16,0,64),1,240),
  substrb(dump(max("PAST_DUE_INTEREST_ACTUAL"),16,0,64),1,240),
  to_char(count("TOTAL_PAST_DUE_ACTUAL")),
  substrb(dump(min("TOTAL_PAST_DUE_ACTUAL"),16,0,64),1,240),
  substrb(dump(max("TOTAL_PAST_DUE_ACTUAL"),16,0,64),1,240),
  to_char(count("CA_CURRENCY_FX_RATE")),
  substrb(dump(min("CA_CURRENCY_FX_RATE"),16,0,64),1,240),
  substrb(dump(max("CA_CURRENCY_FX_RATE"),16,0,64),1,240),
  to_char(count("CA_CURRENCY_FX_OPERATOR")),
  substrb(dump(min("CA_CURRENCY_FX_OPERATOR"),16,0,64),1,240),
  substrb(dump(max("CA_CURRENCY_FX_OPERATOR"),16,0,64),1,240),
  to_char(count("CURRENT_USER_DEFINED_BALANCE_1")),
  substrb(dump(min("CURRENT_USER_DEFINED_BALANCE_1"),16,0,64),1,240),
  substrb(dump(max("CURRENT_USER_DEFINED_BALANCE_1"),16,0,64),1,240),
  to_char(count("CURRENT_USER_DEFINED_BALANCE_2")),
  substrb(dump(min("CURRENT_USER_DEFINED_BALANCE_2"),16,0,64),1,240),
  substrb(dump(max("CURRENT_USER_DEFINED_BALANCE_2"),16,0,64),1,240),
  to_char(count("CA_CURRENCY_CODE")),substrb(dump(min("CA_CURRENCY_CODE"),16,
  0,64),1,240),substrb(dump(max("CA_CURRENCY_CODE"),16,0,64),1,240),
  count(rowidtochar(rowid)) 
from
 "FOWNER"."FACTTBL1" t  /* TOPN,NIL,NIL,TOPN,NIL,NIL,NDV,NIL,NIL,
  TOPN,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,
  NDV,NIL,NIL,NDV,NIL,NIL,TOPN,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,TOPN,NIL,NIL,
  NDV,NIL,NIL,NDV,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,
  NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,
  NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,
  NIL,NDV,NIL,NIL,RWID,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,
  U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,U254,
  U254,U254,U254,U254,U254,U254,U254,U254,U254U*/


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0         15          0           0
Fetch        1      0.51    1225.03          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.53    1225.05          0         15          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 237     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=15 pr=0 pw=0 time=1225041156 us starts=1)
         2          2          2   PX COORDINATOR  (cr=15 pr=0 pw=0 time=1225035891 us starts=1)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us starts=0)
         0          0          0     SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us starts=0)
         0          0          0      OPTIMIZER STATISTICS GATHERING  (cr=0 pr=0 pw=0 time=0 us starts=0 cost=90042 size=84033736291 card=832017191)
         0          0          0       PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us starts=0 cost=90042 size=84033736291 card=832017191)
         0          0          0        TABLE ACCESS STORAGE FULL FACTTBL1 (cr=0 pr=0 pw=0 time=0 us starts=0 cost=90042 size=84033736291 card=832017191)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                           68        0.00          0.00
  PX Deq: Join ACK                                2        0.00          0.00
  PX Deq: Parse Reply                             2        0.00          0.00
  PX Deq: Execute Reply                         640       74.07       1224.69
  PX Deq: Signal ACK EXT                          2        0.00          0.00
  PX Deq: Slave Session Stats                     2        0.00          0.00
  enq: PS - contention                            1        0.00          0.00
********************************************************************************


PQ Slave 1:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1   1179.61    1225.03    5727391    5729191          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2   1179.61    1225.03    5727391    5729191          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 237     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us starts=0)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=0 us starts=0)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us starts=0)
         1          1          1     SORT AGGREGATE (cr=5729191 pr=5727391 pw=0 time=1225026443 us starts=1)
 415512630  415512630  415512630      OPTIMIZER STATISTICS GATHERING  (cr=5729191 pr=5727391 pw=0 time=640697179 us starts=1 cost=90042 size=84033736291 card=832017191)
 415569212  415569212  415569212       PX BLOCK ITERATOR (cr=5729191 pr=5727391 pw=0 time=212329319 us starts=1 cost=90042 size=84033736291 card=832017191)
 415569212  415569212  415569212        TABLE ACCESS STORAGE FULL FACTTBL1 (cr=5729191 pr=5727391 pw=0 time=156160696 us starts=50 cost=90042 size=84033736291 card=832017191)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Execution Msg                         299        0.01          0.15
  cell smart table scan                        6157        0.00          7.29
  PX Deq Credit: send blkd                        2        0.00          0.00
********************************************************************************


PQ Slave 2:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1   1159.98    1203.73    5762303    5764139          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2   1159.98    1203.73    5762303    5764139          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 237     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us starts=0)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=0 us starts=0)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us starts=0)
         1          1          1     SORT AGGREGATE (cr=5764139 pr=5762303 pw=0 time=1203735937 us starts=1)
 416384735  416384735  416384735      OPTIMIZER STATISTICS GATHERING  (cr=5764139 pr=5762303 pw=0 time=638747715 us starts=1 cost=90042 size=84033736291 card=832017191)
 416447979  416447979  416447979       PX BLOCK ITERATOR (cr=5764139 pr=5762303 pw=0 time=211734492 us starts=1 cost=90042 size=84033736291 card=832017191)
 416447979  416447979  416447979        TABLE ACCESS STORAGE FULL FACTTBL1 (cr=5764139 pr=5762303 pw=0 time=155836616 us starts=51 cost=90042 size=84033736291 card=832017191)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Execution Msg                         339       21.29         21.49
  cell smart table scan                        4333        0.00          7.71
  PX Deq Credit: send blkd                        2        0.00          0.00
********************************************************************************

Connor McDonald
January 15, 2024 - 5:09 am UTC

Looking at those numbers, we see roughly the same I/O figures, but the CPU cost of performing those I/Os has gone up.

Question - is it possible that you do NOT have TDE on the on-premise system, but DO have TDE on the cloud system?

Re: Statistics gathering performance

Narendra, January 15, 2024 - 5:43 am UTC

Hello Connor,

Indeed it is... 12c is on-premise with no TDE, and 19c is ExaC@C with TDE.
Could TDE be the cause? How can we find/verify this without implementing TDE on 12c or disabling TDE on 19c ExaC@C?

Thanks in advance
Chris Saxon
January 15, 2024 - 2:58 pm UTC

There is some overhead to processing TDE.

Can you create a PDB or tablespace without TDE on your ExaC@C to measure this?
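
For reference, one way to confirm which tablespaces are actually encrypted, and with which algorithm, is to query V$ENCRYPTED_TABLESPACES. This is a sketch only, assuming SELECT access on the V$ views:

```sql
-- List every tablespace and its encryption algorithm (NONE if clear)
select t.name tablespace_name,
       nvl(e.encryptionalg, 'NONE') algorithm,
       e.encryptedts
from   v$tablespace t
       left join v$encrypted_tablespaces e on e.ts# = t.ts#
order  by t.name;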

Re: Statistics Gathering Performance

Narendra, January 16, 2024 - 8:36 am UTC

Can you create a PDB or tablespace without TDE on your ExaC@C to measure this?

I can ask, but is it allowed in ExaC@C?
Below are the relevant init parameters:
NAME                                     TYPE   VALUE       
---------------------------------------- ------ ----------- 
_tablespace_encryption_default_algorithm string AES256      
encrypt_new_tablespaces                  string ALWAYS      
tablespace_encryption                    string AUTO_ENABLE 

Re: Statistics Gathering Performance

Narendra, January 16, 2024 - 8:40 am UTC

Hello Chris/Connor,

MOS note Oracle Database Tablespace Encryption Behavior in Oracle Cloud (Doc ID 2359020.1) appears to suggest that it is not possible to create an unencrypted tablespace in a cloud environment.

Details: Database release 12.2.0.1 through 19.15 and 21c
Use case: New tablespace creation in cloud: Any new tablespace created in the cloud is encrypted using AES128 algorithm. Unencrypted tablespace creation is not allowed. The operation returns an error.

Chris Saxon
January 16, 2024 - 3:11 pm UTC

Ah, sorry I wasn't aware of this! Thanks for letting me know.

To test the impact of TDE yourself ideally you want two tablespaces in the same database - one with it, one without.
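
If you have access to an on-premise system where unencrypted tablespaces are still allowed, the A/B test could look something like this. A sketch only: the tablespace and table names are hypothetical, and FACTTBL1 stands in for one of your large application tables:

```sql
-- Two tablespaces: one clear, one TDE-encrypted (AES256 to match the cloud)
create tablespace test_clear datafile size 10g;
create tablespace test_tde   datafile size 10g
  encryption using 'AES256' default storage (encrypt);

-- Copy the same data into each
create table t_clear tablespace test_clear as select * from fowner.facttbl1;
create table t_tde   tablespace test_tde   as select * from fowner.facttbl1;

-- Time each gather with identical settings; the delta approximates TDE overhead
exec dbms_stats.gather_table_stats(user, 'T_CLEAR', degree => 2)
exec dbms_stats.gather_table_stats(user, 'T_TDE',   degree => 2)
```

Comparing the two elapsed times on otherwise identical data isolates the decryption cost from everything else that differs between the two systems.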

Re: Statistics Gathering Performance

Narendra, February 06, 2024 - 10:53 pm UTC

Hello Chris/Connor,

Just wanted to let you know that we have still not managed to find out what is causing the 19c ExaC@C database to take much longer than the corresponding 12c standalone database.
We managed to get a different engineer assigned to our SR last week, and he appears to be saying all the right things and asking all the right questions. The focus for now is on analysing perf output for the session gathering stats on one of the big tables.
We have noticed that the OPTIMIZER STATISTICS GATHERING step, which the optimizer inserts into the execution plan of the internal SQL, is where most, if not all, of the extra time is being spent, but there does not seem to be a way to "look under the hood" of it. I can't help but think the optimizer team at Oracle would probably know/find out the differences between 12.1.0.2 and 19.20 pretty quickly.
It looks like there is scope for improvement here, in the form of the ability to trace ALL the work done as part of the GATHER*_STATS API calls, and the ability to trace the impact (if any) of TDE on stats gathering.
Chris Saxon
February 07, 2024 - 11:41 am UTC

How have you determined the OPTIMIZER STATISTICS GATHERING step is slower?

It looks to me like this is faster - it's the SORT AGGREGATE step that's roughly 2x slower.

In both 12.1 and 19c the stats operation is taking around 8 minutes (subtract the time=... us values from the corresponding time for the PX block below it).

Whereas the sort has gone from just under 5 minutes in 12.1 to 9-10 minutes in 19c. Which accounts for the extra 5 minutes you're seeing.
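
As a worked example of that subtraction, using the PQ Slave 1 row-source timings from the trace above (all in microseconds), the arithmetic can be done straight from SQL*Plus:

```sql
-- PQ Slave 1 timings from the trace:
--   SORT AGGREGATE                  time=1225026443
--   OPTIMIZER STATISTICS GATHERING  time= 640697179
--   PX BLOCK ITERATOR               time= 212329319
select round((1225026443 - 640697179)/60e6, 1) sort_minutes,
       round(( 640697179 - 212329319)/60e6, 1) stats_gather_minutes
from   dual;
-- sort_minutes ~ 9.7, stats_gather_minutes ~ 7.1
```

Each row-source `time=` value is cumulative for that step and its children, so subtracting the child's time yields the time spent in the step itself.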

I'm not sure if there have been any database changes that would affect sorting - how similar is the data between the systems? Could different data distributions account for the extra time?

You can see what's changed yourself by using https://oradiff.oracle.com

To see what optimizer changes there are, go to System -> Optimizer Stats Settings and select your source and target versions to get a report.

Re: Statistics Gathering Performance

Narendra, February 07, 2024 - 2:12 pm UTC

How have you determined the OPTIMIZER STATISTICS GATHERING step is slower?

Hello Chris,

This is where things get interesting. I agree that the 10046 trace appears to suggest that the OPTIMIZER STATISTICS GATHERING step is not consuming a lot of time, but here is a scenario that we tested:

1. Ran and traced call to GATHER_TABLE_STATS
2. Ran and traced the above internal SQL manually

On 19c ExaC@C, the internal SQL took almost 80 minutes (in serial mode) when run via GATHER_TABLE_STATS, while the manual run completed in under 30 minutes (also in serial mode).
The only difference between the two executions, as reported by the trace files, is the OPTIMIZER STATISTICS GATHERING step in the execution plan when we ran GATHER_TABLE_STATS.
We also ensured that data caching did not affect either execution.

Chris Saxon
February 08, 2024 - 3:23 pm UTC

Let's get back to the real business problem:

The biggest symptom is that at least one of the application jobs is taking between 45 minutes and 1 hour on 12c, but anywhere between 3.5 and 4 hours on the 19c ExaC@C setup.

Are stats gathered as part of this process? If so, does this account for it taking so much longer? If not, what exactly is slower?

Re: Statistics Gathering Performance

Narendra, February 08, 2024 - 4:25 pm UTC

Let's get back to the real business problem:

The biggest symptom is that at least one of the application jobs is taking between 45 minutes and 1 hour on 12c, but anywhere between 3.5 and 4 hours on the 19c ExaC@C setup.

Are stats gathered as part of this process? If so, does this account for it taking so much longer? If not, what exactly is slower?


Sure Chris.
The batch job loads data in bulk (INSERT...SELECT or MERGE) into multiple tables. Statistics gathering is triggered by the batch job using GATHER_TABLE_STATS API calls.
Multiple SQL statements are taking longer on 19c ExaC@C, and AWR reports consistently show that the internal SQLs generated by the GATHER_TABLE_STATS API occupy 4-5 spots in the "Top SQLs ordered by elapsed time" sections. This is why we are focussing on stats gathering as the "lowest-hanging fruit" for the investigation.

Happy to provide more details.

Thanks in advance,
Narendra
Chris Saxon
February 08, 2024 - 6:38 pm UTC

Have you traced the batch process itself to verify these are the slowest parts of that process?

Re: Statistics Gathering Performance

Narendra, February 08, 2024 - 9:11 pm UTC

Hello Chris,

We tried tracing the batch process once a few months ago, but the trace files grew so large that the file system almost filled up and we could not allow the process to finish.
One difference we have identified between the 12c and 19c databases is that 12c does not have flashback mode ON, whereas 19c must have flashback ON (due to contractual obligations). We did run the batch on the 12c database with flashback turned ON, and that caused the run time to jump to around 1 hour 45 minutes. Even so, the 19c database still lagged behind 12c, taking 3 hours 30 minutes to 4 hours.
Given the size of the workload and the time the batch takes, we also tried to come up with a smaller version of the batch, but could not, due to challenges with application support.

I am curious about your question though. Is there a situation when AWR data can be misleading and trace is the only reliable way?
Chris Saxon
February 12, 2024 - 5:24 pm UTC

Is there a situation when AWR data can be misleading and trace is the only reliable way?

AWR captures information across the whole database. How do you know the "Top SQLs ordered by elapsed time" are initiated by the slow-running application job and not some other process?

AWR also only stores the "top" statements. Some statements in the job may be slower, but not consuming enough resources (time, executions, CPU, disk, ...) to appear in AWR. So the cause may be hidden, particularly if the job runs many quick statements that are all a bit slower.

By tracing the process itself you know you've captured

- All statements run by the job
- Only statements run by the job
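
If tracing the whole process again risks filling the file system, one lower-footprint option is to scope the trace with DBMS_MONITOR rather than tracing every session. This is a sketch only: the service and module names below are placeholders for whatever the job actually sets:

```sql
-- Enable tracing only for sessions matching this service/module
exec dbms_monitor.serv_mod_act_trace_enable( -
       service_name => 'MY_SERVICE', -
       module_name  => 'BATCH_MODULE', -
       waits => true, binds => false)

-- ... run the batch job ...

exec dbms_monitor.serv_mod_act_trace_disable( -
       service_name => 'MY_SERVICE', -
       module_name  => 'BATCH_MODULE')
```

Because the trace is scoped to the job's own sessions, the resulting files capture exactly the two properties above: all of the job's statements, and only the job's statements.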

Re: Statistics Gathering Performance

Narendra, February 12, 2024 - 10:03 pm UTC

AWR captures information across the whole database. How do you know the "Top SQLs ordered by elapsed time" are initiated by the slow-running application job and not some other process?


Hello Chris,

There are multiple things we have checked to ensure that the slow-running application is generating the "Top SQLs ordered by Elapsed Time":
1. The database is being used exclusively for performance testing of this specific batch process.
2. The batch process sets MODULE and ACTION for the sessions doing the database work. The MODULE value is visible in the AWR report.
3. We have watched real-time SQL monitoring data while the batch is running, to confirm that the stats-gathering internal SQLs run as part of the batch job.
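
For reference, tagging sessions this way (point 2) is typically done from the job itself with DBMS_APPLICATION_INFO; the values below are illustrative only:

```sql
-- Tag the session so AWR/ASH can attribute its work to the batch
exec dbms_application_info.set_module( -
       module_name => 'NIGHTLY_BATCH', -
       action_name => 'LOAD_FACTS')
```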

The latest investigation points to a significant difference in load between the servers running the 12c and 19c databases. 12c runs on a physical server whose CPU is over 80% idle while the batch is running, whereas 19c runs on VMs on an ExaC@C machine hosting multiple databases, with the VM showing 80-95% CPU utilisation during the batch run.
The strange thing (to me) is that the 19c database manages to put an overwhelming load on the underlying server when the batch job runs with a parallel degree of 2 (used by the stats gathering APIs as well as multiple bulk DMLs) and takes 3.5-4 hours to complete. Yet when we increase the DOP to 4, the same job completes in 1.5-2 hours, while the underlying servers are literally struggling.

Another unexpected behaviour we have observed is that almost all of the PQ slaves end up on one node, even though the PQ coordinator is on another, less-loaded node.
We had to set PARALLEL_FORCE_LOCAL to TRUE and rely on the fact that (somehow) the application creates 2 scheduler jobs on 2 different nodes, in order to distribute the workload evenly between the 2 RAC nodes and let the workload better use the computing resources allocated to both.
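
For anyone wanting to reproduce this check: the slave placement can be observed from the GV$ views, and the setting mentioned above is a one-line change. A sketch, assuming ALTER SYSTEM privilege on a RAC database:

```sql
-- Which instance are the active PX slaves running on?
select inst_id, count(*) slaves
from   gv$px_process
where  status = 'IN USE'
group  by inst_id;

-- Keep PX slaves on the coordinator's node, as described above
alter system set parallel_force_local = true scope=both sid='*';
```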
Chris Saxon
February 14, 2024 - 1:37 pm UTC

OK, sounds like you've got it scoped to the process.

Clearly, there's something very different between the systems, but honestly I've no idea what it is. I've spoken with the optimizer folks - there's very little that's changed in the stats gathering process.

I've got no other suggestions at this point - keep working through this with support to get to the bottom of it.

Re: Statistics Gathering Performance

Narendra, February 14, 2024 - 2:58 pm UTC

Hello Chris,

Appreciate your help.
One (hopefully) final question from me on this one. If server load is contributing to the slow performance, then the only way that can happen is that the database session gathering statistics "thinks" it is "ON CPU", while in reality the corresponding OS process is at times sitting in the run queue waiting for CPU. So although all database traces and reports claim the database was busy "ON CPU", the sessions may actually have been delayed waiting for CPU at the OS level. Does that make sense?
If it does, is there any way to find out how much time individual database sessions spent waiting for CPU?

Thanks in advance,
Narendra
Connor McDonald
February 16, 2024 - 6:23 am UTC

The session event "resmgr:cpu quantum" can show you time lost waiting for CPU when the *database* was limiting the session.

If the server is being smashed so hard that we are on the OS runqueue, then from the database perspective, we will still see that as "ON CPU".

But if your server is being pushed that hard, then gathering stats is the least of your problems - there's a chance your instance will stall or worse, because critical background processes might not get enough CPU attention.
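
Building on that, one way to see how much time sessions lost to the "resmgr:cpu quantum" event is to look at ASH. A sketch only (each ASH sample represents roughly one second, and querying V$ACTIVE_SESSION_HISTORY requires the Diagnostics Pack licence):

```sql
-- Approximate seconds each session spent throttled by Resource Manager
-- over the last hour
select session_id, session_serial#,
       count(*) approx_seconds_waiting
from   v$active_session_history
where  event = 'resmgr:cpu quantum'
and    sample_time > systimestamp - interval '1' hour
group  by session_id, session_serial#
order  by 3 desc;
```

As Connor notes, this only surfaces throttling imposed by the database itself; time lost on the OS run queue still shows up as "ON CPU".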

Re: Statistics Gathering Performance

Narendra, February 16, 2024 - 8:29 am UTC

Hello Connor,

Thank you for your input. I agree with your assessment.
I believe the key challenge we face is how to determine the amount of CPU needed to achieve acceptable performance. When we started this investigation a few months ago, we identified that the CPUs were overloaded, and one of the changes we made was to more than double the CPU allocated to the underlying DB servers (from 7 OCPUs to 15 OCPUs, which translates to 14 vCPUs to 30 vCPUs in ExaC@C terms). While this reduced the CPU load significantly, the batch job showed hardly any improvement in response time (going from 3.5-4 hours to about 3 hours). We had to double the DOP to get comparable/acceptable performance.
This is what baffles me, because I was not expecting to need any DOP changes. And yes, we did the usual checks to ensure there were no plan regressions, unusual blocking, or other wait events at the time.
Our intention is to come up with a requirement for extra CPUs backed by solid evidence, instead of a trial-and-error approach. Another aspect to this story is that the server hosting the 12c database barely shows CPU utilisation peaking at 20% during the batch run, on an 8-CPU server. This led us to believe that the application process is not exactly CPU-bound (which is actually the case).
Chris Saxon
February 16, 2024 - 6:33 pm UTC

Have you tried running the process on a 19c installation on the same server as the 12c database?

Re: Statistics Gathering Performance

Narendra, February 19, 2024 - 4:00 pm UTC

Hello Chris,

We haven't tried running it on a 19c database hosted on the same server as 12c because there is no appetite to go through the motions to achieve that (don't ask me why, please...).
I believe we have almost resigned ourselves to the fact that the difference in response time is due to the app DB sessions not getting as large a slice of CPU on ExaC@C as they get on the standalone on-premise server, although this is all based on circumstantial evidence.
We may never be able to find out
a) how long app sessions spent waiting for CPU, and
b) how much more CPU would be needed to eliminate/significantly reduce those CPU waits.

I have to admit I am struggling to accept that something like this, which is effectively a "crowded/noisy neighbour" issue, cannot be conclusively proven, especially in a cloud environment.
