SQL execution elapsed for a long time AND could not find the SQL statement in AWR Report

Question and Answer

Connor McDonald

Thanks for the question, Joe.

Asked: October 23, 2015 - 4:46 am UTC

Last updated: October 28, 2015 - 9:32 am UTC

Version: 11.2.0.4

Viewed 1000+ times

You Asked

Hi Tom,
Recently, we were troubled by a weird problem caused by SQL insert executions like these (step by step):
STEP1: insert into BANKCHK values (...), about 65,000 records inserted;
STEP2: insert into BANKTMP select ... , about 60,000 records inserted;
STEP3: insert into BANKSUC
(ChkDat,AgtOrg,ChkSts,TMercId,TTermNo,SRefNo,
TxnAmt,ActNo,CcyCod,TTxnDt,TTxnTm,CrdNo,Fee,
BankDate ...)
select /*+ use_hash(p,t) */
distinct '20151023',t.AgtOrg,t.ChkSts,t.TMercId,t.TTermNo,t.SRefNo,t.txnamt
,t.ActNo,t.CcyCod,trim(t.TTxnDt),trim(t.TTxnTm),trim(t.CrdNo), t.Fee ,
t.BankDate...
from BANKTMP p,BANKCHK t
where trim(p.TTxnDt)=trim(t.TTxnDt) and p.TTxnTm=t.TTxnTm
and t.Field1=p.TSRefNo and t.TxnAmt=p.TxnAmt

This STEP3 statement inserts about 65,000 rows, and it is the 'weird problem' I mentioned, because it runs far less efficiently on our server than on my laptop.
By the way, BANKSUC is a partitioned table (partitioned by month) holding about 7,500,000 records.

The STEP3 statement takes as long as 90 seconds to complete on the server.
We then used exp/imp to copy these tables to my laptop for a test. Oops: there, STEP3 finished in just 10 seconds, even without any data in the buffer cache.

The execution plan estimates that the STEP3 SQL will finish in 66 seconds on our server and 68 seconds on my laptop. Both execution plans look like this:
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 586 | 410K| | 5472 (1)| 00:01:06 |
| 1 | HASH UNIQUE | | 586 | 410K| | 5472 (1)| 00:01:06 |
|* 2 | HASH JOIN | | 586 | 410K| 7056K| 5471 (1)| 00:01:06 |
| 3 | TABLE ACCESS FULL| BANKCHK | 65025 | 6286K| | 1265 (1)| 00:00:16 |
| 4 | TABLE ACCESS FULL| BANKTMP | 58579 | 34M| | 2119 (1)| 00:00:26 |
--------------------------------------------------------------------------------------------
Moreover, BANKTMP and BANKCHK each hold fewer than 70,000 records, because our program deletes the rows of BANKCHK and BANKTMP every day, right after all three SQL statements above finish (at about 5:20 AM). Since statistics on these two tables are gathered automatically during 22:00~02:00, we lock their statistics with DBMS_STATS.LOCK_TABLE_STATS, to ensure the two tables will not show as '1 row' in the execution plan. However, it just makes no sense.
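For reference, we lock the statistics roughly like this (a minimal sketch; BANKAPP stands in for our real schema name):

```sql
-- Lock optimizer statistics so the nightly auto-gather job cannot
-- replace them while the tables are (nearly) empty.
-- BANKAPP is a placeholder schema name.
BEGIN
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'BANKAPP', tabname => 'BANKCHK');
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'BANKAPP', tabname => 'BANKTMP');
END;
/

-- Verify the lock took effect.
SELECT table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  owner = 'BANKAPP'
AND    table_name IN ('BANKCHK', 'BANKTMP');
```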

Then we checked the AWR report, but the STEP3 SQL statement was not found in 'Top 10 SQL ordered by elapsed time', even though it takes more than 90 seconds and should rank first in that list. After that, we checked gv$sqlarea and dba_hist_sql_text, and could not find the STEP3 SQL there either. However, other SQL statements executed during the same time by the same session, which elapsed less than 90 seconds, were recorded in the AWR report.
The DB Time is 24 minutes in the 60-minute AWR report.

Isn't it funny that the server environment is much less efficient than my laptop, and that the execution history was not recorded in this situation?
We had kept the BANKCHK and BANKTMP tables in the buffer pool, and all the indexes of BANKSUC too. Unfortunately, nothing changed.

What shall we do next to find the bottleneck in the STEP3 SQL statement?
Any help would be appreciated. :)


and Chris said...

The top SQL by elapsed time is based on all executions during the period, not the single longest. If you have lots of sub-second statements that you execute thousands of times, the cumulative time will be longer than a single 90s query.
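For example, you can reproduce that ranking yourself from the workload repository (a sketch; snapshot IDs 100 and 101 are placeholders for your report's begin/end snapshots):

```sql
-- Rank SQL by cumulative elapsed time across a snapshot range,
-- mirroring the AWR "SQL ordered by Elapsed Time" section.
-- Snap IDs 100 and 101 are placeholders.
SELECT *
FROM (
  SELECT   s.sql_id,
           SUM(s.executions_delta)                 AS execs,
           ROUND(SUM(s.elapsed_time_delta)/1e6, 1) AS elapsed_secs
  FROM     dba_hist_sqlstat s
  WHERE    s.snap_id BETWEEN 100 AND 101
  GROUP BY s.sql_id
  ORDER BY elapsed_secs DESC
)
WHERE ROWNUM <= 10;
```

A statement executed thousands of times at a few milliseconds each can easily outrank a single 90-second run here.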

If you want to ensure a SQL statement appears in AWR, then you can color it:

http://docs.oracle.com/database/121/ARPLS/d_workload_repos.htm#ARPLS69108
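A minimal sketch, assuming you have already looked up the statement's sql_id in v$sql (the value below is a placeholder):

```sql
-- Mark the statement as "colored" so AWR captures it in every snapshot,
-- regardless of whether it makes the Top-N lists.
-- 'abcd1234efgh5' is a placeholder sql_id.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.ADD_COLORED_SQL(sql_id => 'abcd1234efgh5');
END;
/

-- To stop capturing it later:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.REMOVE_COLORED_SQL(sql_id => 'abcd1234efgh5');
END;
/
```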

However, if you want to analyze the statement, you should execute it with SQL trace or autotrace to get its execution statistics.

https://oracle-base.com/articles/misc/sql-trace-10046-trcsess-and-tkprof
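A minimal sketch of the trace approach from that article (the tracefile_identifier value is arbitrary):

```sql
-- Trace one run of the statement with waits and binds
-- (event 10046, level 12), then format the file with tkprof.
ALTER SESSION SET tracefile_identifier = 'step3_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the STEP3 insert here ...

ALTER SESSION SET EVENTS '10046 trace name context off';

-- Then on the database server's OS, against the generated trace file:
--   tkprof <tracefile> step3.txt sys=no sort=exeela
```

The wait-event breakdown in the tkprof output will show where the 90 seconds actually go.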


Comments

Do you truncate?

Ian, October 27, 2015 - 10:16 am UTC

Hi

You said "our program will delete table BANKCHK and BANKTMP everyday, right after all the 3 SQL statements above finished".

Do you use truncate to do this? If so - the truncate will remove any SQL statements that reference these tables from the shared pool. Thus they are no longer there to be captured by AWR.
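One quick way to check is to look for the cursor in the shared pool right after the job runs (the LIKE pattern below is just an example):

```sql
-- Check whether the STEP3 cursor is still in the shared pool.
-- TRUNCATE (and certain other DDL) on a referenced table invalidates
-- cursors, which can flush them before the next AWR snapshot.
SELECT sql_id,
       executions,
       invalidations,
       ROUND(elapsed_time/1e6, 1) AS elapsed_secs
FROM   v$sql
WHERE  sql_text LIKE 'insert into BANKSUC%';
```

If the query returns no rows shortly after the statement ran, the cursor has already left the shared pool.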

Just a thought.

Ian

Joe Huang, October 28, 2015 - 7:09 am UTC

Hi Ian,
Yes, we used 'delete' to clear tables BANKCHK and BANKTMP. We then checked the AWR report and found nothing.
The statement ranked 2nd by cumulative time in the AWR report took 47s, so we thought our two statements should have been captured.
Following Chris's advice, we checked the SQL trace and ADDM, and found that "User I/O" was consuming significant database time. This could be the reason the executions took so long, but we still wonder why the AWR report did not capture the statements.
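To drill into that "User I/O" time, one option is to break it down by SQL and wait event from ASH (a sketch; note that ASH queries require the Diagnostics Pack license):

```sql
-- Break down "User I/O" time by statement and wait event,
-- sampled from ASH over the last hour.
SELECT   sql_id,
         event,
         COUNT(*) AS samples
FROM     v$active_session_history
WHERE    sample_time > SYSDATE - 1/24
AND      wait_class = 'User I/O'
GROUP BY sql_id, event
ORDER BY samples DESC;
```

Each sample approximates one second of active time, so the top rows show which statements and events dominate the I/O wait.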
Connor McDonald
October 28, 2015 - 9:32 am UTC

Did the statement finish in the AWR period? If not, you may find it in the report for the next time period.
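You can check by searching the workload repository across all snapshots (the LIKE pattern below is just an example):

```sql
-- Look for the statement in AWR history; if it finished after the
-- snapshot boundary, it appears under a later snap_id.
SELECT t.sql_id,
       s.snap_id,
       ROUND(s.elapsed_time_delta/1e6, 1) AS elapsed_secs
FROM   dba_hist_sqltext t
JOIN   dba_hist_sqlstat s ON s.sql_id = t.sql_id
WHERE  t.sql_text LIKE 'insert into BANKSUC%'
ORDER  BY s.snap_id;
```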

If you think statements are genuinely missing from AWR, then please raise this with support.
