You Asked
Hi Tom,
Recently, we were troubled by a weird problem caused by SQL INSERT executions like these (step by step):
STEP1: insert into BANKCHK values (...), about 65,000 records inserted;
STEP2: insert into BANKTMP select ... , about 60,000 records inserted;
STEP3: insert into BANKSUC
(ChkDat,AgtOrg,ChkSts,TMercId,TTermNo,SRefNo,
TxnAmt,ActNo,CcyCod,TTxnDt,TTxnTm,CrdNo,Fee,
BankDate ...)
select /*+ use_hash(p,t) */
distinct '20151023',t.AgtOrg,t.ChkSts,t.TMercId,t.TTermNo,t.SRefNo,t.txnamt
,t.ActNo,t.CcyCod,trim(t.TTxnDt),trim(t.TTxnTm),trim(t.CrdNo), t.Fee ,
t.BankDate...
from BANKTMP p,BANKCHK t
where trim(p.TTxnDt)=trim(t.TTxnDt) and p.TTxnTm=t.TTxnTm
and t.Field1=p.TSRefNo and t.TxnAmt=p.TxnAmt
This STEP3 statement inserts about 65,000 rows, and it is the 'weird problem' I mentioned, because it runs much less efficiently on our server than on my laptop.
By the way, BANKSUC is a partitioned table (partitioned by month) with about 7,500,000 records in it.
The STEP3 statement takes as long as 90 seconds to complete.
Then we exp/imp'ed these tables to my laptop for a test. Oops, STEP3 finished in just 10 seconds, without any data buffered.
The execution plan estimates that the STEP3 SQL will finish in 66 seconds on our server and 68 seconds on my laptop. Both execution plans look like this:
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 586 | 410K| | 5472 (1)| 00:01:06 |
| 1 | HASH UNIQUE | | 586 | 410K| | 5472 (1)| 00:01:06 |
|* 2 | HASH JOIN | | 586 | 410K| 7056K| 5471 (1)| 00:01:06 |
| 3 | TABLE ACCESS FULL| BANKCHK | 65025 | 6286K| | 1265 (1)| 00:00:16 |
| 4 | TABLE ACCESS FULL| BANKTMP | 58579 | 34M| | 2119 (1)| 00:00:26 |
--------------------------------------------------------------------------------------------
Moreover, BANKTMP and BANKCHK each have fewer than 70,000 records, because our program deletes from BANKCHK and BANKTMP every day, right after all three SQL statements above finish (at about 5:20 AM). Since statistics on these two tables are gathered automatically during 22:00~02:00, we lock their statistics with the DBMS_STATS.LOCK_TABLE_STATS procedure to ensure the two tables will not show as '1 row' in execution plans. However, the slowness still makes no sense.
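For reference, this is a minimal sketch of the stats-locking approach described above; the schema name BANKAPP is a placeholder, not from the original post:

```sql
-- Hypothetical schema name BANKAPP.
-- Gather representative statistics once, while the tables are populated,
-- then lock them so later automatic gathering (e.g. against empty tables)
-- cannot leave the optimizer believing each table holds '1 row'.
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(ownname => 'BANKAPP', tabname => 'BANKCHK');
   DBMS_STATS.GATHER_TABLE_STATS(ownname => 'BANKAPP', tabname => 'BANKTMP');
   DBMS_STATS.LOCK_TABLE_STATS(ownname => 'BANKAPP', tabname => 'BANKCHK');
   DBMS_STATS.LOCK_TABLE_STATS(ownname => 'BANKAPP', tabname => 'BANKTMP');
END;
/
```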
Then we checked the AWR report, but the STEP3 SQL statement was not found in the 'Top 10 SQL ordered by elapsed time' section, even though it takes more than 90 seconds and should rank first. After that, we checked gv$sqlarea and dba_hist_sqltext, but we could not find the STEP3 SQL there either. However, other SQL statements executed during the same period by the same session, each elapsing less than 90 seconds, were recorded in the AWR report.
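The lookups described above can be sketched like this, searching by a fragment of the statement text (the 'BANKSUC' fragment is taken from the INSERT shown earlier; column choices are illustrative):

```sql
-- Look for the STEP3 statement in the shared pool...
SELECT sql_id, executions, elapsed_time / 1e6 AS elapsed_sec
FROM   gv$sqlarea
WHERE  sql_text LIKE 'insert into BANKSUC%';

-- ...and in the AWR repository (SQL_TEXT here is a CLOB, hence DBMS_LOB).
SELECT sql_id, DBMS_LOB.SUBSTR(sql_text, 100, 1) AS sql_text_head
FROM   dba_hist_sqltext
WHERE  DBMS_LOB.INSTR(sql_text, 'BANKSUC') > 0;
```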
The DB Time is 24 minutes within the 60-minute AWR report window.
Isn't it funny that the server environment is much less efficient than my laptop, and that the execution history was not recorded in this situation?
We had already pinned the BANKCHK and BANKTMP tables in the buffer pool, and all the indexes on BANKSUC too. Unfortunately, nothing changed.
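One common way to do this pinning, assuming the KEEP pool was the mechanism used (the post does not say which; db_keep_cache_size must be set on the instance for this to have any effect):

```sql
-- Assign the small lookup tables to the KEEP buffer pool so their blocks
-- tend to stay cached between the nightly runs.
ALTER TABLE bankchk STORAGE (BUFFER_POOL KEEP);
ALTER TABLE banktmp STORAGE (BUFFER_POOL KEEP);
```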
What should we do next to find the bottleneck in the STEP3 SQL statement?
Any help is going to be appreciated. :)
and Chris said...