Query on dba_hist_active_sess_history query taking too long


Question and Answer

Connor McDonald

Thanks for the question, Ram.

Asked: October 03, 2018 - 1:46 pm UTC

Last updated: October 06, 2018 - 6:02 am UTC

Version: 12.1.0.2

Viewed 1000+ times

You Asked

Hi,

I'm using the query below to fetch details from dba_hist_active_sess_history that match a specific wait event occurring at a specific hour of the day within the last 90 days:

select USER_ID, PROGRAM, MACHINE
from   dba_hist_active_sess_history
where  event = 'db file scattered read'
and    extract(hour from cast(sample_time as timestamp)) = 15
and    sample_time > SYSDATE - 90;

This query takes more than 20 minutes to execute, which is far too long. Can you please help me speed it up significantly?

The reason for querying dba_hist_active_sess_history: we are analysing the frequency of wait events and their association with the hour of the day at which they appear most often, and we have had some success finding patterns. Now we are trying to obtain specific details about those wait events at that hour using the query above.

and Connor said...

The underlying tables are keyed mainly by DBID and SNAP_ID, so you could take advantage of that with a small initial query, eg

SQL> select dbid, min(snap_id)
  2  from  dba_hist_snapshot
  3  where begin_interval_time > sysdate-90
  4  group by dbid;

      DBID MIN(SNAP_ID)
---------- ------------
 872342268         9082

--
-- use that in the true query
--
SQL> set autotrace traceonly explain
SQL> select a.USER_ID, a.PROGRAM, a.MACHINE
  2  from dba_hist_active_sess_history  a
  3  where a.event='db file scattered read'
  4  and extract(hour from cast(a.sample_time as timestamp))= 15
  5  and a.sample_time > SYSDATE - 90
  6  and a.snap_id >=9082
  7  and a.dbid  = 872342268;

Execution Plan
----------------------------------------------------------
Plan hash value: 318490875

-------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name                           | Rows  | Bytes |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                                |     1 |   102 |
|*  1 |  FILTER                                      |                                |       |       |
|   2 |   NESTED LOOPS OUTER                         |                                |     1 |   102 |
|   3 |    PARTITION RANGE ITERATOR                  |                                |     1 |    62 |
|*  4 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| WRH$_ACTIVE_SESSION_HISTORY    |     1 |    62 |
|*  5 |      INDEX RANGE SCAN                        | WRH$_ACTIVE_SESSION_HISTORY_PK |     1 |       |
|   6 |    TABLE ACCESS BY INDEX ROWID BATCHED       | WRH$_EVENT_NAME                |     1 |    40 |
|*  7 |     INDEX RANGE SCAN                         | WRH$_EVENT_NAME_PK             |     1 |       |
-------------------------------------------------------------------------------------------------------


That *might* be better - it depends on how much data you've got, how much of it you're retrieving, etc.


Comments

Follow up

Ram, October 05, 2018 - 7:38 am UTC

Brilliant! But why do we still need the condition sample_time > sysdate - 90 when we have the snap_id condition?
Connor McDonald
October 06, 2018 - 5:59 am UTC

Because the begin_interval_time of that minimum snapshot may be close to, but not exactly equal to, sysdate-90.
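The two-step approach above can also be folded into a single statement by pushing the snapshot lookup into subqueries. This is only a sketch of the same idea, not something from the original answer - it assumes you are querying the local database's own AWR data (hence the dbid from v$database), and both filters are kept for the reason just given:

select a.USER_ID, a.PROGRAM, a.MACHINE
from   dba_hist_active_sess_history a
where  a.event = 'db file scattered read'
and    extract(hour from cast(a.sample_time as timestamp)) = 15
and    a.sample_time > SYSDATE - 90        -- still needed: the snapshot boundary
and    a.snap_id >= ( select min(snap_id)  -- is only approximately sysdate-90
                      from   dba_hist_snapshot
                      where  begin_interval_time > SYSDATE - 90 )
and    a.dbid = ( select dbid from v$database );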

Follow up

Ram, October 05, 2018 - 8:48 am UTC

I quickly checked and found that dba_hist_snapshot doesn't go back far enough. In my database, the minimum snap_id corresponds to a day around sysdate-45, whereas I need all the information from sysdate-90.

I think this is because of the default snapshot retention, which is 45 days.
Connor McDonald
October 06, 2018 - 5:59 am UTC

In which case, you are querying the entire table anyway, no?
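If you want AWR to keep 90 days of history going forward, you can check and raise the retention setting. Note this only affects future purging - it cannot recover history that has already been aged out. A sketch (retention is specified in minutes):

-- Check the current AWR snapshot interval and retention
select snap_interval, retention
from   dba_hist_wr_control;

-- Raise retention to 90 days (90 * 24 * 60 minutes)
begin
  dbms_workload_repository.modify_snapshot_settings(
    retention => 90 * 24 * 60 );
end;
/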

on Dictionary stats

Rajeshwaran, Jeyabal, October 05, 2018 - 1:57 pm UTC

By any chance, would gathering dictionary stats (dbms_stats.gather_dictionary_stats) help here? Kindly advise.


Connor McDonald
October 06, 2018 - 6:02 am UTC

Quite possibly. In the absence of an execution plan or any other information, it's hard to diagnose.
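For reference, both calls are simple no-argument invocations. Fixed-object stats are added here as a related suggestion (they cover the X$ structures underneath many V$/AWR views), not something the answer above prescribed:

-- Gather statistics on dictionary objects (SYS, SYSTEM, ...)
exec dbms_stats.gather_dictionary_stats

-- Gather statistics on fixed (X$) objects
exec dbms_stats.gather_fixed_objects_stats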
