
Question and Answer

Connor McDonald

Thanks for the question.

Asked: June 22, 2016 - 1:30 am UTC

Last updated: July 02, 2016 - 1:29 am UTC

Version: 11

Viewed 10K+ times

You Asked

Hi Tom,

I'm trying to retrieve the last 20 rows by time, for users with more than 100K rows, from a table with more than 1 million records. Query performance is fine (milliseconds) when the user has a small number of records, but it takes more than 2 minutes for the application to fetch 20 records for users with roughly 10K-100K records.

Here's the query:

select * from (
  select * from TABLE1
  where USER_ID = 41063660 and COL1 = 0 and COL2 = 0
  order by LAST_EVENT_DATE desc
)
where rownum <= 20;

There is an index (I_LASTEVENTDT) on (USER_ID, COL1, COL2, LAST_EVENT_DATE DESC)

Here is the explain plan:

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20 | 38960 | | 66959 (1)| 00:13:24 |
|* 1 | COUNT STOPKEY | | | | | | |
| 2 | VIEW | | 65500 | 121M| | 66959 (1)| 00:13:24 |
|* 3 | SORT ORDER BY STOPKEY | | 65500 | 96M| 102M| 66959 (1)| 00:13:24 |
| 4 | TABLE ACCESS BY INDEX ROWID| TABLE1 | 65500 | 96M| | 47280 (1)| 00:09:28 |
|* 5 | INDEX RANGE SCAN | I_LASTEVENTDT | 65500 | | | 309 (0)| 00:00:04 |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM<=20)
3 - filter(ROWNUM<=20)
5 - access("USER_ID"=41063660 AND "COL1"=0 AND
"COL2"=0)


I have also tried creating two separate indexes, on (USER_ID, COL1, COL2) and on (LAST_EVENT_DATE DESC), as well as a single index on (USER_ID, LAST_EVENT_DATE DESC). Performance was worse in both cases.

In one of your reference articles, http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html , which shows a similar query, I see that the SORT ORDER BY STOPKEY filters the results down to 10 rows, but in my query plan the sort is still done on 65K rows. I'm not sure why.

How do I get better performance from this query?

Thanks in advance.

and Connor said...

Sorry - you'll need to send me a test case. I couldn't reproduce what you are seeing - I'm getting the expected index scan of just the appropriate 20 rows:

SQL> alter session set optimizer_features_enable  = '11.2.0.4';

Session altered.

SQL> create table t
  2   ( user_id int,
  3     col1 int,
  4     col2 int,
  5     last_event_date date );

Table created.

SQL>
SQL> insert /*+ APPEND */ into t
  2  select mod(rownum,1000),
  3   mod(rownum,1000), mod(rownum,1000),
  4    date '2010-01-01' + rownum * 2000 / 100000000
  5  from
  6   ( select 1 from dual connect by level <= 10000 ),
  7   ( select 1 from dual connect by level <= 10000 );

100000000 rows created.

SQL>
SQL> create index IX on t ( user_id, col1, col2, last_event_date desc ) ;

Index created.

SQL> select /*+ gather_plan_statistics */ *
  2  from (
  3    select * from t
  4    where USER_ID= 416 and COL1=416 and COL2 = 416
  5    order by LAST_EVENT_DATE desc)
  6  where rownum <= 20 ;

   USER_ID       COL1       COL2 LAST_EVEN
---------- ---------- ---------- ---------
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15
       416        416        416 23-JUN-15

20 rows selected.

SQL> @exp

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------
SQL_ID  b532d7wawh362, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from (   select * from t   where
USER_ID= 416 and COL1=416 and COL2 = 416   order by LAST_EVENT_DATE
desc) where rownum <= 20

Plan hash value: 3186571263

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |     20 |00:00:00.01 |       7 |
|*  1 |  COUNT STOPKEY     |      |      1 |        |     20 |00:00:00.01 |       7 |
|   2 |   VIEW             |      |      1 |  65259 |     20 |00:00:00.01 |       7 |
|*  3 |    INDEX RANGE SCAN| IX   |      1 |  65259 |     20 |00:00:00.01 |       7 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=20)
   3 - access("USER_ID"=416 AND "COL1"=416 AND "COL2"=416)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
SQL>
SQL>


Rating

  (10 ratings)


Comments

Scenario for the slow query

Kash, June 24, 2016 - 12:10 am UTC

I'm not sure how to go about creating a testcase for this. If I create 100K rows for the user in one go, like you've done in your example, the query performance is OK. I see the slowness of this query in our prod environment where the 100K rows have been created for a user over the last few years. I'm not sure how I can create a testcase for this to post here.
We cleared the histograms and ran the explain plan using the hint you provided and here is the output for another user with a large number of rows:

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |               |      1 |        |     20 |00:01:18.63 |    112K |  109K |       |       |          |
|*  1 |  COUNT STOPKEY                  |               |      1 |        |     20 |00:01:18.63 |    112K |  109K |       |       |          |
|   2 |   VIEW                          |               |      1 |      2 |     20 |00:01:18.63 |    112K |  109K |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY        |               |      1 |      2 |     20 |00:01:18.63 |    112K |  109K |   90M | 4923K | 80M (0)  |
|   4 |     TABLE ACCESS BY INDEX ROWID | TABLE1        |      1 |      2 |   112K |00:01:14.21 |    112K |  109K |       |       |          |
|*  5 |      INDEX RANGE SCAN           | I_LASTEVENTDT |      1 |      2 |   112K |00:00:00.12 |     505 |   492 | 1025K | 1025K |          |
------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM<=20)
3 - filter(ROWNUM<=20)
5 - access("USER_ID"=41063660 AND "COL1"=0 AND
"COL2"=0)
Now I see that the actual rows at the stop key is 20, but the query still takes over a minute and a half to execute.
Is this the best performance I can expect to get out of it?
Is this performance issue related to the way the data has been loaded? If so, can I do anything about it?
Connor McDonald
June 24, 2016 - 2:16 am UTC

But yours is a different plan - notice mine has no sorting, because I'm walking down the index and stopping. Yours is collecting *all* values for the user/col1/col2 and then sorting.

So something is different here.

Can you paste in the DDL for the table and the indexes?


DDL and indexes

Kash, June 24, 2016 - 3:50 am UTC

Here you go:

CREATE INDEX I_LASTEVENTDT ON TABLE1 (USER_ID, COL1, COL2, COL3, LAST_EVENT_DATE)
LOGGING
TABLESPACE TA_IDX_TS
PCTFREE 10
INITRANS 2
STORAGE
(
INITIAL 65536
NEXT 1048576
MINEXTENTS 1
MAXEXTENTS UNLIMITED
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE INDEX I_LAST_ACT_TS ON TABLE1 (LAST_ACT_TS)
LOGGING
TABLESPACE TA_IDX_TS
PCTFREE 10
INITRANS 2
STORAGE
(
INITIAL 65536
NEXT 1048576
MINEXTENTS 1
MAXEXTENTS UNLIMITED
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE UNIQUE INDEX SYS_C0028727 ON TABLE1 (CL_ID ASC)
LOGGING
TABLESPACE TA_IDX_TS
PCTFREE 10
INITRANS 2
STORAGE
(
INITIAL 65536
NEXT 1048576
MINEXTENTS 1
MAXEXTENTS UNLIMITED
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE UNIQUE INDEX UK_CLU_DATA_USER_ID_CLU_HASH ON TABLE1 (USER_ID ASC, CL_HASH ASC)
LOGGING
TABLESPACE TA_IDX_TS
PCTFREE 10
INITRANS 2
STORAGE
(
INITIAL 65536
NEXT 1048576
MINEXTENTS 1
MAXEXTENTS UNLIMITED
BUFFER_POOL DEFAULT
);

CREATE TABLE TABLE1
(
P_ID NUMBER(20, 0) NOT NULL
, CL_ID NUMBER(20, 0) NOT NULL
, COL3 NUMBER(10, 0) NOT NULL
, COL12 NUMBER(20, 0) NOT NULL
, LAST_EVENT_DATE DATE NOT NULL
, FIRST_EVENT_DATE DATE
, CL_HASH VARCHAR2(512 BYTE) NOT NULL
, COL1 NUMBER(1, 0) DEFAULT 0 NOT NULL
, USER_ID NUMBER(20, 0) NOT NULL
, COL5 NUMBER(20, 0)
, COL6 VARCHAR2(512 BYTE)
, COL7 VARCHAR2(512 BYTE)
, COL8 VARCHAR2(2048 BYTE)
, LAST_ACT_TS NUMBER(15, 0)
, COL11 VARCHAR2(128 BYTE)
, COL2 NUMBER(1, 0) DEFAULT 0 NOT NULL
, COL9 NUMBER(10, 7)
, COL10 NUMBER(10, 7)
, CONSTRAINT SYS_C0028727 PRIMARY KEY
(
CL_ID
)
ENABLE
)
LOGGING
TABLESPACE IA_DAT_TS
PCTFREE 10
INITRANS 1
STORAGE
(
INITIAL 65536
NEXT 1048576
MINEXTENTS 1
MAXEXTENTS UNLIMITED
BUFFER_POOL DEFAULT
)
NOCOMPRESS
NOPARALLEL;

ALTER TABLE TABLE1
ADD CONSTRAINT UK_CLU_DATA_USER_ID_CLU_HASH UNIQUE
(
USER_ID
, CL_HASH
)
ENABLE;

ALTER TABLE TABLE1
ADD CONSTRAINT SYS_C0050410 CHECK
(COL2 in (0,1))
ENABLE

Connor McDonald
June 25, 2016 - 6:17 am UTC

Aha !

Your question says:

"There is an index (I_LASTEVENTDT) on (USER_ID, COL1, COL2, LAST_EVENT_DATE DESC)"

but your ddl suggests

"CREATE INDEX I_LASTEVENTDT ON TABLE1 (USER_ID, COL1, COL2, COL3, LAST_EVENT_DATE)"

and since you don't specify COL3 in your query, we are limited to index access on USER_ID, COL1 and COL2 only, so we can't walk down the index in sorted order and must do the sort.
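To illustrate the point (this index name is hypothetical, a sketch rather than anything from the thread): when the leading index columns are exactly the equality predicates and the ORDER BY column follows them directly, the optimizer can walk the index in order and stop at 20 rows instead of sorting.

```sql
-- Hypothetical index matching the query's shape: equality columns
-- first, then the ORDER BY column, with no intervening column (no COL3).
CREATE INDEX i_lasteventdt_fix ON table1
  (user_id, col1, col2, last_event_date DESC);

-- The top-N query can then range-scan on the three equality predicates,
-- read entries already sorted by last_event_date, and stop after 20
-- rows (COUNT STOPKEY with no SORT ORDER BY STOPKEY step).
SELECT *
FROM  (SELECT *
       FROM   table1
       WHERE  user_id = 41063660 AND col1 = 0 AND col2 = 0
       ORDER  BY last_event_date DESC)
WHERE  rownum <= 20;
```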

Difference

Stew Ashton, June 24, 2016 - 7:12 am UTC

Kash is asking for columns that are not in the index, but Connor is asking only for columns that are in the index.

Kash, could you run the following statement and post the resulting plan?
select * from (
  select user_id, col1, col2, last_event_date from TABLE1 
  where USER_ID= 41063660 and COL1=0 and COL2 = 0 
  order by LAST_EVENT_DATE desc
) 
where rownum <= 20;

Connor McDonald
June 25, 2016 - 6:15 am UTC

Man, that is careless of me :-(

But even with another column, I can still get a plan that our poster is not getting:

SQL>
SQL> create table t
  2       ( user_id int,
  3         col1 int,
  4         col2 int,
  5         last_event_date date ,
  6         other int) nologging;

Table created.

SQL>  insert /*+ APPEND */ into t
  2      select mod(rownum,1000),
  3       mod(rownum,1000), mod(rownum,1000),
  4        date '2010-01-01' + rownum * 2000 / 100000000, rownum
  5      from
  6       ( select 1 from dual connect by level <= 10000 ),
  7       ( select 1 from dual connect by level <= 10000 );

100000000 rows created.

SQL>
SQL> create index IX on t ( user_id, col1, col2, last_event_date desc ) nologging;

Index created.

SQL> select /*+ gather_plan_statistics */ *
  2      from (
  3        select * from t
  4        where USER_ID= 416 and COL1=416 and COL2 = 416
  5        order by LAST_EVENT_DATE desc)
  6      where rownum <= 20 ;

   USER_ID       COL1       COL2 LAST_EVEN      OTHER
---------- ---------- ---------- --------- ----------
       416        416        416 23-JUN-15   99999416
       416        416        416 23-JUN-15   99998416
       416        416        416 23-JUN-15   99997416
       416        416        416 23-JUN-15   99996416
       416        416        416 23-JUN-15   99995416
       416        416        416 23-JUN-15   99994416
       416        416        416 23-JUN-15   99993416
       416        416        416 23-JUN-15   99992416
       416        416        416 23-JUN-15   99991416
       416        416        416 23-JUN-15   99990416
       416        416        416 23-JUN-15   99989416
       416        416        416 23-JUN-15   99988416
       416        416        416 23-JUN-15   99987416
       416        416        416 23-JUN-15   99986416
       416        416        416 23-JUN-15   99985416
       416        416        416 23-JUN-15   99984416
       416        416        416 23-JUN-15   99983416
       416        416        416 23-JUN-15   99982416
       416        416        416 23-JUN-15   99981416
       416        416        416 23-JUN-15   99980416

20 rows selected.

SQL> @exp

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  4wwn679dca433, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ *     from (       select * from t
      where USER_ID= 416 and COL1=416 and COL2 = 416       order by
LAST_EVENT_DATE desc)     where rownum <= 20

Plan hash value: 1619504182

---------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |     20 |00:00:00.01 |      27 |     65 |
|*  1 |  COUNT STOPKEY                |      |      1 |        |     20 |00:00:00.01 |      27 |     65 |
|   2 |   VIEW                        |      |      1 |      1 |     20 |00:00:00.01 |      27 |     65 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T    |      1 |      1 |     20 |00:00:00.01 |      27 |     65 |
|*  4 |     INDEX RANGE SCAN          | IX   |      1 |      1 |     20 |00:00:00.01 |       7 |      9 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=20)
   4 - access("USER_ID"=416 AND "COL1"=416 AND "COL2"=416)


24 rows selected.

SQL>


I get that with optimizer_features_enable set to 12, 11, 10 or 9.


Select with cols only in index

Kash, June 24, 2016 - 5:56 pm UTC

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------
SQL_ID byqzfttzppdv4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from ( select USER_ID,
COL1, COL2, last_event_date from
TABLE1 where USER_ID= 41063660 and COL1=0 and
COL2= 0 order by LAST_EVENT_DATE desc ) where
rownum <= 20

Plan hash value: 1545200935

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |               |      1 |        |     20 |00:00:01.29 |     521 |   520 |       |       |          |
|*  1 |  COUNT STOPKEY           |               |      1 |        |     20 |00:00:01.29 |     521 |   520 |       |       |          |
|   2 |   VIEW                   |               |      1 |      1 |     20 |00:00:01.29 |     521 |   520 |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY |               |      1 |      1 |     20 |00:00:01.29 |     521 |   520 | 11264 | 11264 |10240 (0)|
|*  4 |     INDEX RANGE SCAN     | I_LASTEVENTDT |      1 |      1 |   115K |00:00:00.03 |     521 |   520 | 1025K | 1025K |          |
-------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):

1 - filter(ROWNUM<=20)
3 - filter(ROWNUM<=20)
4 - access("USER_ID"=41063660 AND "COL1"=0 AND "COL2"=0)


27 rows selected.

Now try this

Stew Ashton, June 24, 2016 - 8:38 pm UTC

This will not solve the problem of why your DB is doing an extra sort step, but it should reduce the time wasted:
select * from table1 where rowid in (
  select * from (
    select rowid rid from TABLE1 
    where USER_ID= 41063660 and COL1=0 and COL2 = 0 
    order by LAST_EVENT_DATE desc
  ) 
  where rownum <= 20
);
Again, please post execution plan (and use the </> button for fixed width font puleause! )
Connor McDonald
June 25, 2016 - 3:07 am UTC

"(and use the </> button for fixed width font puleause!"

Welcome to our world :-)

Optimize current query

Kash, June 24, 2016 - 9:57 pm UTC

Thanks, but I would like to understand why the current query is still running a sort. The application uses JPA/Hibernate, and in order to use ROWID we would need to rewrite the query in native SQL. So before I go that route, I would like to see if we can optimize the current query.

That worked! But...

Kash, June 29, 2016 - 10:41 pm UTC

Thank you Connor! I was able to get the right plan without a sort after removing COL3 from the index. Now the query is quite fast.
That introduced another problem though. There's another query which uses COL3 in the where clause along with USER_ID, COL1 and COL2. Now that query takes a long time. The interesting thing is that this other query does not show a sort in the plan. The plan and timing in the explain plan look good too, but when I run the query from the application it takes a long time.

select /*+ gather_plan_statistics */ * from (
  select * from TABLE1
  where USER_ID=117 and COL1=0 and COL2=0 and COL3>=10
  order by LAST_EVENT_DATE desc
) where rownum <= 10

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |               |      1 |        |      2 |00:00:00.82 |   11137 |  11067 |       |       |          |
|*  1 |  COUNT STOPKEY                 |               |      1 |        |      2 |00:00:00.82 |   11137 |  11067 |       |       |          |
|   2 |   VIEW                         |               |      1 |      1 |      2 |00:00:00.82 |   11137 |  11067 |       |       |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID | TABLE1        |      1 |      1 |      2 |00:00:00.82 |   11137 |  11067 |       |       |          |
|*  4 |     INDEX RANGE SCAN DESCENDING| I_LASTEVENTDT |      1 |      5 |  11093 |00:00:00.01 |      48 |     44 | 1025K | 1025K |          |
------------------------------------------------------------------------------------------------------------------------------------
 
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM<=10)
   3 - filter("COL3">=10)
   4 - access("USER_ID"=11742400 AND "COL1"=0 AND "COL2"=0)
 
 
36 rows selected.
 
Elapsed: 00:00:00.02


If I run the same SQL from the application or from SQL*Plus, it takes about 8 seconds.
select * from (
  select * from TABLE1
  where USER_ID=117 and COL1=0 and COL2=0 and COL3>=10
  order by LAST_EVENT_DATE desc
) where rownum <= 10

2 rows selected.
 
Elapsed: 00:00:08.86


If I look at the AWR report and find the plan this query is using during the application run, it does seem to be executing with this plan. Why is the query taking 8 seconds to return just 2 rows?


Connor McDonald
June 30, 2016 - 1:47 am UTC

"Why is the query taking 8 seconds to return just 2 rows? "

The issue (with any SQL) is not how many rows are *returned*, it's how much work is needed to *find* those rows.


In this case, look at line 4 in the plan, the INDEX RANGE SCAN DESCENDING. It returned 11,093 rows, which then drove roughly 11,000 block reads against the table.

If *none* of those blocks were in the buffer cache, then at (say) 1ms per read, it would be 11 seconds of time to scan them. Since *some* of the blocks are likely to be in the buffer cache, the time would be a little better than that.

And similarly, repeated executions are probably going to look 'fast' because once most of those blocks are in the cache, then things will get quite zippy (it's now predominantly a CPU thing rather than disk reads)

And as you can see from line 4 in the *predicates* section, you are basically scanning the index via USER_ID/COL1/COL2... so if that selectivity is good, you *might* be able to survive with just the one index. But if not, and you need both of these query types to be super fast, then you might need both indexes.

Still using sub-optimal index

Kash, June 30, 2016 - 10:02 pm UTC

Thanks again Connor. So I tried adding an index on (USER_ID, COL1, COL2, COL3, LAST_EVENT_DT) as well.
So I now have two indexes: (USER_ID, COL1, COL2, COL3, LAST_EVENT_DT) and
(USER_ID, COL1, COL2, LAST_EVENT_DT).
It looks like the query with USER_ID, COL1, COL2 and COL3 in its where clause still uses the other index, the one without COL3. How do I force it to use the new index?

select /*+ gather_plan_statistics */ * from (
  select * from TABLE1
  where USER_ID=41079430 and COL1=0 and COL2=0 and COL3>=10
  order by LAST_EVENT_DATE desc
) where rownum <= 10
 
 
------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |               |      1 |        |     10 |00:00:00.22 |     425 |    353 |       |       |          |
|*  1 |  COUNT STOPKEY                 |               |      1 |        |     10 |00:00:00.22 |     425 |    353 |       |       |          |
|   2 |   VIEW                         |               |      1 |     11 |     10 |00:00:00.22 |     425 |    353 |       |       |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID | TABLE1        |      1 |    630 |     10 |00:00:00.22 |     425 |    353 |       |       |          |
|*  4 |     INDEX RANGE SCAN DESCENDING| I_LASTEVENTDT |      1 |    110 |    526 |00:00:00.01 |       6 |      4 | 1025K | 1025K |          |
------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM<=10)
   3 - filter("COL3">=10)
   4 - access("USER_ID"=41079430 AND "COL1"=0 AND "COL2"=0)



Connor McDonald
July 01, 2016 - 2:25 am UTC

There is a difference between an index created as:

(USER_ID, COL1, COL2, COL3, LAST_EVENT_DT)

and

(USER_ID, COL1, COL2, COL3, LAST_EVENT_DT desc)

Index desc

Kash, July 01, 2016 - 3:37 am UTC

I'm sorry I didn't quite understand your suggestion.
I had changed the original index to create a normal ASC index after the original post to keep things simple.

"CREATE INDEX I_LASTEVENTDT ON TABLE1 (USER_ID, COL1, COL2, LAST_EVENT_DATE)" 


So, the second index is now:
"CREATE INDEX I_LASTEVENTDT_2 ON TABLE1 (USER_ID, COL1, COL2, COL3, LAST_EVENT_DATE)"


The first query is fast with the I_LASTEVENTDT index.

Are you suggesting I create
"CREATE INDEX I_LASTEVENTDT_2 ON TABLE1 (USER_ID, COL1, COL2, COL3, LAST_EVENT_DATE DESC)"

so that the second query plan will pick this index ?

Connor McDonald
July 02, 2016 - 1:29 am UTC

Let me summarise as I understand it:

You have 2 queries

where user_id = ...
and col1 = ...
and col2 = ...
order by last_event_date desc

where user_id = ...
and col1 = ...
and col2 = ...
and col3 >= ...
order by last_event_date desc

To satisfy the first (and avoid large sorts) you'll probably need an index on:

(userid,col1,col2,last_event_date DESC)

To satisfy the second, you'll probably need an index

(userid,col1,col2,col3,last_event_date)

In this second case, the greater-than predicate on col3 probably means you're limited to that 'depth' into the index, and thus the last_event_date may or may not be beneficial in there.
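As a sketch of that two-index setup (the index names here are hypothetical, not from the thread):

```sql
-- Serves: WHERE user_id=:u AND col1=:c1 AND col2=:c2
--         ORDER BY last_event_date DESC ... ROWNUM <= 20
-- Entries come back pre-sorted, so no SORT ORDER BY step is needed.
CREATE INDEX i_user_evtdate ON table1
  (user_id, col1, col2, last_event_date DESC);

-- Serves: WHERE user_id=:u AND col1=:c1 AND col2=:c2 AND col3 >= :c3
-- The range predicate on col3 limits how deep into the index the
-- ordering remains useful, so a (small) top-N sort may still appear.
CREATE INDEX i_user_col3_evtdate ON table1
  (user_id, col1, col2, col3, last_event_date);
```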

Hope this helps.
